From patchwork Thu Aug 9 19:20:56 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10561813
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
 Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland,
 Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
 Eric W. Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven,
 Arnd Bergmann, Kirill A. Shutemov, Greg Kroah-Hartman, Kate Stewart,
 Mike Rapoport, kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-sparse@vger.kernel.org, linux-mm@kvack.org,
 linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan,
 Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya,
 Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v5 04/18] khwasan, arm64: adjust shadow size for CONFIG_KASAN_HW
Date: Thu, 9 Aug 2018 21:20:56 +0200
X-Mailer: git-send-email 2.18.0.597.ga71716f1ad-goog
List-ID: X-Mailing-List: linux-kbuild@vger.kernel.org

KHWASAN uses 1 shadow byte for 16 bytes of kernel memory, so it requires
1/16th of the kernel virtual address space for the shadow memory. This
commit sets KASAN_SHADOW_SCALE_SHIFT to 4 when KHWASAN is enabled.
Signed-off-by: Andrey Konovalov
---
 arch/arm64/Makefile             |  2 +-
 arch/arm64/include/asm/memory.h | 13 +++++++++----
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index e7101b19d590..7c92dcbaf1a6 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -93,7 +93,7 @@ endif
 # KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
 # in 32-bit arithmetic
-KASAN_SHADOW_SCALE_SHIFT := 3
+KASAN_SHADOW_SCALE_SHIFT := $(if $(CONFIG_KASAN_HW), 4, 3)
 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
 			(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
 			+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 49d99214f43c..6d084431b7f7 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -74,12 +74,17 @@
 #define KERNEL_END        _end

 /*
- * KASAN requires 1/8th of the kernel virtual address space for the shadow
- * region. KASAN can bloat the stack significantly, so double the (minimum)
- * stack size when KASAN is in use.
+ * KASAN and KHWASAN require 1/8th and 1/16th of the kernel virtual address
+ * space for the shadow region respectively. They can bloat the stack
+ * significantly, so double the (minimum) stack size when they are in use.
  */
-#ifdef CONFIG_KASAN
+#ifdef CONFIG_KASAN_GENERIC
 #define KASAN_SHADOW_SCALE_SHIFT 3
+#endif
+#ifdef CONFIG_KASAN_HW
+#define KASAN_SHADOW_SCALE_SHIFT 4
+#endif
+#ifdef CONFIG_KASAN
 #define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #define KASAN_THREAD_SHIFT	1
 #else