From patchwork Fri Sep 21 15:13:27 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10610443
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas, Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel, "Eric W. Biederman", Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann, "Kirill A. Shutemov", Greg Kroah-Hartman, Kate Stewart, Mike Rapoport, kasan-dev@googlegroups.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-sparse@vger.kernel.org, linux-mm@kvack.org, linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v9 05/20] kasan, arm64: adjust shadow size for tag-based mode
Date: Fri, 21 Sep 2018 17:13:27 +0200
Message-Id: <10cf432f0ffdb67fbd495acc61bdd9517af5b7b7.1537542735.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.19.0.444.g18242da7ef-goog
List-ID: linux-kbuild@vger.kernel.org

Tag-based KASAN uses 1 shadow byte for 16 bytes of kernel memory, so it requires 1/16th of the kernel virtual address space for the shadow memory. This commit sets KASAN_SHADOW_SCALE_SHIFT to 4 when the tag-based KASAN mode is enabled.
Signed-off-by: Andrey Konovalov
---
 arch/arm64/Makefile             |  2 +-
 arch/arm64/include/asm/memory.h | 13 +++++++++----
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 106039d25e2f..11f4750d8d41 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -94,7 +94,7 @@ endif
 # KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #				- (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
 # in 32-bit arithmetic
-KASAN_SHADOW_SCALE_SHIFT := 3
+KASAN_SHADOW_SCALE_SHIFT := $(if $(CONFIG_KASAN_SW_TAGS), 4, 3)
 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
 	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
 	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b96442960aea..0f1e024a951f 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -74,12 +74,17 @@
 #define KERNEL_END        _end
 
 /*
- * KASAN requires 1/8th of the kernel virtual address space for the shadow
- * region. KASAN can bloat the stack significantly, so double the (minimum)
- * stack size when KASAN is in use.
+ * Generic and tag-based KASAN require 1/8th and 1/16th of the kernel virtual
+ * address space for the shadow region respectively. They can bloat the stack
+ * significantly, so double the (minimum) stack size when they are in use.
  */
-#ifdef CONFIG_KASAN
+#ifdef CONFIG_KASAN_GENERIC
 #define KASAN_SHADOW_SCALE_SHIFT 3
+#endif
+#ifdef CONFIG_KASAN_SW_TAGS
+#define KASAN_SHADOW_SCALE_SHIFT 4
+#endif
+#ifdef CONFIG_KASAN
 #define KASAN_SHADOW_SIZE (UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #define KASAN_THREAD_SHIFT 1
 #else