From patchwork Tue Nov 27 16:55:28 2018
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10701011
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
	Will Deacon,
	Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers,
	Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W. Biederman,
	Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann,
	Kirill A. Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
	kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-sparse@vger.kernel.org, linux-mm@kvack.org,
	linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan,
	Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand,
	Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v12 10/25] kasan: add tag related helper functions
Date: Tue, 27 Nov 2018 17:55:28 +0100
Message-Id: <643b46fbcd6433a4be18b3a19ce9f3e727618a8d.1543337629.git.andreyknvl@google.com>

This commit adds a few helper functions that are meant to be used to work
with tags embedded in the top byte of kernel pointers: to set, to get, and
to reset the top byte.

Signed-off-by: Andrey Konovalov
---
 arch/arm64/include/asm/kasan.h  |  8 +++++--
 arch/arm64/include/asm/memory.h | 12 +++++++++++
 arch/arm64/mm/kasan_init.c      |  2 ++
 include/linux/kasan.h           | 13 ++++++++++++
 mm/kasan/kasan.h                | 31 +++++++++++++++++++++++++++
 mm/kasan/tags.c                 | 37 +++++++++++++++++++++++++++++++++
 6 files changed, 101 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 8758bb008436..b52aacd2c526 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -4,12 +4,16 @@
 
 #ifndef __ASSEMBLY__
 
-#ifdef CONFIG_KASAN
-
 #include
 #include
 #include
 
+#define arch_kasan_set_tag(addr, tag)	__tag_set(addr, tag)
+#define arch_kasan_reset_tag(addr)	__tag_reset(addr)
+#define arch_kasan_get_tag(addr)	__tag_get(addr)
+
+#ifdef CONFIG_KASAN
+
 /*
  * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
  * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/N of kernel virtual addresses,
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index e2c9857157f2..83c1366a1233 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -219,6 +219,18 @@ static inline unsigned long kaslr_offset(void)
 #define untagged_addr(addr)	\
 	((__typeof__(addr))sign_extend64((u64)(addr), 55))
 
+#ifdef CONFIG_KASAN_SW_TAGS
+#define __tag_shifted(tag)	((u64)(tag) << 56)
+#define __tag_set(addr, tag)	(__typeof__(addr))( \
+		((u64)(addr) & ~__tag_shifted(0xff)) | __tag_shifted(tag))
+#define __tag_reset(addr)	untagged_addr(addr)
+#define __tag_get(addr)	(__u8)((u64)(addr) >> 56)
+#else
+#define __tag_set(addr, tag)	(addr)
+#define __tag_reset(addr)	(addr)
+#define __tag_get(addr)	0
+#endif
+
 /*
  * Physical vs virtual RAM address space conversion. These are
  * private definitions which should NOT be used outside memory.h
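For reference, the __tag_* macros above are plain bit manipulation on the
top byte of a 64-bit pointer, with __tag_reset() relying on sign extension
from bit 55 (untagged_addr()/sign_extend64() in the hunk above). A minimal
standalone userspace sketch that mirrors them (illustrative only, not part
of the patch):

#include <stdint.h>
#include <stdio.h>

/* Mirror of __tag_set(): replace the top byte of an address with a tag. */
static uint64_t tag_set(uint64_t addr, uint8_t tag)
{
	return (addr & ~((uint64_t)0xff << 56)) | ((uint64_t)tag << 56);
}

/* Mirror of __tag_get(): the tag is simply the top byte. */
static uint8_t tag_get(uint64_t addr)
{
	return (uint8_t)(addr >> 56);
}

/*
 * Mirror of __tag_reset()/untagged_addr(): sign-extend from bit 55
 * (assumes arithmetic right shift of signed values, as on gcc/clang).
 */
static uint64_t tag_reset(uint64_t addr)
{
	return (uint64_t)((int64_t)(addr << 8) >> 8);
}

int main(void)
{
	uint64_t addr = 0xffff000012345678ULL;	/* example kernel address */
	uint64_t tagged = tag_set(addr, 0x2b);

	printf("tagged:   0x%016llx (tag 0x%02x)\n",
	       (unsigned long long)tagged, tag_get(tagged));
	printf("untagged: 0x%016llx\n",
	       (unsigned long long)tag_reset(tagged));
	return 0;
}

Storing the tag in the top byte is cheap because arm64 Top Byte Ignore
makes the hardware disregard bits 63:56 of addresses used in loads and
stores.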
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 7a4a0904cac8..1df536bdabcb 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -253,6 +253,8 @@ void __init kasan_init(void)
 	memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);
 	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
+	kasan_init_tags();
+
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;
 	pr_info("KernelAddressSanitizer initialized\n");
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index c56af24bd3e7..a477ce2abdc9 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -169,6 +169,19 @@ static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 
 #define KASAN_SHADOW_INIT 0xFF
 
+void kasan_init_tags(void);
+
+void *kasan_reset_tag(const void *addr);
+
+#else /* CONFIG_KASAN_SW_TAGS */
+
+static inline void kasan_init_tags(void) { }
+
+static inline void *kasan_reset_tag(const void *addr)
+{
+	return (void *)addr;
+}
+
 #endif /* CONFIG_KASAN_SW_TAGS */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 19b950eaccff..b080b8d92812 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -8,6 +8,10 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KASAN_TAG_KERNEL	0xFF /* native kernel pointers tag */
+#define KASAN_TAG_INVALID	0xFE /* inaccessible memory tag */
+#define KASAN_TAG_MAX		0xFD /* maximum value for random tags */
+
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
@@ -126,6 +130,33 @@ static inline void quarantine_reduce(void) { }
 static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
 #endif
 
+#ifdef CONFIG_KASAN_SW_TAGS
+
+u8 random_tag(void);
+
+#else
+
+static inline u8 random_tag(void)
+{
+	return 0;
+}
+
+#endif
+
+#ifndef arch_kasan_set_tag
+#define arch_kasan_set_tag(addr, tag)	((void *)(addr))
+#endif
+#ifndef arch_kasan_reset_tag
+#define arch_kasan_reset_tag(addr)	((void *)(addr))
+#endif
+#ifndef arch_kasan_get_tag
+#define arch_kasan_get_tag(addr)	0
+#endif
+
+#define set_tag(addr, tag)	((void *)arch_kasan_set_tag((addr), (tag)))
+#define reset_tag(addr)	((void *)arch_kasan_reset_tag(addr))
+#define get_tag(addr)	arch_kasan_get_tag(addr)
+
 /*
  * Exported functions for interfaces called from assembly or from generated
  * code. Declarations here to avoid warning about missing declarations.
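When an architecture does not provide the arch_kasan_* macros, the #ifndef
fallbacks above turn tagging into a no-op: set_tag() hands back the pointer
unchanged and get_tag() always returns 0. A standalone sketch of that
fallback behaviour (plain C, illustrative only; the constants are copied
from the hunk above):

#include <assert.h>
#include <stdio.h>

#define KASAN_TAG_KERNEL	0xFF	/* native kernel pointers tag */
#define KASAN_TAG_INVALID	0xFE	/* inaccessible memory tag */
#define KASAN_TAG_MAX		0xFD	/* maximum value for random tags */

/* Generic fallbacks, as in the #ifndef arch_kasan_* blocks above. */
#define arch_kasan_set_tag(addr, tag)	((void *)(addr))
#define arch_kasan_reset_tag(addr)	((void *)(addr))
#define arch_kasan_get_tag(addr)	0

#define set_tag(addr, tag)	((void *)arch_kasan_set_tag((addr), (tag)))
#define reset_tag(addr)		((void *)arch_kasan_reset_tag(addr))
#define get_tag(addr)		arch_kasan_get_tag(addr)

int main(void)
{
	int object;
	void *p = set_tag(&object, 0x42);

	/* Without arch support, tagging degrades to plain pointer passing. */
	assert(p == (void *)&object);
	assert(reset_tag(p) == (void *)&object);
	assert(get_tag(p) == 0);

	printf("random tags fall in [0x00, 0x%02x]\n", (unsigned)KASAN_TAG_MAX);
	return 0;
}

Keeping random tags at or below KASAN_TAG_MAX ensures they never collide
with the native-pointer tag 0xFF or the inaccessible-memory tag 0xFE.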
diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
index 04194923c543..1c4e7ce2e6fe 100644
--- a/mm/kasan/tags.c
+++ b/mm/kasan/tags.c
@@ -38,6 +38,43 @@
 #include "kasan.h"
 #include "../slab.h"
 
+static DEFINE_PER_CPU(u32, prng_state);
+
+void kasan_init_tags(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		per_cpu(prng_state, cpu) = get_random_u32();
+}
+
+/*
+ * If a preemption happens between this_cpu_read and this_cpu_write, the only
+ * side effect is that we'll give the same tag to a few objects allocated in
+ * different contexts. Since tag-based KASAN is meant as a probabilistic
+ * bug-detection debug feature, this doesn't have a significant negative impact.
+ *
+ * Ideally the tags would use strong randomness to prevent any attempts to
+ * predict them during explicit exploit attempts. But strong randomness is
+ * expensive, and we made an intentional trade-off to use a PRNG. This
+ * non-atomic RMW sequence in fact has a positive effect, since interrupts
+ * that randomly skew the PRNG at unpredictable points do only good.
+ */
+u8 random_tag(void)
+{
+	u32 state = this_cpu_read(prng_state);
+
+	state = 1664525 * state + 1013904223;
+	this_cpu_write(prng_state, state);
+
+	return (u8)(state % (KASAN_TAG_MAX + 1));
+}
+
+void *kasan_reset_tag(const void *addr)
+{
+	return reset_tag(addr);
+}
+
 void check_memory_region(unsigned long addr, size_t size, bool write,
 				unsigned long ret_ip)
 {
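random_tag() above advances a per-CPU linear congruential generator (1664525
and 1013904223 are the well-known Numerical Recipes LCG constants) and
reduces the state modulo KASAN_TAG_MAX + 1, so the reserved values 0xFE and
0xFF are never produced. A userspace analogue, with an ordinary variable
standing in for the per-CPU prng_state and time() standing in for
get_random_u32() (illustrative sketch, not kernel code):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define KASAN_TAG_MAX	0xFD	/* maximum value for random tags */

/* Stand-in for the per-CPU prng_state; one variable is enough here. */
static uint32_t prng_state;

/* Analogue of kasan_init_tags(): seed the generator once at startup. */
static void init_tags(void)
{
	prng_state = (uint32_t)time(NULL);	/* get_random_u32() stand-in */
}

/*
 * Analogue of random_tag(): one LCG step, then reduce the state into the
 * allowed tag range [0x00, KASAN_TAG_MAX].
 */
static uint8_t random_tag(void)
{
	prng_state = 1664525 * prng_state + 1013904223;
	return (uint8_t)(prng_state % (KASAN_TAG_MAX + 1));
}

int main(void)
{
	init_tags();

	for (int i = 0; i < 8; i++)
		printf("tag %d: 0x%02x\n", i, random_tag());
	return 0;
}

In the kernel version the read-modify-write on prng_state is deliberately
non-atomic: as the comment in the patch notes, an interrupt landing between
the read and the write only perturbs the sequence, which is harmless for a
probabilistic debugging feature.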