From patchwork Wed Aug 29 11:35:11 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10580013
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
    Catalin Marinas, Will Deacon,
    Christoph Lameter, Andrew Morton, Mark Rutland, Nick Desaulniers,
    Marc Zyngier, Dave Martin, Ard Biesheuvel, Eric W. Biederman,
    Ingo Molnar, Paul Lawrence, Geert Uytterhoeven, Arnd Bergmann,
    Kirill A. Shutemov, Greg Kroah-Hartman, Kate Stewart, Mike Rapoport,
    kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-sparse@vger.kernel.org, linux-mm@kvack.org,
    linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith,
    Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Jann Horn,
    Mark Brand, Chintan Pandya, Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v6 07/18] khwasan: add tag related helper functions
Date: Wed, 29 Aug 2018 13:35:11 +0200
Message-Id: <6cd298a90d02068969713f2fd440eae21227467b.1535462971.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.19.0.rc0.228.g281dcd1b4d0-goog
In-Reply-To:
References:
MIME-Version: 1.0
Sender: linux-kbuild-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kbuild@vger.kernel.org

This commit adds a few helper functions that are meant to be used to
work with tags embedded in the top byte of kernel pointers: to set, to
get, or to reset (set to 0xff) the top byte.

Signed-off-by: Andrey Konovalov
---
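[Note, not part of the patch: the stand-alone user-space sketch below
mirrors the set_tag()/get_tag()/reset_tag() helpers that this patch adds
to mm/kasan/kasan.h, assuming a 64-bit pointer with the tag kept in bits
63:56. The TAG_* names here are illustrative stand-ins for the KHWASAN_*
constants in the diff. It only demonstrates the bit manipulation;
actually dereferencing a tagged pointer requires hardware that ignores
the top byte, such as arm64 with TBI, so the sketch just prints values.]

#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT  56
#define TAG_MASK   (0xFFULL << TAG_SHIFT)
#define TAG_KERNEL 0xFF /* native pointers carry the 0xff tag */

static void *set_tag(const void *addr, uint8_t tag)
{
	uint64_t a = (uint64_t)(uintptr_t)addr;

	a &= ~TAG_MASK;                    /* clear the current top byte */
	a |= ((uint64_t)tag << TAG_SHIFT); /* install the new tag */
	return (void *)(uintptr_t)a;
}

static uint8_t get_tag(const void *addr)
{
	return (uint8_t)((uint64_t)(uintptr_t)addr >> TAG_SHIFT);
}

static void *reset_tag(const void *addr)
{
	return set_tag(addr, TAG_KERNEL); /* back to the native tag */
}

int main(void)
{
	int object;
	void *p = set_tag(&object, 0x2a);

	printf("tagged: %p, tag 0x%02x\n", p, get_tag(p));
	p = reset_tag(p);
	printf("reset:  %p, tag 0x%02x\n", p, get_tag(p));
	return 0;
}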
 arch/arm64/mm/kasan_init.c |  2 ++
 include/linux/kasan.h      | 29 +++++++++++++++++
 mm/kasan/kasan.h           | 55 ++++++++++++++++++++++++++++++++
 mm/kasan/khwasan.c         | 65 ++++++++++++++++++++++++++++++++++++++
 4 files changed, 151 insertions(+)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 7a31e8ccbad2..e7f37c0b7e14 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -250,6 +250,8 @@ void __init kasan_init(void)
 	memset(kasan_zero_page, KASAN_SHADOW_INIT, PAGE_SIZE);
 	cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
+	khwasan_init();
+
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;
 	pr_info("KernelAddressSanitizer initialized\n");
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 1c31bb089154..1f852244e739 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -166,6 +166,35 @@ static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 
 #define KASAN_SHADOW_INIT 0xFF
 
+void khwasan_init(void);
+
+void *khwasan_reset_tag(const void *addr);
+
+void *khwasan_preset_slub_tag(struct kmem_cache *cache, const void *addr);
+void *khwasan_preset_slab_tag(struct kmem_cache *cache, unsigned int idx,
+			      const void *addr);
+
+#else /* CONFIG_KASAN_HW */
+
+static inline void khwasan_init(void) { }
+
+static inline void *khwasan_reset_tag(const void *addr)
+{
+	return (void *)addr;
+}
+
+static inline void *khwasan_preset_slub_tag(struct kmem_cache *cache,
+					    const void *addr)
+{
+	return (void *)addr;
+}
+
+static inline void *khwasan_preset_slab_tag(struct kmem_cache *cache,
+					    unsigned int idx, const void *addr)
+{
+	return (void *)addr;
+}
+
 #endif /* CONFIG_KASAN_HW */
 
 #endif /* LINUX_KASAN_H */
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 19b950eaccff..a7cc27d96608 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -8,6 +8,10 @@
 #define KASAN_SHADOW_SCALE_SIZE (1UL << KASAN_SHADOW_SCALE_SHIFT)
 #define KASAN_SHADOW_MASK       (KASAN_SHADOW_SCALE_SIZE - 1)
 
+#define KHWASAN_TAG_KERNEL	0xFF /* native kernel pointers tag */
+#define KHWASAN_TAG_INVALID	0xFE /* inaccessible memory tag */
+#define KHWASAN_TAG_MAX		0xFD /* maximum value for random tags */
+
 #define KASAN_FREE_PAGE         0xFF  /* page was freed */
 #define KASAN_PAGE_REDZONE      0xFE  /* redzone for kmalloc_large allocations */
 #define KASAN_KMALLOC_REDZONE   0xFC  /* redzone inside slub object */
@@ -126,6 +130,57 @@ static inline void quarantine_reduce(void) { }
 static inline void quarantine_remove_cache(struct kmem_cache *cache) { }
 #endif
 
+#ifdef CONFIG_KASAN_HW
+
+#define KHWASAN_TAG_SHIFT	56
+#define KHWASAN_TAG_MASK	(0xFFUL << KHWASAN_TAG_SHIFT)
+
+u8 random_tag(void);
+
+static inline void *set_tag(const void *addr, u8 tag)
+{
+	u64 a = (u64)addr;
+
+	a &= ~KHWASAN_TAG_MASK;
+	a |= ((u64)tag << KHWASAN_TAG_SHIFT);
+
+	return (void *)a;
+}
+
+static inline u8 get_tag(const void *addr)
+{
+	return (u8)((u64)addr >> KHWASAN_TAG_SHIFT);
+}
+
+static inline void *reset_tag(const void *addr)
+{
+	return set_tag(addr, KHWASAN_TAG_KERNEL);
+}
+
+#else /* CONFIG_KASAN_HW */
+
+static inline u8 random_tag(void)
+{
+	return 0;
+}
+
+static inline void *set_tag(const void *addr, u8 tag)
+{
+	return (void *)addr;
+}
+
+static inline u8 get_tag(const void *addr)
+{
+	return 0;
+}
+
+static inline void *reset_tag(const void *addr)
+{
+	return (void *)addr;
+}
+
+#endif /* CONFIG_KASAN_HW */
+
 /*
  * Exported functions for interfaces called from assembly or from generated
  * code. Declarations here to avoid warning about missing declarations.
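[Note, not part of the patch: random_tag() in the next hunk steps the
classic "Numerical Recipes" linear congruential generator,
state = 1664525 * state + 1013904223 (mod 2^32), then reduces the result
modulo KHWASAN_TAG_MAX + 1 so that the reserved values 0xFE and 0xFF are
never produced. A minimal user-space sketch of the same recurrence, with
a fixed seed standing in for the kernel's per-CPU get_random_u32() state:]

#include <stdint.h>
#include <stdio.h>

#define TAG_MAX 0xFD /* tags span 0x00..0xFD; 0xFE/0xFF stay reserved */

static uint32_t state = 42; /* fixed seed, for illustration only */

static uint8_t random_tag(void)
{
	/* LCG step; uint32_t arithmetic wraps mod 2^32 by definition */
	state = 1664525u * state + 1013904223u;
	return (uint8_t)(state % (TAG_MAX + 1));
}

int main(void)
{
	/* Print a few tags; every value must fall in 0x00..0xFD. */
	for (int i = 0; i < 8; i++)
		printf("0x%02x ", random_tag());
	printf("\n");
	return 0;
}

[The modulo-254 reduction is very slightly biased toward small tags, but
for a probabilistic debug feature that is irrelevant.]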
diff --git a/mm/kasan/khwasan.c b/mm/kasan/khwasan.c
index e2c3a7f7fd1f..9d91bf3c8246 100644
--- a/mm/kasan/khwasan.c
+++ b/mm/kasan/khwasan.c
@@ -38,6 +38,71 @@
 #include "kasan.h"
 #include "../slab.h"
 
+static DEFINE_PER_CPU(u32, prng_state);
+
+void khwasan_init(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		per_cpu(prng_state, cpu) = get_random_u32();
+}
+
+/*
+ * If a preemption happens between this_cpu_read and this_cpu_write, the only
+ * side effect is that we'll give a few objects allocated in different
+ * contexts the same tag. Since KHWASAN is meant to be used as a probabilistic
+ * bug-detection debug feature, this doesn't have a significant negative impact.
+ *
+ * Ideally the tags would use strong randomness to prevent any attempts to
+ * predict them during explicit exploit attempts. But strong randomness is
+ * expensive, so we made an intentional trade-off and use a PRNG. This
+ * non-atomic RMW sequence in fact has a positive effect, since interrupts
+ * that randomly skew the PRNG at unpredictable points only add entropy.
+ */
+u8 random_tag(void)
+{
+	u32 state = this_cpu_read(prng_state);
+
+	state = 1664525 * state + 1013904223;
+	this_cpu_write(prng_state, state);
+
+	return (u8)(state % (KHWASAN_TAG_MAX + 1));
+}
+
+void *khwasan_reset_tag(const void *addr)
+{
+	return reset_tag(addr);
+}
+
+void *khwasan_preset_slub_tag(struct kmem_cache *cache, const void *addr)
+{
+	/*
+	 * Since it's desirable to only call object constructors once during
+	 * slab allocation, we preassign tags to all such objects.
+	 * Also preassign tags for SLAB_TYPESAFE_BY_RCU slabs to avoid
+	 * use-after-free reports.
+	 */
+	if (cache->ctor || cache->flags & SLAB_TYPESAFE_BY_RCU)
+		return set_tag(addr, random_tag());
+	return (void *)addr;
+}
+
+void *khwasan_preset_slab_tag(struct kmem_cache *cache, unsigned int idx,
+			      const void *addr)
+{
+	/*
+	 * See the comment in khwasan_preset_slub_tag.
+	 * For the SLAB allocator we can't preassign tags randomly since the
+	 * freelist is stored as an array of indexes instead of a linked
+	 * list. Assign tags based on object indexes, so that objects that
+	 * are next to each other get different tags.
+	 */
+	if (cache->ctor || cache->flags & SLAB_TYPESAFE_BY_RCU)
+		return set_tag(addr, (u8)idx);
+	return (void *)addr;
+}
+
 void check_memory_region(unsigned long addr, size_t size, bool write,
 			 unsigned long ret_ip)
 {