From patchwork Wed Jan 2 17:36:06 2019
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10746549
From: Andrey Konovalov
To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov, Catalin Marinas,
    Will Deacon, Christoph Lameter, Andrew Morton, Mark Rutland,
    Nick Desaulniers, Marc Zyngier, Dave Martin, Ard Biesheuvel,
    Eric W. Biederman, Ingo Molnar, Paul Lawrence, Geert Uytterhoeven,
    Arnd Bergmann, Kirill A. Shutemov, Greg Kroah-Hartman, Kate Stewart,
    Mike Rapoport, Vincenzo Frascino, kasan-dev@googlegroups.com,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-sparse@vger.kernel.org,
    linux-mm@kvack.org, linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan,
    Jacob Bramley, Ruben Ayrapetyan, Jann Horn, Mark Brand, Chintan Pandya,
    Vishwath Mohan, Andrey Konovalov
Subject: [PATCH v2 1/3] kasan, arm64: use ARCH_SLAB_MINALIGN instead of manual aligning
Date: Wed, 2 Jan 2019 18:36:06 +0100

Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
in kasan_cache_create(), we can reuse the ARCH_SLAB_MINALIGN macro.
Suggested-by: Vincenzo Frascino
Signed-off-by: Andrey Konovalov
---
 arch/arm64/include/asm/kasan.h | 4 ++++
 include/linux/slab.h           | 1 +
 mm/kasan/common.c              | 2 --
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index b52aacd2c526..ba26150d578d 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -36,6 +36,10 @@
 #define KASAN_SHADOW_OFFSET     (KASAN_SHADOW_END - (1ULL << \
					(64 - KASAN_SHADOW_SCALE_SHIFT)))
 
+#ifdef CONFIG_KASAN_SW_TAGS
+#define ARCH_SLAB_MINALIGN	(1ULL << KASAN_SHADOW_SCALE_SHIFT)
+#endif
+
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
 asmlinkage void kasan_early_init(void);

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 11b45f7ae405..d87f913ab4e8 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 /*

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 03d5d1374ca7..44390392d4c9 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -298,8 +298,6 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
		return;
	}
 
-	cache->align = round_up(cache->align, KASAN_SHADOW_SCALE_SIZE);
-
	*flags |= SLAB_KASAN;
 }

From patchwork Wed Jan 2 17:36:07 2019
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10746561
From: Andrey Konovalov
Subject: [PATCH v2 2/3] kasan: make tag based mode work with CONFIG_HARDENED_USERCOPY
Date: Wed, 2 Jan 2019 18:36:07 +0100
Message-Id: <21de3c171438760a232d51cea56792c886bc9160.1546450432.git.andreyknvl@google.com>

With CONFIG_HARDENED_USERCOPY enabled, __check_heap_object() compares, and
then subtracts, a potentially tagged pointer against the untagged address
of the page that the pointer belongs to, which leads to unexpected
behavior. Untag the pointer in __check_heap_object() before performing
either operation.

Signed-off-by: Andrey Konovalov
---
 mm/slub.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 36c0befeebd8..1e3d0ec4e200 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3846,6 +3846,8 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
	unsigned int offset;
	size_t object_size;
 
+	ptr = kasan_reset_tag(ptr);
+
	/* Find object and usable object size. */
	s = page->slab_cache;

From patchwork Wed Jan 2 17:36:08 2019
X-Patchwork-Submitter: Andrey Konovalov
X-Patchwork-Id: 10746567
From: Andrey Konovalov
Subject: [PATCH v2 3/3] kasan: fix krealloc handling for tag-based mode
Date: Wed, 2 Jan 2019 18:36:08 +0100

Right now tag-based KASAN can retag memory that is reallocated via
krealloc and return a differently tagged pointer even when the same slab
object is reused and no reallocation technically happens. This approach
has a few issues. One is that krealloc callers can't rely on comparing
the return value with the passed argument to check whether reallocation
happened. Another is that a caller who knows that no reallocation
happened may keep accessing the object through the old pointer, which
leads to false positives; see nf_ct_ext_add() for an example. Fix this
by keeping the tag the same when the memory is not actually reallocated
during krealloc.
Signed-off-by: Andrey Konovalov
---
 include/linux/kasan.h | 14 +++++---------
 include/linux/slab.h  |  4 ++--
 mm/kasan/common.c     | 20 ++++++++++++--------
 mm/slab.c             |  8 ++++----
 mm/slab_common.c      |  2 +-
 mm/slub.c             | 10 +++++-----
 6 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b40ea104dd36..7576fff90923 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -57,9 +57,8 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
 void kasan_kfree_large(void *ptr, unsigned long ip);
 void kasan_poison_kfree(void *ptr, unsigned long ip);
 void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
-				size_t size, gfp_t flags);
-void * __must_check kasan_krealloc(const void *object, size_t new_size,
-				gfp_t flags);
+				size_t size, gfp_t flags, bool krealloc);
+void kasan_krealloc(const void *object, size_t new_size, gfp_t flags);
 
 void * __must_check kasan_slab_alloc(struct kmem_cache *s, void *object,
				gfp_t flags);
@@ -118,15 +117,12 @@ static inline void *kasan_kmalloc_large(void *ptr, size_t size, gfp_t flags)
 static inline void kasan_kfree_large(void *ptr, unsigned long ip) {}
 static inline void kasan_poison_kfree(void *ptr, unsigned long ip) {}
 static inline void *kasan_kmalloc(struct kmem_cache *s, const void *object,
-				size_t size, gfp_t flags)
-{
-	return (void *)object;
-}
-static inline void *kasan_krealloc(const void *object, size_t new_size,
-				gfp_t flags)
+				size_t size, gfp_t flags, bool krealloc)
 {
	return (void *)object;
 }
+static inline void kasan_krealloc(const void *object, size_t new_size,
+				gfp_t flags) {}
 
 static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
				gfp_t flags)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index d87f913ab4e8..1cd168758c05 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -445,7 +445,7 @@ static __always_inline void *kmem_cache_alloc_trace(struct kmem_cache *s,
 {
	void *ret = kmem_cache_alloc(s, flags);
 
-	ret = kasan_kmalloc(s, ret, size, flags);
+	ret = kasan_kmalloc(s, ret, size, flags, false);
	return ret;
 }
@@ -456,7 +456,7 @@ kmem_cache_alloc_node_trace(struct kmem_cache *s,
 {
	void *ret = kmem_cache_alloc_node(s, gfpflags, node);
 
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
+	ret = kasan_kmalloc(s, ret, size, gfpflags, false);
	return ret;
 }
 #endif /* CONFIG_TRACING */

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 44390392d4c9..b6633ab86160 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -392,7 +392,7 @@ void * __must_check kasan_init_slab_obj(struct kmem_cache *cache,
 void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object,
					gfp_t flags)
 {
-	return kasan_kmalloc(cache, object, cache->object_size, flags);
+	return kasan_kmalloc(cache, object, cache->object_size, flags, false);
 }
 
 static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
@@ -451,7 +451,7 @@ bool kasan_slab_free(struct kmem_cache *cache, void *object, unsigned long ip)
 }
 
 void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
-				size_t size, gfp_t flags)
+				size_t size, gfp_t flags, bool krealloc)
 {
	unsigned long redzone_start;
	unsigned long redzone_end;
@@ -468,8 +468,12 @@ void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object,
	redzone_end = round_up((unsigned long)object + cache->object_size,
				KASAN_SHADOW_SCALE_SIZE);
 
-	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS))
-		tag = assign_tag(cache, object, false);
+	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) {
+		if (krealloc)
+			tag = get_tag(object);
+		else
+			tag = assign_tag(cache, object, false);
+	}
 
	/* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */
	kasan_unpoison_shadow(set_tag(object, tag), size);
@@ -508,19 +512,19 @@ void * __must_check kasan_kmalloc_large(const void *ptr, size_t size,
	return (void *)ptr;
 }
 
-void * __must_check kasan_krealloc(const void *object, size_t size, gfp_t flags)
+void kasan_krealloc(const void *object, size_t size, gfp_t flags)
 {
	struct page *page;
 
	if (unlikely(object == ZERO_SIZE_PTR))
-		return (void *)object;
+		return;
 
	page = virt_to_head_page(object);
 
	if (unlikely(!PageSlab(page)))
-		return kasan_kmalloc_large(object, size, flags);
+		kasan_kmalloc_large(object, size, flags);
	else
-		return kasan_kmalloc(page->slab_cache, object, size, flags);
+		kasan_kmalloc(page->slab_cache, object, size, flags, true);
 }
 
 void kasan_poison_kfree(void *ptr, unsigned long ip)

diff --git a/mm/slab.c b/mm/slab.c
index 73fe23e649c9..09b54386cf67 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3604,7 +3604,7 @@ kmem_cache_alloc_trace(struct kmem_cache *cachep, gfp_t flags, size_t size)
	ret = slab_alloc(cachep, flags, _RET_IP_);
 
-	ret = kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags, false);
	trace_kmalloc(_RET_IP_, ret, size, cachep->size, flags);
 
	return ret;
@@ -3647,7 +3647,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *cachep,
	ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_);
 
-	ret = kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags, false);
	trace_kmalloc_node(_RET_IP_, ret, size, cachep->size, flags, nodeid);
@@ -3668,7 +3668,7 @@ __do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
		return cachep;
	ret = kmem_cache_alloc_node_trace(cachep, flags, node, size);
-	ret = kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags, false);
 
	return ret;
 }
@@ -3706,7 +3706,7 @@ static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
		return cachep;
	ret = slab_alloc(cachep, flags, caller);
 
-	ret = kasan_kmalloc(cachep, ret, size, flags);
+	ret = kasan_kmalloc(cachep, ret, size, flags, false);
	trace_kmalloc(caller, ret, size, cachep->size, flags);

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 81732d05e74a..b55c58178f83 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1507,7 +1507,7 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
	ks = ksize(p);
 
	if (ks >= new_size) {
-		p = kasan_krealloc((void *)p, new_size, flags);
+		kasan_krealloc((void *)p, new_size, flags);
		return (void *)p;
	}

diff --git a/mm/slub.c b/mm/slub.c
index 1e3d0ec4e200..20aa0547acbf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2763,7 +2763,7 @@ void *kmem_cache_alloc_trace(struct kmem_cache *s, gfp_t gfpflags, size_t size)
 {
	void *ret = slab_alloc(s, gfpflags, _RET_IP_);
	trace_kmalloc(_RET_IP_, ret, size, s->size, gfpflags);
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
+	ret = kasan_kmalloc(s, ret, size, gfpflags, false);
	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_trace);
@@ -2791,7 +2791,7 @@ void *kmem_cache_alloc_node_trace(struct kmem_cache *s,
	trace_kmalloc_node(_RET_IP_, ret, size, s->size, gfpflags, node);
 
-	ret = kasan_kmalloc(s, ret, size, gfpflags);
+	ret = kasan_kmalloc(s, ret, size, gfpflags, false);
	return ret;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node_trace);
@@ -3364,7 +3364,7 @@ static void early_kmem_cache_node_alloc(int node)
	init_tracking(kmem_cache_node, n);
 #endif
	n = kasan_kmalloc(kmem_cache_node, n, sizeof(struct kmem_cache_node),
-			GFP_KERNEL);
+			GFP_KERNEL, false);
	page->freelist = get_freepointer(kmem_cache_node, n);
	page->inuse = 1;
	page->frozen = 0;
@@ -3779,7 +3779,7 @@ void *__kmalloc(size_t size, gfp_t flags)
	trace_kmalloc(_RET_IP_, ret, size, s->size, flags);
 
-	ret = kasan_kmalloc(s, ret, size, flags);
+	ret = kasan_kmalloc(s, ret, size, flags, false);
 
	return ret;
 }
@@ -3823,7 +3823,7 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
	trace_kmalloc_node(_RET_IP_, ret, size, s->size, flags, node);
 
-	ret = kasan_kmalloc(s, ret, size, flags);
+	ret = kasan_kmalloc(s, ret, size, flags, false);
 
	return ret;
 }