From patchwork Thu Oct 22 20:23:54 2020
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11851873
Message-Id: <20201022202355.3529836-2-samitolvanen@google.com>
In-Reply-To: <20201022202355.3529836-1-samitolvanen@google.com>
References: <20201022202355.3529836-1-samitolvanen@google.com>
Date: Thu, 22 Oct 2020 13:23:54 -0700
Subject: [PATCH 1/2] scs: switch to vmapped shadow stacks
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas
Cc: Mark Rutland, Kees Cook, Ard Biesheuvel, linux-kernel@vger.kernel.org,
 James Morse, Sami Tolvanen, linux-arm-kernel@lists.infradead.org

The kernel currently uses kmem_cache to allocate shadow call stacks, which
means an overflow may not be detected immediately and can potentially result
in another task's shadow stack being overwritten. This change switches SCS
to use virtually mapped shadow stacks, which increases the shadow stack size
to a full page and provides more robust overflow detection, similar to
VMAP_STACK.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
---
 include/linux/scs.h |  7 +----
 kernel/scs.c        | 63 ++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 55 insertions(+), 15 deletions(-)

diff --git a/include/linux/scs.h b/include/linux/scs.h
index 6dec390cf154..86e3c4b7b714 100644
--- a/include/linux/scs.h
+++ b/include/linux/scs.h
@@ -15,12 +15,7 @@
 
 #ifdef CONFIG_SHADOW_CALL_STACK
 
-/*
- * In testing, 1 KiB shadow stack size (i.e. 128 stack frames on a 64-bit
- * architecture) provided ~40% safety margin on stack usage while keeping
- * memory allocation overhead reasonable.
- */
-#define SCS_SIZE		SZ_1K
+#define SCS_SIZE		PAGE_SIZE
 #define GFP_SCS			(GFP_KERNEL | __GFP_ZERO)
 
 /* An illegal pointer value to mark the end of the shadow stack. */
diff --git a/kernel/scs.c b/kernel/scs.c
index 4ff4a7ba0094..2136edba548d 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -5,50 +5,95 @@
  * Copyright (C) 2019 Google LLC
  */
 
+#include <linux/cpuhotplug.h>
 #include <linux/kasan.h>
 #include <linux/mm.h>
 #include <linux/scs.h>
-#include <linux/slab.h>
+#include <linux/vmalloc.h>
 #include <linux/vmstat.h>
 
-static struct kmem_cache *scs_cache;
-
 static void __scs_account(void *s, int account)
 {
-	struct page *scs_page = virt_to_page(s);
+	struct page *scs_page = vmalloc_to_page(s);
 
 	mod_node_page_state(page_pgdat(scs_page), NR_KERNEL_SCS_KB,
 			    account * (SCS_SIZE / SZ_1K));
 }
 
+/* Matches NR_CACHED_STACKS for VMAP_STACK */
+#define NR_CACHED_SCS 2
+static DEFINE_PER_CPU(void *, scs_cache[NR_CACHED_SCS]);
+
 static void *scs_alloc(int node)
 {
-	void *s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node);
+	int i;
+	void *s;
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		s = this_cpu_xchg(scs_cache[i], NULL);
+		if (s) {
+			memset(s, 0, SCS_SIZE);
+			goto out;
+		}
+	}
+
+	/*
+	 * We allocate a full page for the shadow stack, which should be
+	 * more than we need. Check the assumption nevertheless.
+	 */
+	BUILD_BUG_ON(SCS_SIZE > PAGE_SIZE);
+
+	s = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+				 VMALLOC_START, VMALLOC_END,
+				 GFP_SCS, PAGE_KERNEL, 0,
+				 node, __builtin_return_address(0));
 
 	if (!s)
 		return NULL;
 
+out:
 	*__scs_magic(s) = SCS_END_MAGIC;
 
 	/*
 	 * Poison the allocation to catch unintentional accesses to
 	 * the shadow stack when KASAN is enabled.
 	 */
-	kasan_poison_object_data(scs_cache, s);
+	kasan_poison_vmalloc(s, SCS_SIZE);
 	__scs_account(s, 1);
 	return s;
 }
 
 static void scs_free(void *s)
 {
+	int i;
+
 	__scs_account(s, -1);
-	kasan_unpoison_object_data(scs_cache, s);
-	kmem_cache_free(scs_cache, s);
+	kasan_unpoison_vmalloc(s, SCS_SIZE);
+
+	for (i = 0; i < NR_CACHED_SCS; i++)
+		if (this_cpu_cmpxchg(scs_cache[i], 0, s) == NULL)
+			return;
+
+	vfree_atomic(s);
+}
+
+static int scs_cleanup(unsigned int cpu)
+{
+	int i;
+	void **cache = per_cpu_ptr(scs_cache, cpu);
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		vfree(cache[i]);
+		cache[i] = NULL;
+	}
+
+	return 0;
 }
 
 void __init scs_init(void)
 {
-	scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, 0, 0, NULL);
+	cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL,
+			  scs_cleanup);
 }
 
 int scs_prepare(struct task_struct *tsk, int node)
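
For readers unfamiliar with why the vmalloc switch improves overflow
detection: vmalloc allocations are separated by unmapped guard pages, so
the first store past the end of a vmapped shadow stack faults immediately
instead of silently corrupting an adjacent kmem_cache object. The
standalone userspace sketch below (not part of this patch; the file and
variable names are illustrative) reproduces the same mechanism with
mmap/mprotect:

/*
 * Illustrative sketch: a stack page with an inaccessible guard page
 * directly above it turns an out-of-bounds store into an immediate
 * fault, the property the vmapped shadow stacks gain from vmalloc's
 * guard pages. Build with e.g.: cc -o scs-guard-demo scs-guard-demo.c
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	unsigned char *stack;

	/* Map two pages: one writable "shadow stack" page... */
	stack = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
		     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (stack == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* ...and one PROT_NONE guard page immediately above it. */
	if (mprotect(stack + page, page, PROT_NONE)) {
		perror("mprotect");
		return 1;
	}

	memset(stack, 0, page);	/* in bounds: succeeds */
	printf("stack page written\n");

	stack[page] = 0xff;	/* one byte past the end: SIGSEGV */
	printf("never reached\n");
	return 0;
}

With the previous kmem_cache allocation the equivalent one-byte overflow
would have landed in whatever object happened to sit next in the slab,
possibly another task's shadow stack, and gone unnoticed until much later.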