From patchwork Mon Mar 14 20:01:13 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12780747
Date: Mon, 14 Mar 2022 13:01:13 -0700
In-Reply-To: <20220314200148.2695206-1-kaleshsingh@google.com>
Message-Id: <20220314200148.2695206-5-kaleshsingh@google.com>
Mime-Version: 1.0
References: <20220314200148.2695206-1-kaleshsingh@google.com>
X-Mailer: git-send-email 2.35.1.723.g4982287a31-goog
Subject: [PATCH v6 4/8] KVM: arm64: Add guard pages for pKVM
 (protected nVHE) hypervisor stack
From: Kalesh Singh
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
	surenb@google.com, kernel-team@android.com, Kalesh Singh,
	James Morse, Alexandru Elisei, Suzuki K Poulose, Catalin Marinas,
	Mark Rutland, Mark Brown, Masami Hiramatsu, Peter Collingbourne,
	"Madhavan T. Venkataraman", Andrew Scull, Ard Biesheuvel,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
	linux-kernel@vger.kernel.org

Map the stack pages in the flexible private VA range and allocate
guard pages below the stack as unbacked VA space.
The stack is aligned such that any valid stack address has the
PAGE_SHIFT bit set to 1; this is used for overflow detection
(implemented in a subsequent patch in the series).

Signed-off-by: Kalesh Singh
Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
---
Changes in v6:
  - Update call to pkvm_alloc_private_va_range()
    (return val and params)
Changes in v5:
  - Use a single allocation for stack and guard pages to ensure they
    are contiguous, per Marc
Changes in v4:
  - Replace IS_ERR_OR_NULL check with IS_ERR check now that
    pkvm_alloc_private_va_range() returns an error for null pointer,
    per Fuad
Changes in v3:
  - Handle null ptr in IS_ERR_OR_NULL checks, per Mark

 arch/arm64/kvm/hyp/nvhe/setup.c | 31 ++++++++++++++++++++++++++++---
 1 file changed, 28 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 27af337f9fea..e8d4ea2fcfa0 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -99,17 +99,42 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 		return ret;
 
 	for (i = 0; i < hyp_nr_cpus; i++) {
+		struct kvm_nvhe_init_params *params = per_cpu_ptr(&kvm_init_params, i);
+		unsigned long hyp_addr;
+
 		start = (void *)kern_hyp_va(per_cpu_base[i]);
 		end = start + PAGE_ALIGN(hyp_percpu_size);
 		ret = pkvm_create_mappings(start, end, PAGE_HYP);
 		if (ret)
 			return ret;
 
-		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va;
-		start = end - PAGE_SIZE;
-		ret = pkvm_create_mappings(start, end, PAGE_HYP);
+		/*
+		 * Allocate a contiguous HYP private VA range for the stack
+		 * and guard page. The allocation is also aligned based on
+		 * the order of its size.
+		 */
+		ret = pkvm_alloc_private_va_range(PAGE_SIZE * 2, &hyp_addr);
+		if (ret)
+			return ret;
+
+		/*
+		 * Since the stack grows downwards, map the stack to the page
+		 * at the higher address and leave the lower guard page
+		 * unbacked.
+		 *
+		 * Any valid stack address now has the PAGE_SHIFT bit as 1
+		 * and addresses corresponding to the guard page have the
+		 * PAGE_SHIFT bit as 0 - this is used for overflow detection.
+		 */
+		hyp_spin_lock(&pkvm_pgd_lock);
+		ret = kvm_pgtable_hyp_map(&pkvm_pgtable, hyp_addr + PAGE_SIZE,
+					  PAGE_SIZE, params->stack_pa, PAGE_HYP);
+		hyp_spin_unlock(&pkvm_pgd_lock);
 		if (ret)
 			return ret;
+
+		/* Update stack_hyp_va to end of the stack's private VA range */
+		params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
 	}
 
 	/*