From patchwork Fri Jul 20 16:22:22 2018
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 10537943
From: Joerg Roedel <joro@8bytes.org>
To: Thomas Gleixner, Ingo Molnar, "H.
Peter Anvin"
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
    Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
    Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
    Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
    daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
    Andrea Arcangeli, Waiman Long, Pavel Machek, "David H. Gutteridge",
    jroedel@suse.de, Arnaldo Carvalho de Melo, Alexander Shishkin,
    Jiri Olsa, Namhyung Kim, joro@8bytes.org
Subject: [PATCH 1/3] perf/core: Make sure the ring-buffer is mapped in all page-tables
Date: Fri, 20 Jul 2018 18:22:22 +0200
Message-Id: <1532103744-31902-2-git-send-email-joro@8bytes.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1532103744-31902-1-git-send-email-joro@8bytes.org>
References: <1532103744-31902-1-git-send-email-joro@8bytes.org>

From: Joerg Roedel

The ring-buffer is accessed in the NMI handler, so we had better avoid
faulting on it. Sync the vmalloc range with all page-tables in the
system to make sure everyone has it mapped.

This fixes a WARN_ON_ONCE() that can be triggered with PTI enabled on
x86-32:

  WARNING: CPU: 4 PID: 0 at arch/x86/mm/fault.c:320 vmalloc_fault+0x220/0x230

It triggers because, with PTI enabled on a PAE kernel, the PMDs are no
longer shared between the page-tables, so vmalloc changes do not
propagate automatically.
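Illustrative note (not part of the patch): a minimal sketch of the pattern the
fix applies — any vmalloc-area mapping that an NMI handler may later touch must
be propagated to every page-table up front, since the NMI path cannot tolerate
a lazy vmalloc fault. The helper name alloc_nmi_visible_buffer is hypothetical;
vzalloc() and vmalloc_sync_all() are real kernel APIs of this era.

```c
/* Hypothetical helper showing the fix's pattern, not the patch itself. */
void *alloc_nmi_visible_buffer(unsigned long size)
{
	void *buf = vzalloc(size);	/* mapping lives in the vmalloc area */

	if (!buf)
		return NULL;

	/*
	 * With PTI on an x86-32 PAE kernel the vmalloc PMDs are not
	 * shared between page-tables, and a new mapping is normally
	 * faulted in lazily. An NMI handler must never take that fault,
	 * so push the mapping into all page-tables before first use.
	 */
	vmalloc_sync_all();

	return buf;
}
```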
Signed-off-by: Joerg Roedel
---
 kernel/events/ring_buffer.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 5d3cf40..7b0e9aa 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -814,6 +814,9 @@ static void rb_free_work(struct work_struct *work)
 
 	vfree(base);
 	kfree(rb);
+
+	/* Make sure buffer is unmapped in all page-tables */
+	vmalloc_sync_all();
 }
 
 void rb_free(struct ring_buffer *rb)
@@ -840,6 +843,13 @@ struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
 
 	if (!all_buf)
 		goto fail_all_buf;
+
+	/*
+	 * The buffer is accessed in NMI handlers, make sure it is
+	 * mapped in all page-tables in the system so that we don't
+	 * fault on the range in an NMI handler.
+	 */
+	vmalloc_sync_all();
 
 	rb->user_page = all_buf;
 	rb->data_pages[0] = all_buf + PAGE_SIZE;
 	if (nr_pages) {