From patchwork Mon Aug 21 20:20:16 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13359783
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-perf-users@vger.kernel.org, peterz@infradead.org,
	mingo@redhat.com, acme@kernel.org, urezki@gmail.com,
	hch@infradead.org, lstoakes@gmail.com
Subject: [RFC PATCH 4/4] perf: Use folios for the aux ringbuffer & pagefault path
Date: Mon, 21 Aug 2023 21:20:16 +0100
Message-Id: <20230821202016.2910321-5-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230821202016.2910321-1-willy@infradead.org>
References: <20230821202016.2910321-1-willy@infradead.org>
MIME-Version: 1.0

Instead of allocating a non-compound page and splitting it, allocate a
folio and make its refcount the number of pages in it.  That way, when
we free each page in the folio, we only actually free the memory once
the last page in the folio has been freed.  Keeping the memory intact
is better for the MM system than allocating it and splitting it.

Now, instead of setting each page->mapping, we only set folio->mapping,
which is better for our cacheline usage as well as helping towards the
goal of eliminating page->mapping.  We remove the setting of
page->index; I do not believe this is needed.  And we return with the
folio locked, which the fault handler should have been doing all along.
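The refcounting scheme described above can be summarised in a short
sketch (illustrative only: the helper names are made up, but the calls
mirror rb_alloc_aux_folio()/rb_free_aux_page() in the diff below):

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Sketch: allocate one high-order folio and take one reference per
 * constituent page, so the memory is only returned to the allocator
 * once the last per-page reference has been dropped.
 */
static void *aux_block_alloc(int node, int order)
{
	struct folio *folio;

	folio = __folio_alloc_node(GFP_KERNEL | __GFP_ZERO, order, node);
	if (!folio)
		return NULL;

	/* refcount becomes 1 << order: one reference per page */
	if (order)
		folio_ref_add(folio, (1 << order) - 1);

	return folio_address(folio);
}

static void aux_block_free_page(void *addr)
{
	/* Only frees the folio when the final per-page reference drops. */
	folio_put(virt_to_folio(addr));
}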
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 kernel/events/core.c        | 13 +++++++---
 kernel/events/ring_buffer.c | 51 ++++++++++++++++---------------------
 2 files changed, 31 insertions(+), 33 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4c72a41f11af..59d4f7c48c8c 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -29,6 +29,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -6083,6 +6084,7 @@ static vm_fault_t perf_mmap_fault(struct vm_fault *vmf)
 {
 	struct perf_event *event = vmf->vma->vm_file->private_data;
 	struct perf_buffer *rb;
+	struct folio *folio;
 	vm_fault_t ret = VM_FAULT_SIGBUS;
 
 	if (vmf->flags & FAULT_FLAG_MKWRITE) {
@@ -6102,12 +6104,15 @@ static vm_fault_t perf_mmap_fault(struct vm_fault *vmf)
 	vmf->page = perf_mmap_to_page(rb, vmf->pgoff);
 	if (!vmf->page)
 		goto unlock;
+	folio = page_folio(vmf->page);
 
-	get_page(vmf->page);
-	vmf->page->mapping = vmf->vma->vm_file->f_mapping;
-	vmf->page->index   = vmf->pgoff;
+	folio_get(folio);
+	rcu_read_unlock();
+	folio_lock(folio);
+	if (!folio->mapping)
+		folio->mapping = vmf->vma->vm_file->f_mapping;
 
-	ret = 0;
+	return VM_FAULT_LOCKED;
 
 unlock:
 	rcu_read_unlock();
diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 56939dc3bf33..0a026e5ff4f5 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -606,39 +606,28 @@ long perf_output_copy_aux(struct perf_output_handle *aux_handle,
 
 #define PERF_AUX_GFP	(GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN | __GFP_NORETRY)
 
-static struct page *rb_alloc_aux_page(int node, int order)
+static struct folio *rb_alloc_aux_folio(int node, int order)
 {
-	struct page *page;
+	struct folio *folio;
 
 	if (order > MAX_ORDER)
 		order = MAX_ORDER;
 
 	do {
-		page = alloc_pages_node(node, PERF_AUX_GFP, order);
-	} while (!page && order--);
-
-	if (page && order) {
-		/*
-		 * Communicate the allocation size to the driver:
-		 * if we managed to secure a high-order allocation,
-		 * set its first page's private to this order;
-		 * !PagePrivate(page) means it's just a normal page.
-		 */
-		split_page(page, order);
-		SetPagePrivate(page);
-		set_page_private(page, order);
-	}
+		folio = __folio_alloc_node(PERF_AUX_GFP, order, node);
+	} while (!folio && order--);
 
-	return page;
+	if (order)
+		folio_ref_add(folio, (1 << order) - 1);
+	return folio;
 }
 
 static void rb_free_aux_page(struct perf_buffer *rb, int idx)
 {
-	struct page *page = virt_to_page(rb->aux_pages[idx]);
+	struct folio *folio = virt_to_folio(rb->aux_pages[idx]);
 
-	ClearPagePrivate(page);
-	page->mapping = NULL;
-	__free_page(page);
+	folio->mapping = NULL;
+	folio_put(folio);
 }
 
 static void __rb_free_aux(struct perf_buffer *rb)
@@ -672,7 +661,7 @@ int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
 		 pgoff_t pgoff, int nr_pages, long watermark, int flags)
 {
 	bool overwrite = !(flags & RING_BUFFER_WRITABLE);
-	int node = (event->cpu == -1) ? -1 : cpu_to_node(event->cpu);
+	int node = (event->cpu == -1) ? numa_mem_id() : cpu_to_node(event->cpu);
 	int ret = -ENOMEM, max_order;
 
 	if (!has_aux(event))
@@ -707,17 +696,21 @@ int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
 	rb->free_aux = event->pmu->free_aux;
 
 	for (rb->aux_nr_pages = 0; rb->aux_nr_pages < nr_pages;) {
-		struct page *page;
-		int last, order;
+		struct folio *folio;
+		unsigned int i, nr, order;
+		void *addr;
 
 		order = min(max_order, ilog2(nr_pages - rb->aux_nr_pages));
-		page = rb_alloc_aux_page(node, order);
-		if (!page)
+		folio = rb_alloc_aux_folio(node, order);
+		if (!folio)
 			goto out;
+		addr = folio_address(folio);
+		nr = folio_nr_pages(folio);
 
-		for (last = rb->aux_nr_pages + (1 << page_private(page));
-		     last > rb->aux_nr_pages; rb->aux_nr_pages++)
-			rb->aux_pages[rb->aux_nr_pages] = page_address(page++);
+		for (i = 0; i < nr; i++) {
+			rb->aux_pages[rb->aux_nr_pages++] = addr;
+			addr += PAGE_SIZE;
+		}
 	}
 
 	/*
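For the fault path, the net effect of the core.c hunk is that perf now
follows the locked-folio return convention: take a reference, lock the
folio, set folio->mapping once, and return VM_FAULT_LOCKED instead of 0.
A reduced sketch of that flow, with the lookup helper as a hypothetical
stand-in for perf_mmap_to_page() and the rb/RCU handling omitted:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

/* Hypothetical stand-in for perf_mmap_to_page(). */
static struct page *my_lookup_page(struct vm_fault *vmf);

static vm_fault_t my_mmap_fault(struct vm_fault *vmf)
{
	struct page *page = my_lookup_page(vmf);
	struct folio *folio;

	if (!page)
		return VM_FAULT_SIGBUS;

	folio = page_folio(page);
	folio_get(folio);
	folio_lock(folio);

	/* Set the mapping once per folio rather than once per page. */
	if (!folio->mapping)
		folio->mapping = vmf->vma->vm_file->f_mapping;

	vmf->page = page;
	/* Core mm code unlocks the folio once the fault completes. */
	return VM_FAULT_LOCKED;
}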