From patchwork Tue May 23 14:02:12 2023
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 13252377
Message-ID: <20230523140002.634591885@linutronix.de>
From: Thomas Gleixner
To: linux-mm@kvack.org
Cc: Andrew Morton, Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes,
    Peter Zijlstra, Baoquan He
Subject: [patch 2/6] mm/vmalloc: Avoid iterating over per CPU vmap blocks twice
References: <20230523135902.517032811@linutronix.de>
MIME-Version: 1.0
Date: Tue, 23 May 2023 16:02:12 +0200 (CEST)

_vm_unmap_aliases() walks the per CPU xarrays to find partially unmapped
blocks and then walks the per
CPU free lists to purge fragmented blocks. Arguably that's a waste of CPU
cycles and cache lines as the full xarray walk already touches every block.

Avoid this double iteration:

 - Split out the code to purge one block and the code to free the local
   purge list into helper functions.

 - Try to purge the fragmented blocks in the xarray walk before looking at
   their dirty space.

Signed-off-by: Thomas Gleixner
---
 mm/vmalloc.c |   66 ++++++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 43 insertions(+), 23 deletions(-)

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2086,39 +2086,52 @@ static void free_vmap_block(struct vmap_
 	kfree_rcu(vb, rcu_head);
 }
 
+static bool purge_fragmented_block(struct vmap_block *vb, struct vmap_block_queue *vbq,
+				   struct list_head *purge_list)
+{
+	if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS))
+		return false;
+
+	/* prevent further allocs after releasing lock */
+	vb->free = 0;
+	/* prevent purging it again */
+	vb->dirty = VMAP_BBMAP_BITS;
+	vb->dirty_min = 0;
+	vb->dirty_max = VMAP_BBMAP_BITS;
+	spin_lock(&vbq->lock);
+	list_del_rcu(&vb->free_list);
+	spin_unlock(&vbq->lock);
+	list_add_tail(&vb->purge, purge_list);
+	return true;
+}
+
+static void free_purged_blocks(struct list_head *purge_list)
+{
+	struct vmap_block *vb, *n_vb;
+
+	list_for_each_entry_safe(vb, n_vb, purge_list, purge) {
+		list_del(&vb->purge);
+		free_vmap_block(vb);
+	}
+}
+
 static void purge_fragmented_blocks(int cpu)
 {
 	LIST_HEAD(purge);
 	struct vmap_block *vb;
-	struct vmap_block *n_vb;
 	struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
-
 		if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS))
 			continue;
 
 		spin_lock(&vb->lock);
-		if (vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS) {
-			vb->free = 0; /* prevent further allocs after releasing lock */
-			vb->dirty = VMAP_BBMAP_BITS; /* prevent purging it again */
-			vb->dirty_min = 0;
-			vb->dirty_max = VMAP_BBMAP_BITS;
-			spin_lock(&vbq->lock);
-			list_del_rcu(&vb->free_list);
-			spin_unlock(&vbq->lock);
-			spin_unlock(&vb->lock);
-			list_add_tail(&vb->purge, &purge);
-		} else
-			spin_unlock(&vb->lock);
+		purge_fragmented_block(vb, vbq, &purge);
+		spin_unlock(&vb->lock);
 	}
 	rcu_read_unlock();
-
-	list_for_each_entry_safe(vb, n_vb, &purge, purge) {
-		list_del(&vb->purge);
-		free_vmap_block(vb);
-	}
+	free_purged_blocks(&purge);
 }
 
 static void purge_fragmented_blocks_allcpus(void)
@@ -2226,12 +2239,13 @@ static void vb_free(unsigned long addr, 
 
 static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 {
+	LIST_HEAD(purge_list);
 	int cpu;
 
 	if (unlikely(!vmap_initialized))
 		return;
 
-	might_sleep();
+	mutex_lock(&vmap_purge_lock);
 
 	for_each_possible_cpu(cpu) {
 		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
@@ -2241,7 +2255,14 @@ static void _vm_unmap_aliases(unsigned l
 		rcu_read_lock();
 		xa_for_each(&vbq->vmap_blocks, idx, vb) {
 			spin_lock(&vb->lock);
-			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
+
+			/*
+			 * Try to purge a fragmented block first. If it's
+			 * not purgeable, check whether there is dirty
+			 * space to be flushed.
+			 */
+			if (!purge_fragmented_block(vb, vbq, &purge_list) &&
+			    vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;
 				unsigned long s, e;
 
@@ -2257,9 +2278,8 @@ static void _vm_unmap_aliases(unsigned l
 		}
 		rcu_read_unlock();
 	}
+	free_purged_blocks(&purge_list);
 
-	mutex_lock(&vmap_purge_lock);
-	purge_fragmented_blocks_allcpus();
 	if (!__purge_vmap_area_lazy(start, end) && flush)
 		flush_tlb_kernel_range(start, end);
 	mutex_unlock(&vmap_purge_lock);
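---

The merged-walk pattern above can be illustrated with a minimal userspace
sketch. Everything here (`block`, `try_purge`, `walk_blocks`, `NBITS`) is an
illustrative stand-in, not the kernel API: a single pass first tries to
purge a block and only falls back to flushing its dirty space when the
purge condition does not hold, instead of doing two separate walks.

```c
#include <stdbool.h>
#include <stddef.h>

#define NBITS 64	/* stand-in for VMAP_BBMAP_BITS */

struct block {
	unsigned int free;	/* still-allocatable bits */
	unsigned int dirty;	/* unmapped but not yet flushed bits */
	bool purged;
	bool flushed;
};

/* Mirror of the purge condition: fully consumed, but not fully dirty. */
static bool try_purge(struct block *b)
{
	if (!(b->free + b->dirty == NBITS && b->dirty != NBITS))
		return false;

	b->free = 0;		/* prevent further allocs */
	b->dirty = NBITS;	/* prevent purging it again */
	b->purged = true;
	return true;
}

/* Single walk: purge if possible, otherwise flush pending dirty space. */
static void walk_blocks(struct block *blocks, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		struct block *b = &blocks[i];

		if (!try_purge(b) && b->dirty && b->dirty != NBITS)
			b->flushed = true;
	}
}
```

The point of ordering the checks this way is the same as in the patch: a
purgeable block never needs a separate flush pass, so testing the purge
condition first avoids touching each block twice.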