From patchwork Thu May 25 12:57:04 2023
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 13255205
Message-ID: <20230525124504.633469722@linutronix.de>
From: Thomas Gleixner <tglx@linutronix.de>
To: linux-mm@kvack.org
Cc: Andrew Morton, Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes,
 Peter Zijlstra, Baoquan He
Subject: [V2 patch 2/6] mm/vmalloc: Avoid iterating over per CPU vmap blocks twice
References: <20230525122342.109672430@linutronix.de>
MIME-Version: 1.0
Date: Thu, 25 May 2023 14:57:04 +0200 (CEST)

_vm_unmap_aliases() walks the per CPU xarrays to find partially unmapped
blocks and then walks the
per CPU free lists to purge fragmented blocks. Arguably that's a waste of
CPU cycles and cache lines as the full xarray walk already touches every
block.

Avoid this double iteration:

  - Split out the code to purge one block and the code to free the local
    purge list into helper functions.

  - Try to purge the fragmented blocks in the xarray walk before looking at
    their dirty space.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Christoph Hellwig
Reviewed-by: Lorenzo Stoakes
---
V2: Fix coding style issues - Christoph
---
 mm/vmalloc.c | 70 ++++++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 46 insertions(+), 24 deletions(-)

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2086,39 +2086,54 @@ static void free_vmap_block(struct vmap_
 	kfree_rcu(vb, rcu_head);
 }
 
+static bool purge_fragmented_block(struct vmap_block *vb,
+		struct vmap_block_queue *vbq, struct list_head *purge_list)
+{
+	if (vb->free + vb->dirty != VMAP_BBMAP_BITS ||
+	    vb->dirty == VMAP_BBMAP_BITS)
+		return false;
+
+	/* prevent further allocs after releasing lock */
+	vb->free = 0;
+	/* prevent purging it again */
+	vb->dirty = VMAP_BBMAP_BITS;
+	vb->dirty_min = 0;
+	vb->dirty_max = VMAP_BBMAP_BITS;
+	spin_lock(&vbq->lock);
+	list_del_rcu(&vb->free_list);
+	spin_unlock(&vbq->lock);
+	list_add_tail(&vb->purge, purge_list);
+	return true;
+}
+
+static void free_purged_blocks(struct list_head *purge_list)
+{
+	struct vmap_block *vb, *n_vb;
+
+	list_for_each_entry_safe(vb, n_vb, purge_list, purge) {
+		list_del(&vb->purge);
+		free_vmap_block(vb);
+	}
+}
+
 static void purge_fragmented_blocks(int cpu)
 {
 	LIST_HEAD(purge);
 	struct vmap_block *vb;
-	struct vmap_block *n_vb;
 	struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
-
-		if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS))
+		if (vb->free + vb->dirty != VMAP_BBMAP_BITS ||
+		    vb->dirty == VMAP_BBMAP_BITS)
 			continue;
 
 		spin_lock(&vb->lock);
-		if (vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS) {
-			vb->free = 0; /* prevent further allocs after releasing lock */
-			vb->dirty = VMAP_BBMAP_BITS; /* prevent purging it again */
-			vb->dirty_min = 0;
-			vb->dirty_max = VMAP_BBMAP_BITS;
-			spin_lock(&vbq->lock);
-			list_del_rcu(&vb->free_list);
-			spin_unlock(&vbq->lock);
-			spin_unlock(&vb->lock);
-			list_add_tail(&vb->purge, &purge);
-		} else
-			spin_unlock(&vb->lock);
+		purge_fragmented_block(vb, vbq, &purge);
+		spin_unlock(&vb->lock);
 	}
 	rcu_read_unlock();
-
-	list_for_each_entry_safe(vb, n_vb, &purge, purge) {
-		list_del(&vb->purge);
-		free_vmap_block(vb);
-	}
+	free_purged_blocks(&purge);
 }
 
 static void purge_fragmented_blocks_allcpus(void)
@@ -2226,12 +2241,13 @@ static void vb_free(unsigned long addr, 
 
 static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 {
+	LIST_HEAD(purge_list);
 	int cpu;
 
 	if (unlikely(!vmap_initialized))
 		return;
-
 	might_sleep();
+	mutex_lock(&vmap_purge_lock);
 
 	for_each_possible_cpu(cpu) {
 		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
@@ -2241,7 +2257,14 @@ static void _vm_unmap_aliases(unsigned l
 		rcu_read_lock();
 		xa_for_each(&vbq->vmap_blocks, idx, vb) {
 			spin_lock(&vb->lock);
-			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
+
+			/*
+			 * Try to purge a fragmented block first. If it's
+			 * not purgeable, check whether there is dirty
+			 * space to be flushed.
+			 */
+			if (!purge_fragmented_block(vb, vbq, &purge_list) &&
+			    vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;
 				unsigned long s, e;
@@ -2257,9 +2280,8 @@ static void _vm_unmap_aliases(unsigned l
 		}
 		rcu_read_unlock();
 	}
+	free_purged_blocks(&purge_list);
 
-	mutex_lock(&vmap_purge_lock);
-	purge_fragmented_blocks_allcpus();
 	if (!__purge_vmap_area_lazy(start, end) && flush)
 		flush_tlb_kernel_range(start, end);
 	mutex_unlock(&vmap_purge_lock);