From patchwork Thu May 25 12:57:03 2023
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 13255204
Message-ID: <20230525124504.573987880@linutronix.de>
From: Thomas Gleixner
To: linux-mm@kvack.org
Cc: Andrew Morton, Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes, Peter Zijlstra, Baoquan He
Subject: [V2 patch 1/6] mm/vmalloc: Prevent stale TLBs in fully utilized blocks
References: <20230525122342.109672430@linutronix.de>
Date: Thu, 25 May 2023 14:57:03 +0200 (CEST)

_vm_unmap_aliases() is used to ensure that no unflushed TLB entries for a
page are left in the system. This is required due to the lazy TLB flush
mechanism in vmalloc.

It tries to achieve that by walking the per CPU free lists, but those do
not contain fully utilized vmap blocks because a block is removed from the
free list once its free space drops to zero. If such a block is not fully
unmapped, it is not on the purge list either. So neither the per CPU list
iteration nor the purge list walk finds the block, and if the page was
mapped via such a block and the TLB has not been flushed yet, the guarantee
of _vm_unmap_aliases() that there are no stale TLBs after returning is
broken:

  x = vb_alloc()
  // Removes vmap_block from free list because vb->free became 0
  vb_free(x)
  // Unmaps page and marks it in the dirty_min/max range
  // Block still has mappings and is not put on the purge list

  // Page is reused
  vm_unmap_aliases()
  // Can't find the vmap block with the dirty space -> FAIL

So instead of walking the per CPU free lists, walk the per CPU xarrays,
which hold pointers to _all_ active blocks in the system, including those
removed from the free lists.
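For illustration, a minimal user-space model of that bookkeeping. Everything
in it (toy_block, all_blocks, the page counts) is made up for the sketch; it
only mimics how a block whose free space hits zero drops off the free list
while staying reachable through the per CPU xarray:

/* Toy model of the vmap_block bookkeeping described above (hypothetical
 * names, not kernel code): a block whose free space hits zero leaves the
 * "free" list but stays visible in the "all blocks" registry, so only a
 * walk of the latter can find its unflushed dirty range. */
#include <stdbool.h>
#include <stdio.h>

#define NBLOCKS 2

struct toy_block {
	int free;		/* pages still allocatable */
	int dirty;		/* pages freed but not yet TLB flushed */
	bool on_free_list;	/* mimics vbq->free membership */
};

/* mimics the per CPU xarray: every active block is always in here */
static struct toy_block all_blocks[NBLOCKS] = {
	{ .free = 4, .dirty = 0, .on_free_list = true },
	{ .free = 4, .dirty = 0, .on_free_list = true },
};

static void toy_vb_alloc_all(struct toy_block *vb)
{
	vb->free = 0;
	vb->on_free_list = false;	/* removed once free space is gone */
}

static void toy_vb_free(struct toy_block *vb, int pages)
{
	vb->dirty += pages;		/* unmapped, TLB flush still pending */
}

int main(void)
{
	toy_vb_alloc_all(&all_blocks[0]);
	toy_vb_free(&all_blocks[0], 1);

	for (int i = 0; i < NBLOCKS; i++) {
		struct toy_block *vb = &all_blocks[i];

		printf("block %d: dirty=%d, seen by free-list walk: %s, "
		       "seen by xarray walk: yes\n",
		       i, vb->dirty, vb->on_free_list ? "yes" : "no");
	}
	return 0;
}

Block 0 ends up with a pending flush that a free-list-only walk cannot see,
which is exactly the case closed by the change below.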
Fixes: db64fe02258f ("mm: rewrite vmap layer")
Signed-off-by: Thomas Gleixner
Reviewed-by: Christoph Hellwig
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Uladzislau Rezki (Sony)
---
V2: Amended changelog so it's clear that the block is not on purge list either.
---
 mm/vmalloc.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2236,9 +2236,10 @@ static void _vm_unmap_aliases(unsigned l
 	for_each_possible_cpu(cpu) {
 		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
 		struct vmap_block *vb;
+		unsigned long idx;

 		rcu_read_lock();
-		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
+		xa_for_each(&vbq->vmap_blocks, idx, vb) {
 			spin_lock(&vb->lock);
 			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;

From patchwork Thu May 25 12:57:04 2023
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 13255205
Message-ID: <20230525124504.633469722@linutronix.de>
From: Thomas Gleixner
To: linux-mm@kvack.org
Cc: Andrew Morton, Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes, Peter Zijlstra, Baoquan He
Subject: [V2 patch 2/6] mm/vmalloc: Avoid iterating over per CPU vmap blocks twice
References: <20230525122342.109672430@linutronix.de>
Date: Thu, 25 May 2023 14:57:04 +0200 (CEST)

_vm_unmap_aliases() walks the per CPU xarrays to find partially unmapped
blocks and then walks the per CPU free lists to purge fragmented blocks.
Arguably that's a waste of CPU cycles and cache lines, as the full xarray
walk already touches every block.

Avoid this double iteration:

 - Split out the code to purge one block and the code to free the local
   purge list into helper functions.
 - Try to purge the fragmented blocks in the xarray walk before looking at
   their dirty space.

Signed-off-by: Thomas Gleixner
Reviewed-by: Christoph Hellwig
Reviewed-by: Lorenzo Stoakes
---
V2: Fix coding style issues - Christoph
---
 mm/vmalloc.c |   70 ++++++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 46 insertions(+), 24 deletions(-)

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2086,39 +2086,54 @@ static void free_vmap_block(struct vmap_
 	kfree_rcu(vb, rcu_head);
 }

+static bool purge_fragmented_block(struct vmap_block *vb,
+		struct vmap_block_queue *vbq, struct list_head *purge_list)
+{
+	if (vb->free + vb->dirty != VMAP_BBMAP_BITS ||
+	    vb->dirty == VMAP_BBMAP_BITS)
+		return false;
+
+	/* prevent further allocs after releasing lock */
+	vb->free = 0;
+	/* prevent purging it again */
+	vb->dirty = VMAP_BBMAP_BITS;
+	vb->dirty_min = 0;
+	vb->dirty_max = VMAP_BBMAP_BITS;
+	spin_lock(&vbq->lock);
+	list_del_rcu(&vb->free_list);
+	spin_unlock(&vbq->lock);
+	list_add_tail(&vb->purge, purge_list);
+	return true;
+}
+
+static void free_purged_blocks(struct list_head *purge_list)
+{
+	struct vmap_block *vb, *n_vb;
+
+	list_for_each_entry_safe(vb, n_vb, purge_list, purge) {
+		list_del(&vb->purge);
+		free_vmap_block(vb);
+	}
+}
+
 static void purge_fragmented_blocks(int cpu)
 {
 	LIST_HEAD(purge);
 	struct vmap_block *vb;
-	struct vmap_block *n_vb;
 	struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);

 	rcu_read_lock();
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
-
-		if (!(vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS))
+		if (vb->free + vb->dirty != VMAP_BBMAP_BITS ||
+		    vb->dirty == VMAP_BBMAP_BITS)
 			continue;

 		spin_lock(&vb->lock);
-		if (vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS) {
-			vb->free = 0; /* prevent further allocs after releasing lock */
-			vb->dirty = VMAP_BBMAP_BITS; /* prevent purging it again */
-			vb->dirty_min = 0;
-			vb->dirty_max = VMAP_BBMAP_BITS;
-			spin_lock(&vbq->lock);
-			list_del_rcu(&vb->free_list);
-			spin_unlock(&vbq->lock);
-			spin_unlock(&vb->lock);
-			list_add_tail(&vb->purge, &purge);
-		} else
-			spin_unlock(&vb->lock);
+		purge_fragmented_block(vb, vbq, &purge);
+		spin_unlock(&vb->lock);
 	}
 	rcu_read_unlock();
-
-	list_for_each_entry_safe(vb, n_vb, &purge, purge) {
-		list_del(&vb->purge);
-		free_vmap_block(vb);
-	}
+	free_purged_blocks(&purge);
 }

 static void purge_fragmented_blocks_allcpus(void)
@@ -2226,12 +2241,13 @@ static void vb_free(unsigned long addr,

 static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 {
+	LIST_HEAD(purge_list);
 	int cpu;

 	if (unlikely(!vmap_initialized))
 		return;

-	might_sleep();

+	mutex_lock(&vmap_purge_lock);
 	for_each_possible_cpu(cpu) {
 		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
 		struct vmap_block *vb;
@@ -2241,7 +2257,14 @@ static void _vm_unmap_aliases(unsigned l
 		rcu_read_lock();
 		xa_for_each(&vbq->vmap_blocks, idx, vb) {
 			spin_lock(&vb->lock);
-			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
+
+			/*
+			 * Try to purge a fragmented block first. If it's
+			 * not purgeable, check whether there is dirty
+			 * space to be flushed.
+			 */
+			if (!purge_fragmented_block(vb, vbq, &purge_list) &&
+			    vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;
 				unsigned long s, e;

@@ -2257,9 +2280,8 @@ static void _vm_unmap_aliases(unsigned l
 		}
 		rcu_read_unlock();
 	}
+	free_purged_blocks(&purge_list);

-	mutex_lock(&vmap_purge_lock);
-	purge_fragmented_blocks_allcpus();
 	if (!__purge_vmap_area_lazy(start, end) && flush)
 		flush_tlb_kernel_range(start, end);
 	mutex_unlock(&vmap_purge_lock);

From patchwork Thu May 25 12:57:05 2023
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 13255207
Message-ID: <20230525124504.692056496@linutronix.de>
From: Thomas Gleixner
To: linux-mm@kvack.org
Cc: Andrew Morton, Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes, Peter Zijlstra, Baoquan He
Subject: [V2 patch 3/6] mm/vmalloc: Prevent flushing dirty space over and over
References: <20230525122342.109672430@linutronix.de>
Date: Thu, 25 May 2023 14:57:05 +0200 (CEST)

vmap blocks which have active mappings cannot be purged. Allocations which
have been freed are accounted for in vmap_block::dirty_min/max, so that
they can be detected in _vm_unmap_aliases() as potentially stale TLBs.

If there are several invocations of _vm_unmap_aliases(), then each of them
will flush the dirty range. That's pointless and just increases the
probability of full TLB flushes.

Avoid that by resetting the flush range after accounting for it. That's
safe versus other invocations of _vm_unmap_aliases() because this is all
serialized with vmap_purge_lock.
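For illustration, a small user-space sketch of the flush-range bookkeeping
this change relies on; the address, page-shift and block-size constants are
made up for the sketch and this is not kernel code:

/* Toy illustration of how the dirty_min/dirty_max window is turned into a
 * flush range and then reset, so a second _vm_unmap_aliases() call has
 * nothing left to flush for this block. */
#include <stdio.h>

#define TOY_PAGE_SHIFT	12
#define TOY_BBMAP_BITS	1024UL		/* pages per block in this model */

int main(void)
{
	unsigned long va_start = 0x10000000UL;		/* made-up block start */
	unsigned long dirty_min = 3, dirty_max = 7;	/* freed pages [3, 7) */

	unsigned long s = va_start + (dirty_min << TOY_PAGE_SHIFT);
	unsigned long e = va_start + (dirty_max << TOY_PAGE_SHIFT);

	printf("first call flushes  [%#lx, %#lx)\n", s, e);

	/* the reset this patch adds: mark the window empty after accounting */
	dirty_min = TOY_BBMAP_BITS;
	dirty_max = 0;

	/* a later call sees dirty_max == 0 and skips this block entirely */
	printf("second call flushes nothing (dirty_max=%lu)\n", dirty_max);
	return 0;
}

Resetting dirty_min/dirty_max after the range has been accounted is what
keeps a second call from flushing the same pages again.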
Signed-off-by: Thomas Gleixner
Reviewed-by: Baoquan He
Reviewed-by: Christoph Hellwig
Reviewed-by: Lorenzo Stoakes
---
 mm/vmalloc.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2226,7 +2226,7 @@ static void vb_free(unsigned long addr,

 	spin_lock(&vb->lock);

-	/* Expand dirty range */
+	/* Expand the not yet TLB flushed dirty range */
 	vb->dirty_min = min(vb->dirty_min, offset);
 	vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));

@@ -2264,7 +2264,7 @@ static void _vm_unmap_aliases(unsigned l
 			 * space to be flushed.
 			 */
 			if (!purge_fragmented_block(vb, vbq, &purge_list) &&
-			    vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
+			    vb->dirty_max && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;
 				unsigned long s, e;

@@ -2274,6 +2274,10 @@ static void _vm_unmap_aliases(unsigned l
 				start = min(s, start);
 				end = max(e, end);

+				/* Prevent that this is flushed again */
+				vb->dirty_min = VMAP_BBMAP_BITS;
+				vb->dirty_max = 0;
+
 				flush = 1;
 			}
 			spin_unlock(&vb->lock);

From patchwork Thu May 25 12:57:07 2023
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 13255208
Message-ID: <20230525124504.750481992@linutronix.de>
From: Thomas Gleixner
To: linux-mm@kvack.org
Cc: Andrew Morton, Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes, Peter Zijlstra, Baoquan He
Subject: [V2 patch 4/6] mm/vmalloc: Check free space in vmap_block lockless
References: <20230525122342.109672430@linutronix.de>
Date: Thu, 25 May 2023 14:57:07 +0200 (CEST)

vb_alloc() unconditionally locks a vmap_block on the free list to check
the free space.
This can be done locklessly because vmap_block::free never increases; it
is only decreased on allocations.

Check the free space locklessly and, only if that succeeds, recheck under
the lock.

Signed-off-by: Thomas Gleixner
Reviewed-by: Uladzislau Rezki (Sony)
Reviewed-by: Christoph Hellwig
Reviewed-by: Lorenzo Stoakes
---
 mm/vmalloc.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2168,6 +2168,9 @@ static void *vb_alloc(unsigned long size
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 		unsigned long pages_off;

+		if (READ_ONCE(vb->free) < (1UL << order))
+			continue;
+
 		spin_lock(&vb->lock);
 		if (vb->free < (1UL << order)) {
 			spin_unlock(&vb->lock);
@@ -2176,7 +2179,7 @@ static void *vb_alloc(unsigned long size

 		pages_off = VMAP_BBMAP_BITS - vb->free;
 		vaddr = vmap_block_vaddr(vb->va->va_start, pages_off);
-		vb->free -= 1UL << order;
+		WRITE_ONCE(vb->free, vb->free - (1UL << order));
 		bitmap_set(vb->used_map, pages_off, (1UL << order));
 		if (vb->free == 0) {
 			spin_lock(&vbq->lock);
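The pattern above, a cheap lockless peek followed by a recheck under the
lock, can be sketched in plain user-space C. The names and page counts are
invented for this sketch and GCC/Clang atomic builtins stand in for
READ_ONCE()/WRITE_ONCE(); it is not the kernel implementation:

/* Sketch of the lockless pre-check pattern (toy code): read the counter
 * without the lock to skip hopeless candidates cheaply, then recheck under
 * the lock because the value may still have changed in the meantime. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_block {
	pthread_mutex_t lock;
	unsigned long free;	/* only ever decreases, like vmap_block::free */
};

static bool toy_try_alloc(struct toy_block *vb, unsigned long need)
{
	/* lockless peek: a stale value can only overestimate availability */
	if (__atomic_load_n(&vb->free, __ATOMIC_RELAXED) < need)
		return false;		/* definitely too small, skip the lock */

	pthread_mutex_lock(&vb->lock);
	if (vb->free < need) {		/* recheck: someone else got there first */
		pthread_mutex_unlock(&vb->lock);
		return false;
	}
	/* like the WRITE_ONCE() in the patch: publish the new value */
	__atomic_store_n(&vb->free, vb->free - need, __ATOMIC_RELAXED);
	pthread_mutex_unlock(&vb->lock);
	return true;
}

int main(void)
{
	struct toy_block vb = { .lock = PTHREAD_MUTEX_INITIALIZER, .free = 4 };

	printf("alloc 2: %s\n", toy_try_alloc(&vb, 2) ? "ok" : "skipped");
	printf("alloc 4: %s\n", toy_try_alloc(&vb, 4) ? "ok" : "skipped");
	return 0;
}

Because the counter only ever goes down, a stale lockless read can only
overestimate the free space, so skipping on the lockless check is always
safe and the locked recheck catches the rest.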
From patchwork Thu May 25 12:57:08 2023
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 13255209
Message-ID: <20230525124504.807356682@linutronix.de>
From: Thomas Gleixner
To: linux-mm@kvack.org
Cc: Andrew Morton, Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes, Peter Zijlstra, Baoquan He
Subject: [V2 patch 5/6] mm/vmalloc: Add missing READ/WRITE_ONCE() annotations
References: <20230525122342.109672430@linutronix.de>
Date: Thu, 25 May 2023 14:57:08 +0200 (CEST)

purge_fragmented_blocks() accesses vmap_block::free and vmap_block::dirty
locklessly for a quick check.
Add the missing READ/WRITE_ONCE() annotations.

Signed-off-by: Thomas Gleixner
Reviewed-by: Uladzislau Rezki (Sony)
Reviewed-by: Christoph Hellwig
---
 mm/vmalloc.c |   13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2094,9 +2094,9 @@ static bool purge_fragmented_block(struc
 		return false;

 	/* prevent further allocs after releasing lock */
-	vb->free = 0;
+	WRITE_ONCE(vb->free, 0);
 	/* prevent purging it again */
-	vb->dirty = VMAP_BBMAP_BITS;
+	WRITE_ONCE(vb->dirty, VMAP_BBMAP_BITS);
 	vb->dirty_min = 0;
 	vb->dirty_max = VMAP_BBMAP_BITS;
 	spin_lock(&vbq->lock);
@@ -2124,8 +2124,11 @@ static void purge_fragmented_blocks(int

 	rcu_read_lock();
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
-		if (vb->free + vb->dirty != VMAP_BBMAP_BITS ||
-		    vb->dirty == VMAP_BBMAP_BITS)
+		unsigned long free = READ_ONCE(vb->free);
+		unsigned long dirty = READ_ONCE(vb->dirty);
+
+		if (free + dirty != VMAP_BBMAP_BITS ||
+		    dirty == VMAP_BBMAP_BITS)
 			continue;

 		spin_lock(&vb->lock);
@@ -2233,7 +2236,7 @@ static void vb_free(unsigned long addr,
 	vb->dirty_min = min(vb->dirty_min, offset);
 	vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));

-	vb->dirty += 1UL << order;
+	WRITE_ONCE(vb->dirty, vb->dirty + (1UL << order));
 	if (vb->dirty == VMAP_BBMAP_BITS) {
 		BUG_ON(vb->free);
 		spin_unlock(&vb->lock);
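A compact user-space model of the pairing these annotations establish;
C11 relaxed atomics stand in for READ_ONCE()/WRITE_ONCE(), and the names
and block size are made up for the sketch:

/* Toy model (not kernel code): fields that the lockless quick check
 * inspects without the lock are written with "once" semantics by the
 * locked paths and read with "once" semantics by the lockless path, so
 * the compiler may neither tear nor refetch them. */
#include <stdatomic.h>
#include <stdio.h>

#define TOY_BBMAP_BITS 1024UL

struct toy_block {
	_Atomic unsigned long free;
	_Atomic unsigned long dirty;
};

/* locked writer side, as in vb_free(): account one freed allocation */
static void toy_vb_free(struct toy_block *vb, unsigned long pages)
{
	unsigned long dirty = atomic_load_explicit(&vb->dirty, memory_order_relaxed);

	atomic_store_explicit(&vb->dirty, dirty + pages, memory_order_relaxed);
}

/* lockless reader side, as in purge_fragmented_blocks(): quick check only */
static int toy_looks_purgeable(struct toy_block *vb)
{
	unsigned long free = atomic_load_explicit(&vb->free, memory_order_relaxed);
	unsigned long dirty = atomic_load_explicit(&vb->dirty, memory_order_relaxed);

	return free + dirty == TOY_BBMAP_BITS && dirty != TOY_BBMAP_BITS;
}

int main(void)
{
	struct toy_block vb = { .free = 1000, .dirty = 0 };

	toy_vb_free(&vb, 24);
	printf("looks purgeable: %d\n", toy_looks_purgeable(&vb));
	return 0;
}

The authoritative decision is still taken under vb->lock; the annotations
only make the lockless quick check well defined.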
From patchwork Thu May 25 12:57:09 2023
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 13255210
Message-ID: <20230525124504.864005691@linutronix.de>
From: Thomas Gleixner
To: linux-mm@kvack.org
Cc: Andrew Morton, Christoph Hellwig, Uladzislau Rezki, Lorenzo Stoakes, Peter Zijlstra, Baoquan He
Subject: [V2 patch 6/6] mm/vmalloc: Dont purge usable blocks unnecessarily
References: <20230525122342.109672430@linutronix.de>
Date: Thu, 25 May 2023 14:57:09 +0200 (CEST)
Purging fragmented blocks is done unconditionally in several contexts:

  1) From drain_vmap_area_work(), when the number of lazy-to-be-freed
     vmap_areas reached the threshold

  2) Reclaiming vmalloc address space from pcpu_get_vm_areas()

  3) _vm_unmap_aliases()

#1 There is no reason to zap fragmented vmap blocks unconditionally, simply
   because reclaiming all lazy areas drains at least
   32MB * fls(num_online_cpus()) per invocation, which is plenty.

#2 Reclaiming when running out of space or due to memory pressure makes a
   lot of sense.

#3 _vm_unmap_aliases() has to touch everything because the caller has no
   clue which vmap_area used a particular page last and the vmap_area lost
   that information too.

   The exception is the vfree + VM_FLUSH_RESET_PERMS case, which removes
   the vmap area first and then cares about the flush. That in turn
   requires a full walk of _all_ vmap areas, including the one which was
   just added to the purge list. But as this has to be flushed anyway, it
   is an opportunity to combine outstanding TLB flushes and to do the
   housekeeping of purging freed areas. Like #1, though, there is no real
   good reason to zap usable vmap blocks unconditionally.

Add a @force_purge argument to the newly split out block purge function
and, if it is false, only purge fragmented blocks which have less than
1/4 of their capacity left.

Rename purge_vmap_area_lazy() to reclaim_and_purge_vmap_areas() to make it
clear what the function does.

Signed-off-by: Thomas Gleixner
---
V2: Add the missing force_purge argument in _vm_unmap_aliases()
    Remove force_purge argument from the reclaim path - Baoquan
---
 mm/vmalloc.c |   28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -791,7 +791,7 @@ get_subtree_max_size(struct rb_node *nod
 RB_DECLARE_CALLBACKS_MAX(static, free_vmap_area_rb_augment_cb,
 	struct vmap_area, rb_node, unsigned long, subtree_max_size, va_size)

-static void purge_vmap_area_lazy(void);
+static void reclaim_and_purge_vmap_areas(void);
 static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
 static void drain_vmap_area_work(struct work_struct *work);
 static DECLARE_WORK(drain_vmap_work, drain_vmap_area_work);
@@ -1649,7 +1649,7 @@ static struct vmap_area *alloc_vmap_area

 overflow:
 	if (!purged) {
-		purge_vmap_area_lazy();
+		reclaim_and_purge_vmap_areas();
 		purged = 1;
 		goto retry;
 	}
@@ -1785,9 +1785,10 @@ static bool __purge_vmap_area_lazy(unsig
 }

 /*
- * Kick off a purge of the outstanding lazy areas.
+ * Reclaim vmap areas by purging fragmented blocks and purge_vmap_area_list.
 */
-static void purge_vmap_area_lazy(void)
+static void reclaim_and_purge_vmap_areas(void)
+
 {
 	mutex_lock(&vmap_purge_lock);
 	purge_fragmented_blocks_allcpus();
@@ -1908,6 +1909,12 @@ static struct vmap_area *find_unlink_vma

 #define VMAP_BLOCK_SIZE		(VMAP_BBMAP_BITS * PAGE_SIZE)

+/*
+ * Purge threshold to prevent overeager purging of fragmented blocks for
+ * regular operations: Purge if vb->free is less than 1/4 of the capacity.
+ */
+#define VMAP_PURGE_THRESHOLD	(VMAP_BBMAP_BITS / 4)
+
 #define VMAP_RAM		0x1 /* indicates vm_map_ram area*/
 #define VMAP_BLOCK		0x2 /* mark out the vmap_block sub-type*/
 #define VMAP_FLAGS_MASK		0x3
@@ -2087,12 +2094,17 @@ static void free_vmap_block(struct vmap_
 }

 static bool purge_fragmented_block(struct vmap_block *vb,
-		struct vmap_block_queue *vbq, struct list_head *purge_list)
+		struct vmap_block_queue *vbq, struct list_head *purge_list,
+		bool force_purge)
 {
 	if (vb->free + vb->dirty != VMAP_BBMAP_BITS ||
 	    vb->dirty == VMAP_BBMAP_BITS)
 		return false;

+	/* Don't overeagerly purge usable blocks unless requested */
+	if (!force_purge && vb->free < VMAP_PURGE_THRESHOLD)
+		return false;
+
 	/* prevent further allocs after releasing lock */
 	WRITE_ONCE(vb->free, 0);
 	/* prevent purging it again */
@@ -2132,7 +2144,7 @@ static void purge_fragmented_blocks(int
 			continue;

 		spin_lock(&vb->lock);
-		purge_fragmented_block(vb, vbq, &purge);
+		purge_fragmented_block(vb, vbq, &purge, true);
 		spin_unlock(&vb->lock);
 	}
 	rcu_read_unlock();
@@ -2269,7 +2281,7 @@ static void _vm_unmap_aliases(unsigned l
 			 * not purgeable, check whether there is dirty
 			 * space to be flushed.
 			 */
-			if (!purge_fragmented_block(vb, vbq, &purge_list) &&
+			if (!purge_fragmented_block(vb, vbq, &purge_list, false) &&
 			    vb->dirty_max && vb->dirty != VMAP_BBMAP_BITS) {
 				unsigned long va_start = vb->va->va_start;
 				unsigned long s, e;
@@ -4175,7 +4187,7 @@ struct vm_struct **pcpu_get_vm_areas(con
 overflow:
 	spin_unlock(&free_vmap_area_lock);
 	if (!purged) {
-		purge_vmap_area_lazy();
+		reclaim_and_purge_vmap_areas();
 		purged = true;

 		/* Before "retry", check if we recover. */