From patchwork Tue Apr 18 19:12:56 2023
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 13216097
From: Johannes Weiner
To: linux-mm@kvack.org
Cc: Kaiyang Zhao, Mel Gorman, Vlastimil Babka, David Rientjes,
 linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [RFC PATCH 09/26] mm: page_alloc: move expand() above compaction_capture()
Date: Tue, 18 Apr 2023 15:12:56 -0400
Message-Id: <20230418191313.268131-10-hannes@cmpxchg.org>
In-Reply-To: <20230418191313.268131-1-hannes@cmpxchg.org>
References: <20230418191313.268131-1-hannes@cmpxchg.org>
MIME-Version: 1.0
The next patch will allow compaction to capture from larger-than-requested
page blocks and free the remainder. Rearrange the code in advance to make
the diff more readable. No functional change.
Signed-off-by: Johannes Weiner
---
 mm/page_alloc.c | 186 ++++++++++++++++++++++++------------------------
 1 file changed, 93 insertions(+), 93 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8e5996f8b4b4..cd86f80d7bbe 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -950,61 +950,6 @@ static inline void set_buddy_order(struct page *page, unsigned int order)
 	__SetPageBuddy(page);
 }
 
-#ifdef CONFIG_COMPACTION
-static inline struct capture_control *task_capc(struct zone *zone)
-{
-	struct capture_control *capc = current->capture_control;
-
-	return unlikely(capc && capc->cc) &&
-		!(current->flags & PF_KTHREAD) &&
-		!capc->page &&
-		capc->cc->zone == zone ? capc : NULL;
-}
-
-static inline bool
-compaction_capture(struct capture_control *capc, struct page *page,
-		   int order, int migratetype)
-{
-	if (!capc || order != capc->cc->order)
-		return false;
-
-	/* Do not accidentally pollute CMA or isolated regions*/
-	if (is_migrate_cma(migratetype) ||
-	    is_migrate_isolate(migratetype))
-		return false;
-
-	if (order >= pageblock_order) {
-		migratetype = capc->migratetype;
-		change_pageblock_range(page, order, migratetype);
-	} else if (migratetype == MIGRATE_MOVABLE) {
-		/*
-		 * Do not let lower order allocations pollute a
-		 * movable pageblock. This might let an unmovable
-		 * request use a reclaimable pageblock and vice-versa
-		 * but no more than normal fallback logic which can
-		 * have trouble finding a high-order free page.
-		 */
-		return false;
-	}
-
-	capc->page = page;
-	return true;
-}
-
-#else
-static inline struct capture_control *task_capc(struct zone *zone)
-{
-	return NULL;
-}
-
-static inline bool
-compaction_capture(struct capture_control *capc, struct page *page,
-		   int order, int migratetype)
-{
-	return false;
-}
-#endif /* CONFIG_COMPACTION */
-
 static inline void account_freepages(struct page *page, struct zone *zone,
 				     int nr_pages, int migratetype)
 {
@@ -1072,6 +1017,99 @@ static inline void del_page_from_free_list(struct page *page, struct zone *zone,
 	account_freepages(page, zone, -(1 << order), migratetype);
 }
 
+/*
+ * The order of subdivision here is critical for the IO subsystem.
+ * Please do not alter this order without good reasons and regression
+ * testing. Specifically, as large blocks of memory are subdivided,
+ * the order in which smaller blocks are delivered depends on the order
+ * they're subdivided in this function. This is the primary factor
+ * influencing the order in which pages are delivered to the IO
+ * subsystem according to empirical testing, and this is also justified
+ * by considering the behavior of a buddy system containing a single
+ * large block of memory acted on by a series of small allocations.
+ * This behavior is a critical factor in sglist merging's success.
+ *
+ * -- nyc
+ */
+static inline void expand(struct zone *zone, struct page *page,
+	int low, int high, int migratetype)
+{
+	unsigned long size = 1 << high;
+
+	while (high > low) {
+		high--;
+		size >>= 1;
+		VM_BUG_ON_PAGE(bad_range(zone, &page[size]), &page[size]);
+
+		/*
+		 * Mark as guard pages (or page), that will allow to
+		 * merge back to allocator when buddy will be freed.
+		 * Corresponding page table entries will not be touched,
+		 * pages will stay not present in virtual address space
+		 */
+		if (set_page_guard(zone, &page[size], high))
+			continue;
+
+		add_to_free_list(&page[size], zone, high, migratetype, false);
+		set_buddy_order(&page[size], high);
+	}
+}
+
+#ifdef CONFIG_COMPACTION
+static inline struct capture_control *task_capc(struct zone *zone)
+{
+	struct capture_control *capc = current->capture_control;
+
+	return unlikely(capc && capc->cc) &&
+		!(current->flags & PF_KTHREAD) &&
+		!capc->page &&
+		capc->cc->zone == zone ? capc : NULL;
+}
+
+static inline bool
+compaction_capture(struct capture_control *capc, struct page *page,
+		   int order, int migratetype)
+{
+	if (!capc || order != capc->cc->order)
+		return false;
+
+	/* Do not accidentally pollute CMA or isolated regions*/
+	if (is_migrate_cma(migratetype) ||
+	    is_migrate_isolate(migratetype))
+		return false;
+
+	if (order >= pageblock_order) {
+		migratetype = capc->migratetype;
+		change_pageblock_range(page, order, migratetype);
+	} else if (migratetype == MIGRATE_MOVABLE) {
+		/*
+		 * Do not let lower order allocations pollute a
+		 * movable pageblock. This might let an unmovable
+		 * request use a reclaimable pageblock and vice-versa
+		 * but no more than normal fallback logic which can
+		 * have trouble finding a high-order free page.
+		 */
+		return false;
+	}
+
+	capc->page = page;
+	return true;
+}
+
+#else
+static inline struct capture_control *task_capc(struct zone *zone)
+{
+	return NULL;
+}
+
+static inline bool
+compaction_capture(struct capture_control *capc, struct page *page,
+		   int order, int migratetype)
+{
+	return false;
+}
+#endif /* CONFIG_COMPACTION */
+
 /*
  * If this is not the largest possible page, check if the buddy
  * of the next-highest order is free. If it is, it's possible
@@ -2345,44 +2383,6 @@ void __init init_cma_reserved_pageblock(struct page *page)
 }
 #endif
 
-/*
- * The order of subdivision here is critical for the IO subsystem.
- * Please do not alter this order without good reasons and regression
- * testing. Specifically, as large blocks of memory are subdivided,
- * the order in which smaller blocks are delivered depends on the order
- * they're subdivided in this function. This is the primary factor
- * influencing the order in which pages are delivered to the IO
- * subsystem according to empirical testing, and this is also justified
- * by considering the behavior of a buddy system containing a single
- * large block of memory acted on by a series of small allocations.
- * This behavior is a critical factor in sglist merging's success.
- *
- * -- nyc
- */
-static inline void expand(struct zone *zone, struct page *page,
-	int low, int high, int migratetype)
-{
-	unsigned long size = 1 << high;
-
-	while (high > low) {
-		high--;
-		size >>= 1;
-		VM_BUG_ON_PAGE(bad_range(zone, &page[size]), &page[size]);
-
-		/*
-		 * Mark as guard pages (or page), that will allow to
-		 * merge back to allocator when buddy will be freed.
-		 * Corresponding page table entries will not be touched,
-		 * pages will stay not present in virtual address space
-		 */
-		if (set_page_guard(zone, &page[size], high))
-			continue;
-
-		add_to_free_list(&page[size], zone, high, migratetype, false);
-		set_buddy_order(&page[size], high);
-	}
-}
-
 static void check_new_page_bad(struct page *page)
 {
 	if (unlikely(page->flags & __PG_HWPOISON)) {