From patchwork Wed Apr 12 15:23:37 2023
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 13209250
From: Pasha Tatashin
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org, mike.kravetz@oracle.com, muchun.song@linux.dev, rientjes@google.com, souravpanda@google.com
Subject: [PATCH] mm: hugetlb_vmemmap: provide stronger vmemmap allocation guarantees
Date: Wed, 12 Apr 2023 15:23:37 +0000
Message-Id: <20230412152337.1203254-1-pasha.tatashin@soleen.com>
HugeTLB pages have a struct page optimization where the struct pages for tail pages are freed. However, when HugeTLB pages are destroyed, the memory for those struct pages (vmemmap) needs to be allocated again.
Currently, the __GFP_NORETRY flag is used to allocate the memory for vmemmap, but given that this flag makes very little effort to actually reclaim memory, returning huge pages back to the system can be a problem. Let's use __GFP_RETRY_MAYFAIL instead. This flag also performs graceful reclaim without causing OOMs, but it may perform a few retries, and will fail only when there is genuinely little unused memory in the system.

Signed-off-by: Pasha Tatashin
Suggested-by: David Rientjes
---
 mm/hugetlb_vmemmap.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index a559037cce00..c4226d2af7cc 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -475,9 +475,12 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
 	 * the range is mapped to the page which @vmemmap_reuse is mapped to.
 	 * When a HugeTLB page is freed to the buddy allocator, previously
 	 * discarded vmemmap pages must be allocated and remapping.
+	 *
+	 * Use __GFP_RETRY_MAYFAIL to fail only when there is genuinely little
+	 * unused memory in the system.
 	 */
 	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse,
-			GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
+			GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_THISNODE);
 	if (!ret) {
 		ClearHPageVmemmapOptimized(head);
 		static_branch_dec(&hugetlb_optimize_vmemmap_key);