From patchwork Mon Feb 3 23:22:46 2020
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 11363655
Date: Mon, 3 Feb 2020 15:22:46 -0800
In-Reply-To: <20200203232248.104733-1-almasrymina@google.com>
Message-Id: <20200203232248.104733-7-almasrymina@google.com>
References: <20200203232248.104733-1-almasrymina@google.com>
X-Mailer: git-send-email 2.25.0.341.g760bfbb309-goog
Subject: [PATCH v11 7/9] hugetlb: support file_region coalescing again
From: Mina Almasry
To: mike.kravetz@oracle.com
Cc: shuah@kernel.org, almasrymina@google.com, rientjes@google.com,
 shakeelb@google.com, gthelen@google.com, akpm@linux-foundation.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org, cgroups@vger.kernel.org

An earlier patch in this series disabled file_region coalescing in order to
hang the hugetlb_cgroup uncharge info on the file_region entries. This patch
re-adds support for coalescing of file_region entries. Essentially, every time
we add an entry, we check whether the hugetlb_cgroup uncharge info is the same
as that of any adjacent entry.
If it is, instead of adding a new entry we simply extend the appropriate
existing entry. This is an important performance optimization: private
mappings add their entries page by page, so without coalescing a large
mapping would accumulate many file_region entries in its resv_map and pay a
correspondingly large traversal cost.

Signed-off-by: Mina Almasry

---
 mm/hugetlb.c | 62 +++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 52 insertions(+), 10 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ec0b55ea1506e..058dd9c8269cf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -272,6 +272,22 @@ static void record_hugetlb_cgroup_uncharge_info(struct hugetlb_cgroup *h_cg,
 #endif
 }
 
+static bool has_same_uncharge_info(struct file_region *rg,
+				   struct hugetlb_cgroup *h_cg,
+				   struct hstate *h)
+{
+#ifdef CONFIG_CGROUP_HUGETLB
+	return rg &&
+	       rg->reservation_counter ==
+		       &h_cg->rsvd_hugepage[hstate_index(h)] &&
+	       rg->pages_per_hpage == pages_per_huge_page(h) &&
+	       rg->css == &h_cg->css;
+
+#else
+	return true;
+#endif
+}
+
 /* Must be called with resv->lock held. Calling this with count_only == true
  * will count the number of pages to be added but will not modify the linked
  * list. If regions_needed != NULL and count_only == true, then regions_needed
@@ -286,7 +302,7 @@ static long add_reservation_in_range(struct resv_map *resv, long f, long t,
 	long add = 0;
 	struct list_head *head = &resv->regions;
 	long last_accounted_offset = f;
-	struct file_region *rg = NULL, *trg = NULL, *nrg = NULL;
+	struct file_region *rg = NULL, *trg = NULL, *nrg = NULL, *prg = NULL;
 
 	if (regions_needed)
 		*regions_needed = 0;
@@ -318,16 +334,34 @@ static long add_reservation_in_range(struct resv_map *resv, long f, long t,
 		if (rg->from > last_accounted_offset) {
 			add += rg->from - last_accounted_offset;
 			if (!count_only) {
-				nrg = get_file_region_entry_from_cache(
-					resv, last_accounted_offset, rg->from);
-				record_hugetlb_cgroup_uncharge_info(h_cg, nrg,
-								    h);
-				list_add(&nrg->link, rg->link.prev);
+				/* Check if the last region can be extended. */
+				if (prg && prg->to == last_accounted_offset &&
+				    has_same_uncharge_info(prg, h_cg, h)) {
+					prg->to = rg->from;
+				/* Check if the next region can be extended. */
+				} else if (has_same_uncharge_info(rg, h_cg,
+								  h)) {
+					rg->from = last_accounted_offset;
+				/* If neither of the regions can be extended,
+				 * add a region.
+				 */
+				} else {
+					nrg = get_file_region_entry_from_cache(
+						resv, last_accounted_offset,
+						rg->from);
+					record_hugetlb_cgroup_uncharge_info(
+						h_cg, nrg, h);
+					list_add(&nrg->link, rg->link.prev);
+				}
 			} else if (regions_needed)
 				*regions_needed += 1;
 		}
 
 		last_accounted_offset = rg->to;
+		/* Record rg as the 'previous file region' in case we need it
+		 * for the next iteration.
+		 */
+		prg = rg;
 	}
 
 	/* Handle the case where our range extends beyond
@@ -336,10 +370,18 @@ static long add_reservation_in_range(struct resv_map *resv, long f, long t,
 	if (last_accounted_offset < t) {
 		add += t - last_accounted_offset;
 		if (!count_only) {
-			nrg = get_file_region_entry_from_cache(
-				resv, last_accounted_offset, t);
-			record_hugetlb_cgroup_uncharge_info(h_cg, nrg, h);
-			list_add(&nrg->link, rg->link.prev);
+			/* Check if the last region can be extended. */
+			if (prg && prg->to == last_accounted_offset &&
+			    has_same_uncharge_info(prg, h_cg, h)) {
+				prg->to = t;
+			} else {
+				/* If not, just create a new region. */
+				nrg = get_file_region_entry_from_cache(
+					resv, last_accounted_offset, t);
+				record_hugetlb_cgroup_uncharge_info(h_cg, nrg,
+								    h);
+				list_add(&nrg->link, rg->link.prev);
+			}
 		} else if (regions_needed)
 			*regions_needed += 1;
 	}