From patchwork Fri Jan 8 15:58:12 2021
X-Patchwork-Submitter: Shakeel Butt
X-Patchwork-Id: 12006911
Date: Fri, 8 Jan 2021 07:58:12 -0800
In-Reply-To: <20210108155813.2914586-1-shakeelb@google.com>
Message-Id: <20210108155813.2914586-2-shakeelb@google.com>
References: <20210108155813.2914586-1-shakeelb@google.com>
Subject: [PATCH v2 2/3] mm: fix numa stats for thp migration
From: Shakeel Butt
To: Johannes Weiner, Roman Gushchin, Michal Hocko, Yang Shi
Cc: Andrew Morton, linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org, Shakeel Butt, stable@vger.kernel.org

Currently the kernel does not correctly update the numa stats for
NR_FILE_PAGES and NR_SHMEM on THP migration. Fix that.

For NR_FILE_DIRTY and NR_ZONE_WRITE_PENDING there is currently no need
to handle THP migration, since the kernel does not yet have write
support for file THP. However, to be more future proof, this patch adds
THP support for those stats as well.

Fixes: e71769ae52609 ("mm: enable thp migration for shmem thp")
Signed-off-by: Shakeel Butt
Acked-by: Yang Shi
Reviewed-by: Roman Gushchin
Cc: <stable@vger.kernel.org>
---
Changes since v1:
- Fixed a typo

 mm/migrate.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 613794f6a433..c0efe921bca5 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -402,6 +402,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	struct zone *oldzone, *newzone;
 	int dirty;
 	int expected_count = expected_page_refs(mapping, page) + extra_count;
+	int nr = thp_nr_pages(page);
 
 	if (!mapping) {
 		/* Anonymous page without mapping */
@@ -437,7 +438,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	 */
 	newpage->index = page->index;
 	newpage->mapping = page->mapping;
-	page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */
+	page_ref_add(newpage, nr); /* add cache reference */
 	if (PageSwapBacked(page)) {
 		__SetPageSwapBacked(newpage);
 		if (PageSwapCache(page)) {
@@ -459,7 +460,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	if (PageTransHuge(page)) {
 		int i;
 
-		for (i = 1; i < HPAGE_PMD_NR; i++) {
+		for (i = 1; i < nr; i++) {
 			xas_next(&xas);
 			xas_store(&xas, newpage);
 		}
@@ -470,7 +471,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	 * to one less reference.
 	 * We know this isn't the last reference.
 	 */
-	page_ref_unfreeze(page, expected_count - thp_nr_pages(page));
+	page_ref_unfreeze(page, expected_count - nr);
 
 	xas_unlock(&xas);
 	/* Leave irq disabled to prevent preemption while updating stats */
@@ -493,17 +494,17 @@ int migrate_page_move_mapping(struct address_space *mapping,
 		old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
 		new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
 
-		__dec_lruvec_state(old_lruvec, NR_FILE_PAGES);
-		__inc_lruvec_state(new_lruvec, NR_FILE_PAGES);
+		__mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
+		__mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
 		if (PageSwapBacked(page) && !PageSwapCache(page)) {
-			__dec_lruvec_state(old_lruvec, NR_SHMEM);
-			__inc_lruvec_state(new_lruvec, NR_SHMEM);
+			__mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
+			__mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
 		}
 		if (dirty && mapping_can_writeback(mapping)) {
-			__dec_lruvec_state(old_lruvec, NR_FILE_DIRTY);
-			__dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
-			__inc_lruvec_state(new_lruvec, NR_FILE_DIRTY);
-			__inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
+			__mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
+			__mod_zone_page_state(oldzone, NR_ZONE_WRITE_PENDING, -nr);
+			__mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
+			__mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
 		}
 	}
 	local_irq_enable();
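
[Editorial note, not part of the patch: the sketch below is a stand-alone
user-space model of the accounting drift this patch fixes, not kernel code.
The names node_stat, migrate_thp_old and migrate_thp_new are made up for
illustration. It shows how moving a per-node counter by 1 instead of by
thp_nr_pages() leaves the source node over-counted and the destination node
under-counted by 511 pages per migration of a 512-page THP, which is the
drift the __mod_lruvec_state(..., nr) calls above eliminate.]

/*
 * Illustrative model only; all names are hypothetical, not kernel APIs.
 */
#include <stdio.h>

#define THP_PAGES 512	/* a PMD-sized THP on x86-64 is 512 base pages */

static long node_stat[2];	/* simplified per-node NR_FILE_PAGES */

/* Old behaviour: move the counter by one page, regardless of THP size. */
static void migrate_thp_old(int src, int dst)
{
	node_stat[src] -= 1;
	node_stat[dst] += 1;
}

/* Fixed behaviour: move the counter by the whole THP's page count. */
static void migrate_thp_new(int src, int dst, int nr)
{
	node_stat[src] -= nr;
	node_stat[dst] += nr;
}

int main(void)
{
	/* One file THP cached on node 0, nothing on node 1. */
	node_stat[0] = THP_PAGES;
	node_stat[1] = 0;
	migrate_thp_old(0, 1);
	printf("old: node0=%ld node1=%ld (each off by %d)\n",
	       node_stat[0], node_stat[1], THP_PAGES - 1);

	node_stat[0] = THP_PAGES;
	node_stat[1] = 0;
	migrate_thp_new(0, 1, THP_PAGES);
	printf("new: node0=%ld node1=%ld\n", node_stat[0], node_stat[1]);
	return 0;
}

Built with any C compiler, the first printf reports node0=511 node1=1 after a
single migration, while the fixed accounting reports node0=0 node1=512.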