From patchwork Fri May 18 04:41:15 2018
X-Patchwork-Submitter: TSUKADA Koutaro <tsukada@ascade.co.jp>
X-Patchwork-Id: 10408201
Subject: [PATCH v2 7/7] memcg: supports movement of surplus hugepages statistics
To: Johannes Weiner, Michal Hocko, Vladimir Davydov, Jonathan Corbet,
 "Luis R. Rodriguez", Kees Cook
Cc: Andrew Morton, Roman Gushchin, David Rientjes, Mike Kravetz,
 "Aneesh Kumar K.V", Naoya Horiguchi, Anshuman Khandual,
 Marc-Andre Lureau, Punit Agrawal, Dan Williams, Vlastimil Babka,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 cgroups@vger.kernel.org, tsukada@ascade.co.jp
From: TSUKADA Koutaro <tsukada@ascade.co.jp>
Date: Fri, 18 May 2018 13:41:15 +0900

When a task that has charged surplus hugepages moves to another memory
cgroup, move the charges along with it so that the statistics are
updated correctly.
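As an illustration only (not part of this patch), the sketch below shows
one way the move path could be exercised from userspace. It assumes a
cgroup v1 memory hierarchy mounted at /sys/fs/cgroup/memory with
pre-created groups "src" and "dst" (hypothetical names), MOVE_ANON
enabled through memory.move_charge_at_immigrate on the destination
group, and a hugetlb pool configured (persistent pool exhausted,
nr_overcommit_hugepages > 0) so that the mapping below is backed by
surplus pages charged by the earlier patches in this series. The
write_file() helper is purely illustrative.

/*
 * Illustration only -- not part of this patch.  Rough userspace sketch
 * of exercising the surplus-hugepage charge move.  Requires root and
 * the setup described above; assumes a 2MB default hugepage size.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

static void write_file(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f || fputs(val, f) == EOF || fclose(f) == EOF) {
                perror(path);
                exit(1);
        }
}

int main(void)
{
        const size_t len = 2UL << 20;   /* one 2MB hugepage */
        char pid[16];
        void *p;

        snprintf(pid, sizeof(pid), "%d", (int)getpid());

        /* Start in "src"; subsequent charges are accounted there. */
        write_file("/sys/fs/cgroup/memory/src/cgroup.procs", pid);
        /* Bit 0 (anon) must be set on the destination for charges to move. */
        write_file("/sys/fs/cgroup/memory/dst/memory.move_charge_at_immigrate", "1");

        /* Fault in a hugetlb page; with this series applied and the
         * persistent pool exhausted, a surplus page is charged to "src". */
        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        memset(p, 0, len);

        /* Moving the task should now move the surplus hugepage charge. */
        write_file("/sys/fs/cgroup/memory/dst/cgroup.procs", pid);

        pause();        /* keep the mapping alive for inspection */
        return 0;
}

If the setup above holds, the charge should disappear from src's and
appear in dst's usage counters after the second cgroup.procs write.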
Signed-off-by: TSUKADA Koutaro <tsukada@ascade.co.jp>
---
 memcontrol.c | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 99 insertions(+)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a8f1ff8..63f0922 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4698,12 +4698,110 @@ static int mem_cgroup_count_precharge_pte_range(pmd_t *pmd,
         return 0;
 }
 
+#ifdef CONFIG_HUGETLB_PAGE
+static enum mc_target_type get_mctgt_type_hugetlb(struct vm_area_struct *vma,
+                unsigned long addr, pte_t *pte, union mc_target *target)
+{
+        struct page *page = NULL;
+        pte_t entry;
+        enum mc_target_type ret = MC_TARGET_NONE;
+
+        if (!(mc.flags & MOVE_ANON))
+                return ret;
+
+        entry = huge_ptep_get(pte);
+        if (!pte_present(entry))
+                return ret;
+
+        page = pte_page(entry);
+        VM_BUG_ON_PAGE(!page || !PageHead(page), page);
+        if (likely(!PageSurplusCharge(page)))
+                return ret;
+        if (page->mem_cgroup == mc.from) {
+                ret = MC_TARGET_PAGE;
+                if (target) {
+                        get_page(page);
+                        target->page = page;
+                }
+        }
+
+        return ret;
+}
+
+static int hugetlb_count_precharge_pte_range(pte_t *pte, unsigned long hmask,
+                unsigned long addr, unsigned long end,
+                struct mm_walk *walk)
+{
+        struct vm_area_struct *vma = walk->vma;
+        struct mm_struct *mm = walk->mm;
+        spinlock_t *ptl;
+        union mc_target target;
+
+        ptl = huge_pte_lock(hstate_vma(vma), mm, pte);
+        if (get_mctgt_type_hugetlb(vma, addr, pte, &target) == MC_TARGET_PAGE) {
+                mc.precharge += (1 << compound_order(target.page));
+                put_page(target.page);
+        }
+        spin_unlock(ptl);
+
+        return 0;
+}
+
+static int hugetlb_move_charge_pte_range(pte_t *pte, unsigned long hmask,
+                unsigned long addr, unsigned long end,
+                struct mm_walk *walk)
+{
+        struct vm_area_struct *vma = walk->vma;
+        struct mm_struct *mm = walk->mm;
+        spinlock_t *ptl;
+        enum mc_target_type target_type;
+        union mc_target target;
+        struct page *page;
+        unsigned long nr_pages;
+
+        ptl = huge_pte_lock(hstate_vma(vma), mm, pte);
+        target_type = get_mctgt_type_hugetlb(vma, addr, pte, &target);
+        if (target_type == MC_TARGET_PAGE) {
+                page = target.page;
+                nr_pages = (1 << compound_order(page));
+                if (mc.precharge < nr_pages) {
+                        put_page(page);
+                        goto unlock;
+                }
+                if (!mem_cgroup_move_account(page, true, mc.from, mc.to)) {
+                        mc.precharge -= nr_pages;
+                        mc.moved_charge += nr_pages;
+                }
+                put_page(page);
+        }
+unlock:
+        spin_unlock(ptl);
+
+        return 0;
+}
+#else
+static int hugetlb_count_precharge_pte_range(pte_t *pte, unsigned long hmask,
+                unsigned long addr, unsigned long end,
+                struct mm_walk *walk)
+{
+        return 0;
+}
+
+static int hugetlb_move_charge_pte_range(pte_t *pte, unsigned long hmask,
+                unsigned long addr, unsigned long end,
+                struct mm_walk *walk)
+{
+        return 0;
+}
+#endif
+
 static unsigned long mem_cgroup_count_precharge(struct mm_struct *mm)
 {
         unsigned long precharge;
 
         struct mm_walk mem_cgroup_count_precharge_walk = {
                 .pmd_entry = mem_cgroup_count_precharge_pte_range,
+                .hugetlb_entry = hugetlb_count_precharge_pte_range,
                 .mm = mm,
         };
         down_read(&mm->mmap_sem);
@@ -4981,6 +5079,7 @@ static void mem_cgroup_move_charge(void)
 {
         struct mm_walk mem_cgroup_move_charge_walk = {
                 .pmd_entry = mem_cgroup_move_charge_pte_range,
+                .hugetlb_entry = hugetlb_move_charge_pte_range,
                 .mm = mc.mm,
         };
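
Note for reviewers (illustration, not part of the patch): the two new
callbacks are only registered in the mm_walk structures above; they are
driven by the existing walk_page_range() calls, which this patch does
not touch. A minimal sketch of that dispatch pattern, assuming the
current mm_walk API (the example_* names are placeholders):

/*
 * Illustrative only: how a .hugetlb_entry callback registered in
 * struct mm_walk gets invoked.  Not part of the patch.
 */
#include <linux/mm.h>
#include <linux/hugetlb.h>

static int example_hugetlb_entry(pte_t *pte, unsigned long hmask,
                unsigned long addr, unsigned long next,
                struct mm_walk *walk)
{
        /* Invoked once per huge page table entry in each hugetlb VMA. */
        return 0;
}

static void example_walk(struct mm_struct *mm)
{
        struct mm_walk walk = {
                .hugetlb_entry = example_hugetlb_entry,
                .mm = mm,
        };

        down_read(&mm->mmap_sem);
        /* hugetlb VMAs are dispatched to .hugetlb_entry, not .pmd_entry. */
        walk_page_range(0, mm->highest_vm_end, &walk);
        up_read(&mm->mmap_sem);
}

Because walk_page_range() never descends into hugetlb VMAs via
.pmd_entry, the memcg precharge and move walks each need their own
hugetlb callback, which is what this patch adds.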