From patchwork Fri Mar 27 17:03:53 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11462699
From: "Kirill A. Shutemov"
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 "Kirill A. Shutemov"
Subject: [PATCH] thp: Simplify splitting PMD mapping huge zero page
Date: Fri, 27 Mar 2020 20:03:53 +0300
Message-Id: <20200327170353.17734-1-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.26.0

Splitting a PMD that maps the huge zero page can be simplified a lot:
we can just unmap it and fall back to PTE handling.

Signed-off-by: Kirill A. Shutemov
---
 mm/huge_memory.c | 57 ++++--------
 1 file changed, 4 insertions(+), 53 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 42407e16bd80..ef6a6bcb291f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2114,40 +2114,6 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 }
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
-static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
-		unsigned long haddr, pmd_t *pmd)
-{
-	struct mm_struct *mm = vma->vm_mm;
-	pgtable_t pgtable;
-	pmd_t _pmd;
-	int i;
-
-	/*
-	 * Leave pmd empty until pte is filled note that it is fine to delay
-	 * notification until mmu_notifier_invalidate_range_end() as we are
-	 * replacing a zero pmd write protected page with a zero pte write
-	 * protected page.
-	 *
-	 * See Documentation/vm/mmu_notifier.rst
-	 */
-	pmdp_huge_clear_flush(vma, haddr, pmd);
-
-	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
-	pmd_populate(mm, &_pmd, pgtable);
-
-	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
-		pte_t *pte, entry;
-		entry = pfn_pte(my_zero_pfn(haddr), vma->vm_page_prot);
-		entry = pte_mkspecial(entry);
-		pte = pte_offset_map(&_pmd, haddr);
-		VM_BUG_ON(!pte_none(*pte));
-		set_pte_at(mm, haddr, pte, entry);
-		pte_unmap(pte);
-	}
-	smp_wmb(); /* make pte visible before pmd */
-	pmd_populate(mm, pmd, pgtable);
-}
-
 static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long haddr, bool freeze)
 {
@@ -2167,7 +2133,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 
 	count_vm_event(THP_SPLIT_PMD);
 
-	if (!vma_is_anonymous(vma)) {
+	if (!vma_is_anonymous(vma) || is_huge_zero_pmd(*pmd)) {
 		_pmd = pmdp_huge_clear_flush_notify(vma, haddr, pmd);
 		/*
 		 * We are going to unmap this huge page. So
@@ -2175,7 +2141,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		 */
 		if (arch_needs_pgtable_deposit())
 			zap_deposited_table(mm, pmd);
-		if (vma_is_dax(vma))
+		if (vma_is_dax(vma) || is_huge_zero_pmd(*pmd))
 			return;
 		page = pmd_page(_pmd);
 		if (!PageDirty(page) && pmd_dirty(_pmd))
@@ -2186,17 +2152,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		put_page(page);
 		add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
 		return;
-	} else if (is_huge_zero_pmd(*pmd)) {
-		/*
-		 * FIXME: Do we want to invalidate secondary mmu by calling
-		 * mmu_notifier_invalidate_range() see comments below inside
-		 * __split_huge_pmd() ?
-		 *
-		 * We are going from a zero huge page write protected to zero
-		 * small page also write protected so it does not seems useful
-		 * to invalidate secondary mmu at this time.
-		 */
-		return __split_huge_zero_page_pmd(vma, haddr, pmd);
 	}
 
 	/*
@@ -2339,13 +2294,9 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	spin_unlock(ptl);
 	/*
 	 * No need to double call mmu_notifier->invalidate_range() callback.
-	 * They are 3 cases to consider inside __split_huge_pmd_locked():
+	 * They are 2 cases to consider inside __split_huge_pmd_locked():
 	 * 1) pmdp_huge_clear_flush_notify() call invalidate_range() obvious
-	 * 2) __split_huge_zero_page_pmd() read only zero page and any write
-	 *    fault will trigger a flush_notify before pointing to a new page
-	 *    (it is fine if the secondary mmu keeps pointing to the old zero
-	 *    page in the meantime)
-	 * 3) Split a huge pmd into pte pointing to the same page. No need
+	 * 2) Split a huge pmd into pte pointing to the same page. No need
 	 *    to invalidate secondary tlb entry they are all still valid.
 	 *    any further changes to individual pte will notify. So no need
 	 *    to call mmu_notifier->invalidate_range()
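
To see why unmapping and falling back to the PTE fault path is equivalent
to the eager repopulation that __split_huge_zero_page_pmd() used to do,
here is a minimal userspace sketch of the two strategies. It is a toy
model only, not kernel code: fake_pte, split_eager(), split_lazy() and
read_pte() are invented names, and the one-line "fault handler" stands in
for the real anonymous-fault path that installs the shared zero page.

/*
 * Toy model: a "PMD" is an array of 512 PTE slots. The old split path
 * filled every slot with a read-only zero-page entry up front; the new
 * path just clears the mapping and lets a later read fault install the
 * zero-page entry on demand. All names here are invented for
 * illustration and are not kernel APIs.
 */
#include <assert.h>
#include <stdio.h>

#define PTRS_PER_PMD 512
#define ZERO_PFN     0L     /* stand-in for the shared zero page */
#define PTE_NONE     (-1L)  /* empty PTE slot */

typedef long fake_pte;

/* Old behaviour: rewrite all 512 slots at split time. */
static void split_eager(fake_pte ptes[PTRS_PER_PMD])
{
	for (int i = 0; i < PTRS_PER_PMD; i++)
		ptes[i] = ZERO_PFN;	/* zero PMD -> 512 zero PTEs */
}

/* New behaviour: just unmap; slots stay empty. */
static void split_lazy(fake_pte ptes[PTRS_PER_PMD])
{
	for (int i = 0; i < PTRS_PER_PMD; i++)
		ptes[i] = PTE_NONE;	/* clear the mapping only */
}

/* Simulated read fault: an empty slot faults in the zero page. */
static long read_pte(fake_pte ptes[PTRS_PER_PMD], int i)
{
	if (ptes[i] == PTE_NONE)
		ptes[i] = ZERO_PFN;	/* fallback PTE handling */
	return ptes[i];
}

int main(void)
{
	fake_pte eager[PTRS_PER_PMD], lazy[PTRS_PER_PMD];

	split_eager(eager);
	split_lazy(lazy);

	/* A reader cannot tell the two strategies apart. */
	for (int i = 0; i < PTRS_PER_PMD; i++)
		assert(read_pte(eager, i) == read_pte(lazy, i));

	printf("eager and lazy splits read identically\n");
	return 0;
}

Both arrays read back identically; the only difference is when the
zero-page PTEs get installed. That is why the in-place rewrite (and its
pgtable deposit handling) can be dropped in favour of the existing unmap
path guarded by is_huge_zero_pmd().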