From patchwork Mon Jun 29 13:15:33 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: bibo mao <maobibo@loongson.cn>
X-Patchwork-Id: 11630931
From: bibo mao <maobibo@loongson.cn>
To: Thomas Bogendoerfer, Anshuman Khandual, Andrew Morton, Mike Kravetz
Cc: linux-mips@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Bibo Mao <maobibo@loongson.cn>
Subject: [PATCH 2/2] hugetlb: use lightweight tlb flush when updating huge tlb on mips
Date: Mon, 29 Jun 2020 21:15:33 +0800
Message-Id: <1593436533-8645-2-git-send-email-maobibo@loongson.cn>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1593436533-8645-1-git-send-email-maobibo@loongson.cn>
References: <1593436533-8645-1-git-send-email-maobibo@loongson.cn>

From: Bibo Mao <maobibo@loongson.cn>

On the MIPS platform, a huge PTE points to invalid_pte_table while
huge_pte_none() returns true, and a TLB entry with the normal page size is
installed for such a none entry. When the huge PTE is later updated, that
stale normal-page TLB entry needs to be invalidated. This patch uses the
lightweight flush function local_flush_tlb_page() for this, rather than
flush_tlb_range(), which flushes every TLB entry covering the huge page
range. It also adds a new huge TLB update function, update_mmu_cache_huge(),
which is passed the page faulting address rather than the start address of
the huge page.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
---
 arch/mips/include/asm/hugetlb.h | 17 ++++++++++++-----
 include/linux/hugetlb.h         |  9 +++++++++
 mm/hugetlb.c                    | 12 +++++++-----
 3 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/arch/mips/include/asm/hugetlb.h b/arch/mips/include/asm/hugetlb.h
index c214440..fce09b4 100644
--- a/arch/mips/include/asm/hugetlb.h
+++ b/arch/mips/include/asm/hugetlb.h
@@ -72,15 +72,22 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 
 	if (changed) {
 		set_pte_at(vma->vm_mm, addr, ptep, pte);
-		/*
-		 * There could be some standard sized pages in there,
-		 * get them all.
-		 */
-		flush_tlb_range(vma, addr, addr + HPAGE_SIZE);
 	}
 
 	return changed;
 }
 
+#define update_mmu_cache_huge update_mmu_cache_huge
+static inline void update_mmu_cache_huge(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep)
+{
+	/*
+	 * There could be some standard sized page in there,
+	 * parameter address must be page faulting address rather than
+	 * start address of huge page
+	 */
+	local_flush_tlb_page(vma, address);
+	update_mmu_cache(vma, address & huge_page_mask(hstate_vma(vma)), ptep);
+}
 
 #include <asm-generic/hugetlb.h>
 
 #endif /* __ASM_HUGETLB_H */
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 858522e..2f3f9eb 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -746,6 +746,15 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 }
 #endif
 
+#ifndef update_mmu_cache_huge
+#define update_mmu_cache_huge update_mmu_cache_huge
+static inline void update_mmu_cache_huge(struct vm_area_struct *vma,
+		unsigned long address, pte_t *ptep)
+{
+	update_mmu_cache(vma, address & huge_page_mask(hstate_vma(vma)), ptep);
+}
+#endif
+
 #else	/* CONFIG_HUGETLB_PAGE */
 
 struct hstate {};
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1410e62..96faad7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3757,10 +3757,12 @@ static void set_huge_ptep_writable(struct vm_area_struct *vma,
 				unsigned long address, pte_t *ptep)
 {
 	pte_t entry;
+	struct hstate *h = hstate_vma(vma);
+	unsigned long haddr = address & huge_page_mask(h);
 
 	entry = huge_pte_mkwrite(huge_pte_mkdirty(huge_ptep_get(ptep)));
-	if (huge_ptep_set_access_flags(vma, address, ptep, entry, 1))
-		update_mmu_cache(vma, address, ptep);
+	if (huge_ptep_set_access_flags(vma, haddr, ptep, entry, 1))
+		update_mmu_cache_huge(vma, address, ptep);
 }
 
 bool is_hugetlb_entry_migration(pte_t pte)
@@ -4128,7 +4130,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * and just make the page writable */
 	if (page_mapcount(old_page) == 1 && PageAnon(old_page)) {
 		page_move_anon_rmap(old_page, vma);
-		set_huge_ptep_writable(vma, haddr, ptep);
+		set_huge_ptep_writable(vma, address, ptep);
 
 		return 0;
 	}
@@ -4630,7 +4632,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	entry = pte_mkyoung(entry);
 	if (huge_ptep_set_access_flags(vma, haddr, ptep, entry,
						flags & FAULT_FLAG_WRITE))
-		update_mmu_cache(vma, haddr, ptep);
+		update_mmu_cache_huge(vma, address, ptep);
 out_put_page:
 	if (page != pagecache_page)
 		unlock_page(page);
@@ -4770,7 +4772,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	hugetlb_count_add(pages_per_huge_page(h), dst_mm);
 
 	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(dst_vma, dst_addr, dst_pte);
+	update_mmu_cache_huge(dst_vma, dst_addr, dst_pte);
 
 	spin_unlock(ptl);
 	set_page_huge_active(page);
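
As background for the address arithmetic above: the generic
update_mmu_cache_huge() fallback only masks the faulting address down to the
huge page boundary before calling update_mmu_cache(), while the MIPS version
additionally flushes the TLB entry for the exact faulting address. The
following standalone userspace sketch (not kernel code) illustrates the
masking step; the 2 MB huge page size and the address value are assumed
example values, not taken from the patch.

#include <stdio.h>

/* Assumed example geometry: a 2 MB huge page; MIPS configs may use others. */
#define EXAMPLE_HPAGE_SHIFT	21
#define EXAMPLE_HPAGE_SIZE	(1UL << EXAMPLE_HPAGE_SHIFT)
#define EXAMPLE_HPAGE_MASK	(~(EXAMPLE_HPAGE_SIZE - 1))

int main(void)
{
	/* Hypothetical faulting address somewhere inside a huge page. */
	unsigned long address = 0x2aaaab7123f0UL;
	/* Equivalent of address & huge_page_mask(hstate_vma(vma)). */
	unsigned long haddr = address & EXAMPLE_HPAGE_MASK;

	printf("faulting address : 0x%lx\n", address);
	printf("huge page start  : 0x%lx\n", haddr);
	printf("offset into page : 0x%lx\n", address - haddr);
	return 0;
}

The faulting address identifies the possibly stale normal-sized TLB entry,
while the masked address identifies the huge page mapping handed to
update_mmu_cache(); that is why update_mmu_cache_huge() takes the faulting
address instead of the huge page start address.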