From patchwork Mon Apr 6 03:55:21 2020
X-Patchwork-Submitter: chenqiwu
X-Patchwork-Id: 11474961
From: qiwuchen55@gmail.com
To: akpm@linux-foundation.org, willy@infradead.org, david@redhat.com,
 richard.weiyang@gmail.com, mhocko@suse.com, pankaj.gupta.linux@gmail.com,
 yang.shi@linux.alibaba.com, cai@lca.pw, bhe@redhat.com
Cc: linux-mm@kvack.org, chenqiwu
Subject: [PATCH] mm: use VM_BUG_ON*() helpers to dump more debugging info
Date: Mon, 6 Apr 2020 11:55:21 +0800
Message-Id: <1586145321-23767-1-git-send-email-qiwuchen55@gmail.com>
X-Mailer: git-send-email 1.9.1

From: chenqiwu

Use the VM_BUG_ON*() helpers instead of plain BUG_ON() in some of the
core mm code. If CONFIG_DEBUG_VM is set, these helpers dump the
relevant struct page, vma, or mm in addition to triggering the BUG, so
we get more debugging information when the bug is hit.

Signed-off-by: chenqiwu
---
 mm/memory.c   | 25 +++++++++++++------------
 mm/mmap.c     |  8 ++++----
 mm/rmap.c     | 10 +++++-----
 mm/swapfile.c | 16 ++++++++--------
 mm/vmscan.c   |  6 +++---
 5 files changed, 33 insertions(+), 32 deletions(-)
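As background on what the conversion buys: the helpers live in
include/linux/mmdebug.h. The sketch below paraphrases those definitions
as of the v5.6 timeframe (see the tree for the exact macros):

#ifdef CONFIG_DEBUG_VM
#define VM_BUG_ON(cond)			BUG_ON(cond)
#define VM_BUG_ON_PAGE(cond, page)					\
	do {								\
		if (unlikely(cond)) {					\
			/* dump page flags, refcount, mapping, ... */	\
			dump_page(page, "VM_BUG_ON_PAGE(" __stringify(cond)")");\
			BUG();						\
		}							\
	} while (0)
#define VM_BUG_ON_VMA(cond, vma)					\
	do {								\
		if (unlikely(cond)) {					\
			dump_vma(vma);	/* vm_start/vm_end/flags ... */	\
			BUG();						\
		}							\
	} while (0)
#define VM_BUG_ON_MM(cond, mm)						\
	do {								\
		if (unlikely(cond)) {					\
			dump_mm(mm);					\
			BUG();						\
		}							\
	} while (0)
#else
#define VM_BUG_ON(cond)			BUILD_BUG_ON_INVALID(cond)
#define VM_BUG_ON_PAGE(cond, page)	VM_BUG_ON(cond)
#define VM_BUG_ON_VMA(cond, vma)	VM_BUG_ON(cond)
#define VM_BUG_ON_MM(cond, mm)		VM_BUG_ON(cond)
#endif

So with CONFIG_DEBUG_VM=y a failed check dumps the page, vma, or mm
state before calling BUG(). Note the trade-off: with CONFIG_DEBUG_VM
disabled, VM_BUG_ON*() compiles the check away entirely
(BUILD_BUG_ON_INVALID only keeps the expression type-checked), so each
conversion below also removes the runtime check from non-debug builds.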
diff --git a/mm/memory.c b/mm/memory.c
index 586271f..082472f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -912,7 +912,7 @@ static inline int copy_pud_range(struct mm_struct *dst_mm, struct mm_struct *src
 	if (pud_trans_huge(*src_pud) || pud_devmap(*src_pud)) {
 		int err;
 
-		VM_BUG_ON_VMA(next-addr != HPAGE_PUD_SIZE, vma);
+		VM_BUG_ON_VMA(next - addr != HPAGE_PUD_SIZE, vma);
 		err = copy_huge_pud(dst_mm, src_mm,
 				    dst_pud, src_pud, addr, vma);
 		if (err == -ENOMEM)
@@ -1245,7 +1245,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 	pgd_t *pgd;
 	unsigned long next;
 
-	BUG_ON(addr >= end);
+	VM_BUG_ON(addr >= end);
 	tlb_start_vma(tlb, vma);
 	pgd = pgd_offset(vma->vm_mm, addr);
 	do {
@@ -1507,8 +1507,8 @@ int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
 	if (!page_count(page))
 		return -EINVAL;
 	if (!(vma->vm_flags & VM_MIXEDMAP)) {
-		BUG_ON(down_read_trylock(&vma->vm_mm->mmap_sem));
-		BUG_ON(vma->vm_flags & VM_PFNMAP);
+		VM_BUG_ON_VMA(down_read_trylock(&vma->vm_mm->mmap_sem), vma);
+		VM_BUG_ON_VMA(vma->vm_flags & VM_PFNMAP, vma);
 		vma->vm_flags |= VM_MIXEDMAP;
 	}
 	return insert_page(vma, addr, page, vma->vm_page_prot);
@@ -1679,11 +1679,12 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
 	 * consistency in testing and feature parity among all, so we should
 	 * try to keep these invariants in place for everybody.
 	 */
-	BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)));
-	BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
-						(VM_PFNMAP|VM_MIXEDMAP));
-	BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
-	BUG_ON((vma->vm_flags & VM_MIXEDMAP) && pfn_valid(pfn));
+	VM_BUG_ON_VMA(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)), vma);
+	VM_BUG_ON_VMA((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
+						(VM_PFNMAP|VM_MIXEDMAP), vma);
+	VM_BUG_ON_VMA((vma->vm_flags & VM_PFNMAP) &&
+					is_cow_mapping(vma->vm_flags), vma);
+	VM_BUG_ON_VMA((vma->vm_flags & VM_MIXEDMAP) && pfn_valid(pfn), vma);
 
 	if (addr < vma->vm_start || addr >= vma->vm_end)
 		return VM_FAULT_SIGBUS;
@@ -1987,7 +1988,7 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
 
 	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
 
-	BUG_ON(addr >= end);
+	VM_BUG_ON(addr >= end);
 	pfn -= addr >> PAGE_SHIFT;
 	pgd = pgd_offset(mm, addr);
 	flush_cache_range(vma, addr, end);
@@ -2075,7 +2076,7 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 			pte_offset_map_lock(mm, pmd, addr, &ptl);
 	}
 
-	BUG_ON(pmd_huge(*pmd));
+	VM_BUG_ON(pmd_huge(*pmd));
 
 	arch_enter_lazy_mmu_mode();
 
@@ -2102,7 +2103,7 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
 	unsigned long next;
 	int err = 0;
 
-	BUG_ON(pud_huge(*pud));
+	VM_BUG_ON(pud_huge(*pud));
 
 	if (create) {
 		pmd = pmd_alloc(mm, pud, addr);
diff --git a/mm/mmap.c b/mm/mmap.c
index 94ae183..6a0d8ad 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -3192,7 +3192,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
 	 * Similarly in do_mmap_pgoff and in do_brk.
 	 */
 	if (vma_is_anonymous(vma)) {
-		BUG_ON(vma->anon_vma);
+		VM_BUG_ON_VMA(vma->anon_vma, vma);
 		vma->vm_pgoff = vma->vm_start >> PAGE_SHIFT;
 	}
 
@@ -3550,7 +3550,7 @@ int mm_take_all_locks(struct mm_struct *mm)
 	struct vm_area_struct *vma;
 	struct anon_vma_chain *avc;
 
-	BUG_ON(down_read_trylock(&mm->mmap_sem));
+	VM_BUG_ON_MM(down_read_trylock(&mm->mmap_sem), mm);
 
 	mutex_lock(&mm_all_locks_mutex);
 
@@ -3630,8 +3630,8 @@ void mm_drop_all_locks(struct mm_struct *mm)
 	struct vm_area_struct *vma;
 	struct anon_vma_chain *avc;
 
-	BUG_ON(down_read_trylock(&mm->mmap_sem));
-	BUG_ON(!mutex_is_locked(&mm_all_locks_mutex));
+	VM_BUG_ON_MM(down_read_trylock(&mm->mmap_sem), mm);
+	VM_BUG_ON(!mutex_is_locked(&mm_all_locks_mutex));
 
 	for (vma = mm->mmap; vma; vma = vma->vm_next) {
 		if (vma->anon_vma)
diff --git a/mm/rmap.c b/mm/rmap.c
index 2df75a1..13ed1ac 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -999,7 +999,7 @@ int page_mkclean(struct page *page)
 		.invalid_vma = invalid_mkclean_vma,
 	};
 
-	BUG_ON(!PageLocked(page));
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
 	if (!page_mapped(page))
 		return 0;
@@ -1054,7 +1054,7 @@ static void __page_set_anon_rmap(struct page *page,
 {
 	struct anon_vma *anon_vma = vma->anon_vma;
 
-	BUG_ON(!anon_vma);
+	VM_BUG_ON_VMA(!anon_vma, vma);
 
 	if (PageAnon(page))
 		return;
@@ -1965,8 +1965,8 @@ void hugepage_add_anon_rmap(struct page *page,
 	struct anon_vma *anon_vma = vma->anon_vma;
 	int first;
 
-	BUG_ON(!PageLocked(page));
-	BUG_ON(!anon_vma);
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_VMA(!anon_vma, vma);
 	/* address might be in next vma when migration races vma_adjust */
 	first = atomic_inc_and_test(compound_mapcount_ptr(page));
 	if (first)
@@ -1976,7 +1976,7 @@ void hugepage_add_new_anon_rmap(struct page *page,
 			struct vm_area_struct *vma, unsigned long address)
 {
-	BUG_ON(address < vma->vm_start || address >= vma->vm_end);
+	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
 	atomic_set(compound_mapcount_ptr(page), 0);
 	if (hpage_pincount_available(page))
 		atomic_set(compound_pincount_ptr(page), 0);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 273a923..986ae8d 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2326,7 +2326,7 @@ static void destroy_swap_extents(struct swap_info_struct *sis)
 
 	if (parent) {
 		se = rb_entry(parent, struct swap_extent, rb_node);
-		BUG_ON(se->start_page + se->nr_pages != start_page);
+		VM_BUG_ON(se->start_page + se->nr_pages != start_page);
 		if (se->start_block + se->nr_pages == start_block) {
 			/* Merge it */
 			se->nr_pages += nr_pages;
@@ -2528,7 +2528,7 @@ bool has_usable_swap(void)
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
-	BUG_ON(!current->mm);
+	VM_BUG_ON(!current->mm);
 
 	pathname = getname(specialfile);
 	if (IS_ERR(pathname))
@@ -3586,7 +3586,7 @@ int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask)
 		 * but it does always reset its private field.
 		 */
 		if (!page_private(head)) {
-			BUG_ON(count & COUNT_CONTINUED);
+			VM_BUG_ON(count & COUNT_CONTINUED);
 			INIT_LIST_HEAD(&head->lru);
 			set_page_private(head, SWP_CONTINUED);
 			si->flags |= SWP_CONTINUED;
@@ -3647,7 +3647,7 @@ static bool swap_count_continued(struct swap_info_struct *si,
 
 	head = vmalloc_to_page(si->swap_map + offset);
 	if (page_private(head) != SWP_CONTINUED) {
-		BUG_ON(count & COUNT_CONTINUED);
+		VM_BUG_ON_PAGE(count & COUNT_CONTINUED, head);
 		return false;	/* need to add count continuation */
 	}
 
@@ -3666,7 +3666,7 @@ static bool swap_count_continued(struct swap_info_struct *si,
 		while (*map == (SWAP_CONT_MAX | COUNT_CONTINUED)) {
 			kunmap_atomic(map);
 			page = list_entry(page->lru.next, struct page, lru);
-			BUG_ON(page == head);
+			VM_BUG_ON_PAGE(page == head, page);
 			map = kmap_atomic(page) + offset;
 		}
 		if (*map == SWAP_CONT_MAX) {
@@ -3694,14 +3694,14 @@ static bool swap_count_continued(struct swap_info_struct *si,
 		/*
 		 * Think of how you subtract 1 from 1000
 		 */
-		BUG_ON(count != COUNT_CONTINUED);
+		VM_BUG_ON(count != COUNT_CONTINUED);
 		while (*map == COUNT_CONTINUED) {
 			kunmap_atomic(map);
 			page = list_entry(page->lru.next, struct page, lru);
-			BUG_ON(page == head);
+			VM_BUG_ON_PAGE(page == head, page);
 			map = kmap_atomic(page) + offset;
 		}
-		BUG_ON(*map == 0);
+		VM_BUG_ON(*map == 0);
 		*map -= 1;
 		if (*map == 0)
 			count = 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2e8e690..1b4bc87 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -231,7 +231,7 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
 {
 	int id = shrinker->id;
 
-	BUG_ON(id < 0);
+	VM_BUG_ON(id < 0);
 
 	down_write(&shrinker_rwsem);
 	idr_remove(&shrinker_idr, id);
@@ -854,8 +854,8 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
 	unsigned long flags;
 	int refcount;
 
-	BUG_ON(!PageLocked(page));
-	BUG_ON(mapping != page_mapping(page));
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_PAGE(mapping != page_mapping(page), page);
 
 	xa_lock_irqsave(&mapping->i_pages, flags);
 	/*