Message ID | 20210607141623.1971-1-wangbin224@huawei.com (mailing list archive)
---|---
State | New, archived
Series | mm: hugetlbfs: add hwcrp_hugepages to record memory failure on hugetlbfs
On 6/7/21 7:16 AM, wangbin wrote:
> From: Bin Wang <wangbin224@huawei.com>
>
> In the current hugetlbfs memory failure handler, reserved huge page
> counts are used to record the number of huge pages with hwpoison.

I do not believe this is an accurate statement. Naoya is the memory
error expert and may disagree, but I do not see anywhere that reserve
counts are being used to track huge pages with memory errors.

IIUC, the routine hugetlbfs_error_remove_page is called after unmapping
the page from all user mappings. The routine simply removes the page
from the page cache. This effectively removes the page from the file,
as hugetlbfs is a memory-only filesystem. The subsequent call to
hugetlb_unreserve_pages cleans up any reserve map entries associated
with the page and adjusts the reserve count if necessary. The reserve
count adjustment is based on removing the page from the file, not on
the memory error. The same adjustment would be made if the page were
hole punched from the file.

What specific problem are you trying to solve? Are you trying to see
how many huge pages were hit by memory errors?
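(For readers following along, the routine under discussion currently
reads roughly as follows. This is reconstructed from the hunk in this
series; the inode declaration above the hunk's context lines is an
assumption and may differ slightly in the actual tree.)

  static int hugetlbfs_error_remove_page(struct address_space *mapping,
                                         struct page *page)
  {
          struct inode *inode = mapping->host;  /* assumed; not shown in the hunk */
          pgoff_t index = page->index;

          /* Drop the page from the page cache, i.e. from the file. */
          remove_huge_page(page);
          if (unlikely(hugetlb_unreserve_pages(inode, index, index + 1, 1)))
                  hugetlb_fix_reserve_counts(inode);

          return 0;
  }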
Resend with new e-mail for Naoya
> What specific problem are you trying to solve? Are you trying to see
> how many huge pages were hit by memory errors?

Yes, I'd like to know how many huge pages are unavailable because of
memory errors, just like HardwareCorrupted in /proc/meminfo. But
HardwareCorrupted only grows by one base page size when a huge page is
hit by a memory error (e.g. only 4 kB for a poisoned 2 MB huge page on
x86), and it mixes huge pages with normal pages. So I think we should
add a new count to track memory errors on hugetlbfs.

--
Bin Wang
Thanks for forwarding the message, Mike.

On Mon, Jun 07, 2021 at 12:13:03PM -0700, Mike Kravetz wrote:
> Resend with new e-mail for Naoya
>
> On 6/7/21 7:16 AM, wangbin wrote:
> > From: Bin Wang <wangbin224@huawei.com>
> >
> > In the current hugetlbfs memory failure handler, reserved huge page
> > counts are used to record the number of huge pages with hwpoison.
>
> I do not believe this is an accurate statement. Naoya is the memory
> error expert and may disagree, but I do not see anywhere that reserve
> counts are being used to track huge pages with memory errors.

And Mike is right, hugetlb's reservation count is not linked to the
accounting of hwpoisoned pages.

> IIUC, the routine hugetlbfs_error_remove_page is called after
> unmapping the page from all user mappings. The routine simply removes
> the page from the page cache. This effectively removes the page from
> the file, as hugetlbfs is a memory-only filesystem. The subsequent
> call to hugetlb_unreserve_pages cleans up any reserve map entries
> associated with the page and adjusts the reserve count if necessary.
> The reserve count adjustment is based on removing the page from the
> file, not on the memory error. The same adjustment would be made if
> the page were hole punched from the file.

This logic totally makes sense to me. The unmapping done in
memory_failure() might increment the reserve count, but that just
cancels out the reservation that was consumed while the page was
mapped.

Thanks,
Naoya Horiguchi
On Tue, Jun 08, 2021 at 10:24:50AM +0800, wangbin wrote:
> > What specific problem are you trying to solve? Are you trying to
> > see how many huge pages were hit by memory errors?
>
> Yes, I'd like to know how many huge pages are unavailable because of
> memory errors, just like HardwareCorrupted in /proc/meminfo. But
> HardwareCorrupted only grows by one base page size when a huge page
> is hit by a memory error (e.g. only 4 kB for a poisoned 2 MB huge
> page on x86), and it mixes huge pages with normal pages. So I think
> we should add a new count to track memory errors on hugetlbfs.

If you can use root privilege in your use case, an easy way to get the
number of corrupted hugepages is to use page-types.c (which reads
/proc/kpageflags) like below:

  $ page-types -b huge,hwpoison=huge,hwpoison
               flags  page-count  MB  symbolic-flags                               long-symbolic-flags
  0x00000000000a8000           1   0  _______________H_G_X_______________________  compound_head,huge,hwpoison
               total           1   0

But I guess that many use cases do not permit access to this
interface, in which case some new accounting interface for corrupted
hugepages could be helpful, as you suggest.

Thanks,
Naoya Horiguchi
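(What page-types does here can be approximated with a short standalone
program. The following is a minimal sketch, not the page-types source:
the KPF_* bit numbers are taken from
include/uapi/linux/kernel-page-flags.h, and it needs the same root
privilege to read /proc/kpageflags.)

  /*
   * Count pages with both the huge and hwpoison bits set in
   * /proc/kpageflags, mirroring "page-types -b huge,hwpoison=huge,hwpoison".
   */
  #include <stdint.h>
  #include <stdio.h>

  #define KPF_HUGE     17  /* from include/uapi/linux/kernel-page-flags.h */
  #define KPF_HWPOISON 19

  int main(void)
  {
          uint64_t flags;
          uint64_t mask = (1ULL << KPF_HUGE) | (1ULL << KPF_HWPOISON);
          unsigned long count = 0;
          FILE *f = fopen("/proc/kpageflags", "rb");

          if (!f) {
                  perror("/proc/kpageflags");
                  return 1;
          }
          /* The file exposes one 64-bit flags word per PFN. */
          while (fread(&flags, sizeof(flags), 1, f) == 1)
                  if ((flags & mask) == mask)
                          count++;
          fclose(f);
          printf("hwpoisoned hugepages: %lu\n", count);
          return 0;
  }

Since hwpoison is normally flagged on the hugepage's head page, this
effectively counts poisoned hugepages, consistent with the page-count
of 1 in the output above.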
> If you can use root privilege in your use case, an easy way to get
> the number of corrupted hugepages is to use page-types.c (which
> reads /proc/kpageflags) like below:
>
>   $ page-types -b huge,hwpoison=huge,hwpoison
>                flags  page-count  MB  symbolic-flags                               long-symbolic-flags
>   0x00000000000a8000           1   0  _______________H_G_X_______________________  compound_head,huge,hwpoison
>                total           1   0
>
> But I guess that many use cases do not permit access to this
> interface, in which case some new accounting interface for corrupted
> hugepages could be helpful, as you suggest.

Thank you very much for your suggestion; this approach is helpful to
me. But as you say, root privilege is not permitted in most cases, and
I also want to know the number of corrupted hugepages per node.

--
Bin Wang
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 55efd3dd04f6..3c094f533981 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -985,8 +985,7 @@ static int hugetlbfs_error_remove_page(struct address_space *mapping,
 	pgoff_t index = page->index;
 
 	remove_huge_page(page);
-	if (unlikely(hugetlb_unreserve_pages(inode, index, index + 1, 1)))
-		hugetlb_fix_reserve_counts(inode);
+	hugetlb_fix_hwcrp_counts(page);
 
 	return 0;
 }
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index b92f25ccef58..130f244f3bef 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -153,6 +153,7 @@ void putback_active_hugepage(struct page *page);
 void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason);
 void free_huge_page(struct page *page);
 void hugetlb_fix_reserve_counts(struct inode *inode);
+void hugetlb_fix_hwcrp_counts(struct page *page);
 
 extern struct mutex *hugetlb_fault_mutex_table;
 u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
@@ -576,12 +577,14 @@ struct hstate {
 	unsigned long free_huge_pages;
 	unsigned long resv_huge_pages;
 	unsigned long surplus_huge_pages;
+	unsigned long hwcrp_huge_pages;
 	unsigned long nr_overcommit_huge_pages;
 	struct list_head hugepage_activelist;
 	struct list_head hugepage_freelists[MAX_NUMNODES];
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+	unsigned int hwcrp_huge_pages_node[MAX_NUMNODES];
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 95918f410c0f..dae91f118c18 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -761,6 +761,15 @@ void hugetlb_fix_reserve_counts(struct inode *inode)
 		pr_warn("hugetlb: Huge Page Reserved count may go negative.\n");
 }
 
+void hugetlb_fix_hwcrp_counts(struct page *page)
+{
+	struct hstate *h = &default_hstate;
+	int nid = page_to_nid(page);
+
+	h->hwcrp_huge_pages++;
+	h->hwcrp_huge_pages_node[nid]++;
+}
+
 /*
  * Count and return the number of huge pages in the reserve map
  * that intersect with the range [f, t).
@@ -3089,12 +3098,30 @@ static ssize_t surplus_hugepages_show(struct kobject *kobj,
 }
 HSTATE_ATTR_RO(surplus_hugepages);
 
+static ssize_t hwcrp_hugepages_show(struct kobject *kobj,
+					struct kobj_attribute *attr, char *buf)
+{
+	struct hstate *h;
+	unsigned long hwcrp_huge_pages;
+	int nid;
+
+	h = kobj_to_hstate(kobj, &nid);
+	if (nid == NUMA_NO_NODE)
+		hwcrp_huge_pages = h->hwcrp_huge_pages;
+	else
+		hwcrp_huge_pages = h->hwcrp_huge_pages_node[nid];
+
+	return sprintf(buf, "%lu\n", hwcrp_huge_pages);
+}
+HSTATE_ATTR_RO(hwcrp_hugepages);
+
 static struct attribute *hstate_attrs[] = {
 	&nr_hugepages_attr.attr,
 	&nr_overcommit_hugepages_attr.attr,
 	&free_hugepages_attr.attr,
 	&resv_hugepages_attr.attr,
 	&surplus_hugepages_attr.attr,
+	&hwcrp_hugepages_attr.attr,
 #ifdef CONFIG_NUMA
 	&nr_hugepages_mempolicy_attr.attr,
 #endif
@@ -3164,6 +3191,7 @@
 static struct attribute *per_node_hstate_attrs[] = {
 	&nr_hugepages_attr.attr,
 	&free_hugepages_attr.attr,
 	&surplus_hugepages_attr.attr,
+	&hwcrp_hugepages_attr.attr,
 	NULL,
 };
@@ -3657,11 +3685,13 @@ void hugetlb_report_meminfo(struct seq_file *m)
 			   "HugePages_Free:   %5lu\n"
 			   "HugePages_Rsvd:   %5lu\n"
 			   "HugePages_Surp:   %5lu\n"
+			   "HugePages_Hwcrp:  %5lu\n"
 			   "Hugepagesize:   %8lu kB\n",
 			   count,
 			   h->free_huge_pages,
 			   h->resv_huge_pages,
 			   h->surplus_huge_pages,
+			   h->hwcrp_huge_pages,
 			   huge_page_size(h) / SZ_1K);
 }
 
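(If a counter like this were merged, it would presumably surface in
three places: /sys/kernel/mm/hugepages/hugepages-<size>kB/hwcrp_hugepages
from hstate_attrs, a per-node copy under
/sys/devices/system/node/node<N>/hugepages/hugepages-<size>kB/ from
per_node_hstate_attrs, and the HugePages_Hwcrp line in /proc/meminfo
from hugetlb_report_meminfo, which would cover the non-root, per-node
use case raised earlier in the thread.)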