
[-next] mm/hugetlb_cgroup: introduce peak and rsvd.peak to v2

Message ID 20240702125728.2743143-1-xiujianfeng@huawei.com (mailing list archive)
State New

Commit Message

Xiu Jianfeng July 2, 2024, 12:57 p.m. UTC
Introduce peak and rsvd.peak to v2 to show the historical maximum
usage of resources, as in some scenarios it is necessary to configure
the value of max/rsvd.max based on the peak usage of resources.

Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com>
---
 Documentation/admin-guide/cgroup-v2.rst |  8 ++++++++
 mm/hugetlb_cgroup.c                     | 19 +++++++++++++++++++
 2 files changed, 27 insertions(+)

Comments

Andrew Morton July 3, 2024, 1:58 a.m. UTC | #1
On Tue, 2 Jul 2024 12:57:28 +0000 Xiu Jianfeng <xiujianfeng@huawei.com> wrote:

> Introduce peak and rsvd.peak to v2 to show the historical maximum
> usage of resources, as in some scenarios it is necessary to configure
> the value of max/rsvd.max based on the peak usage of resources.

"in some scenarios it is necessary" is not a strong statement.  It
would be helpful to fully describe these scenarios so that others can
better understand the value of this change.
Xiu Jianfeng July 3, 2024, 2:45 a.m. UTC | #2
On 2024/7/3 9:58, Andrew Morton wrote:
> On Tue, 2 Jul 2024 12:57:28 +0000 Xiu Jianfeng <xiujianfeng@huawei.com> wrote:
> 
>> [...]
> 
> "in some scenarios it is necessary" is not a strong statement.  It
> would be helpful to fully describe these scenarios so that others can
> better understand the value of this change.
> 

Hi Andrew,

Is the following description acceptable for you?


Since HugeTLB doesn't support page reclaim, enforcing the limit at
page fault time implies that the application will get a SIGBUS signal
if it tries to fault in HugeTLB pages beyond its limit. Therefore the
application needs to know exactly how many HugeTLB pages it uses
beforehand, and the sysadmin needs to make sure that there are enough
hugetlb pages available on the machine for all the users to avoid
processes getting SIGBUS.

When running some open-source software, it may not be possible to know
the exact amount of hugetlb it consumes, so the max value cannot be
configured correctly. If there is a peak metric, we can run the
open-source software first and then configure the max based on the peak
value.
In cgroup v1, the hugetlb controller provides the max_usage_in_bytes
and rsvd.max_usage_in_bytes interfaces to display the historical maximum
usage, so introduce peak and rsvd.peak to v2 to address this issue.
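
For illustration, a minimal userspace sketch of that configure-from-peak
workflow (the cgroup name "app", the /sys/fs/cgroup mount point, and the
2MB hugepage size are assumptions here, not part of the patch):

#include <stdio.h>
#include <stdlib.h>

#define CG "/sys/fs/cgroup/app/"

/* Read a single integer value from a cgroup interface file. */
static unsigned long long read_ull(const char *path)
{
        unsigned long long val = 0;
        FILE *f = fopen(path, "r");

        if (!f || fscanf(f, "%llu", &val) != 1) {
                perror(path);
                exit(1);
        }
        fclose(f);
        return val;
}

int main(void)
{
        /* The watermark outlives the workload, so this can run after
         * the trial run has already exited. */
        unsigned long long peak = read_ull(CG "hugetlb.2MB.peak");
        FILE *f = fopen(CG "hugetlb.2MB.max", "w");

        if (!f) {
                perror(CG "hugetlb.2MB.max");
                return 1;
        }
        /* Both files are in bytes; add headroom here if policy requires. */
        fprintf(f, "%llu\n", peak);
        fclose(f);
        printf("observed peak %llu bytes, max pinned to it\n", peak);
        return 0;
}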
Andrew Morton July 3, 2024, 8:38 p.m. UTC | #3
On Wed, 3 Jul 2024 10:45:56 +0800 xiujianfeng <xiujianfeng@huawei.com> wrote:

> [...]

Super, thanks for doing this.

It's getting late in the cycle, but the patch is simple so I'll add it
to mm-unstable for additional exposure.  Hopefully some others can
offer their thoughts on the desirability of this.
Michal Hocko July 8, 2024, 12:48 p.m. UTC | #4
On Wed 03-07-24 13:38:04, Andrew Morton wrote:
> On Wed, 3 Jul 2024 10:45:56 +0800 xiujianfeng <xiujianfeng@huawei.com> wrote:
> 
> > [...]
> > 
> > Since HugeTLB doesn't support page reclaim, enforcing the limit at
> > page fault time implies that the application will get a SIGBUS signal
> > if it tries to fault in HugeTLB pages beyond its limit. Therefore the
> > application needs to know exactly how many HugeTLB pages it uses
> > beforehand, and the sysadmin needs to make sure that there are enough
> > hugetlb pages available on the machine for all the users to avoid
> > processes getting SIGBUS.

yes, this is pretty much a definition of hugetlb.

> > When running some open-source software, it may not be possible to know
> > the exact amount of hugetlb it consumes, so the max value cannot be
> > configured correctly. If there is a peak metric, we can run the
> > open-source software first and then configure the max based on the peak
> > value.

I would push back on this. Hugetlb workloads pretty much require
knowing the number of hugetlb pages ahead of time, because you need to
preallocate them for the global hugetlb pool. What I am really missing
in the above justification is an explanation of how you know how to
configure the global pool but do not know that for a particular
cgroup. How exactly do you configure the global pool then?
Xiu Jianfeng July 8, 2024, 1:40 p.m. UTC | #5
On 2024/7/8 20:48, Michal Hocko wrote:
> [...]
> 
> I would push back on this. Hugetlb workloads pretty much require
> knowing the number of hugetlb pages ahead of time, because you need to
> preallocate them for the global hugetlb pool. What I am really missing
> in the above justification is an explanation of how you know how to
> configure the global pool but do not know that for a particular
> cgroup. How exactly do you configure the global pool then?

Yes, in this scenario it's indeed challenging to determine the
appropriate size for the global pool. Therefore, a feasible approach is
to initially configure a larger value. Once the software is running
within the container successfully, the maximum value for the container
and the size of the system's global pool can be determined based on the
peak value; otherwise, increase the size of the global pool and try
again. So I believe the peak metric is useful for this scenario.
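
For concreteness, a rough sketch of that trial-run sizing (again
assuming a hypothetical "app" cgroup and 2MB hugepages;
/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages is the sysfs
knob for the global pool):

#include <stdio.h>

#define HPAGE_SIZE (2ULL << 20) /* assumed 2MB hugepages */
#define PEAK "/sys/fs/cgroup/app/hugetlb.2MB.peak"
#define POOL "/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"

int main(void)
{
        unsigned long long peak_bytes = 0, pool_pages;
        FILE *f = fopen(PEAK, "r");

        if (!f || fscanf(f, "%llu", &peak_bytes) != 1) {
                perror(PEAK);
                return 1;
        }
        fclose(f);

        /* Shrink the deliberately oversized global pool down to what
         * the workload actually touched at its peak. */
        pool_pages = peak_bytes / HPAGE_SIZE;
        f = fopen(POOL, "w");
        if (!f) {
                perror(POOL);
                return 1;
        }
        fprintf(f, "%llu\n", pool_pages);
        fclose(f);
        printf("global pool resized to %llu pages\n", pool_pages);
        return 0;
}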
Michal Hocko July 8, 2024, 4:04 p.m. UTC | #6
On Mon 08-07-24 21:40:39, xiujianfeng wrote:
> [...]
> 
> Yes, in this scenario it's indeed challenging to determine the
> appropriate size for the global pool. Therefore, a feasible approach is
> to initially configure a larger value. Once the software is running
> within the container successfully, the maximum value for the container
> and the size of the system's global pool can be determined based on the
> peak value; otherwise, increase the size of the global pool and try
> again. So I believe the peak metric is useful for this scenario.

This sounds really backwards to me. Not that I care much about the peak
value itself; it is not really anything disruptive to add or maintain.
But this approach to configuring the system just feels completely wrong.
You shouldn't really be using the hugetlb cgroup controller if you do
not have a very specific idea about the expected, and therefore allowed,
hugetlb pool consumption.
Xiu Jianfeng July 9, 2024, 12:47 p.m. UTC | #7
On 2024/7/9 0:04, Michal Hocko wrote:
> [...]
> 
> This sounds really backwards to me. Not that I care much about the peak
> value itself; it is not really anything disruptive to add or maintain.
> But this approach to configuring the system just feels completely wrong.
> You shouldn't really be using the hugetlb cgroup controller if you do
> not have a very specific idea about the expected, and therefore allowed,
> hugetlb pool consumption.
> 

Thanks for sharing your thoughts.

Since the peak metric exists in the legacy hugetlb controller, do you
have any idea what scenario it's used for? I found it was introduced by
commit abb8206cb077 ("hugetlb/cgroup: add hugetlb cgroup control
files"); however, there is no description of the scenario there.
Michal Hocko July 9, 2024, 1:05 p.m. UTC | #8
On Tue 09-07-24 20:47:30, xiujianfeng wrote:
> [...]
> Thanks for sharing your thoughts.
> 
> Since the peak metric exists in the legacy hugetlb controller, do you
> have any idea what scenario it's used for? I found it was introduced by
> commit abb8206cb077 ("hugetlb/cgroup: add hugetlb cgroup control
> files"); however, there is no description of the scenario there.

I do not remember, but I suspect this mimics other cgroup v1
interfaces.

Patch

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index ae0fdb6fc618..97d19968230a 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -2607,6 +2607,14 @@  HugeTLB Interface Files
         hugetlb pages of <hugepagesize> in this cgroup.  Only active in
         use hugetlb pages are included.  The per-node values are in bytes.
 
+  hugetlb.<hugepagesize>.peak
+	Show historical maximum usage for "hugepagesize" hugetlb.  It exists
+        for all cgroups except the root.
+
+  hugetlb.<hugepagesize>.rsvd.peak
+	Show historical maximum usage for "hugepagesize" hugetlb reservations.
+        It exists for all cgroups except the root.
+
 Misc
 ----
 
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 4ff238ba1250..f443a56409a9 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -583,6 +583,13 @@  static int hugetlb_cgroup_read_u64_max(struct seq_file *seq, void *v)
 		else
 			seq_printf(seq, "%llu\n", val * PAGE_SIZE);
 		break;
+	case RES_RSVD_MAX_USAGE:
+		counter = &h_cg->rsvd_hugepage[idx];
+		fallthrough;
+	case RES_MAX_USAGE:
+		val = (u64)counter->watermark;
+		seq_printf(seq, "%llu\n", val * PAGE_SIZE);
+		break;
 	default:
 		BUG();
 	}
@@ -739,6 +746,18 @@  static struct cftype hugetlb_dfl_tmpl[] = {
 		.seq_show = hugetlb_cgroup_read_u64_max,
 		.flags = CFTYPE_NOT_ON_ROOT,
 	},
+	{
+		.name = "peak",
+		.private = RES_MAX_USAGE,
+		.seq_show = hugetlb_cgroup_read_u64_max,
+		.flags = CFTYPE_NOT_ON_ROOT,
+	},
+	{
+		.name = "rsvd.peak",
+		.private = RES_RSVD_MAX_USAGE,
+		.seq_show = hugetlb_cgroup_read_u64_max,
+		.flags = CFTYPE_NOT_ON_ROOT,
+	},
 	{
 		.name = "events",
 		.seq_show = hugetlb_events_show,
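
A possible userspace smoke test for the new files (not part of the
patch; it assumes the process is already attached to a non-root v2
cgroup at /sys/fs/cgroup/app, 2MB hugepages, and a non-empty global
pool):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (2UL << 20) /* one assumed 2MB hugepage */

int main(void)
{
        unsigned long long peak = 0;
        FILE *f;
        char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        memset(p, 0, LEN);      /* fault the page: the charge bumps the watermark */
        munmap(p, LEN);         /* current usage drops back; the peak must not */

        f = fopen("/sys/fs/cgroup/app/hugetlb.2MB.peak", "r");
        if (!f || fscanf(f, "%llu", &peak) != 1) {
                perror("hugetlb.2MB.peak");
                return 1;
        }
        fclose(f);
        printf("peak = %llu bytes (expect >= %lu)\n", peak, LEN);
        return 0;
}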