Message ID | 20210210071251.44084-1-linmiaohe@huawei.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm/hugetlb: optimize the surplus state transfer code in move_hugetlb_state() |
On 2/9/21 11:12 PM, Miaohe Lin wrote:
> We should not transfer the per-node surplus state when we do not cross the
> node in order to save some cpu cycles
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/hugetlb.c | 6 ++++++
>  1 file changed, 6 insertions(+)

Thanks,
I was going to comment that the usual case is migrating to another node and
old_nid != new_nid.  However, this really is workload and system configuration
dependent.

In any case, the quick check is worth potentially saving a lock/unlock cycle.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index da347047ea10..4f2c92ddbca4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5632,6 +5632,12 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
 		SetHPageTemporary(oldpage);
 		ClearHPageTemporary(newpage);
 
+		/*
+		 * There is no need to transfer the per-node surplus state
+		 * when we do not cross the node.
+		 */
+		if (new_nid == old_nid)
+			return;
 		spin_lock(&hugetlb_lock);
 		if (h->surplus_huge_pages_node[old_nid]) {
 			h->surplus_huge_pages_node[old_nid]--;
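For context, here is a rough sketch of the temporary-page branch of
move_hugetlb_state() with this patch applied. The enclosing
HPageTemporary(newpage) check, the derivation of h, old_nid and new_nid, and
the trailing increment/unlock lines are not part of the quoted hunk; they are
reconstructed from the surrounding mainline code, so treat them as assumptions
rather than a verbatim copy of the file:

	/* Sketch only, not a verbatim copy of mm/hugetlb.c. */
	struct hstate *h = page_hstate(oldpage);	/* assumed */

	if (HPageTemporary(newpage)) {			/* assumed enclosing check */
		int old_nid = page_to_nid(oldpage);	/* assumed */
		int new_nid = page_to_nid(newpage);	/* assumed */

		SetHPageTemporary(oldpage);
		ClearHPageTemporary(newpage);

		/*
		 * There is no need to transfer the per-node surplus state
		 * when we do not cross the node.
		 */
		if (new_nid == old_nid)
			return;

		spin_lock(&hugetlb_lock);
		if (h->surplus_huge_pages_node[old_nid]) {
			h->surplus_huge_pages_node[old_nid]--;
			h->surplus_huge_pages_node[new_nid]++;	/* reconstructed */
		}
		spin_unlock(&hugetlb_lock);			/* reconstructed */
	}

The point made in the review is visible here: when new_nid == old_nid the
function returns before spin_lock(&hugetlb_lock) is reached, so same-node
migrations no longer pay for the lock/unlock cycle or touch the per-node
surplus counters.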
We should not transfer the per-node surplus state when we do not cross the
node in order to save some cpu cycles

Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 mm/hugetlb.c | 6 ++++++
 1 file changed, 6 insertions(+)