From patchwork Mon Mar 25 22:33:36 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vishal Moola <vishal.moola@gmail.com>
X-Patchwork-Id: 13603119
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
 muchun.song@linux.dev, "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 2/5] hugetlb: Convert hugetlb_no_page() to use struct vm_fault
Date: Mon, 25 Mar 2024 15:33:36 -0700
Message-ID: <20240325223339.169350-3-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240325223339.169350-1-vishal.moola@gmail.com>
References: <20240325223339.169350-1-vishal.moola@gmail.com>
hugetlb_no_page() can use the struct vm_fault passed in from
hugetlb_fault(). This reduces stack usage by consolidating 7
variables into a single struct.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
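Note for reviewers (kept below the scissors so it stays out of git
history): all seven consolidated values already have slots in struct
vm_fault. idx, address, ptep, old_pte, flags, ptl and haddr become
vmf->pgoff, vmf->real_address, vmf->pte, vmf->orig_pte, vmf->flags,
vmf->ptl and vmf->address respectively; haddr maps to vmf->address
because hugetlb_fault() stores the huge-page-aligned address there and
the unmasked faulting address in vmf.real_address. As a rough
stand-alone sketch of the consolidation pattern (hypothetical
user-space names, not kernel code):

#include <stdio.h>

/*
 * Stand-alone sketch of the pattern this patch applies: a long
 * parameter list collapses into one context struct. The struct and
 * function names below are hypothetical; only the idea mirrors the
 * hugetlb_no_page() conversion.
 */
struct fault_ctx {
	unsigned long address;		/* aligned address (was haddr) */
	unsigned long real_address;	/* raw faulting address (was address) */
	unsigned long pgoff;		/* file index (was idx) */
	unsigned int flags;		/* fault flags (was flags) */
};

/* Before: handle_no_page(pgoff, address, real_address, flags, ...) */
static int handle_no_page(const struct fault_ctx *ctx)
{
	printf("fault at %#lx (aligned %#lx), pgoff %lu, flags %#x\n",
	       ctx->real_address, ctx->address, ctx->pgoff, ctx->flags);
	return 0;
}

int main(void)
{
	struct fault_ctx ctx = {
		.address	= 0x200000,
		.real_address	= 0x2010a8,
		.pgoff		= 1,
		.flags		= 1,
	};

	return handle_no_page(&ctx);
}

The kernel version additionally carries pte, orig_pte and ptl, which
have no user-space analogue here.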
 mm/hugetlb.c | 59 ++++++++++++++++++++++++++--------------------------
 1 file changed, 29 insertions(+), 30 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 81e8ade53b64..819a6d067985 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6096,9 +6096,7 @@ static bool hugetlb_pte_stable(struct hstate *h, struct mm_struct *mm,
 
 static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 			struct vm_area_struct *vma,
-			struct address_space *mapping, pgoff_t idx,
-			unsigned long address, pte_t *ptep,
-			pte_t old_pte, unsigned int flags,
+			struct address_space *mapping,
 			struct vm_fault *vmf)
 {
 	struct hstate *h = hstate_vma(vma);
@@ -6107,10 +6105,8 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	unsigned long size;
 	struct folio *folio;
 	pte_t new_pte;
-	spinlock_t *ptl;
-	unsigned long haddr = address & huge_page_mask(h);
 	bool new_folio, new_pagecache_folio = false;
-	u32 hash = hugetlb_fault_mutex_hash(mapping, idx);
+	u32 hash = hugetlb_fault_mutex_hash(mapping, vmf->pgoff);
 
 	/*
 	 * Currently, we are forced to kill the process in the event the
@@ -6129,10 +6125,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	 * before we get page_table_lock.
 	 */
 	new_folio = false;
-	folio = filemap_lock_hugetlb_folio(h, mapping, idx);
+	folio = filemap_lock_hugetlb_folio(h, mapping, vmf->pgoff);
 	if (IS_ERR(folio)) {
 		size = i_size_read(mapping->host) >> huge_page_shift(h);
-		if (idx >= size)
+		if (vmf->pgoff >= size)
 			goto out;
 		/* Check for page in userfault range */
 		if (userfaultfd_missing(vma)) {
@@ -6153,7 +6149,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 			 * never happen on the page after UFFDIO_COPY has
 			 * correctly installed the page and returned.
 			 */
-			if (!hugetlb_pte_stable(h, mm, ptep, old_pte)) {
+			if (!hugetlb_pte_stable(h, mm, vmf->pte, vmf->orig_pte)) {
 				ret = 0;
 				goto out;
 			}
@@ -6162,7 +6158,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 						VM_UFFD_MISSING);
 		}
 
-		folio = alloc_hugetlb_folio(vma, haddr, 0);
+		folio = alloc_hugetlb_folio(vma, vmf->address, 0);
 		if (IS_ERR(folio)) {
 			/*
 			 * Returning error will result in faulting task being
@@ -6176,18 +6172,20 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 			 * here.  Before returning error, get ptl and make
 			 * sure there really is no pte entry.
 			 */
-			if (hugetlb_pte_stable(h, mm, ptep, old_pte))
+			if (hugetlb_pte_stable(h, mm, vmf->pte, vmf->orig_pte))
 				ret = vmf_error(PTR_ERR(folio));
 			else
 				ret = 0;
 			goto out;
 		}
-		clear_huge_page(&folio->page, address, pages_per_huge_page(h));
+		clear_huge_page(&folio->page, vmf->real_address,
+				pages_per_huge_page(h));
 		__folio_mark_uptodate(folio);
 		new_folio = true;
 
 		if (vma->vm_flags & VM_MAYSHARE) {
-			int err = hugetlb_add_to_page_cache(folio, mapping, idx);
+			int err = hugetlb_add_to_page_cache(folio, mapping,
+							vmf->pgoff);
 			if (err) {
 				/*
 				 * err can't be -EEXIST which implies someone
@@ -6196,7 +6194,8 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 				 * to the page cache. So it's safe to call
 				 * restore_reserve_on_error() here.
 				 */
-				restore_reserve_on_error(h, vma, haddr, folio);
+				restore_reserve_on_error(h, vma, vmf->address,
+							folio);
 				folio_put(folio);
 				goto out;
 			}
@@ -6226,7 +6225,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 			folio_unlock(folio);
 			folio_put(folio);
 			/* See comment in userfaultfd_missing() block above */
-			if (!hugetlb_pte_stable(h, mm, ptep, old_pte)) {
+			if (!hugetlb_pte_stable(h, mm, vmf->pte, vmf->orig_pte)) {
 				ret = 0;
 				goto out;
 			}
@@ -6241,23 +6240,23 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	 * any allocations necessary to record that reservation occur outside
 	 * the spinlock.
 	 */
-	if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
-		if (vma_needs_reservation(h, vma, haddr) < 0) {
+	if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
+		if (vma_needs_reservation(h, vma, vmf->address) < 0) {
 			ret = VM_FAULT_OOM;
 			goto backout_unlocked;
 		}
 		/* Just decrements count, does not deallocate */
-		vma_end_reservation(h, vma, haddr);
+		vma_end_reservation(h, vma, vmf->address);
 	}
 
-	ptl = huge_pte_lock(h, mm, ptep);
+	vmf->ptl = huge_pte_lock(h, mm, vmf->pte);
 	ret = 0;
 	/* If pte changed from under us, retry */
-	if (!pte_same(huge_ptep_get(ptep), old_pte))
+	if (!pte_same(huge_ptep_get(vmf->pte), vmf->orig_pte))
 		goto backout;
 
 	if (anon_rmap)
-		hugetlb_add_new_anon_rmap(folio, vma, haddr);
+		hugetlb_add_new_anon_rmap(folio, vma, vmf->address);
 	else
 		hugetlb_add_file_rmap(folio);
 	new_pte = make_huge_pte(vma, &folio->page, ((vma->vm_flags & VM_WRITE)
@@ -6266,17 +6265,18 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	 * If this pte was previously wr-protected, keep it wr-protected even
 	 * if populated.
 	 */
-	if (unlikely(pte_marker_uffd_wp(old_pte)))
+	if (unlikely(pte_marker_uffd_wp(vmf->orig_pte)))
 		new_pte = huge_pte_mkuffd_wp(new_pte);
-	set_huge_pte_at(mm, haddr, ptep, new_pte, huge_page_size(h));
+	set_huge_pte_at(mm, vmf->address, vmf->pte, new_pte, huge_page_size(h));
 
 	hugetlb_count_add(pages_per_huge_page(h), mm);
-	if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
+	if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
 		/* Optimization, do the COW without a second fault */
-		ret = hugetlb_wp(mm, vma, address, ptep, flags, folio, ptl, vmf);
+		ret = hugetlb_wp(mm, vma, vmf->real_address, vmf->pte,
+				vmf->flags, folio, vmf->ptl, vmf);
 	}
 
-	spin_unlock(ptl);
+	spin_unlock(vmf->ptl);
 
 	/*
 	 * Only set hugetlb_migratable in newly allocated pages.  Existing pages
@@ -6293,10 +6293,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	return ret;
 
 backout:
-	spin_unlock(ptl);
+	spin_unlock(vmf->ptl);
 backout_unlocked:
 	if (new_folio && !new_pagecache_folio)
-		restore_reserve_on_error(h, vma, haddr, folio);
+		restore_reserve_on_error(h, vma, vmf->address, folio);
 
 	folio_unlock(folio);
 	folio_put(folio);
@@ -6392,8 +6392,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		 * hugetlb_no_page will drop vma lock and hugetlb fault
 		 * mutex internally, which make us return immediately.
 		 */
-		return hugetlb_no_page(mm, vma, mapping, vmf.pgoff, address,
-				       vmf.pte, vmf.orig_pte, flags, &vmf);
+		return hugetlb_no_page(mm, vma, mapping, &vmf);
 	}
 
 	ret = 0;