From patchwork Mon Mar 25 22:33:37 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vishal Moola
X-Patchwork-Id: 13603120
From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, muchun.song@linux.dev,
 "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Subject: [PATCH 3/5] hugetlb: Convert hugetlb_wp() to use struct vm_fault
Date: Mon, 25 Mar 2024 15:33:37 -0700
Message-ID: <20240325223339.169350-4-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240325223339.169350-1-vishal.moola@gmail.com>
References: <20240325223339.169350-1-vishal.moola@gmail.com>
MIME-Version: 1.0

hugetlb_wp() can use the struct vm_fault passed in from hugetlb_fault().
This alleviates the stack by consolidating 5 variables into a single
struct.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/hugetlb.c | 61 ++++++++++++++++++++++++++++++-------------------------------
 1 file changed, 30 insertions(+), 31 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 819a6d067985..107b47329b9f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5825,18 +5825,16 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
  * Keep the pte_same checks anyway to make transition from the mutex easier.
  */
 static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
-		       unsigned long address, pte_t *ptep, unsigned int flags,
-		       struct folio *pagecache_folio, spinlock_t *ptl,
+		       struct folio *pagecache_folio,
 		       struct vm_fault *vmf)
 {
-	const bool unshare = flags & FAULT_FLAG_UNSHARE;
-	pte_t pte = huge_ptep_get(ptep);
+	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
+	pte_t pte = huge_ptep_get(vmf->pte);
 	struct hstate *h = hstate_vma(vma);
 	struct folio *old_folio;
 	struct folio *new_folio;
 	int outside_reserve = 0;
 	vm_fault_t ret = 0;
-	unsigned long haddr = address & huge_page_mask(h);
 	struct mmu_notifier_range range;
 
 	/*
@@ -5859,7 +5857,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	/* Let's take out MAP_SHARED mappings first. */
 	if (vma->vm_flags & VM_MAYSHARE) {
-		set_huge_ptep_writable(vma, haddr, ptep);
+		set_huge_ptep_writable(vma, vmf->address, vmf->pte);
 		return 0;
 	}
 
@@ -5878,7 +5876,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 			SetPageAnonExclusive(&old_folio->page);
 		}
 		if (likely(!unshare))
-			set_huge_ptep_writable(vma, haddr, ptep);
+			set_huge_ptep_writable(vma, vmf->address, vmf->pte);
 
 		delayacct_wpcopy_end();
 		return 0;
@@ -5905,8 +5903,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * Drop page table lock as buddy allocator may be called. It will
 	 * be acquired again before returning to the caller, as expected.
 	 */
-	spin_unlock(ptl);
-	new_folio = alloc_hugetlb_folio(vma, haddr, outside_reserve);
+	spin_unlock(vmf->ptl);
+	new_folio = alloc_hugetlb_folio(vma, vmf->address, outside_reserve);
 
 	if (IS_ERR(new_folio)) {
 		/*
@@ -5931,19 +5929,21 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 			 *
 			 * Reacquire both after unmap operation.
 			 */
-			idx = vma_hugecache_offset(h, vma, haddr);
+			idx = vma_hugecache_offset(h, vma, vmf->address);
 			hash = hugetlb_fault_mutex_hash(mapping, idx);
 			hugetlb_vma_unlock_read(vma);
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 
-			unmap_ref_private(mm, vma, &old_folio->page, haddr);
+			unmap_ref_private(mm, vma, &old_folio->page,
+					vmf->address);
 
 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
 			hugetlb_vma_lock_read(vma);
-			spin_lock(ptl);
-			ptep = hugetlb_walk(vma, haddr, huge_page_size(h));
-			if (likely(ptep &&
-				   pte_same(huge_ptep_get(ptep), pte)))
+			spin_lock(vmf->ptl);
+			vmf->pte = hugetlb_walk(vma, vmf->address,
+					huge_page_size(h));
+			if (likely(vmf->pte &&
+				   pte_same(huge_ptep_get(vmf->pte), pte)))
 				goto retry_avoidcopy;
 			/*
 			 * race occurs while re-acquiring page table
@@ -5965,37 +5965,38 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (unlikely(ret))
 		goto out_release_all;
 
-	if (copy_user_large_folio(new_folio, old_folio, address, vma)) {
+	if (copy_user_large_folio(new_folio, old_folio, vmf->real_address, vma)) {
 		ret = VM_FAULT_HWPOISON_LARGE;
 		goto out_release_all;
 	}
 	__folio_mark_uptodate(new_folio);
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, haddr,
-				haddr + huge_page_size(h));
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, vmf->address,
+				vmf->address + huge_page_size(h));
 	mmu_notifier_invalidate_range_start(&range);
 
 	/*
 	 * Retake the page table lock to check for racing updates
 	 * before the page tables are altered
 	 */
-	spin_lock(ptl);
-	ptep = hugetlb_walk(vma, haddr, huge_page_size(h));
-	if (likely(ptep && pte_same(huge_ptep_get(ptep), pte))) {
+	spin_lock(vmf->ptl);
+	vmf->pte = hugetlb_walk(vma, vmf->address, huge_page_size(h));
+	if (likely(vmf->pte && pte_same(huge_ptep_get(vmf->pte), pte))) {
 		pte_t newpte = make_huge_pte(vma, &new_folio->page, !unshare);
 
 		/* Break COW or unshare */
-		huge_ptep_clear_flush(vma, haddr, ptep);
+		huge_ptep_clear_flush(vma, vmf->address, vmf->pte);
 		hugetlb_remove_rmap(old_folio);
-		hugetlb_add_new_anon_rmap(new_folio, vma, haddr);
+		hugetlb_add_new_anon_rmap(new_folio, vma, vmf->address);
 		if (huge_pte_uffd_wp(pte))
 			newpte = huge_pte_mkuffd_wp(newpte);
-		set_huge_pte_at(mm, haddr, ptep, newpte, huge_page_size(h));
+		set_huge_pte_at(mm, vmf->address, vmf->pte, newpte,
+				huge_page_size(h));
 		folio_set_hugetlb_migratable(new_folio);
 		/* Make the old page be freed below */
 		new_folio = old_folio;
 	}
-	spin_unlock(ptl);
+	spin_unlock(vmf->ptl);
 	mmu_notifier_invalidate_range_end(&range);
 out_release_all:
 	/*
@@ -6003,12 +6004,12 @@
 	 * unshare)
 	 */
 	if (new_folio != old_folio)
-		restore_reserve_on_error(h, vma, haddr, new_folio);
+		restore_reserve_on_error(h, vma, vmf->address, new_folio);
 	folio_put(new_folio);
 out_release_old:
 	folio_put(old_folio);
 
-	spin_lock(ptl);	/* Caller expects lock to be held */
+	spin_lock(vmf->ptl);	/* Caller expects lock to be held */
 
 	delayacct_wpcopy_end();
 	return ret;
@@ -6272,8 +6273,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	hugetlb_count_add(pages_per_huge_page(h), mm);
 	if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
 		/* Optimization, do the COW without a second fault */
-		ret = hugetlb_wp(mm, vma, vmf->real_address, vmf->pte,
-				 vmf->flags, folio, vmf->ptl, vmf);
+		ret = hugetlb_wp(mm, vma, folio, vmf);
 	}
 
 	spin_unlock(vmf->ptl);
@@ -6486,8 +6486,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	if (flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) {
 		if (!huge_pte_write(vmf.orig_pte)) {
-			ret = hugetlb_wp(mm, vma, address, vmf.pte, flags,
-					 pagecache_folio, vmf.ptl, &vmf);
+			ret = hugetlb_wp(mm, vma, pagecache_folio, &vmf);
 			goto out_put_page;
 		} else if (likely(flags & FAULT_FLAG_WRITE)) {
 			vmf.orig_pte = huge_pte_mkdirty(vmf.orig_pte);
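
For readers who want to see the calling-convention change in isolation, below is a
minimal standalone sketch of the consolidation pattern the patch applies: a long
parameter list replaced by a single fault-descriptor struct passed by pointer. It
is not part of the patch, and the names (toy_vm_fault, handle_wp_old,
handle_wp_new) are hypothetical illustrations, not kernel APIs.

#include <stdio.h>

/* Hypothetical stand-in for the kernel's struct vm_fault. */
struct toy_vm_fault {
	unsigned long address;	/* faulting address (already aligned) */
	unsigned int flags;	/* fault flags, e.g. a write/unshare bit */
	void *pte;		/* page-table entry the fault refers to */
	void *ptl;		/* lock protecting that entry */
};

/* Old style: every caller threads several loosely related values through. */
static int handle_wp_old(unsigned long address, unsigned int flags,
			 void *pte, void *ptl)
{
	printf("old: addr=%#lx flags=%#x pte=%p ptl=%p\n",
	       address, flags, pte, ptl);
	return 0;
}

/* New style: one pointer; the callee reads only the fields it needs. */
static int handle_wp_new(const struct toy_vm_fault *vmf)
{
	printf("new: addr=%#lx flags=%#x pte=%p ptl=%p\n",
	       vmf->address, vmf->flags, vmf->pte, vmf->ptl);
	return 0;
}

int main(void)
{
	int fake_pte, fake_lock;
	struct toy_vm_fault vmf = {
		.address = 0x200000UL,
		.flags   = 0x1,
		.pte     = &fake_pte,
		.ptl     = &fake_lock,
	};

	/* Same information passed both ways; only the signature changes. */
	handle_wp_old(vmf.address, vmf.flags, vmf.pte, vmf.ptl);
	handle_wp_new(&vmf);
	return 0;
}

In the patch itself, hugetlb_wp() reads vmf->address, vmf->pte, vmf->flags and
vmf->ptl from the vm_fault that hugetlb_fault() already populates on its stack,
which is what lets the address/ptep/flags/ptl parameters and the haddr local go
away.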