From patchwork Wed Jun 28 21:53:08 2023
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13296395
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Shutemov" , Andrew Morton , Andrea Arcangeli , Mike Rapoport , John Hubbard , Matthew Wilcox , Mike Kravetz , Vlastimil Babka , Yang Shi , James Houghton , Jason Gunthorpe , Lorenzo Stoakes , Hugh Dickins , peterx@redhat.com Subject: [PATCH v4 6/8] mm/gup: Retire follow_hugetlb_page() Date: Wed, 28 Jun 2023 17:53:08 -0400 Message-ID: <20230628215310.73782-7-peterx@redhat.com> X-Mailer: git-send-email 2.41.0 In-Reply-To: <20230628215310.73782-1-peterx@redhat.com> References: <20230628215310.73782-1-peterx@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: D316820024 X-Stat-Signature: eb6sffbxejtdm7pbzxesszj5x9uenh7b X-Rspam-User: X-HE-Tag: 1687989204-507762 X-HE-Meta: U2FsdGVkX1+VI181lb5nBgt+E2f58bZgN/9i5zQ9e7Hqtyx5pcF7lCWTDvETyaTiUWmYUfQQUih0q9dfJwkBgtOzcSJVg/kxhpMN+DdeNXb1sErAJEkEUEBmeqVqq0Zzk/AJe3g1+LV/KEg+FlNNBtk3wrWm72WQ4ML2NTm1reog0dycks8R8Vo69p5s4i6Otomifgg7QGKXqzQdmvjvUbYC8ZmkHDgCdloTmIgqHO0AJGvuBT7Cb2UzI8eLJ1Bi9puZO9ulUo/GsN+dtrQUBfkdQU6Ww5uBvL94Menz84/vv0sfIkBIyraV1JfxtqSEvpnkuyeGMDcigy9pDu1/9OiMeJaFVCVioQjvKnJDvccegH3sFs9/8f9D098bnBcBG6+5dalUigDp8DDgNRZxcVRl7V4qBsjma5O0rvaR+Osb/eah/Y4JoFcRwW0QimzEr/AGAkX2LXoqdMaHQsnhpG+D5SqeZI82v6jCi55ClKVXn1bLrClP7bWuW3Hl4zWdd/M0Rq9tl1Z29iBywu+XHyJ2Gjjc2lVzzROtPQWM24yQA1RcHUyTTQ41pD6us7KN9NF/bVOPKbCx6BmU/NrEsY2UEvd1BGctVuiRcSFWU3EaqCxkPCHNtkRtqwB4uDEfXrU6rjx5gnY8bSyp7JRjEP2oVOTyD8GaS+UZwo69017n4MRt3B7T1/aq/+WoiuP6yqUuhja7QKt+hXNgCmwIIdvzeyY/jwyJZwGokV/eZheyT7zf1pxTKBLaqhhsFbiz4qyQHlg7uVC/AUfS3HKX+L1CvoFBbORbEB29a2bL4uX45g5PUm/bvsCc8kUTPkHo7vk4fwNSSHsDRug/0AbBu6wbmqmJwJdIf+8h1zFxDQ/ufsxv/bGlcaOGdACBV35stH5IUmEE48eaX+y2tENEdy/XuC5KYI6ba9W5bTxxZyqRZDK5sP2qjg1Z4wKn9wXK6YA9G+g2pZ8ZlHC47BZ UW7w0QLO 2uOaU8fy9ng/C0c3Lr83pnbQ2G7/gB7Qj9kZnRHI+yGIaHRfqgLaLfQYEvxYt+Uaquo9SUsK6RMc9QvbBUgNX026sQtNoWKW4vTZvoM1ZMB5Rko3ewhfLPg0mUsN6t7rf76jYn6WewbrjvwUvr7k1U/pcwlKT2Q0jNWbRFaye+CO/rVLBpKwbyo2FIpbZPZrS0z7Eu3usIgpLzuKPKEMMDWo2hAxQR7obm6KmAhJzXUQ1vGKK/35h1l9DLq4fb4twIgNGbuTG1prlAkyz019EDpLY7J42fZfbP3yfdM/s8b/9uMrFyEYQnjVjmbfEhrr27nObE50ukt9MyZWS285s5TPq75J8ypUMmHDpMzeK8JKPnALggK1Wk36xWGJ7HeOVvSjHj7cI4OAbgOV0YKKY3oRLx3dI73g5e5sSmSYzK46UmG+Emct0HrJmJA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Now __get_user_pages() should be well prepared to handle thp completely, as long as hugetlb gup requests even without the hugetlb's special path. Time to retire follow_hugetlb_page(). Tweak misc comments to reflect reality of follow_hugetlb_page()'s removal. Acked-by: David Hildenbrand Signed-off-by: Peter Xu --- fs/userfaultfd.c | 2 +- include/linux/hugetlb.h | 12 --- mm/gup.c | 19 ---- mm/hugetlb.c | 224 ---------------------------------------- 4 files changed, 1 insertion(+), 256 deletions(-) diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index 7cecd49e078b..ae711f1d7a83 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -427,7 +427,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason) * * We also don't do userfault handling during * coredumping. 
-	 * follow_hugetlb_page() to skip missing pages in the
+	 * hugetlb_follow_page_mask() to skip missing pages in the
	 * FOLL_DUMP case, anon memory also checks for FOLL_DUMP with
	 * the no_page_table() helper in follow_page_mask(), but the
	 * shmem_vm_ops->fault method is invoked even during
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 9f282f370d96..9bc3c2d71b71 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -133,9 +133,6 @@ int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
 struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
				      unsigned long address, unsigned int flags,
				      unsigned int *page_mask);
-long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
-			 struct page **, unsigned long *, unsigned long *,
-			 long, unsigned int, int *);
 void unmap_hugepage_range(struct vm_area_struct *,
			  unsigned long, unsigned long, struct page *,
			  zap_flags_t);
@@ -305,15 +302,6 @@ static inline struct page *hugetlb_follow_page_mask(
	BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/
 }
 
-static inline long follow_hugetlb_page(struct mm_struct *mm,
-				struct vm_area_struct *vma, struct page **pages,
-				unsigned long *position, unsigned long *nr_pages,
-				long i, unsigned int flags, int *nonblocking)
-{
-	BUG();
-	return 0;
-}
-
 static inline int copy_hugetlb_page_range(struct mm_struct *dst,
					  struct mm_struct *src,
					  struct vm_area_struct *dst_vma,
diff --git a/mm/gup.c b/mm/gup.c
index 0e2b0ff1143a..a7c294de6ae5 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -775,9 +775,6 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
	 * Call hugetlb_follow_page_mask for hugetlb vmas as it will use
	 * special hugetlb page table walking code. This eliminates the
	 * need to check for hugetlb entries in the general walking code.
-	 *
-	 * hugetlb_follow_page_mask is only for follow_page() handling here.
-	 * Ordinary GUP uses follow_hugetlb_page for hugetlb processing.
	 */
	if (is_vm_hugetlb_page(vma))
		return hugetlb_follow_page_mask(vma, address, flags,
@@ -1138,22 +1135,6 @@ static long __get_user_pages(struct mm_struct *mm,
			ret = check_vma_flags(vma, gup_flags);
			if (ret)
				goto out;
-
-			if (is_vm_hugetlb_page(vma)) {
-				i = follow_hugetlb_page(mm, vma, pages,
-							&start, &nr_pages, i,
-							gup_flags, locked);
-				if (!*locked) {
-					/*
-					 * We've got a VM_FAULT_RETRY
-					 * and we've lost mmap_lock.
-					 * We must stop here.
-					 */
-					BUG_ON(gup_flags & FOLL_NOWAIT);
-					goto out;
-				}
-				continue;
-			}
		}
 retry:
		/*
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 15e82a8a2b76..2f12da409a19 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5721,7 +5721,6 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 
 /*
  * Return whether there is a pagecache page to back given address within VMA.
- * Caller follow_hugetlb_page() holds page_table_lock so we cannot lock_page.
  */
 static bool hugetlbfs_pagecache_present(struct hstate *h,
			struct vm_area_struct *vma, unsigned long address)
@@ -6422,37 +6421,6 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 }
 #endif /* CONFIG_USERFAULTFD */
 
-static void record_subpages(struct page *page, struct vm_area_struct *vma,
-			    int refs, struct page **pages)
-{
-	int nr;
-
-	for (nr = 0; nr < refs; nr++) {
-		if (likely(pages))
-			pages[nr] = nth_page(page, nr);
-	}
-}
-
-static inline bool __follow_hugetlb_must_fault(struct vm_area_struct *vma,
-					       unsigned int flags, pte_t *pte,
-					       bool *unshare)
-{
-	pte_t pteval = huge_ptep_get(pte);
-
-	*unshare = false;
-	if (is_swap_pte(pteval))
-		return true;
-	if (huge_pte_write(pteval))
-		return false;
-	if (flags & FOLL_WRITE)
-		return true;
-	if (gup_must_unshare(vma, flags, pte_page(pteval))) {
-		*unshare = true;
-		return true;
-	}
-	return false;
-}
-
 struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
				      unsigned long address, unsigned int flags,
				      unsigned int *page_mask)
@@ -6524,198 +6492,6 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
	return page;
 }
 
-long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
-			 struct page **pages, unsigned long *position,
-			 unsigned long *nr_pages, long i, unsigned int flags,
-			 int *locked)
-{
-	unsigned long pfn_offset;
-	unsigned long vaddr = *position;
-	unsigned long remainder = *nr_pages;
-	struct hstate *h = hstate_vma(vma);
-	int err = -EFAULT, refs;
-
-	while (vaddr < vma->vm_end && remainder) {
-		pte_t *pte;
-		spinlock_t *ptl = NULL;
-		bool unshare = false;
-		int absent;
-		struct page *page;
-
-		/*
-		 * If we have a pending SIGKILL, don't keep faulting pages and
-		 * potentially allocating memory.
-		 */
-		if (fatal_signal_pending(current)) {
-			remainder = 0;
-			break;
-		}
-
-		hugetlb_vma_lock_read(vma);
-		/*
-		 * Some archs (sparc64, sh*) have multiple pte_ts to
-		 * each hugepage. We have to make sure we get the
-		 * first, for the page indexing below to work.
-		 *
-		 * Note that page table lock is not held when pte is null.
-		 */
-		pte = hugetlb_walk(vma, vaddr & huge_page_mask(h),
-				   huge_page_size(h));
-		if (pte)
-			ptl = huge_pte_lock(h, mm, pte);
-		absent = !pte || huge_pte_none(huge_ptep_get(pte));
-
-		/*
-		 * When coredumping, it suits get_dump_page if we just return
-		 * an error where there's an empty slot with no huge pagecache
-		 * to back it. This way, we avoid allocating a hugepage, and
-		 * the sparse dumpfile avoids allocating disk blocks, but its
-		 * huge holes still show up with zeroes where they need to be.
-		 */
-		if (absent && (flags & FOLL_DUMP) &&
-		    !hugetlbfs_pagecache_present(h, vma, vaddr)) {
-			if (pte)
-				spin_unlock(ptl);
-			hugetlb_vma_unlock_read(vma);
-			remainder = 0;
-			break;
-		}
-
-		/*
-		 * We need call hugetlb_fault for both hugepages under migration
-		 * (in which case hugetlb_fault waits for the migration,) and
-		 * hwpoisoned hugepages (in which case we need to prevent the
-		 * caller from accessing to them.) In order to do this, we use
-		 * here is_swap_pte instead of is_hugetlb_entry_migration and
-		 * is_hugetlb_entry_hwpoisoned. This is because it simply covers
-		 * both cases, and because we can't follow correct pages
-		 * directly from any kind of swap entries.
-		 */
-		if (absent ||
-		    __follow_hugetlb_must_fault(vma, flags, pte, &unshare)) {
-			vm_fault_t ret;
-			unsigned int fault_flags = 0;
-
-			if (pte)
-				spin_unlock(ptl);
-			hugetlb_vma_unlock_read(vma);
-
-			if (flags & FOLL_WRITE)
-				fault_flags |= FAULT_FLAG_WRITE;
-			else if (unshare)
-				fault_flags |= FAULT_FLAG_UNSHARE;
-			if (locked) {
-				fault_flags |= FAULT_FLAG_ALLOW_RETRY |
-					FAULT_FLAG_KILLABLE;
-				if (flags & FOLL_INTERRUPTIBLE)
-					fault_flags |= FAULT_FLAG_INTERRUPTIBLE;
-			}
-			if (flags & FOLL_NOWAIT)
-				fault_flags |= FAULT_FLAG_ALLOW_RETRY |
-					FAULT_FLAG_RETRY_NOWAIT;
-			if (flags & FOLL_TRIED) {
-				/*
-				 * Note: FAULT_FLAG_ALLOW_RETRY and
-				 * FAULT_FLAG_TRIED can co-exist
-				 */
-				fault_flags |= FAULT_FLAG_TRIED;
-			}
-			ret = hugetlb_fault(mm, vma, vaddr, fault_flags);
-			if (ret & VM_FAULT_ERROR) {
-				err = vm_fault_to_errno(ret, flags);
-				remainder = 0;
-				break;
-			}
-			if (ret & VM_FAULT_RETRY) {
-				if (locked &&
-				    !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
-					*locked = 0;
-				*nr_pages = 0;
-				/*
-				 * VM_FAULT_RETRY must not return an
-				 * error, it will return zero
-				 * instead.
-				 *
-				 * No need to update "position" as the
-				 * caller will not check it after
-				 * *nr_pages is set to 0.
-				 */
-				return i;
-			}
-			continue;
-		}
-
-		pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
-		page = pte_page(huge_ptep_get(pte));
-
-		VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
-			       !PageAnonExclusive(page), page);
-
-		/*
-		 * If subpage information not requested, update counters
-		 * and skip the same_page loop below.
-		 */
-		if (!pages && !pfn_offset &&
-		    (vaddr + huge_page_size(h) < vma->vm_end) &&
-		    (remainder >= pages_per_huge_page(h))) {
-			vaddr += huge_page_size(h);
-			remainder -= pages_per_huge_page(h);
-			i += pages_per_huge_page(h);
-			spin_unlock(ptl);
-			hugetlb_vma_unlock_read(vma);
-			continue;
-		}
-
-		/* vaddr may not be aligned to PAGE_SIZE */
-		refs = min3(pages_per_huge_page(h) - pfn_offset, remainder,
-		    (vma->vm_end - ALIGN_DOWN(vaddr, PAGE_SIZE)) >> PAGE_SHIFT);
-
-		if (pages)
-			record_subpages(nth_page(page, pfn_offset),
-					vma, refs,
-					likely(pages) ? pages + i : NULL);
-
-		if (pages) {
-			/*
-			 * try_grab_folio() should always succeed here,
-			 * because: a) we hold the ptl lock, and b) we've just
-			 * checked that the huge page is present in the page
-			 * tables. If the huge page is present, then the tail
-			 * pages must also be present. The ptl prevents the
-			 * head page and tail pages from being rearranged in
-			 * any way. As this is hugetlb, the pages will never
-			 * be p2pdma or not longterm pinable. So this page
-			 * must be available at this point, unless the page
-			 * refcount overflowed:
-			 */
-			if (WARN_ON_ONCE(!try_grab_folio(pages[i], refs,
-							 flags))) {
-				spin_unlock(ptl);
-				hugetlb_vma_unlock_read(vma);
-				remainder = 0;
-				err = -ENOMEM;
-				break;
-			}
-		}
-
-		vaddr += (refs << PAGE_SHIFT);
-		remainder -= refs;
-		i += refs;
-
-		spin_unlock(ptl);
-		hugetlb_vma_unlock_read(vma);
-	}
-	*nr_pages = remainder;
-	/*
-	 * setting position is actually required only if remainder is
-	 * not zero but it's faster not to add a "if (remainder)"
-	 * branch.
-	 */
-	*position = vaddr;
-
-	return i ? i : err;
-}
-
 long hugetlb_change_protection(struct vm_area_struct *vma,
		unsigned long address, unsigned long end,
		pgprot_t newprot, unsigned long cp_flags)
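
Not part of the patch: below is a minimal userspace sketch for poking at the
now-unified GUP path on hugetlb memory. It is written under assumptions not
stated in the patch: a reserved hugepage pool (vm.nr_hugepages > 0), a 2M
default hugepage size, and a file named "testfile" on a filesystem that
supports O_DIRECT. The O_DIRECT read pins the destination pages through
pin_user_pages(), so the untouched hugetlb buffer is faulted in and pinned by
the common __get_user_pages() loop rather than by the removed
follow_hugetlb_page().

/* gup_hugetlb_smoke.c - hypothetical smoke test, not part of this series */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2UL << 20;	/* one 2M hugetlb page (assumed default size) */
	void *buf;
	ssize_t ret;
	int fd;

	/* Untouched MAP_HUGETLB memory: GUP has to fault the hugepage in. */
	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}

	fd = open("testfile", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open(O_DIRECT)");
		return 1;
	}

	/* Direct I/O pins the destination pages via the generic GUP path. */
	ret = pread(fd, buf, 4096, 0);
	if (ret < 0)
		perror("pread");
	else
		printf("read %zd bytes into the hugetlb buffer\n", ret);

	close(fd);
	munmap(buf, len);
	return 0;
}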