From patchwork Thu Dec 12 08:18:54 2019
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11287493
From: John Hubbard
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
    Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
    David Airlie, David S.
    Miller, Ira Weiny, Jan Kara, Jason Gunthorpe, Jens Axboe,
    Jonathan Corbet, Jérôme Glisse, Magnus Karlsson,
    Mauro Carvalho Chehab, Michael Ellerman, Michal Hocko,
    Mike Kravetz, Paul Mackerras, Shuah Khan, Vlastimil Babka,
    LKML, John Hubbard, Christoph Hellwig
Subject: [PATCH v10 02/25] mm/gup: move try_get_compound_head() to top, fix minor issues
Date: Thu, 12 Dec 2019 00:18:54 -0800
Message-ID: <20191212081917.1264184-3-jhubbard@nvidia.com>
In-Reply-To: <20191212081917.1264184-1-jhubbard@nvidia.com>
References: <20191212081917.1264184-1-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.24.0

An upcoming patch uses try_get_compound_head() more widely, so move it to
the top of gup.c. Also fix a tiny spelling error and a checkpatch.pl
warning.

Reviewed-by: Christoph Hellwig
Reviewed-by: Jan Kara
Reviewed-by: Ira Weiny
Signed-off-by: John Hubbard
---
 mm/gup.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index f764432914c4..3ecce297a47f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -29,6 +29,21 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
+/*
+ * Return the compound head page with ref appropriately incremented,
+ * or NULL if that failed.
+ */
+static inline struct page *try_get_compound_head(struct page *page, int refs)
+{
+	struct page *head = compound_head(page);
+
+	if (WARN_ON_ONCE(page_ref_count(head) < 0))
+		return NULL;
+	if (unlikely(!page_cache_add_speculative(head, refs)))
+		return NULL;
+	return head;
+}
+
 /**
  * put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
  * @pages:  array of pages to be maybe marked dirty, and definitely released.
@@ -1807,20 +1822,6 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
 	}
 }
 
-/*
- * Return the compund head page with ref appropriately incremented,
- * or NULL if that failed.
- */
-static inline struct page *try_get_compound_head(struct page *page, int refs)
-{
-	struct page *head = compound_head(page);
-	if (WARN_ON_ONCE(page_ref_count(head) < 0))
-		return NULL;
-	if (unlikely(!page_cache_add_speculative(head, refs)))
-		return NULL;
-	return head;
-}
-
 #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
 static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 			 unsigned int flags, struct page **pages, int *nr)
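
For reference, the lockless fast-path caller in gup_pte_range() uses this
helper roughly as follows (simplified sketch, not part of this patch; the
speculative reference must still be re-validated against the PTE):

	head = try_get_compound_head(page, 1);
	if (!head)
		goto pte_unmap;		/* saturated refcount, or page being freed */

	/*
	 * The reference was taken speculatively; if the PTE changed
	 * underneath us, drop the ref and fall back to the slow path.
	 */
	if (unlikely(pte_val(pte) != pte_val(*ptep))) {
		put_page(head);
		goto pte_unmap;
	}

The WARN_ON_ONCE(page_ref_count(head) < 0) case corresponds to a refcount
that has overflowed into saturation, while page_cache_add_speculative()
fails when the count has already dropped to zero, i.e. the page is
concurrently being freed; in either case the caller bails out rather than
taking a reference.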