From patchwork Wed Oct 2 14:28:04 2013
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 2975221
From: Jan Kara
To: LKML
Cc: linux-mm@kvack.org, Jan Kara, Mike Marciniszyn, Roland Dreier,
	linux-rdma@vger.kernel.org
Subject: [PATCH 23/26] ib: Convert qib_get_user_pages() to get_user_pages_unlocked()
Date: Wed, 2 Oct 2013 16:28:04 +0200
Message-Id: <1380724087-13927-24-git-send-email-jack@suse.cz>
In-Reply-To: <1380724087-13927-1-git-send-email-jack@suse.cz>
References: <1380724087-13927-1-git-send-email-jack@suse.cz>
X-Mailer: git-send-email 1.8.1.4
X-Mailing-List: linux-rdma@vger.kernel.org

Convert qib_get_user_pages() to use get_user_pages_unlocked(). This
shortens the section where we hold mmap_sem for writing and also removes
the knowledge about get_user_pages() locking from the qib driver. While
changing the code, we also fix a bug in the check against the limit on
the number of pinned pages.

CC: Mike Marciniszyn
CC: Roland Dreier
CC: linux-rdma@vger.kernel.org
Signed-off-by: Jan Kara
---
 drivers/infiniband/hw/qib/qib_user_pages.c | 62 +++++++++++++----------------
 1 file changed, 26 insertions(+), 36 deletions(-)

diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index 2bc1d2b96298..57ce83c2d1d9 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -48,39 +48,55 @@ static void __qib_release_user_pages(struct page **p, size_t num_pages,
 	}
 }
 
-/*
- * Call with current->mm->mmap_sem held.
+/**
+ * qib_get_user_pages - lock user pages into memory
+ * @start_page: the start page
+ * @num_pages: the number of pages
+ * @p: the output page structures
+ *
+ * This function takes a given start page (page aligned user virtual
+ * address) and pins it and the following specified number of pages. For
+ * now, num_pages is always 1, but that will probably change at some point
+ * (because caller is doing expected sends on a single virtually contiguous
+ * buffer, so we can do all pages at once).
  */
-static int __qib_get_user_pages(unsigned long start_page, size_t num_pages,
-				struct page **p, struct vm_area_struct **vma)
+int qib_get_user_pages(unsigned long start_page, size_t num_pages,
+		       struct page **p)
 {
 	unsigned long lock_limit;
 	size_t got;
 	int ret;
+	struct mm_struct *mm = current->mm;
 
+	down_write(&mm->mmap_sem);
 	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
 
-	if (num_pages > lock_limit && !capable(CAP_IPC_LOCK)) {
+	if (mm->pinned_vm + num_pages > lock_limit && !capable(CAP_IPC_LOCK)) {
+		up_write(&mm->mmap_sem);
 		ret = -ENOMEM;
 		goto bail;
 	}
+	mm->pinned_vm += num_pages;
+	up_write(&mm->mmap_sem);
 
 	for (got = 0; got < num_pages; got += ret) {
-		ret = get_user_pages(current, current->mm,
-				     start_page + got * PAGE_SIZE,
-				     num_pages - got, 1, 1,
-				     p + got, vma);
+		ret = get_user_pages_unlocked(current, mm,
+					      start_page + got * PAGE_SIZE,
+					      num_pages - got, 1, 1,
+					      p + got);
 		if (ret < 0)
 			goto bail_release;
 	}
 
-	current->mm->pinned_vm += num_pages;
-
 	ret = 0;
 	goto bail;
 
 bail_release:
 	__qib_release_user_pages(p, got, 0);
+	down_write(&mm->mmap_sem);
+	mm->pinned_vm -= num_pages;
+	up_write(&mm->mmap_sem);
 bail:
 	return ret;
 }
@@ -117,32 +133,6 @@ dma_addr_t qib_map_page(struct pci_dev *hwdev, struct page *page,
 	return phys;
 }
 
-/**
- * qib_get_user_pages - lock user pages into memory
- * @start_page: the start page
- * @num_pages: the number of pages
- * @p: the output page structures
- *
- * This function takes a given start page (page aligned user virtual
- * address) and pins it and the following specified number of pages. For
- * now, num_pages is always 1, but that will probably change at some point
- * (because caller is doing expected sends on a single virtually contiguous
- * buffer, so we can do all pages at once).
- */
-int qib_get_user_pages(unsigned long start_page, size_t num_pages,
-		       struct page **p)
-{
-	int ret;
-
-	down_write(&current->mm->mmap_sem);
-
-	ret = __qib_get_user_pages(start_page, num_pages, p, NULL);
-
-	up_write(&current->mm->mmap_sem);
-
-	return ret;
-}
-
 void qib_release_user_pages(struct page **p, size_t num_pages)
 {
 	if (current->mm) /* during close after signal, mm can be NULL */
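
A note on the limit-check fix above, for reviewers: the old code compared
only the size of the current request (num_pages) against the
RLIMIT_MEMLOCK-derived limit, so repeated calls could pin far more pages
than the limit allows; the new code adds mm->pinned_vm into the comparison
first. The userspace sketch below models just that accounting logic (the
pinned_vm counter, the 16-page limit, and the 8-page requests are made-up
values for illustration; this is not the kernel code and ignores the
CAP_IPC_LOCK bypass and locking):

	#include <stdio.h>
	#include <stdbool.h>

	/* Hypothetical stand-ins for mm->pinned_vm and the
	 * RLIMIT_MEMLOCK-derived lock_limit (both in pages). */
	static unsigned long pinned_vm;
	static const unsigned long lock_limit = 16;

	/* Old check: looks only at the current request, so the total
	 * pinned count can grow without bound across calls. */
	static bool old_check(unsigned long num_pages)
	{
		return num_pages <= lock_limit;
	}

	/* New check: accounts for pages already pinned, mirroring
	 * "mm->pinned_vm + num_pages > lock_limit" in the patch. */
	static bool new_check(unsigned long num_pages)
	{
		if (pinned_vm + num_pages > lock_limit)
			return false;
		pinned_vm += num_pages;
		return true;
	}

	int main(void)
	{
		int i, admitted = 0;

		/* Ten 8-page requests against a 16-page limit: the new
		 * check admits two; old_check() would admit all ten. */
		for (i = 0; i < 10; i++)
			if (new_check(8))
				admitted++;
		printf("admitted %d requests, %lu pages pinned\n",
		       admitted, pinned_vm);
		return 0;
	}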