From patchwork Sun May 17 23:56:19 2020
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11554455
From: John Hubbard
To: LKML
CC: John Hubbard, Matt Porter, Alexandre Bounine, Sumit Semwal,
    Dan Carpenter, Andrew Morton, linux-media@vger.kernel.org,
    stable@vger.kernel.org
Subject: [PATCH 1/2] rapidio: fix an error in get_user_pages_fast() error handling
Date: Sun, 17 May 2020 16:56:19 -0700
Message-ID: <20200517235620.205225-2-jhubbard@nvidia.com>
In-Reply-To: <20200517235620.205225-1-jhubbard@nvidia.com>
References: <20200517235620.205225-1-jhubbard@nvidia.com>

In the case of get_user_pages_fast() returning fewer pages than
requested, rio_dma_transfer() does not quite do the right thing. It
attempts to release all the pages that were requested, rather than just
the pages that were pinned. Fix the error handling so that only the
pages that were successfully pinned are released.
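
For illustration, here is a minimal sketch of the calling pattern the fix
enforces (a hypothetical helper, not the driver's code; the name
demo_pin_user_buffer() and its parameters are invented for this example):
when get_user_pages_fast() pins fewer pages than requested, only the pages
that were actually pinned may be released.

	#include <linux/mm.h>
	#include <linux/slab.h>

	/* Hypothetical helper, for illustration only. */
	static int demo_pin_user_buffer(unsigned long uaddr, int nr_pages,
					struct page ***pages_out)
	{
		struct page **pages;
		int pinned, i;

		pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
		if (!pages)
			return -ENOMEM;

		/* Returns the number of pages pinned, possibly < nr_pages. */
		pinned = get_user_pages_fast(uaddr, nr_pages, FOLL_WRITE, pages);
		if (pinned == nr_pages) {
			*pages_out = pages;
			return 0;
		}

		/*
		 * Partial or failed pin: release only what was actually
		 * pinned, never the full requested count.
		 */
		for (i = 0; i < pinned; i++)
			put_page(pages[i]);
		kfree(pages);
		return pinned < 0 ? pinned : -EFAULT;
	}
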
Fixes: e8de370188d0 ("rapidio: add mport char device driver")
Cc: Matt Porter
Cc: Alexandre Bounine
Cc: Sumit Semwal
Cc: Dan Carpenter
Cc: Andrew Morton
Cc: linux-media@vger.kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: John Hubbard
---
 drivers/rapidio/devices/rio_mport_cdev.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
index 8155f59ece38..10af330153b5 100644
--- a/drivers/rapidio/devices/rio_mport_cdev.c
+++ b/drivers/rapidio/devices/rio_mport_cdev.c
@@ -877,6 +877,11 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
 			rmcd_error("pinned %ld out of %ld pages",
 				   pinned, nr_pages);
 			ret = -EFAULT;
+			/*
+			 * Set nr_pages up to mean "how many pages to unpin",
+			 * in the error handler:
+			 */
+			nr_pages = pinned;
 			goto err_pg;
 		}
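
Note: this one-line change is sufficient because, as the commit description
says, the error path released pages based on the requested count; the cleanup
reached via err_pg is bounded by nr_pages, so clamping nr_pages to pinned
means only the pages that get_user_pages_fast() actually returned are
released, rather than all of the pages that were requested.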