[014/158] mm/gup.c: allow CMA migration to propagate errors back to caller

Message ID 20191201014950.8xby9TJPA%akpm@linux-foundation.org (mailing list archive)
State New, archived
Series [001/158] scripts/spelling.txt: add more spellings to spelling.txt

Commit Message

Andrew Morton Dec. 1, 2019, 1:49 a.m. UTC
From: zhong jiang <zhongjiang@huawei.com>
Subject: mm/gup.c: allow CMA migration to propagate errors back to caller

check_and_migrate_cma_pages() was recording the result of
__get_user_pages_locked() in an unsigned "nr_pages" variable.  Because
__get_user_pages_locked() returns a signed value that can include negative
errno values, this had the effect of hiding errors.
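
A minimal user-space sketch (not the kernel code) of that failure mode; fake_gup()
and its -14 (-EFAULT) return value are hypothetical stand-ins for
__get_user_pages_locked() failing:

#include <stdio.h>

/* Hypothetical stand-in for __get_user_pages_locked() returning -EFAULT. */
static long fake_gup(void)
{
	return -14;
}

int main(void)
{
	unsigned long nr_pages;	/* the old, unsigned variable */
	long ret;		/* the signed variable used by the fix */

	nr_pages = fake_gup();	/* -14 silently converts to a huge positive value */
	ret = fake_gup();

	/* The unsigned copy makes the error look like a successful pin count. */
	if (nr_pages > 0)
		printf("unsigned: looks like %lu pages were pinned\n", nr_pages);

	/* The signed copy keeps the negative errno visible to the caller. */
	if (ret < 0)
		printf("signed: error %ld is visible to the caller\n", ret);

	return 0;
}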

Change the check_and_migrate_cma_pages() implementation so that it uses a
signed variable instead, and propagates the result back to the caller just
as other gup internal functions do.

This was discovered with the help of unsigned_lesser_than_zero.cocci.

Link: http://lkml.kernel.org/r/1571671030-58029-1-git-send-email-zhongjiang@huawei.com
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Suggested-by: John Hubbard <jhubbard@nvidia.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/gup.c |    8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

Patch

--- a/mm/gup.c~mm-gup-allow-cma-migration-to-propagate-errors-back-to-caller
+++ a/mm/gup.c
@@ -1443,6 +1443,7 @@  static long check_and_migrate_cma_pages(
 	bool drain_allow = true;
 	bool migrate_allow = true;
 	LIST_HEAD(cma_page_list);
+	long ret = nr_pages;
 
 check_again:
 	for (i = 0; i < nr_pages;) {
@@ -1504,17 +1505,18 @@  check_again:
 		 * again migrating any new CMA pages which we failed to isolate
 		 * earlier.
 		 */
-		nr_pages = __get_user_pages_locked(tsk, mm, start, nr_pages,
+		ret = __get_user_pages_locked(tsk, mm, start, nr_pages,
 						   pages, vmas, NULL,
 						   gup_flags);
 
-		if ((nr_pages > 0) && migrate_allow) {
+		if ((ret > 0) && migrate_allow) {
+			nr_pages = ret;
 			drain_allow = true;
 			goto check_again;
 		}
 	}
 
-	return nr_pages;
+	return ret;
 }
 #else
 static long check_and_migrate_cma_pages(struct task_struct *tsk,