
[1/1] net/rds: Track user mapped pages through special API

Message ID 20200212030355.1600749-2-jhubbard@nvidia.com (mailing list archive)
State Not Applicable
Series net/rds: Track user mapped pages through special API

Commit Message

John Hubbard Feb. 12, 2020, 3:03 a.m. UTC
From: Leon Romanovsky <leonro@mellanox.com>

Convert net/rds to use the newly introduced pin_user_pages() API,
which properly sets FOLL_PIN. Setting FOLL_PIN is now required for
code that needs to track pinned pages.

Note that this effectively changes the code's behavior: it now
ultimately calls set_page_dirty_lock(), instead of set_page_dirty().
This is probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [1]

[1] https://lore.kernel.org/r/20190723153640.GB720@lst.de

Cc: Hans Westgaard Ry <hans.westgaard.ry@oracle.com>
Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 net/rds/rdma.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)
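
For context, here is a minimal sketch of the pinning pattern the diff below
converts net/rds to. It is illustrative only and not part of the patch: the
example_* function names and the FOLL_LONGTERM starting flag are assumptions.
Pages are pinned with pin_user_pages_fast(), which takes FOLL_PIN references,
and are later released with the unpin_user_pages*() helpers, which perform
the set_page_dirty_lock() call mentioned in the commit message.

#include <linux/mm.h>

/* Illustrative sketch only -- names and flags are assumptions, not RDS code. */
static int example_pin_pages(unsigned long user_addr, unsigned int nr_pages,
			     struct page **pages, bool write)
{
	unsigned int gup_flags = FOLL_LONGTERM;
	int ret;

	if (write)
		gup_flags |= FOLL_WRITE;

	/* pin_user_pages_fast() takes FOLL_PIN references on the pages. */
	ret = pin_user_pages_fast(user_addr, nr_pages, gup_flags, pages);
	if (ret >= 0 && ret < nr_pages) {
		/* Partial pin: drop what was pinned and report a fault. */
		unpin_user_pages(pages, ret);
		ret = -EFAULT;
	}
	return ret;
}

static void example_unpin_page(struct page *page, bool device_wrote)
{
	/*
	 * Dirty (via set_page_dirty_lock()) and unpin in one call when the
	 * device may have written to the page, e.g. after an RDMA READ.
	 */
	unpin_user_pages_dirty_lock(&page, 1, device_wrote);
}

Compared with the old get_user_pages_fast()/put_page() pairing, the pin/unpin
helpers keep the kernel's pinned-page accounting correct and dirty the pages
safely, which is the behavioral change the commit message describes.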

Comments

Santosh Shilimkar Feb. 12, 2020, 5:31 p.m. UTC | #1
On 2/11/20 7:03 PM, John Hubbard wrote:
> From: Leon Romanovsky <leonro@mellanox.com>
> 
> Convert net/rds to use the newly introduced pin_user_pages() API,
> which properly sets FOLL_PIN. Setting FOLL_PIN is now required for
> code that needs to track pinned pages.
> 
> Note that this effectively changes the code's behavior: it now
> ultimately calls set_page_dirty_lock(), instead of set_page_dirty().
> This is probably more accurate.
> 
> As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
> dealing with a file backed page where we have reference on the inode it
> hangs off." [1]
> 
> [1] https://lore.kernel.org/r/20190723153640.GB720@lst.de
> 
> Cc: Hans Westgaard Ry <hans.westgaard.ry@oracle.com>
> Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> ---
Change looks fine to me. Just to be on the safe side, we will try
to test this change with the regression suite to make sure it
works as expected.

For the patch itself,

Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Leon Romanovsky Feb. 12, 2020, 5:51 p.m. UTC | #2
On Wed, Feb 12, 2020 at 09:31:51AM -0800, santosh.shilimkar@oracle.com wrote:
> On 2/11/20 7:03 PM, John Hubbard wrote:
> > From: Leon Romanovsky <leonro@mellanox.com>
> >
> > Convert net/rds to use the newly introduced pin_user_pages() API,
> > which properly sets FOLL_PIN. Setting FOLL_PIN is now required for
> > code that needs to track pinned pages.
> >
> > Note that this effectively changes the code's behavior: it now
> > ultimately calls set_page_dirty_lock(), instead of set_page_dirty().
> > This is probably more accurate.
> >
> > As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
> > dealing with a file backed page where we have reference on the inode it
> > hangs off." [1]
> >
> > [1] https://lore.kernel.org/r/20190723153640.GB720@lst.de
> >
> > Cc: Hans Westgaard Ry <hans.westgaard.ry@oracle.com>
> > Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com>
> > Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> > Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> > ---
> Change looks fine to me. Just to be on the safe side, we will try
> to test this change with the regression suite to make sure it
> works as expected.

Thanks Santosh,
I wrote this patch before John's series was merged into the tree,
but back then Hans tested it and it worked; I hope it still does. :)

>
> For the patch itself,
>
> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
>
Santosh Shilimkar Feb. 12, 2020, 5:55 p.m. UTC | #3
On 2/12/20 9:51 AM, Leon Romanovsky wrote:
> On Wed, Feb 12, 2020 at 09:31:51AM -0800, santosh.shilimkar@oracle.com wrote:
>> On 2/11/20 7:03 PM, John Hubbard wrote:
>>> From: Leon Romanovsky <leonro@mellanox.com>
>>>
>>> Convert net/rds to use the newly introduced pin_user_pages() API,
>>> which properly sets FOLL_PIN. Setting FOLL_PIN is now required for
>>> code that needs to track pinned pages.
>>>
>>> Note that this effectively changes the code's behavior: it now
>>> ultimately calls set_page_dirty_lock(), instead of set_page_dirty().
>>> This is probably more accurate.
>>>
>>> As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
>>> dealing with a file backed page where we have reference on the inode it
>>> hangs off." [1]
>>>
>>> [1] https://lore.kernel.org/r/20190723153640.GB720@lst.de
>>>
>>> Cc: Hans Westgaard Ry <hans.westgaard.ry@oracle.com>
>>> Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com>
>>> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
>>> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
>>> ---
>> Change looks fine to me. Just to be on the safe side, we will try
>> to test this change with the regression suite to make sure it
>> works as expected.
> 
> Thanks Santosh,
> I wrote this patch before John's series was merged into the tree,
> but back then Hans tested it and it worked; I hope it still does. :)
> 
I see. I wasn't aware of that. In that case, it should be fine.

Regards,
Santosh
David Miller Feb. 17, 2020, 2:37 a.m. UTC | #4
From: John Hubbard <jhubbard@nvidia.com>
Date: Tue, 11 Feb 2020 19:03:55 -0800

> From: Leon Romanovsky <leonro@mellanox.com>
> 
> Convert net/rds to use the newly introduced pin_user_pages() API,
> which properly sets FOLL_PIN. Setting FOLL_PIN is now required for
> code that needs to track pinned pages.
> 
> Note that this effectively changes the code's behavior: it now
> ultimately calls set_page_dirty_lock(), instead of set_page_dirty().
> This is probably more accurate.
> 
> As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
> dealing with a file backed page where we have reference on the inode it
> hangs off." [1]
> 
> [1] https://lore.kernel.org/r/20190723153640.GB720@lst.de
> 
> Cc: Hans Westgaard Ry <hans.westgaard.ry@oracle.com>
> Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>

Applied, thank you.

Patch

diff --git a/net/rds/rdma.c b/net/rds/rdma.c
index 3341eee87bf9..585e6b3b69ce 100644
--- a/net/rds/rdma.c
+++ b/net/rds/rdma.c
@@ -162,10 +162,9 @@  static int rds_pin_pages(unsigned long user_addr, unsigned int nr_pages,
 	if (write)
 		gup_flags |= FOLL_WRITE;
 
-	ret = get_user_pages_fast(user_addr, nr_pages, gup_flags, pages);
+	ret = pin_user_pages_fast(user_addr, nr_pages, gup_flags, pages);
 	if (ret >= 0 && ret < nr_pages) {
-		while (ret--)
-			put_page(pages[ret]);
+		unpin_user_pages(pages, ret);
 		ret = -EFAULT;
 	}
 
@@ -300,8 +299,7 @@  static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args,
 		 * to release anything.
 		 */
 		if (!need_odp) {
-			for (i = 0 ; i < nents; i++)
-				put_page(sg_page(&sg[i]));
+			unpin_user_pages(pages, nr_pages);
 			kfree(sg);
 		}
 		ret = PTR_ERR(trans_private);
@@ -325,7 +323,12 @@  static int __rds_rdma_map(struct rds_sock *rs, struct rds_get_mr_args *args,
 	if (cookie_ret)
 		*cookie_ret = cookie;
 
-	if (args->cookie_addr && put_user(cookie, (u64 __user *)(unsigned long) args->cookie_addr)) {
+	if (args->cookie_addr &&
+	    put_user(cookie, (u64 __user *)(unsigned long)args->cookie_addr)) {
+		if (!need_odp) {
+			unpin_user_pages(pages, nr_pages);
+			kfree(sg);
+		}
 		ret = -EFAULT;
 		goto out;
 	}
@@ -496,9 +499,7 @@  void rds_rdma_free_op(struct rm_rdma_op *ro)
 			 * is the case for a RDMA_READ which copies from remote
 			 * to local memory
 			 */
-			if (!ro->op_write)
-				set_page_dirty(page);
-			put_page(page);
+			unpin_user_pages_dirty_lock(&page, 1, !ro->op_write);
 		}
 	}
 
@@ -515,8 +516,7 @@  void rds_atomic_free_op(struct rm_atomic_op *ao)
 	/* Mark page dirty if it was possibly modified, which
 	 * is the case for a RDMA_READ which copies from remote
 	 * to local memory */
-	set_page_dirty(page);
-	put_page(page);
+	unpin_user_pages_dirty_lock(&page, 1, true);
 
 	kfree(ao->op_notifier);
 	ao->op_notifier = NULL;
@@ -944,7 +944,7 @@  int rds_cmsg_atomic(struct rds_sock *rs, struct rds_message *rm,
 	return ret;
 err:
 	if (page)
-		put_page(page);
+		unpin_user_page(page);
 	rm->atomic.op_active = 0;
 	kfree(rm->atomic.op_notifier);