Message ID | 20191121071354.456618-7-jhubbard@nvidia.com (mailing list archive)
---|---
State | New, archived
Series | mm/gup: track dma-pinned pages: FOLL_PIN
On 11/21/19 12:08 AM, Christoph Hellwig wrote:
> On Wed, Nov 20, 2019 at 11:13:36PM -0800, John Hubbard wrote:
>> +static int pin_goldfish_pages(unsigned long first_page,
>> +			      unsigned long last_page,
>> +			      unsigned int last_page_size,
>> +			      int is_write,
>> +			      struct page *pages[MAX_BUFFERS_PER_COMMAND],
>> +			      unsigned int *iter_last_page_size)
>
> Why not goldfish_pin_pages? Normally we put the module / subsystem
> in front.

Heh, is that how it's supposed to go? Sure, I'll change it. :)

> Also can we get this queued up for 5.5 to get some trivial changes
> out of the way?

Is that a question to Andrew, or a request for me to send this as a
separate patch email (or both)?

thanks,
```diff
diff --git a/drivers/platform/goldfish/goldfish_pipe.c b/drivers/platform/goldfish/goldfish_pipe.c
index cef0133aa47a..7ed2a21a0bac 100644
--- a/drivers/platform/goldfish/goldfish_pipe.c
+++ b/drivers/platform/goldfish/goldfish_pipe.c
@@ -257,12 +257,12 @@ static int goldfish_pipe_error_convert(int status)
 	}
 }
 
-static int pin_user_pages(unsigned long first_page,
-			  unsigned long last_page,
-			  unsigned int last_page_size,
-			  int is_write,
-			  struct page *pages[MAX_BUFFERS_PER_COMMAND],
-			  unsigned int *iter_last_page_size)
+static int pin_goldfish_pages(unsigned long first_page,
+			      unsigned long last_page,
+			      unsigned int last_page_size,
+			      int is_write,
+			      struct page *pages[MAX_BUFFERS_PER_COMMAND],
+			      unsigned int *iter_last_page_size)
 {
 	int ret;
 	int requested_pages = ((last_page - first_page) >> PAGE_SHIFT) + 1;
@@ -354,9 +354,9 @@ static int transfer_max_buffers(struct goldfish_pipe *pipe,
 	if (mutex_lock_interruptible(&pipe->lock))
 		return -ERESTARTSYS;
 
-	pages_count = pin_user_pages(first_page, last_page,
-				     last_page_size, is_write,
-				     pipe->pages, &iter_last_page_size);
+	pages_count = pin_goldfish_pages(first_page, last_page,
+					 last_page_size, is_write,
+					 pipe->pages, &iter_last_page_size);
 	if (pages_count < 0) {
 		mutex_unlock(&pipe->lock);
 		return pages_count;
```
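For background on why the driver's local helper has to move out of the way at all: later patches in this series add a kernel-wide `pin_user_pages()` API to `include/linux/mm.h` (the FOLL_PIN tracking the series title refers to), so a driver-private static function with that exact name would collide with the new declaration. Below is a minimal sketch of how a driver might use the mm-level pin/unpin pair. The four-argument prototype and the `unpin_user_pages()` name reflect the form the API eventually took in mainline, not necessarily this revision of the series, so treat the exact signatures as assumptions; `example_dma_to_user_buffer()` is a hypothetical helper, not code from this patch.

```c
/*
 * Hedged sketch only: using the mm-level FOLL_PIN API that this series
 * introduces. Prototypes match the mainline form, which has changed over
 * time (earlier revisions took an extra vmas argument, and the release
 * side was once named put_user_pages()).
 */
#include <linux/mm.h>

static int example_dma_to_user_buffer(unsigned long start,
				      unsigned long nr_pages,
				      int is_write,
				      struct page **pages)
{
	long pinned;

	/*
	 * FOLL_WRITE is needed when the device will write into the user
	 * buffer, i.e. when the user side is reading from the device
	 * (the same flag logic the goldfish driver uses).
	 */
	pinned = pin_user_pages(start, nr_pages,
				is_write ? 0 : FOLL_WRITE, pages);
	if (pinned < 0)
		return pinned;	/* -EFAULT, -ENOMEM, ... */

	/* ... program the DMA transfer against 'pages' here ... */

	/* Every successful pin_user_pages() must be balanced by an unpin. */
	unpin_user_pages(pages, pinned);
	return pinned == nr_pages ? 0 : -EFAULT;
}
```

This is exactly the collision the rename protects against: once `pin_user_pages()` is a global symbol, the goldfish driver's private helper needs its own, prefixed name, whatever the final spelling.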