From patchwork Mon Jul 21 15:45:53 2014
X-Patchwork-Submitter: Pavel Shilovsky
X-Patchwork-Id: 4597101
From: Pavel Shilovsky
To: linux-cifs@vger.kernel.org
Subject: [PATCH v3 11/16] CIFS: Separate page search from readpages
Date: Mon, 21 Jul 2014 19:45:53 +0400
Message-Id: <1405957558-18476-12-git-send-email-pshilovsky@samba.org>
In-Reply-To: <1405957558-18476-1-git-send-email-pshilovsky@samba.org>
References: <1405957558-18476-1-git-send-email-pshilovsky@samba.org>

Signed-off-by: Pavel Shilovsky
Reviewed-by: Shirish Pargaonkar
---
 fs/cifs/file.c | 107 ++++++++++++++++++++++++++++++++-------------------------
 1 file changed, 61 insertions(+), 46 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index c79bdf3..bec48f1 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -3340,6 +3340,63 @@ cifs_readpages_read_into_pages(struct TCP_Server_Info *server,
 	return total_read > 0 && result != -EAGAIN ?
 						total_read : result;
 }
 
+static int
+readpages_get_pages(struct address_space *mapping, struct list_head *page_list,
+		    unsigned int rsize, struct list_head *tmplist,
+		    unsigned int *nr_pages, loff_t *offset, unsigned int *bytes)
+{
+	struct page *page, *tpage;
+	unsigned int expected_index;
+	int rc;
+
+	page = list_entry(page_list->prev, struct page, lru);
+
+	/*
+	 * Lock the page and put it in the cache. Since no one else
+	 * should have access to this page, we're safe to simply set
+	 * PG_locked without checking it first.
+	 */
+	__set_page_locked(page);
+	rc = add_to_page_cache_locked(page, mapping,
+				      page->index, GFP_KERNEL);
+
+	/* give up if we can't stick it in the cache */
+	if (rc) {
+		__clear_page_locked(page);
+		return rc;
+	}
+
+	/* move first page to the tmplist */
+	*offset = (loff_t)page->index << PAGE_CACHE_SHIFT;
+	*bytes = PAGE_CACHE_SIZE;
+	*nr_pages = 1;
+	list_move_tail(&page->lru, tmplist);
+
+	/* now try and add more pages onto the request */
+	expected_index = page->index + 1;
+	list_for_each_entry_safe_reverse(page, tpage, page_list, lru) {
+		/* discontinuity ? */
+		if (page->index != expected_index)
+			break;
+
+		/* would this page push the read over the rsize? */
+		if (*bytes + PAGE_CACHE_SIZE > rsize)
+			break;
+
+		__set_page_locked(page);
+		if (add_to_page_cache_locked(page, mapping, page->index,
+					     GFP_KERNEL)) {
+			__clear_page_locked(page);
+			break;
+		}
+		list_move_tail(&page->lru, tmplist);
+		(*bytes) += PAGE_CACHE_SIZE;
+		expected_index++;
+		(*nr_pages)++;
+	}
+	return rc;
+}
+
 static int cifs_readpages(struct file *file, struct address_space *mapping,
 	struct list_head *page_list, unsigned num_pages)
 {
@@ -3394,57 +3451,15 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	 * the rdata->pages, then we want them in increasing order.
 	 */
 	while (!list_empty(page_list)) {
-		unsigned int i;
-		unsigned int bytes = PAGE_CACHE_SIZE;
-		unsigned int expected_index;
-		unsigned int nr_pages = 1;
+		unsigned int i, nr_pages, bytes;
 		loff_t offset;
 		struct page *page, *tpage;
 		struct cifs_readdata *rdata;
 
-		page = list_entry(page_list->prev, struct page, lru);
-
-		/*
-		 * Lock the page and put it in the cache. Since no one else
-		 * should have access to this page, we're safe to simply set
-		 * PG_locked without checking it first.
-		 */
-		__set_page_locked(page);
-		rc = add_to_page_cache_locked(page, mapping,
-					      page->index, GFP_KERNEL);
-
-		/* give up if we can't stick it in the cache */
-		if (rc) {
-			__clear_page_locked(page);
+		rc = readpages_get_pages(mapping, page_list, rsize, &tmplist,
+					 &nr_pages, &offset, &bytes);
+		if (rc)
 			break;
-		}
-
-		/* move first page to the tmplist */
-		offset = (loff_t)page->index << PAGE_CACHE_SHIFT;
-		list_move_tail(&page->lru, &tmplist);
-
-		/* now try and add more pages onto the request */
-		expected_index = page->index + 1;
-		list_for_each_entry_safe_reverse(page, tpage, page_list, lru) {
-			/* discontinuity ? */
-			if (page->index != expected_index)
-				break;
-
-			/* would this page push the read over the rsize? */
-			if (bytes + PAGE_CACHE_SIZE > rsize)
-				break;
-
-			__set_page_locked(page);
-			if (add_to_page_cache_locked(page, mapping,
-						page->index, GFP_KERNEL)) {
-				__clear_page_locked(page);
-				break;
-			}
-			list_move_tail(&page->lru, &tmplist);
-			bytes += PAGE_CACHE_SIZE;
-			expected_index++;
-			nr_pages++;
-		}
 
 		rdata = cifs_readdata_alloc(nr_pages, cifs_readv_complete);
 		if (!rdata) {