From patchwork Fri Jun 27 09:57:49 2014
X-Patchwork-Submitter: Pavel Shilovsky
X-Patchwork-Id: 4434381
From: Pavel Shilovsky
To: linux-cifs@vger.kernel.org
Subject: [PATCH v2 12/16] CIFS: Fix rsize usage in readpages
Date: Fri, 27 Jun 2014 13:57:49 +0400
Message-Id: <1403863073-19526-13-git-send-email-pshilovsky@samba.org>
In-Reply-To: <1403863073-19526-1-git-send-email-pshilovsky@samba.org>
References: <1403863073-19526-1-git-send-email-pshilovsky@samba.org>

If a server changes the maximum buffer size for read (rsize) requests on reconnect, we can fail in readpages by repeating a request with a buffer that is now too big when -EAGAIN is returned. Fix this by re-checking rsize every time before repeating a request.
Signed-off-by: Pavel Shilovsky
---
 fs/cifs/file.c | 41 ++++++++++++++++++++++++++---------------
 1 file changed, 26 insertions(+), 15 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index ee7c547..5f3fa33 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -3348,6 +3348,8 @@ readpages_get_pages(struct address_space *mapping, struct list_head *page_list,
 	unsigned int expected_index;
 	int rc;
 
+	INIT_LIST_HEAD(tmplist);
+
 	page = list_entry(page_list->prev, struct page, lru);
 
 	/*
@@ -3403,19 +3405,10 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	struct list_head tmplist;
 	struct cifsFileInfo *open_file = file->private_data;
 	struct cifs_sb_info *cifs_sb = CIFS_SB(file->f_path.dentry->d_sb);
-	unsigned int rsize = cifs_sb->rsize;
+	struct TCP_Server_Info *server;
 	pid_t pid;
 
 	/*
-	 * Give up immediately if rsize is too small to read an entire page.
-	 * The VFS will fall back to readpage. We should never reach this
-	 * point however since we set ra_pages to 0 when the rsize is smaller
-	 * than a cache page.
-	 */
-	if (unlikely(rsize < PAGE_CACHE_SIZE))
-		return 0;
-
-	/*
 	 * Reads as many pages as possible from fscache. Returns -ENOBUFS
 	 * immediately if the cookie is negative
 	 *
@@ -3433,7 +3426,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	pid = current->tgid;
 	rc = 0;
-	INIT_LIST_HEAD(&tmplist);
+	server = tlink_tcon(open_file->tlink)->ses->server;
 
 	cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n",
 		 __func__, file, mapping, num_pages);
@@ -3455,8 +3448,17 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		struct page *page, *tpage;
 		struct cifs_readdata *rdata;
 
-		rc = readpages_get_pages(mapping, page_list, rsize, &tmplist,
-					 &nr_pages, &offset, &bytes);
+		/*
+		 * Give up immediately if rsize is too small to read an entire
+		 * page. The VFS will fall back to readpage. We should never
+		 * reach this point however since we set ra_pages to 0 when the
+		 * rsize is smaller than a cache page.
+		 */
+		if (unlikely(cifs_sb->rsize < PAGE_CACHE_SIZE))
+			return 0;
+
+		rc = readpages_get_pages(mapping, page_list, cifs_sb->rsize,
+					 &tmplist, &nr_pages, &offset, &bytes);
 		if (rc)
 			break;
@@ -3486,15 +3488,24 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 			rdata->pages[rdata->nr_pages++] = page;
 		}
 
-		rc = cifs_retry_async_readv(rdata);
-		if (rc != 0) {
+		if (!rdata->cfile->invalidHandle ||
+		    !cifs_reopen_file(rdata->cfile, true))
+			rc = server->ops->async_readv(rdata);
+		if (rc) {
 			for (i = 0; i < rdata->nr_pages; i++) {
 				page = rdata->pages[i];
 				lru_cache_add_file(page);
 				unlock_page(page);
 				page_cache_release(page);
+				if (rc == -EAGAIN)
+					list_add_tail(&page->lru, &tmplist);
 			}
 			kref_put(&rdata->refcount, cifs_readdata_release);
+			if (rc == -EAGAIN) {
+				/* Re-add pages to the page_list and retry */
+				list_splice(&tmplist, page_list);
+				continue;
+			}
 			break;
 		}