From patchwork Mon Jul 21 15:45:54 2014
X-Patchwork-Submitter: Pavel Shilovsky
X-Patchwork-Id: 4597081
From: Pavel Shilovsky
To: linux-cifs@vger.kernel.org
Subject: [PATCH v3 12/16] CIFS: Fix rsize usage in readpages
Date: Mon, 21 Jul 2014 19:45:54 +0400
Message-Id: <1405957558-18476-13-git-send-email-pshilovsky@samba.org>
In-Reply-To: <1405957558-18476-1-git-send-email-pshilovsky@samba.org>
References: <1405957558-18476-1-git-send-email-pshilovsky@samba.org>
X-Mailing-List: linux-cifs@vger.kernel.org

If the server changes the maximum buffer size for read (rsize) requests
on reconnect, readpages can fail by retrying a request with a buffer
that is now too large after receiving -EAGAIN. Fix this by re-checking
rsize before every request rather than only once on entry.
Signed-off-by: Pavel Shilovsky
Reviewed-by: Shirish Pargaonkar
---
 fs/cifs/file.c | 41 ++++++++++++++++++++++++++---------------
 1 file changed, 26 insertions(+), 15 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index bec48f1..d627918 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -3349,6 +3349,8 @@ readpages_get_pages(struct address_space *mapping, struct list_head *page_list,
 	unsigned int expected_index;
 	int rc;
 
+	INIT_LIST_HEAD(tmplist);
+
 	page = list_entry(page_list->prev, struct page, lru);
 
 	/*
@@ -3404,19 +3406,10 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	struct list_head tmplist;
 	struct cifsFileInfo *open_file = file->private_data;
 	struct cifs_sb_info *cifs_sb = CIFS_SB(file->f_path.dentry->d_sb);
-	unsigned int rsize = cifs_sb->rsize;
+	struct TCP_Server_Info *server;
 	pid_t pid;
 
 	/*
-	 * Give up immediately if rsize is too small to read an entire page.
-	 * The VFS will fall back to readpage. We should never reach this
-	 * point however since we set ra_pages to 0 when the rsize is smaller
-	 * than a cache page.
-	 */
-	if (unlikely(rsize < PAGE_CACHE_SIZE))
-		return 0;
-
-	/*
 	 * Reads as many pages as possible from fscache. Returns -ENOBUFS
 	 * immediately if the cookie is negative
 	 *
@@ -3434,7 +3427,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	pid = current->tgid;
 	rc = 0;
-	INIT_LIST_HEAD(&tmplist);
+	server = tlink_tcon(open_file->tlink)->ses->server;
 
 	cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n",
 		 __func__, file, mapping, num_pages);
@@ -3456,8 +3449,17 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		struct page *page, *tpage;
 		struct cifs_readdata *rdata;
 
-		rc = readpages_get_pages(mapping, page_list, rsize, &tmplist,
-					 &nr_pages, &offset, &bytes);
+		/*
+		 * Give up immediately if rsize is too small to read an entire
+		 * page. The VFS will fall back to readpage. We should never
+		 * reach this point however since we set ra_pages to 0 when the
+		 * rsize is smaller than a cache page.
+		 */
+		if (unlikely(cifs_sb->rsize < PAGE_CACHE_SIZE))
+			return 0;
+
+		rc = readpages_get_pages(mapping, page_list, cifs_sb->rsize,
+					 &tmplist, &nr_pages, &offset, &bytes);
 		if (rc)
 			break;
 
@@ -3487,15 +3489,24 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 			rdata->pages[rdata->nr_pages++] = page;
 		}
 
-		rc = cifs_retry_async_readv(rdata);
-		if (rc != 0) {
+		if (!rdata->cfile->invalidHandle ||
+		    !cifs_reopen_file(rdata->cfile, true))
+			rc = server->ops->async_readv(rdata);
+		if (rc) {
 			for (i = 0; i < rdata->nr_pages; i++) {
 				page = rdata->pages[i];
 				lru_cache_add_file(page);
 				unlock_page(page);
 				page_cache_release(page);
+				if (rc == -EAGAIN)
+					list_add_tail(&page->lru, &tmplist);
 			}
 			kref_put(&rdata->refcount, cifs_readdata_release);
+			if (rc == -EAGAIN) {
+				/* Re-add pages to the page_list and retry */
+				list_splice(&tmplist, page_list);
+				continue;
+			}
 			break;
 		}