From patchwork Fri Jun 27 09:57:53 2014
X-Patchwork-Submitter: Pavel Shilovsky
X-Patchwork-Id: 4434421
From: Pavel Shilovsky
To: linux-cifs@vger.kernel.org
Subject: [PATCH v2 16/16] CIFS: Use multicredits for SMB 2.1/3 reads
Date: Fri, 27 Jun 2014 13:57:53 +0400
Message-Id: <1403863073-19526-17-git-send-email-pshilovsky@samba.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1403863073-19526-1-git-send-email-pshilovsky@samba.org>
References: <1403863073-19526-1-git-send-email-pshilovsky@samba.org>
X-Mailing-List: linux-cifs@vger.kernel.org

If we negotiate SMB 2.1 or a higher version of the protocol and the server
supports a large read buffer size, we need to consume 1 credit per 65536
bytes. So we need to know how many credits we have and obtain the required
number of them before constructing a readdata structure in readpages and
user read.
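
To illustrate the credit arithmetic: the charge for a single read request is
the number of 65536-byte chunks it covers, rounded up, which is what the
DIV_ROUND_UP(rdata->bytes, SMB2_MAX_BUFFER_SIZE) call in smb2_async_readv()
below computes. A minimal stand-alone sketch of that calculation (the helper
name smb2_read_credit_charge() is made up for illustration and is not part of
this patch; SMB2_MAX_BUFFER_SIZE is redefined locally so the program compiles
on its own):

#include <stdio.h>

#define SMB2_MAX_BUFFER_SIZE 65536	/* bytes covered by a single credit */

/* Illustrative only: credits consumed by a read of 'len' bytes. */
static unsigned int smb2_read_credit_charge(unsigned int len)
{
	/* round up: a partial 64K chunk still costs a full credit */
	return (len + SMB2_MAX_BUFFER_SIZE - 1) / SMB2_MAX_BUFFER_SIZE;
}

int main(void)
{
	printf("%u\n", smb2_read_credit_charge(4096));		/* 1  */
	printf("%u\n", smb2_read_credit_charge(65536));		/* 1  */
	printf("%u\n", smb2_read_credit_charge(1048576));	/* 16 */
	return 0;
}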
Signed-off-by: Pavel Shilovsky
---
 fs/cifs/cifsglob.h |  1 +
 fs/cifs/file.c     | 35 ++++++++++++++++++++++++++++-------
 fs/cifs/smb2ops.c  |  2 --
 fs/cifs/smb2pdu.c  | 30 +++++++++++++++++++++++++++---
 4 files changed, 56 insertions(+), 12 deletions(-)

diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
index 54ca2b9..f33ff4c 100644
--- a/fs/cifs/cifsglob.h
+++ b/fs/cifs/cifsglob.h
@@ -1068,6 +1068,7 @@ struct cifs_readdata {
 	struct kvec			iov;
 	unsigned int			pagesz;
 	unsigned int			tailsz;
+	unsigned int			credits;
 	unsigned int			nr_pages;
 	struct page			*pages[];
 };
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 36fd042..d5f66ba 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2916,7 +2916,7 @@ cifs_send_async_read(loff_t offset, size_t len, struct cifsFileInfo *open_file,
 		     struct cifs_sb_info *cifs_sb, struct list_head *rdata_list)
 {
 	struct cifs_readdata *rdata;
-	unsigned int npages;
+	unsigned int npages, rsize, credits;
 	size_t cur_len;
 	int rc;
 	pid_t pid;
@@ -2930,13 +2930,19 @@ cifs_send_async_read(loff_t offset, size_t len, struct cifsFileInfo *open_file,
 		pid = current->tgid;
 
 	do {
-		cur_len = min_t(const size_t, len, cifs_sb->rsize);
+		rc = server->ops->wait_mtu_credits(server, cifs_sb->rsize,
+						   &rsize, &credits);
+		if (rc)
+			break;
+
+		cur_len = min_t(const size_t, len, rsize);
 		npages = DIV_ROUND_UP(cur_len, PAGE_SIZE);
 
 		/* allocate a readdata struct */
 		rdata = cifs_readdata_alloc(npages,
 					    cifs_uncached_readv_complete);
 		if (!rdata) {
+			add_credits_and_wake_if(server, credits, 0);
 			rc = -ENOMEM;
 			break;
 		}
@@ -2952,12 +2958,14 @@ cifs_send_async_read(loff_t offset, size_t len, struct cifsFileInfo *open_file,
 		rdata->pid = pid;
 		rdata->pagesz = PAGE_SIZE;
 		rdata->read_into_pages = cifs_uncached_read_into_pages;
+		rdata->credits = credits;
 
 		if (!rdata->cfile->invalidHandle ||
 		    !cifs_reopen_file(rdata->cfile, true))
 			rc = server->ops->async_readv(rdata);
 error:
 		if (rc) {
+			add_credits_and_wake_if(server, rdata->credits, 0);
 			kref_put(&rdata->refcount,
 				 cifs_uncached_readdata_release);
 			if (rc == -EAGAIN)
@@ -3457,10 +3465,16 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	 * the rdata->pages, then we want them in increasing order.
 	 */
 	while (!list_empty(page_list)) {
-		unsigned int i, nr_pages, bytes;
+		unsigned int i, nr_pages, bytes, rsize;
 		loff_t offset;
 		struct page *page, *tpage;
 		struct cifs_readdata *rdata;
+		unsigned credits;
+
+		rc = server->ops->wait_mtu_credits(server, cifs_sb->rsize,
+						   &rsize, &credits);
+		if (rc)
+			break;
 
 		/*
 		 * Give up immediately if rsize is too small to read an entire
@@ -3468,13 +3482,17 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		 * reach this point however since we set ra_pages to 0 when the
 		 * rsize is smaller than a cache page.
 		 */
-		if (unlikely(cifs_sb->rsize < PAGE_CACHE_SIZE))
+		if (unlikely(rsize < PAGE_CACHE_SIZE)) {
+			add_credits_and_wake_if(server, credits, 0);
 			return 0;
+		}
 
-		rc = readpages_get_pages(mapping, page_list, cifs_sb->rsize,
-					 &tmplist, &nr_pages, &offset, &bytes);
-		if (rc)
+		rc = readpages_get_pages(mapping, page_list, rsize, &tmplist,
+					 &nr_pages, &offset, &bytes);
+		if (rc) {
+			add_credits_and_wake_if(server, credits, 0);
 			break;
+		}
 
 		rdata = cifs_readdata_alloc(nr_pages, cifs_readv_complete);
 		if (!rdata) {
@@ -3486,6 +3504,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 				page_cache_release(page);
 			}
 			rc = -ENOMEM;
+			add_credits_and_wake_if(server, credits, 0);
 			break;
 		}
 
@@ -3496,6 +3515,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		rdata->pid = pid;
 		rdata->pagesz = PAGE_CACHE_SIZE;
 		rdata->read_into_pages = cifs_readpages_read_into_pages;
+		rdata->credits = credits;
 
 		list_for_each_entry_safe(page, tpage, &tmplist, lru) {
 			list_del(&page->lru);
@@ -3506,6 +3526,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		    !cifs_reopen_file(rdata->cfile, true))
 			rc = server->ops->async_readv(rdata);
 		if (rc) {
+			add_credits_and_wake_if(server, rdata->credits, 0);
 			for (i = 0; i < rdata->nr_pages; i++) {
 				page = rdata->pages[i];
 				lru_cache_add_file(page);
diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index fecc2de..081529f 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -242,8 +242,6 @@ smb2_negotiate_rsize(struct cifs_tcon *tcon, struct smb_vol *volume_info)
 	/* start with specified rsize, or default */
 	rsize = volume_info->rsize ? volume_info->rsize : CIFS_DEFAULT_IOSIZE;
 	rsize = min_t(unsigned int, rsize, server->max_read);
-	/* set it to the maximum buffer size value we can send with 1 credit */
-	rsize = min_t(unsigned int, rsize, SMB2_MAX_BUFFER_SIZE);
 
 	return rsize;
 }
diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
index b0c89ef..5b4ca0c 100644
--- a/fs/cifs/smb2pdu.c
+++ b/fs/cifs/smb2pdu.c
@@ -1748,11 +1748,12 @@ smb2_readv_callback(struct mid_q_entry *mid)
 int
 smb2_async_readv(struct cifs_readdata *rdata)
 {
-	int rc;
+	int rc, flags = 0;
 	struct smb2_hdr *buf;
 	struct cifs_io_parms io_parms;
 	struct smb_rqst rqst = { .rq_iov = &rdata->iov,
 				 .rq_nvec = 1 };
+	struct TCP_Server_Info *server;
 
 	cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n",
 		 __func__, rdata->offset, rdata->bytes);
@@ -1763,18 +1764,41 @@ smb2_async_readv(struct cifs_readdata *rdata)
 	io_parms.persistent_fid = rdata->cfile->fid.persistent_fid;
 	io_parms.volatile_fid = rdata->cfile->fid.volatile_fid;
 	io_parms.pid = rdata->pid;
+
+	server = io_parms.tcon->ses->server;
+
 	rc = smb2_new_read_req(&rdata->iov, &io_parms, 0, 0);
-	if (rc)
+	if (rc) {
+		if (rc == -EAGAIN && rdata->credits) {
+			/* credits were reset by reconnect */
+			rdata->credits = 0;
+			/* reduce in_flight value since we won't send the req */
+			spin_lock(&server->req_lock);
+			server->in_flight--;
+			spin_unlock(&server->req_lock);
+		}
 		return rc;
+	}
 
 	buf = (struct smb2_hdr *)rdata->iov.iov_base;
 	/* 4 for rfc1002 length field */
 	rdata->iov.iov_len = get_rfc1002_length(rdata->iov.iov_base) + 4;
 
+	if (rdata->credits) {
+		buf->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->bytes,
+						SMB2_MAX_BUFFER_SIZE));
+		spin_lock(&server->req_lock);
+		server->credits += rdata->credits -
+					le16_to_cpu(buf->CreditCharge);
+		spin_unlock(&server->req_lock);
+		wake_up(&server->request_q);
+		flags = CIFS_HAS_CREDITS;
+	}
+
 	kref_get(&rdata->refcount);
 	rc = cifs_call_async(io_parms.tcon->ses->server, &rqst,
 			     cifs_readv_receive, smb2_readv_callback,
-			     rdata, 0);
+			     rdata, flags);
 	if (rc) {
 		kref_put(&rdata->refcount, cifs_readdata_release);
 		cifs_stats_fail_inc(io_parms.tcon, SMB2_READ_HE);