From patchwork Mon Jul 21 15:45:58 2014
X-Patchwork-Submitter: Pavel Shilovsky
X-Patchwork-Id: 4597141
From: Pavel Shilovsky
To: linux-cifs@vger.kernel.org
Subject: [PATCH v3 16/16] CIFS: Use multicredits for SMB 2.1/3 reads
Date: Mon, 21 Jul 2014 19:45:58 +0400
Message-Id: <1405957558-18476-17-git-send-email-pshilovsky@samba.org>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1405957558-18476-1-git-send-email-pshilovsky@samba.org>
References: <1405957558-18476-1-git-send-email-pshilovsky@samba.org>

If we negotiate SMB 2.1 or a higher protocol version and the server supports a large read buffer size, we need to consume 1 credit per 65536 bytes of the read. So, we need to know how many credits we have and obtain the required number of them before constructing a readdata structure in readpages and user reads.
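For illustration only (not part of the patch), the rule above works out to a credit charge of DIV_ROUND_UP(bytes, SMB2_MAX_BUFFER_SIZE), i.e. one credit per started 64KiB. Below is a minimal standalone sketch of that arithmetic; the helper name is invented, and it assumes SMB2_MAX_BUFFER_SIZE is 65536 as used by this patch.

/*
 * Standalone sketch of the credit-charge rule described above; not part
 * of the patch. The helper name is made up; in the kernel the charge is
 * simply DIV_ROUND_UP(rdata->bytes, SMB2_MAX_BUFFER_SIZE).
 */
#include <stdio.h>

#define SMB2_MAX_BUFFER_SIZE 65536	/* one credit covers this many bytes */

static unsigned int read_credit_charge(unsigned int bytes)
{
	/* round up: a 65537-byte read already needs 2 credits */
	return (bytes + SMB2_MAX_BUFFER_SIZE - 1) / SMB2_MAX_BUFFER_SIZE;
}

int main(void)
{
	printf("%u\n", read_credit_charge(65536));	/* 1 */
	printf("%u\n", read_credit_charge(65537));	/* 2 */
	printf("%u\n", read_credit_charge(1048576));	/* 16 for a 1MiB read */
	return 0;
}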
Signed-off-by: Pavel Shilovsky
Reviewed-by: Shirish Pargaonkar
---
 fs/cifs/cifsglob.h |  1 +
 fs/cifs/file.c     | 35 ++++++++++++++++++++++++++++-------
 fs/cifs/smb2ops.c  |  2 --
 fs/cifs/smb2pdu.c  | 30 +++++++++++++++++++++++++++---
 4 files changed, 56 insertions(+), 12 deletions(-)

diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
index 54ca2b9..f33ff4c 100644
--- a/fs/cifs/cifsglob.h
+++ b/fs/cifs/cifsglob.h
@@ -1068,6 +1068,7 @@ struct cifs_readdata {
 	struct kvec			iov;
 	unsigned int			pagesz;
 	unsigned int			tailsz;
+	unsigned int			credits;
 	unsigned int			nr_pages;
 	struct page			*pages[];
 };
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 00b2a25..ebdeb56 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2917,7 +2917,7 @@ cifs_send_async_read(loff_t offset, size_t len, struct cifsFileInfo *open_file,
 		      struct cifs_sb_info *cifs_sb, struct list_head *rdata_list)
 {
 	struct cifs_readdata *rdata;
-	unsigned int npages;
+	unsigned int npages, rsize, credits;
 	size_t cur_len;
 	int rc;
 	pid_t pid;
@@ -2931,13 +2931,19 @@ cifs_send_async_read(loff_t offset, size_t len, struct cifsFileInfo *open_file,
 		pid = current->tgid;
 
 	do {
-		cur_len = min_t(const size_t, len, cifs_sb->rsize);
+		rc = server->ops->wait_mtu_credits(server, cifs_sb->rsize,
+						   &rsize, &credits);
+		if (rc)
+			break;
+
+		cur_len = min_t(const size_t, len, rsize);
 		npages = DIV_ROUND_UP(cur_len, PAGE_SIZE);
 
 		/* allocate a readdata struct */
 		rdata = cifs_readdata_alloc(npages,
 					    cifs_uncached_readv_complete);
 		if (!rdata) {
+			add_credits_and_wake_if(server, credits, 0);
 			rc = -ENOMEM;
 			break;
 		}
@@ -2953,12 +2959,14 @@ cifs_send_async_read(loff_t offset, size_t len, struct cifsFileInfo *open_file,
 		rdata->pid = pid;
 		rdata->pagesz = PAGE_SIZE;
 		rdata->read_into_pages = cifs_uncached_read_into_pages;
+		rdata->credits = credits;
 
 		if (!rdata->cfile->invalidHandle ||
 		    !cifs_reopen_file(rdata->cfile, true))
 			rc = server->ops->async_readv(rdata);
 error:
 		if (rc) {
+			add_credits_and_wake_if(server, rdata->credits, 0);
 			kref_put(&rdata->refcount,
 				 cifs_uncached_readdata_release);
 			if (rc == -EAGAIN)
@@ -3458,10 +3466,16 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 	 * the rdata->pages, then we want them in increasing order.
 	 */
 	while (!list_empty(page_list)) {
-		unsigned int i, nr_pages, bytes;
+		unsigned int i, nr_pages, bytes, rsize;
 		loff_t offset;
 		struct page *page, *tpage;
 		struct cifs_readdata *rdata;
+		unsigned credits;
+
+		rc = server->ops->wait_mtu_credits(server, cifs_sb->rsize,
+						   &rsize, &credits);
+		if (rc)
+			break;
 
 		/*
 		 * Give up immediately if rsize is too small to read an entire
 		 * page. The VFS will fall back to readpage. We should never
 		 * reach this point however since we set ra_pages to 0 when the
 		 * rsize is smaller than a cache page.
 		 */
-		if (unlikely(cifs_sb->rsize < PAGE_CACHE_SIZE))
+		if (unlikely(rsize < PAGE_CACHE_SIZE)) {
+			add_credits_and_wake_if(server, credits, 0);
 			return 0;
+		}
 
-		rc = readpages_get_pages(mapping, page_list, cifs_sb->rsize,
-					 &tmplist, &nr_pages, &offset, &bytes);
-		if (rc)
+		rc = readpages_get_pages(mapping, page_list, rsize, &tmplist,
+					 &nr_pages, &offset, &bytes);
+		if (rc) {
+			add_credits_and_wake_if(server, credits, 0);
 			break;
+		}
 
 		rdata = cifs_readdata_alloc(nr_pages, cifs_readv_complete);
 		if (!rdata) {
@@ -3487,6 +3505,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 				page_cache_release(page);
 			}
 			rc = -ENOMEM;
+			add_credits_and_wake_if(server, credits, 0);
 			break;
 		}
@@ -3497,6 +3516,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		rdata->pid = pid;
 		rdata->pagesz = PAGE_CACHE_SIZE;
 		rdata->read_into_pages = cifs_readpages_read_into_pages;
+		rdata->credits = credits;
 
 		list_for_each_entry_safe(page, tpage, &tmplist, lru) {
 			list_del(&page->lru);
@@ -3507,6 +3527,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
 		    !cifs_reopen_file(rdata->cfile, true))
 			rc = server->ops->async_readv(rdata);
 		if (rc) {
+			add_credits_and_wake_if(server, rdata->credits, 0);
 			for (i = 0; i < rdata->nr_pages; i++) {
 				page = rdata->pages[i];
 				lru_cache_add_file(page);
diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index fecc2de..081529f 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -242,8 +242,6 @@ smb2_negotiate_rsize(struct cifs_tcon *tcon, struct smb_vol *volume_info)
 	/* start with specified rsize, or default */
 	rsize = volume_info->rsize ? volume_info->rsize : CIFS_DEFAULT_IOSIZE;
 	rsize = min_t(unsigned int, rsize, server->max_read);
-	/* set it to the maximum buffer size value we can send with 1 credit */
-	rsize = min_t(unsigned int, rsize, SMB2_MAX_BUFFER_SIZE);
 
 	return rsize;
 }
diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
index a1d89b7..26624ee 100644
--- a/fs/cifs/smb2pdu.c
+++ b/fs/cifs/smb2pdu.c
@@ -1758,11 +1758,12 @@ smb2_readv_callback(struct mid_q_entry *mid)
 int
 smb2_async_readv(struct cifs_readdata *rdata)
 {
-	int rc;
+	int rc, flags = 0;
 	struct smb2_hdr *buf;
 	struct cifs_io_parms io_parms;
 	struct smb_rqst rqst = { .rq_iov = &rdata->iov,
 				 .rq_nvec = 1 };
+	struct TCP_Server_Info *server;
 
 	cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n",
 		 __func__, rdata->offset, rdata->bytes);
@@ -1773,18 +1774,41 @@ smb2_async_readv(struct cifs_readdata *rdata)
 	io_parms.persistent_fid = rdata->cfile->fid.persistent_fid;
 	io_parms.volatile_fid = rdata->cfile->fid.volatile_fid;
 	io_parms.pid = rdata->pid;
+
+	server = io_parms.tcon->ses->server;
+
 	rc = smb2_new_read_req(&rdata->iov, &io_parms, 0, 0);
-	if (rc)
+	if (rc) {
+		if (rc == -EAGAIN && rdata->credits) {
+			/* credits was reseted by reconnect */
+			rdata->credits = 0;
+			/* reduce in_flight value since we won't send the req */
+			spin_lock(&server->req_lock);
+			server->in_flight--;
+			spin_unlock(&server->req_lock);
+		}
 		return rc;
+	}
 
 	buf = (struct smb2_hdr *)rdata->iov.iov_base;
 	/* 4 for rfc1002 length field */
 	rdata->iov.iov_len = get_rfc1002_length(rdata->iov.iov_base) + 4;
 
+	if (rdata->credits) {
+		buf->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->bytes,
+						SMB2_MAX_BUFFER_SIZE));
+		spin_lock(&server->req_lock);
+		server->credits += rdata->credits -
+						le16_to_cpu(buf->CreditCharge);
+		spin_unlock(&server->req_lock);
+		wake_up(&server->request_q);
+		flags = CIFS_HAS_CREDITS;
+	}
+
 	kref_get(&rdata->refcount);
 	rc = cifs_call_async(io_parms.tcon->ses->server, &rqst,
 			     cifs_readv_receive, smb2_readv_callback,
-			     rdata, 0);
+			     rdata, flags);
 	if (rc) {
 		kref_put(&rdata->refcount, cifs_readdata_release);
 		cifs_stats_fail_inc(io_parms.tcon, SMB2_READ_HE);
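Taken together, the flow the patch implements is: reserve credits for the largest read we might issue (wait_mtu_credits), remember the reservation in rdata->credits, charge only for the bytes actually requested by setting CreditCharge, hand the surplus back to server->credits and wake waiters, and give the whole reservation back on error paths via add_credits_and_wake_if. What follows is a hypothetical, simplified model of that accounting for illustration only; the names are invented and there is no locking or sleeping, unlike the real kernel code.

/*
 * Hypothetical, simplified model of the credit accounting in this patch;
 * names are made up and there is no req_lock/wait queue, unlike the real
 * wait_mtu_credits()/add_credits_and_wake_if() helpers.
 */
#include <stdio.h>

#define CREDIT_UNIT 65536		/* plays the role of SMB2_MAX_BUFFER_SIZE */

struct model_server {
	unsigned int credits;		/* credits currently available */
};

/* reserve credits for a request of up to *rsize bytes, trimming rsize
 * if fewer credits are available (the real code may block instead) */
static unsigned int model_reserve(struct model_server *srv, unsigned int *rsize)
{
	unsigned int want = (*rsize + CREDIT_UNIT - 1) / CREDIT_UNIT;

	if (want > srv->credits) {
		want = srv->credits;
		*rsize = want * CREDIT_UNIT;
	}
	srv->credits -= want;
	return want;
}

/* charge only for the bytes actually requested; return the surplus */
static void model_charge(struct model_server *srv, unsigned int reserved,
			 unsigned int bytes)
{
	unsigned int charge = (bytes + CREDIT_UNIT - 1) / CREDIT_UNIT;

	srv->credits += reserved - charge;	/* waiters would be woken here */
}

int main(void)
{
	struct model_server srv = { .credits = 32 };
	unsigned int rsize = 1048576;		/* ask for up to 1MiB */
	unsigned int reserved = model_reserve(&srv, &rsize);	/* 16 credits */

	model_charge(&srv, reserved, 200000);	/* actual request needs 4 credits */
	printf("available credits: %u\n", srv.credits);	/* 32 - 4 = 28 */
	return 0;
}

In the patch itself, the error paths in cifs_send_async_read() and cifs_readpages() correspond to giving the whole reservation back when the request is never sent.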