From patchwork Wed Jun 27 12:45:33 2018
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 10491383
From: Ming Lei
To: Jens Axboe, Christoph Hellwig, Kent Overstreet
Cc: David Sterba, Huang Ying, Mike Snitzer, linux-kernel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, Theodore Ts'o, "Darrick J. Wong", Coly Li,
    Filipe Manana, Randy Dunlap, Christoph Hellwig
Subject: [PATCH V7 09/24] block: use bio_add_page in bio_iov_iter_get_pages
Date: Wed, 27 Jun 2018 20:45:33 +0800
Message-Id: <20180627124548.3456-10-ming.lei@redhat.com>
In-Reply-To: <20180627124548.3456-1-ming.lei@redhat.com>
References: <20180627124548.3456-1-ming.lei@redhat.com>

From: Christoph Hellwig

Replace a nasty hack with a different nasty hack to prepare for
multipage bio_vecs.
By moving the temporary page array as far up as possible in the
space allocated for the bio_vec array we can iterate forward over
it and thus use bio_add_page.  Using bio_add_page means we'll be
able to merge physically contiguous pages once support for
multipage bio_vecs is merged.

Reviewed-by: Ming Lei
Signed-off-by: Christoph Hellwig
---
 block/bio.c | 45 +++++++++++++++++++++------------------------
 1 file changed, 21 insertions(+), 24 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index de6cbaedfb65..80ea0c8878bd 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -825,6 +825,8 @@ int bio_add_page(struct bio *bio, struct page *page,
 }
 EXPORT_SYMBOL(bio_add_page);
 
+#define PAGE_PTRS_PER_BVEC	(sizeof(struct bio_vec) / sizeof(struct page *))
+
 /**
  * bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
  * @bio: bio to add pages to
@@ -836,38 +838,33 @@ EXPORT_SYMBOL(bio_add_page);
 int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 {
 	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
+	unsigned short entries_left = bio->bi_max_vecs - bio->bi_vcnt;
 	struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
 	struct page **pages = (struct page **)bv;
-	size_t offset, diff;
-	ssize_t size;
+	ssize_t size, left;
+	unsigned len, i;
+	size_t offset;
+
+	/*
+	 * Move page array up in the allocated memory for the bio vecs as
+	 * far as possible so that we can start filling biovecs from the
+	 * beginning without overwriting the temporary page array.
+	 */
+	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
+	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
 
 	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
 	if (unlikely(size <= 0))
 		return size ? size : -EFAULT;
-	nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
 
-	/*
-	 * Deep magic below: We need to walk the pinned pages backwards
-	 * because we are abusing the space allocated for the bio_vecs
-	 * for the page array. Because the bio_vecs are larger than the
-	 * page pointers by definition this will always work. But it also
-	 * means we can't use bio_add_page, so any changes to it's semantics
-	 * need to be reflected here as well.
-	 */
-	bio->bi_iter.bi_size += size;
-	bio->bi_vcnt += nr_pages;
-
-	diff = (nr_pages * PAGE_SIZE - offset) - size;
-	while (nr_pages--) {
-		bv[nr_pages].bv_page = pages[nr_pages];
-		bv[nr_pages].bv_len = PAGE_SIZE;
-		bv[nr_pages].bv_offset = 0;
-	}
+	for (left = size, i = 0; left > 0; left -= len, i++) {
+		struct page *page = pages[i];
 
-	bv[0].bv_offset += offset;
-	bv[0].bv_len -= offset;
-	if (diff)
-		bv[bio->bi_vcnt - 1].bv_len -= diff;
+		len = min_t(size_t, PAGE_SIZE - offset, left);
+		if (WARN_ON_ONCE(bio_add_page(bio, page, len, offset) != len))
+			return -EINVAL;
+		offset = 0;
+	}
 
 	iov_iter_advance(iter, size);
 	return 0;
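
[Editor's note: the overlap argument above can be checked concretely. Each
struct bio_vec is at least twice the size of a page pointer (the patch
enforces this with BUILD_BUG_ON), so parking the page array in the last
entries_left * (PAGE_PTRS_PER_BVEC - 1) pointer slots guarantees that the
bio_vec being filled at index i always ends at or before the page pointer
for index i + 1. The following standalone userspace sketch, which is not
part of the patch (its struct bio_vec is a simplified stand-in for the
kernel's, and entries_left == 8 is an arbitrary example value), asserts
that invariant:]

/*
 * Standalone userspace sketch, not kernel code: models the overlap
 * between the temporary page array and the bio_vec array.
 */
#include <assert.h>
#include <stdio.h>

struct page;				/* opaque, as in the kernel */

struct bio_vec {			/* simplified stand-in */
	struct page	*bv_page;
	unsigned int	bv_len;
	unsigned int	bv_offset;
};

#define PAGE_PTRS_PER_BVEC (sizeof(struct bio_vec) / sizeof(struct page *))

int main(void)
{
	struct bio_vec bv[8];		/* entries_left == 8 */
	struct page **pages = (struct page **)bv;
	unsigned int entries_left = 8, i;

	/* The patch enforces this with BUILD_BUG_ON() at compile time. */
	assert(PAGE_PTRS_PER_BVEC >= 2);

	/* Park the page array in the tail of the bio_vec space. */
	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);

	for (i = 0; i < entries_left; i++) {
		/*
		 * Filling bv[i] consumes PAGE_PTRS_PER_BVEC pointer slots
		 * from the front, while pages[i] has already been read
		 * from the back: the write frontier never passes the next
		 * unread page pointer.
		 */
		assert((struct page **)&bv[i + 1] <= &pages[i + 1]);
	}
	printf("forward fill is safe for %u entries\n", entries_left);
	return 0;
}

[On a typical 64-bit build PAGE_PTRS_PER_BVEC is exactly 2, so the page
array starts halfway into the bio_vec allocation and the two regions meet
precisely at the last entry; this is what lets the new code iterate
forward and reuse bio_add_page instead of the backwards "deep magic"
walk it replaces.]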