From patchwork Mon May 25 22:07:42 2020
X-Patchwork-Submitter: James Simmons
X-Patchwork-Id: 11569517
From: James Simmons <jsimmons@infradead.org>
To: Andreas Dilger, Oleg Drokin, NeilBrown
Cc: Lustre Development List <lustre-devel@lists.lustre.org>
Date: Mon, 25 May 2020 18:07:42 -0400
Message-Id: <1590444502-20533-6-git-send-email-jsimmons@infradead.org>
In-Reply-To: <1590444502-20533-1-git-send-email-jsimmons@infradead.org>
References: <1590444502-20533-1-git-send-email-jsimmons@infradead.org>
X-Mailer: git-send-email 1.8.3.1
Subject: [lustre-devel] [PATCH 05/45] lnet: always put a page list into struct lnet_libmd

From: Mr NeilBrown

'struct lnet_libmd' is only created in lnet_md_build().  It can be given
a list of pages or a virtual address.  In the latter case, the memory
will eventually be split into a list of pages.  It is cleaner to split
it into a list of pages early so that all lower levels only need to
handle one type: a page list.
WC-bug-id: https://jira.whamcloud.com/browse/LU-13004
Lustre-commit: 857f11169fc8 ("LU-13004 lnet: always put a page list into struct lnet_libmd")
Signed-off-by: Mr NeilBrown
Reviewed-on: https://review.whamcloud.com/37842
Reviewed-by: Shaun Tancheff
Reviewed-by: James Simmons
Reviewed-by: Oleg Drokin
Signed-off-by: James Simmons
---
 net/lnet/lnet/lib-md.c | 38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/net/lnet/lnet/lib-md.c b/net/lnet/lnet/lib-md.c
index a9a83c3..e1b8a06 100644
--- a/net/lnet/lnet/lib-md.c
+++ b/net/lnet/lnet/lib-md.c
@@ -173,13 +173,12 @@ int lnet_cpt_of_md(struct lnet_libmd *md, unsigned int offset)
 	struct lnet_libmd *lmd;
 	unsigned int size;
 
-	if ((umd->options & LNET_MD_KIOV) != 0) {
+	if (umd->options & LNET_MD_KIOV)
 		niov = umd->length;
-		size = offsetof(struct lnet_libmd, md_iov.kiov[niov]);
-	} else {
-		niov = 1;
-		size = offsetof(struct lnet_libmd, md_iov.iov[niov]);
-	}
+	else
+		niov = DIV_ROUND_UP(offset_in_page(umd->start) + umd->length,
+				    PAGE_SIZE);
+	size = offsetof(struct lnet_libmd, md_iov.kiov[niov]);
 
 	if (size <= LNET_SMALL_MD_SIZE) {
 		lmd = kmem_cache_zalloc(lnet_small_mds_cachep, GFP_NOFS);
@@ -200,7 +199,6 @@ int lnet_cpt_of_md(struct lnet_libmd *md, unsigned int offset)
 
 	lmd->md_niov = niov;
 	INIT_LIST_HEAD(&lmd->md_list);
-	lmd->md_me = NULL;
 	lmd->md_start = umd->start;
 	lmd->md_offset = 0;
 
@@ -238,19 +236,33 @@ int lnet_cpt_of_md(struct lnet_libmd *md, unsigned int offset)
 			lnet_md_free(lmd);
 			return ERR_PTR(-EINVAL);
 		}
-	} else {   /* contiguous */
-		lmd->md_length = umd->length;
-		niov = 1;
-		lmd->md_niov = 1;
-		lmd->md_iov.iov[0].iov_base = umd->start;
-		lmd->md_iov.iov[0].iov_len = umd->length;
+	} else {   /* contiguous - split into pages */
+		void *pa = umd->start;
+		int len = umd->length;
+		lmd->md_length = len;
+		i = 0;
+		while (len) {
+			int plen;
+
+			plen = min_t(int, len, PAGE_SIZE - offset_in_page(pa));
+
+			lmd->md_iov.kiov[i].bv_page =
+				lnet_kvaddr_to_page((unsigned long) pa);
+			lmd->md_iov.kiov[i].bv_offset = offset_in_page(pa);
+			lmd->md_iov.kiov[i].bv_len = plen;
+
+			len -= plen;
+			pa += plen;
+			i += 1;
+		}
 
 		if ((umd->options & LNET_MD_MAX_SIZE) &&	/* max size used */
 		    (umd->max_size < 0 ||
 		     umd->max_size > (int)umd->length)) { /* illegal max_size */
 			lnet_md_free(lmd);
 			return ERR_PTR(-EINVAL);
 		}
+		lmd->md_options |= LNET_MD_KIOV;
 	}
 
 	return lmd;
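
Not part of the patch: the page-splitting arithmetic can be hard to follow inside the
diff, so below is a minimal standalone userspace sketch of the same conversion, i.e.
sizing the fragment array with the same rounding as DIV_ROUND_UP() and then walking a
contiguous buffer page by page into (offset, length) pieces.  PAGE_SIZE here, struct
frag, niov_for() and split_into_frags() are names invented for this illustration; the
real kernel code fills struct bio_vec entries in md_iov.kiov[] and resolves each
address to a page with lnet_kvaddr_to_page(), which this sketch omits.

/*
 * Standalone userspace sketch (not kernel code, not part of the patch):
 * split a contiguous virtual buffer into per-page (offset, length)
 * fragments, the way the new while (len) loop in lnet_md_build() fills
 * md_iov.kiov[].  struct frag stands in for struct bio_vec with the
 * page pointer left out.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL	/* assumed page size for the example */

struct frag {
	size_t offset;		/* like bv_offset */
	size_t len;		/* like bv_len */
};

static size_t offset_in_page(uintptr_t addr)
{
	return addr & (PAGE_SIZE - 1);
}

/* Entry count: same rounding as DIV_ROUND_UP(offset + length, PAGE_SIZE). */
static size_t niov_for(uintptr_t start, size_t length)
{
	return (offset_in_page(start) + length + PAGE_SIZE - 1) / PAGE_SIZE;
}

/* Walk the buffer page by page; returns the number of fragments used. */
static size_t split_into_frags(uintptr_t start, size_t length,
			       struct frag *frags)
{
	uintptr_t pa = start;
	size_t len = length;
	size_t i = 0;

	while (len) {
		size_t plen = PAGE_SIZE - offset_in_page(pa);

		if (plen > len)			/* min_t(int, len, ...) */
			plen = len;

		frags[i].offset = offset_in_page(pa);
		frags[i].len = plen;

		len -= plen;
		pa += plen;
		i++;
	}
	return i;
}

int main(void)
{
	/* A 6000-byte buffer that starts 3000 bytes into a page. */
	uintptr_t start = 3 * PAGE_SIZE + 3000;
	size_t length = 6000;
	size_t niov = niov_for(start, length);
	struct frag *frags = calloc(niov, sizeof(*frags));
	size_t used;

	if (!frags)
		return 1;

	used = split_into_frags(start, length, frags);
	printf("niov=%zu used=%zu\n", niov, used);
	for (size_t i = 0; i < used; i++)
		printf("frag[%zu]: offset=%zu len=%zu\n",
		       i, frags[i].offset, frags[i].len);
	free(frags);
	return 0;
}

With the example numbers this prints three fragments, (3000, 1096), (0, 4096) and
(0, 808), matching niov=3 from the rounding formula.  The point of the patch is that
after this early conversion every lower layer only ever sees such a page list
(LNET_MD_KIOV), never a raw virtual address.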