From patchwork Thu Sep 10 14:34:45 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11769303
From: Boqun Feng
To: linux-hyperv@vger.kernel.org, linux-input@vger.kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-scsi@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu, Jiri Kosina, Benjamin Tissoires, Dmitry Torokhov, "David S. Miller", Jakub Kicinski, "James E.J. Bottomley", "Martin K. Petersen", Michael Kelley, will@kernel.org, ardb@kernel.org, arnd@arndb.de, catalin.marinas@arm.com, mark.rutland@arm.com, maz@kernel.org, Boqun Feng
Subject: [PATCH v3 01/11] Drivers: hv: vmbus: Always use HV_HYP_PAGE_SIZE for gpadl
Date: Thu, 10 Sep 2020 22:34:45 +0800
Message-Id: <20200910143455.109293-2-boqun.feng@gmail.com>
In-Reply-To: <20200910143455.109293-1-boqun.feng@gmail.com>

Since the hypervisor always uses 4K as its page size, the size of the PFNs
used for gpadl should be HV_HYP_PAGE_SIZE rather than PAGE_SIZE. Adjust this
accordingly in preparation for supporting 16K/64K page size guests. No
functional change on x86, since PAGE_SIZE there is always 4K (equal to
HV_HYP_PAGE_SIZE).
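To illustrate the unit mismatch this change addresses, here is a minimal
standalone sketch (not part of the patch; the 64K guest page size and the
128K buffer size are assumed example values):

/* Standalone sketch, not kernel code: assumes a hypothetical 64K-page
 * guest and a 128K buffer; shows why the gpadl PFN list must be counted
 * in 4K Hyper-V pages, not guest pages. */
#include <stdio.h>

#define GUEST_PAGE_SHIFT  16	/* assumed 64K guest pages */
#define HV_HYP_PAGE_SHIFT 12	/* Hyper-V always uses 4K pages */

int main(void)
{
	unsigned int size = 128 * 1024;	/* example buffer size */

	/* 2 entries if counted in guest pages... */
	printf("guest pages:   %u\n", size >> GUEST_PAGE_SHIFT);
	/* ...but the hypervisor expects 32 4K-granular PFNs */
	printf("Hyper-V pages: %u\n", size >> HV_HYP_PAGE_SHIFT);
	return 0;
}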
Signed-off-by: Boqun Feng
Reviewed-by: Michael Kelley
---
 drivers/hv/channel.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 3ebda7707e46..4d0f8e5a88d6 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -22,9 +22,6 @@
 #include "hyperv_vmbus.h" -#define NUM_PAGES_SPANNED(addr, len) \ -((PAGE_ALIGN(addr + len) >> PAGE_SHIFT) - (addr >> PAGE_SHIFT)) - static unsigned long virt_to_hvpfn(void *addr) { phys_addr_t paddr;
@@ -35,7 +32,7 @@ static unsigned long virt_to_hvpfn(void *addr)
 else paddr = __pa(addr); - return paddr >> PAGE_SHIFT; + return paddr >> HV_HYP_PAGE_SHIFT; } /*
@@ -330,7 +327,7 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 int pfnsum, pfncount, pfnleft, pfncurr, pfnsize; - pagecount = size >> PAGE_SHIFT; + pagecount = size >> HV_HYP_PAGE_SHIFT; /* do we need a gpadl body msg */ pfnsize = MAX_SIZE_CHANNEL_MESSAGE -
@@ -360,7 +357,7 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 gpadl_header->range[0].byte_count = size; for (i = 0; i < pfncount; i++) gpadl_header->range[0].pfn_array[i] = virt_to_hvpfn( - kbuffer + PAGE_SIZE * i); + kbuffer + HV_HYP_PAGE_SIZE * i); *msginfo = msgheader; pfnsum = pfncount;
@@ -412,7 +409,7 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 */ for (i = 0; i < pfncurr; i++) gpadl_body->pfn[i] = virt_to_hvpfn( - kbuffer + PAGE_SIZE * (pfnsum + i)); + kbuffer + HV_HYP_PAGE_SIZE * (pfnsum + i)); /* add to msg header */ list_add_tail(&msgbody->msglistentry,
@@ -441,7 +438,7 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 gpadl_header->range[0].byte_count = size; for (i = 0; i < pagecount; i++) gpadl_header->range[0].pfn_array[i] = virt_to_hvpfn( - kbuffer + PAGE_SIZE * i); + kbuffer + HV_HYP_PAGE_SIZE * i); *msginfo = msgheader; }
From patchwork Thu Sep 10 14:34:46 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11769297
From: Boqun Feng
To: linux-hyperv@vger.kernel.org, linux-input@vger.kernel.org, linux-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-scsi@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu, Jiri Kosina, Benjamin Tissoires, Dmitry Torokhov, "David S. Miller", Jakub Kicinski, "James E.J. Bottomley", "Martin K.
Petersen" , Michael Kelley , will@kernel.org, ardb@kernel.org, arnd@arndb.de, catalin.marinas@arm.com, mark.rutland@arm.com, maz@kernel.org, Boqun Feng Subject: [PATCH v3 02/11] Drivers: hv: vmbus: Move __vmbus_open() Date: Thu, 10 Sep 2020 22:34:46 +0800 Message-Id: <20200910143455.109293-3-boqun.feng@gmail.com> X-Mailer: git-send-email 2.28.0 In-Reply-To: <20200910143455.109293-1-boqun.feng@gmail.com> References: <20200910143455.109293-1-boqun.feng@gmail.com> MIME-Version: 1.0 Sender: linux-scsi-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Pure function movement, no functional changes. The move is made, because in a later change, __vmbus_open() will rely on some static functions afterwards, so we separate the move and the modification of __vmbus_open() in two patches to make it easy to review. Signed-off-by: Boqun Feng Reviewed-by: Wei Liu Reviewed-by: Michael Kelley --- drivers/hv/channel.c | 309 ++++++++++++++++++++++--------------------- 1 file changed, 155 insertions(+), 154 deletions(-) diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c index 4d0f8e5a88d6..1cbe8fc931fc 100644 --- a/drivers/hv/channel.c +++ b/drivers/hv/channel.c @@ -109,160 +109,6 @@ int vmbus_alloc_ring(struct vmbus_channel *newchannel, } EXPORT_SYMBOL_GPL(vmbus_alloc_ring); -static int __vmbus_open(struct vmbus_channel *newchannel, - void *userdata, u32 userdatalen, - void (*onchannelcallback)(void *context), void *context) -{ - struct vmbus_channel_open_channel *open_msg; - struct vmbus_channel_msginfo *open_info = NULL; - struct page *page = newchannel->ringbuffer_page; - u32 send_pages, recv_pages; - unsigned long flags; - int err; - - if (userdatalen > MAX_USER_DEFINED_BYTES) - return -EINVAL; - - send_pages = newchannel->ringbuffer_send_offset; - recv_pages = newchannel->ringbuffer_pagecount - send_pages; - - if (newchannel->state != CHANNEL_OPEN_STATE) - return -EINVAL; - - newchannel->state = CHANNEL_OPENING_STATE; - newchannel->onchannel_callback = onchannelcallback; - newchannel->channel_callback_context = context; - - err = hv_ringbuffer_init(&newchannel->outbound, page, send_pages); - if (err) - goto error_clean_ring; - - err = hv_ringbuffer_init(&newchannel->inbound, - &page[send_pages], recv_pages); - if (err) - goto error_clean_ring; - - /* Establish the gpadl for the ring buffer */ - newchannel->ringbuffer_gpadlhandle = 0; - - err = vmbus_establish_gpadl(newchannel, - page_address(newchannel->ringbuffer_page), - (send_pages + recv_pages) << PAGE_SHIFT, - &newchannel->ringbuffer_gpadlhandle); - if (err) - goto error_clean_ring; - - /* Create and init the channel open message */ - open_info = kmalloc(sizeof(*open_info) + - sizeof(struct vmbus_channel_open_channel), - GFP_KERNEL); - if (!open_info) { - err = -ENOMEM; - goto error_free_gpadl; - } - - init_completion(&open_info->waitevent); - open_info->waiting_channel = newchannel; - - open_msg = (struct vmbus_channel_open_channel *)open_info->msg; - open_msg->header.msgtype = CHANNELMSG_OPENCHANNEL; - open_msg->openid = newchannel->offermsg.child_relid; - open_msg->child_relid = newchannel->offermsg.child_relid; - open_msg->ringbuffer_gpadlhandle = newchannel->ringbuffer_gpadlhandle; - open_msg->downstream_ringbuffer_pageoffset = newchannel->ringbuffer_send_offset; - open_msg->target_vp = hv_cpu_number_to_vp_number(newchannel->target_cpu); - - if (userdatalen) - memcpy(open_msg->userdata, userdata, userdatalen); - - spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); - 
list_add_tail(&open_info->msglistentry, - &vmbus_connection.chn_msg_list); - spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags); - - if (newchannel->rescind) { - err = -ENODEV; - goto error_free_info; - } - - err = vmbus_post_msg(open_msg, - sizeof(struct vmbus_channel_open_channel), true); - - trace_vmbus_open(open_msg, err); - - if (err != 0) - goto error_clean_msglist; - - wait_for_completion(&open_info->waitevent); - - spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); - list_del(&open_info->msglistentry); - spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags); - - if (newchannel->rescind) { - err = -ENODEV; - goto error_free_info; - } - - if (open_info->response.open_result.status) { - err = -EAGAIN; - goto error_free_info; - } - - newchannel->state = CHANNEL_OPENED_STATE; - kfree(open_info); - return 0; - -error_clean_msglist: - spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); - list_del(&open_info->msglistentry); - spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags); -error_free_info: - kfree(open_info); -error_free_gpadl: - vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle); - newchannel->ringbuffer_gpadlhandle = 0; -error_clean_ring: - hv_ringbuffer_cleanup(&newchannel->outbound); - hv_ringbuffer_cleanup(&newchannel->inbound); - newchannel->state = CHANNEL_OPEN_STATE; - return err; -} - -/* - * vmbus_connect_ring - Open the channel but reuse ring buffer - */ -int vmbus_connect_ring(struct vmbus_channel *newchannel, - void (*onchannelcallback)(void *context), void *context) -{ - return __vmbus_open(newchannel, NULL, 0, onchannelcallback, context); -} -EXPORT_SYMBOL_GPL(vmbus_connect_ring); - -/* - * vmbus_open - Open the specified channel. - */ -int vmbus_open(struct vmbus_channel *newchannel, - u32 send_ringbuffer_size, u32 recv_ringbuffer_size, - void *userdata, u32 userdatalen, - void (*onchannelcallback)(void *context), void *context) -{ - int err; - - err = vmbus_alloc_ring(newchannel, send_ringbuffer_size, - recv_ringbuffer_size); - if (err) - return err; - - err = __vmbus_open(newchannel, userdata, userdatalen, - onchannelcallback, context); - if (err) - vmbus_free_ring(newchannel); - - return err; -} -EXPORT_SYMBOL_GPL(vmbus_open); - /* Used for Hyper-V Socket: a guest client's connect() to the host */ int vmbus_send_tl_connect_request(const guid_t *shv_guest_servie_id, const guid_t *shv_host_servie_id) @@ -556,6 +402,161 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, } EXPORT_SYMBOL_GPL(vmbus_establish_gpadl); +static int __vmbus_open(struct vmbus_channel *newchannel, + void *userdata, u32 userdatalen, + void (*onchannelcallback)(void *context), void *context) +{ + struct vmbus_channel_open_channel *open_msg; + struct vmbus_channel_msginfo *open_info = NULL; + struct page *page = newchannel->ringbuffer_page; + u32 send_pages, recv_pages; + unsigned long flags; + int err; + + if (userdatalen > MAX_USER_DEFINED_BYTES) + return -EINVAL; + + send_pages = newchannel->ringbuffer_send_offset; + recv_pages = newchannel->ringbuffer_pagecount - send_pages; + + if (newchannel->state != CHANNEL_OPEN_STATE) + return -EINVAL; + + newchannel->state = CHANNEL_OPENING_STATE; + newchannel->onchannel_callback = onchannelcallback; + newchannel->channel_callback_context = context; + + err = hv_ringbuffer_init(&newchannel->outbound, page, send_pages); + if (err) + goto error_clean_ring; + + err = hv_ringbuffer_init(&newchannel->inbound, + &page[send_pages], recv_pages); + if (err) + goto 
error_clean_ring; + + /* Establish the gpadl for the ring buffer */ + newchannel->ringbuffer_gpadlhandle = 0; + + err = vmbus_establish_gpadl(newchannel, + page_address(newchannel->ringbuffer_page), + (send_pages + recv_pages) << PAGE_SHIFT, + &newchannel->ringbuffer_gpadlhandle); + if (err) + goto error_clean_ring; + + /* Create and init the channel open message */ + open_info = kmalloc(sizeof(*open_info) + + sizeof(struct vmbus_channel_open_channel), + GFP_KERNEL); + if (!open_info) { + err = -ENOMEM; + goto error_free_gpadl; + } + + init_completion(&open_info->waitevent); + open_info->waiting_channel = newchannel; + + open_msg = (struct vmbus_channel_open_channel *)open_info->msg; + open_msg->header.msgtype = CHANNELMSG_OPENCHANNEL; + open_msg->openid = newchannel->offermsg.child_relid; + open_msg->child_relid = newchannel->offermsg.child_relid; + open_msg->ringbuffer_gpadlhandle = newchannel->ringbuffer_gpadlhandle; + open_msg->downstream_ringbuffer_pageoffset = newchannel->ringbuffer_send_offset; + open_msg->target_vp = hv_cpu_number_to_vp_number(newchannel->target_cpu); + + if (userdatalen) + memcpy(open_msg->userdata, userdata, userdatalen); + + spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); + list_add_tail(&open_info->msglistentry, + &vmbus_connection.chn_msg_list); + spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags); + + if (newchannel->rescind) { + err = -ENODEV; + goto error_free_info; + } + + err = vmbus_post_msg(open_msg, + sizeof(struct vmbus_channel_open_channel), true); + + trace_vmbus_open(open_msg, err); + + if (err != 0) + goto error_clean_msglist; + + wait_for_completion(&open_info->waitevent); + + spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); + list_del(&open_info->msglistentry); + spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags); + + if (newchannel->rescind) { + err = -ENODEV; + goto error_free_info; + } + + if (open_info->response.open_result.status) { + err = -EAGAIN; + goto error_free_info; + } + + newchannel->state = CHANNEL_OPENED_STATE; + kfree(open_info); + return 0; + +error_clean_msglist: + spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); + list_del(&open_info->msglistentry); + spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags); +error_free_info: + kfree(open_info); +error_free_gpadl: + vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle); + newchannel->ringbuffer_gpadlhandle = 0; +error_clean_ring: + hv_ringbuffer_cleanup(&newchannel->outbound); + hv_ringbuffer_cleanup(&newchannel->inbound); + newchannel->state = CHANNEL_OPEN_STATE; + return err; +} + +/* + * vmbus_connect_ring - Open the channel but reuse ring buffer + */ +int vmbus_connect_ring(struct vmbus_channel *newchannel, + void (*onchannelcallback)(void *context), void *context) +{ + return __vmbus_open(newchannel, NULL, 0, onchannelcallback, context); +} +EXPORT_SYMBOL_GPL(vmbus_connect_ring); + +/* + * vmbus_open - Open the specified channel. 
+ */ +int vmbus_open(struct vmbus_channel *newchannel, + u32 send_ringbuffer_size, u32 recv_ringbuffer_size, + void *userdata, u32 userdatalen, + void (*onchannelcallback)(void *context), void *context) +{ + int err; + + err = vmbus_alloc_ring(newchannel, send_ringbuffer_size, + recv_ringbuffer_size); + if (err) + return err; + + err = __vmbus_open(newchannel, userdata, userdatalen, + onchannelcallback, context); + if (err) + vmbus_free_ring(newchannel); + + return err; +} +EXPORT_SYMBOL_GPL(vmbus_open); + + /* * vmbus_teardown_gpadl -Teardown the specified GPADL handle */
From patchwork Thu Sep 10 14:34:47 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11769307
From: Boqun Feng
Subject: [PATCH v3 03/11] Drivers: hv: vmbus: Introduce types of GPADL
Date: Thu, 10 Sep 2020 22:34:47 +0800
Message-Id: <20200910143455.109293-4-boqun.feng@gmail.com>
In-Reply-To: <20200910143455.109293-1-boqun.feng@gmail.com>

This patch introduces two types of GPADL: HV_GPADL_{BUFFER, RING}. The types
are purely a guest-side concept; in other words, the hypervisor treats them
the same.

The reason for introducing the types is to support guests whose page size is
not 4K (the page size of the Hyper-V hypervisor). In such guests, both the
headers and the data parts of the ring buffers need to be aligned to
PAGE_SIZE, because 1) some of the ring buffers will be mapped into userspace
and 2) we use a "double mapping" mechanism for fast wrap-around, and "double
mapping" relies on the ring buffers being page-aligned. However, the Hyper-V
hypervisor only uses 4K (HV_HYP_PAGE_SIZE) headers. Our solution is to always
make the headers of the ring buffers take one guest page and, when the GPADL
is established between the guest and the hypervisor, to use only the first 4K
of each header. To handle this special case, we need the GPADL types to
describe the different guest memory usage for the GPADL. A type enum is
introduced along with several generic interfaces to describe the differences
between a normal buffer GPADL and a ring buffer GPADL.
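To make the header-gap arithmetic concrete, here is a small standalone sketch
(not taken from the patch; it assumes a 64K-page guest and a 1 MiB ring-buffer
pair, both example values):

/* Standalone sketch: each of the two rings keeps a full guest page
 * (64K assumed here) for its header, but Hyper-V only looks at the first
 * 4K of it, so a RING gpadl skips the remaining
 * (PAGE_SIZE - HV_HYP_PAGE_SIZE) of both headers. */
#include <stdio.h>

#define PAGE_SIZE        (64 * 1024u)	/* assumed guest page size */
#define HV_HYP_PAGE_SIZE (4 * 1024u)

static unsigned int ring_gpadl_size(unsigned int guest_size)
{
	/* mirrors the HV_GPADL_RING case: drop two header gaps */
	return guest_size - 2 * (PAGE_SIZE - HV_HYP_PAGE_SIZE);
}

int main(void)
{
	unsigned int guest_size = 1024 * 1024;	/* example 1 MiB ring pair */

	/* 1024K - 2 * 60K = 904K visible to the hypervisor */
	printf("gpadl size: %u bytes\n", ring_gpadl_size(guest_size));
	return 0;
}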
Signed-off-by: Boqun Feng Reviewed-by: Michael Kelley --- drivers/hv/channel.c | 160 +++++++++++++++++++++++++++++++++++------ include/linux/hyperv.h | 44 +++++++++++- 2 files changed, 183 insertions(+), 21 deletions(-) diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c index 1cbe8fc931fc..45267b6d069e 100644 --- a/drivers/hv/channel.c +++ b/drivers/hv/channel.c @@ -35,6 +35,99 @@ static unsigned long virt_to_hvpfn(void *addr) return paddr >> HV_HYP_PAGE_SHIFT; } +/* + * hv_gpadl_size - Return the real size of a gpadl, the size that Hyper-V uses + * + * For BUFFER gpadl, Hyper-V uses the exact same size as the guest does. + * + * For RING gpadl, in each ring, the guest uses one PAGE_SIZE as the header + * (because of the alignment requirement), however, the hypervisor only + * uses the first HV_HYP_PAGE_SIZE as the header, therefore leaving a + * (PAGE_SIZE - HV_HYP_PAGE_SIZE) gap. And since there are two rings in a + * ringbuffer, the total size for a RING gpadl that Hyper-V uses is the + * total size that the guest uses minus twice of the gap size. + */ +static inline u32 hv_gpadl_size(enum hv_gpadl_type type, u32 size) +{ + switch (type) { + case HV_GPADL_BUFFER: + return size; + case HV_GPADL_RING: + /* The size of a ringbuffer must be page-aligned */ + BUG_ON(size % PAGE_SIZE); + /* + * Two things to notice here: + * 1) We're processing two ring buffers as a unit + * 2) We're skipping any space larger than HV_HYP_PAGE_SIZE in + * the first guest-size page of each of the two ring buffers. + * So we effectively subtract out two guest-size pages, and add + * back two Hyper-V size pages. + */ + return size - 2 * (PAGE_SIZE - HV_HYP_PAGE_SIZE); + } + BUG(); + return 0; +} + +/* + * hv_ring_gpadl_send_hvpgoffset - Calculate the send offset (in unit of + * HV_HYP_PAGE) in a ring gpadl based on the + * offset in the guest + * + * @offset: the offset (in bytes) where the send ringbuffer starts in the + * virtual address space of the guest + */ +static inline u32 hv_ring_gpadl_send_hvpgoffset(u32 offset) +{ + + /* + * For RING gpadl, in each ring, the guest uses one PAGE_SIZE as the + * header (because of the alignment requirement), however, the + * hypervisor only uses the first HV_HYP_PAGE_SIZE as the header, + * therefore leaving a (PAGE_SIZE - HV_HYP_PAGE_SIZE) gap. + * + * And to calculate the effective send offset in gpadl, we need to + * substract this gap. + */ + return (offset - (PAGE_SIZE - HV_HYP_PAGE_SIZE)) >> HV_HYP_PAGE_SHIFT; +} + +/* + * hv_gpadl_hvpfn - Return the Hyper-V page PFN of the @i th Hyper-V page in + * the gpadl + * + * @type: the type of the gpadl + * @kbuffer: the pointer to the gpadl in the guest + * @size: the total size (in bytes) of the gpadl + * @send_offset: the offset (in bytes) where the send ringbuffer starts in the + * virtual address space of the guest + * @i: the index + */ +static inline u64 hv_gpadl_hvpfn(enum hv_gpadl_type type, void *kbuffer, + u32 size, u32 send_offset, int i) +{ + int send_idx = hv_ring_gpadl_send_hvpgoffset(send_offset); + unsigned long delta = 0UL; + + switch (type) { + case HV_GPADL_BUFFER: + break; + case HV_GPADL_RING: + if (i == 0) + delta = 0; + else if (i <= send_idx) + delta = PAGE_SIZE - HV_HYP_PAGE_SIZE; + else + delta = 2 * (PAGE_SIZE - HV_HYP_PAGE_SIZE); + break; + default: + BUG(); + break; + } + + return virt_to_hvpfn(kbuffer + delta + (HV_HYP_PAGE_SIZE * i)); +} + /* * vmbus_setevent- Trigger an event notification on the specified * channel. 
@@ -160,7 +253,8 @@ EXPORT_SYMBOL_GPL(vmbus_send_modifychannel); /* * create_gpadl_header - Creates a gpadl for the specified buffer */ -static int create_gpadl_header(void *kbuffer, u32 size, +static int create_gpadl_header(enum hv_gpadl_type type, void *kbuffer, + u32 size, u32 send_offset, struct vmbus_channel_msginfo **msginfo) { int i; @@ -173,7 +267,7 @@ static int create_gpadl_header(void *kbuffer, u32 size, int pfnsum, pfncount, pfnleft, pfncurr, pfnsize; - pagecount = size >> HV_HYP_PAGE_SHIFT; + pagecount = hv_gpadl_size(type, size) >> HV_HYP_PAGE_SHIFT; /* do we need a gpadl body msg */ pfnsize = MAX_SIZE_CHANNEL_MESSAGE - @@ -200,10 +294,10 @@ static int create_gpadl_header(void *kbuffer, u32 size, gpadl_header->range_buflen = sizeof(struct gpa_range) + pagecount * sizeof(u64); gpadl_header->range[0].byte_offset = 0; - gpadl_header->range[0].byte_count = size; + gpadl_header->range[0].byte_count = hv_gpadl_size(type, size); for (i = 0; i < pfncount; i++) - gpadl_header->range[0].pfn_array[i] = virt_to_hvpfn( - kbuffer + HV_HYP_PAGE_SIZE * i); + gpadl_header->range[0].pfn_array[i] = hv_gpadl_hvpfn( + type, kbuffer, size, send_offset, i); *msginfo = msgheader; pfnsum = pfncount; @@ -254,8 +348,8 @@ static int create_gpadl_header(void *kbuffer, u32 size, * so the hypervisor guarantees that this is ok. */ for (i = 0; i < pfncurr; i++) - gpadl_body->pfn[i] = virt_to_hvpfn( - kbuffer + HV_HYP_PAGE_SIZE * (pfnsum + i)); + gpadl_body->pfn[i] = hv_gpadl_hvpfn(type, + kbuffer, size, send_offset, pfnsum + i); /* add to msg header */ list_add_tail(&msgbody->msglistentry, @@ -281,10 +375,10 @@ static int create_gpadl_header(void *kbuffer, u32 size, gpadl_header->range_buflen = sizeof(struct gpa_range) + pagecount * sizeof(u64); gpadl_header->range[0].byte_offset = 0; - gpadl_header->range[0].byte_count = size; + gpadl_header->range[0].byte_count = hv_gpadl_size(type, size); for (i = 0; i < pagecount; i++) - gpadl_header->range[0].pfn_array[i] = virt_to_hvpfn( - kbuffer + HV_HYP_PAGE_SIZE * i); + gpadl_header->range[0].pfn_array[i] = hv_gpadl_hvpfn( + type, kbuffer, size, send_offset, i); *msginfo = msgheader; } @@ -297,15 +391,20 @@ static int create_gpadl_header(void *kbuffer, u32 size, } /* - * vmbus_establish_gpadl - Establish a GPADL for the specified buffer + * __vmbus_establish_gpadl - Establish a GPADL for a buffer or ringbuffer * * @channel: a channel + * @type: the type of the corresponding GPADL, only meaningful for the guest. 
* @kbuffer: from kmalloc or vmalloc * @size: page-size multiple + * @send_offset: the offset (in bytes) where the send ring buffer starts, + * should be 0 for BUFFER type gpadl * @gpadl_handle: some funky thing */ -int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, - u32 size, u32 *gpadl_handle) +static int __vmbus_establish_gpadl(struct vmbus_channel *channel, + enum hv_gpadl_type type, void *kbuffer, + u32 size, u32 send_offset, + u32 *gpadl_handle) { struct vmbus_channel_gpadl_header *gpadlmsg; struct vmbus_channel_gpadl_body *gpadl_body; @@ -319,7 +418,7 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, next_gpadl_handle = (atomic_inc_return(&vmbus_connection.next_gpadl_handle) - 1); - ret = create_gpadl_header(kbuffer, size, &msginfo); + ret = create_gpadl_header(type, kbuffer, size, send_offset, &msginfo); if (ret) return ret; @@ -400,6 +499,21 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, kfree(msginfo); return ret; } + +/* + * vmbus_establish_gpadl - Establish a GPADL for the specified buffer + * + * @channel: a channel + * @kbuffer: from kmalloc or vmalloc + * @size: page-size multiple + * @gpadl_handle: some funky thing + */ +int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer, + u32 size, u32 *gpadl_handle) +{ + return __vmbus_establish_gpadl(channel, HV_GPADL_BUFFER, kbuffer, size, + 0U, gpadl_handle); +} EXPORT_SYMBOL_GPL(vmbus_establish_gpadl); static int __vmbus_open(struct vmbus_channel *newchannel, @@ -438,10 +552,11 @@ static int __vmbus_open(struct vmbus_channel *newchannel, /* Establish the gpadl for the ring buffer */ newchannel->ringbuffer_gpadlhandle = 0; - err = vmbus_establish_gpadl(newchannel, - page_address(newchannel->ringbuffer_page), - (send_pages + recv_pages) << PAGE_SHIFT, - &newchannel->ringbuffer_gpadlhandle); + err = __vmbus_establish_gpadl(newchannel, HV_GPADL_RING, + page_address(newchannel->ringbuffer_page), + (send_pages + recv_pages) << PAGE_SHIFT, + newchannel->ringbuffer_send_offset << PAGE_SHIFT, + &newchannel->ringbuffer_gpadlhandle); if (err) goto error_clean_ring; @@ -462,7 +577,13 @@ static int __vmbus_open(struct vmbus_channel *newchannel, open_msg->openid = newchannel->offermsg.child_relid; open_msg->child_relid = newchannel->offermsg.child_relid; open_msg->ringbuffer_gpadlhandle = newchannel->ringbuffer_gpadlhandle; - open_msg->downstream_ringbuffer_pageoffset = newchannel->ringbuffer_send_offset; + /* + * The unit of ->downstream_ringbuffer_pageoffset is HV_HYP_PAGE and + * the unit of ->ringbuffer_send_offset (i.e. send_pages) is PAGE, so + * here we calculate it into HV_HYP_PAGE. + */ + open_msg->downstream_ringbuffer_pageoffset = + hv_ring_gpadl_send_hvpgoffset(send_pages << PAGE_SHIFT); open_msg->target_vp = hv_cpu_number_to_vp_number(newchannel->target_cpu); if (userdatalen) @@ -556,7 +677,6 @@ int vmbus_open(struct vmbus_channel *newchannel, } EXPORT_SYMBOL_GPL(vmbus_open); - /* * vmbus_teardown_gpadl -Teardown the specified GPADL handle */ diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h index 38100e80360a..7d16dd28aa48 100644 --- a/include/linux/hyperv.h +++ b/include/linux/hyperv.h @@ -29,6 +29,48 @@ #pragma pack(push, 1) +/* + * Types for GPADL, decides is how GPADL header is created. + * + * It doesn't make much difference between BUFFER and RING if PAGE_SIZE is the + * same as HV_HYP_PAGE_SIZE. 
+ * If PAGE_SIZE is bigger than HV_HYP_PAGE_SIZE, the headers of ring buffers + * will be of PAGE_SIZE, however, only the first HV_HYP_PAGE will be put + * into gpadl, therefore the number for HV_HYP_PAGE and the indexes of each + * HV_HYP_PAGE will be different between different types of GPADL, for example + * if PAGE_SIZE is 64K: + * + * BUFFER: + * + * gva: |-- 64k --|-- 64k --| ... | + * gpa: | 4k | 4k | ... | 4k | 4k | 4k | ... | 4k | + * index: 0 1 2 15 16 17 18 .. 31 32 ... + * | | ... | | | ... | ... + * v V V V V V + * gpadl: | 4k | 4k | ... | 4k | 4k | 4k | ... | 4k | ... | + * index: 0 1 2 ... 15 16 17 18 .. 31 32 ... + * + * RING: + * + * | header | data | header | data | + * gva: |-- 64k --|-- 64k --| ... |-- 64k --|-- 64k --| ... | + * gpa: | 4k | .. | 4k | 4k | ... | 4k | ... | 4k | .. | 4k | .. | ... | + * index: 0 1 16 17 18 31 ... n n+1 n+16 ... 2n + * | / / / | / / + * | / / / | / / + * | / / ... / ... | / ... / + * | / / / | / / + * | / / / | / / + * V V V V V V v + * gpadl: | 4k | 4k | ... | ... | 4k | 4k | ... | + * index: 0 1 2 ... 16 ... n-15 n-14 n-13 ... 2n-30 + */ +enum hv_gpadl_type { + HV_GPADL_BUFFER, + HV_GPADL_RING +}; + /* Single-page buffer */ struct hv_page_buffer { u32 len; @@ -111,7 +153,7 @@ struct hv_ring_buffer { } feature_bits; /* Pad it to PAGE_SIZE so that data starts on page boundary */ - u8 reserved2[4028]; + u8 reserved2[PAGE_SIZE - 68]; /* * Ring data starts here + RingDataStartOffset
From patchwork Thu Sep 10 14:34:48 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11769315
From: Boqun Feng
Subject: [PATCH v3 04/11] Drivers: hv: Use HV_HYP_PAGE in hv_synic_enable_regs()
Date: Thu, 10 Sep 2020 22:34:48 +0800
Message-Id: <20200910143455.109293-5-boqun.feng@gmail.com>
In-Reply-To: <20200910143455.109293-1-boqun.feng@gmail.com>

Both base_*_gpa fields should hold the guest page number in units of Hyper-V
pages, so use HV_HYP_PAGE instead of PAGE.
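As a hedged illustration (the 64K guest page size and the physical address
below are made-up assumptions, not kernel code), the unit difference looks
like this:

/* Sketch only: the SynIC SIMP/SIEFP base fields are defined in 4K
 * Hyper-V page units, regardless of the guest's PAGE_SIZE. */
#include <stdio.h>

#define GUEST_PAGE_SHIFT  16	/* hypothetical 64K guest pages */
#define HV_HYP_PAGE_SHIFT 12	/* unit expected for base_*_gpa */

int main(void)
{
	unsigned long long synic_page_pa = 0x40000ULL;	/* example address */

	/* value to program: 0x40000 >> 12 == 0x40 */
	printf("base gpa (4K units):    0x%llx\n", synic_page_pa >> HV_HYP_PAGE_SHIFT);
	/* using the guest shift would point at the wrong page: 0x4 */
	printf("wrong (guest page units): 0x%llx\n", synic_page_pa >> GUEST_PAGE_SHIFT);
	return 0;
}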
Signed-off-by: Boqun Feng
Reviewed-by: Michael Kelley
---
 drivers/hv/hv.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
index 7499079f4077..8ac8bbf5b5aa 100644
--- a/drivers/hv/hv.c
+++ b/drivers/hv/hv.c
@@ -165,7 +165,7 @@ void hv_synic_enable_regs(unsigned int cpu)
 hv_get_simp(simp.as_uint64); simp.simp_enabled = 1; simp.base_simp_gpa = virt_to_phys(hv_cpu->synic_message_page) - >> PAGE_SHIFT; + >> HV_HYP_PAGE_SHIFT; hv_set_simp(simp.as_uint64);
@@ -173,7 +173,7 @@
 hv_get_siefp(siefp.as_uint64); siefp.siefp_enabled = 1; siefp.base_siefp_gpa = virt_to_phys(hv_cpu->synic_event_page) - >> PAGE_SHIFT; + >> HV_HYP_PAGE_SHIFT; hv_set_siefp(siefp.as_uint64);
From patchwork Thu Sep 10 14:34:49 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11769295
From: Boqun Feng
Subject: [PATCH v3 05/11] Drivers: hv: vmbus: Move virt_to_hvpfn() to hyperv header
Date: Thu, 10 Sep 2020 22:34:49 +0800
Message-Id: <20200910143455.109293-6-boqun.feng@gmail.com>
In-Reply-To: <20200910143455.109293-1-boqun.feng@gmail.com>

There will be more places other than vmbus where we need to calculate the
Hyper-V page PFN from a virtual address, so move virt_to_hvpfn() to the
generic Hyper-V header.
Signed-off-by: Boqun Feng
Reviewed-by: Michael Kelley
---
 drivers/hv/channel.c   | 13 -------------
 include/linux/hyperv.h | 15 +++++++++++++++
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 45267b6d069e..fbdda9938039 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -22,19 +22,6 @@
 #include "hyperv_vmbus.h" -static unsigned long virt_to_hvpfn(void *addr) -{ - phys_addr_t paddr; - - if (is_vmalloc_addr(addr)) - paddr = page_to_phys(vmalloc_to_page(addr)) + - offset_in_page(addr); - else - paddr = __pa(addr); - - return paddr >> HV_HYP_PAGE_SHIFT; -} - /* * hv_gpadl_size - Return the real size of a gpadl, the size that Hyper-V uses *
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 7d16dd28aa48..6f4831212979 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -14,6 +14,7 @@
 #include +#include #include #include #include
@@ -23,6 +24,7 @@
 #include #include #include +#include #define MAX_PAGE_BUFFER_COUNT 32 #define MAX_MULTIPAGE_BUFFER_COUNT 32 /* 128K */
@@ -1672,4 +1674,17 @@ struct hyperv_pci_block_ops {
 extern struct hyperv_pci_block_ops hvpci_block_ops; +static inline unsigned long virt_to_hvpfn(void *addr) +{ + phys_addr_t paddr; + + if (is_vmalloc_addr(addr)) + paddr = page_to_phys(vmalloc_to_page(addr)) + + offset_in_page(addr); + else + paddr = __pa(addr); + + return paddr >> HV_HYP_PAGE_SHIFT; +} + #endif /* _HYPERV_H */
From patchwork Thu Sep 10 14:34:50 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11767957
From: Boqun Feng
Subject: [PATCH v3 06/11] hv: hyperv.h: Introduce some hvpfn helper functions
Date: Thu, 10 Sep 2020 22:34:50 +0800
Message-Id: <20200910143455.109293-7-boqun.feng@gmail.com>
In-Reply-To: <20200910143455.109293-1-boqun.feng@gmail.com>

When a guest communicates with the hypervisor, it must use HV_HYP_PAGE to
calculate PFNs, so introduce a few hvpfn helper functions as counterparts of
the page helper functions. This is preparation for supporting guests whose
PAGE_SIZE is not 4K.
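For illustration, a standalone sketch of how these helpers evaluate on an
assumed 64K-PAGE_SIZE guest (the macro bodies mirror the ones added in the
diff below; the example values are made up):

/* Sketch of the new helpers on an assumed 64K-page guest. */
#include <stdio.h>

#define PAGE_SIZE		(64 * 1024ul)	/* assumed guest page size */
#define HV_HYP_PAGE_SHIFT	12
#define HV_HYP_PAGE_SIZE	(1ul << HV_HYP_PAGE_SHIFT)
#define HV_HYP_PAGE_MASK	(~(HV_HYP_PAGE_SIZE - 1))

#define NR_HV_HYP_PAGES_IN_PAGE	(PAGE_SIZE / HV_HYP_PAGE_SIZE)
#define offset_in_hvpage(ptr)	((unsigned long)(ptr) & ~HV_HYP_PAGE_MASK)
#define HVPFN_UP(x)		(((x) + HV_HYP_PAGE_SIZE - 1) >> HV_HYP_PAGE_SHIFT)

int main(void)
{
	/* 16 Hyper-V pages per 64K guest page */
	printf("NR_HV_HYP_PAGES_IN_PAGE = %lu\n", NR_HV_HYP_PAGES_IN_PAGE);
	/* a 10000-byte buffer spans 3 Hyper-V pages */
	printf("HVPFN_UP(10000) = %lu\n", HVPFN_UP(10000ul));
	/* offset of address 0x1234 within its 4K Hyper-V page: 0x234 */
	printf("offset_in_hvpage(0x1234) = 0x%lx\n", offset_in_hvpage(0x1234));
	return 0;
}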
Signed-off-by: Boqun Feng
Reviewed-by: Michael Kelley
---
 include/linux/hyperv.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 6f4831212979..00c09d2ff9ad 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1687,4 +1687,9 @@ static inline unsigned long virt_to_hvpfn(void *addr)
 	return paddr >> HV_HYP_PAGE_SHIFT;
 }
 
+#define NR_HV_HYP_PAGES_IN_PAGE	(PAGE_SIZE / HV_HYP_PAGE_SIZE)
+#define offset_in_hvpage(ptr)	((unsigned long)(ptr) & ~HV_HYP_PAGE_MASK)
+#define HVPFN_UP(x)	(((x) + HV_HYP_PAGE_SIZE-1) >> HV_HYP_PAGE_SHIFT)
+#define page_to_hvpfn(page)	(page_to_pfn(page) * NR_HV_HYP_PAGES_IN_PAGE)
+
 #endif /* _HYPERV_H */
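To see what the new helpers compute, here is a standalone sketch that mirrors the macros under an assumed 64K guest PAGE_SIZE and the fixed 4K Hyper-V page. The constant values are example assumptions, not taken from any particular kernel configuration.

#include <stdio.h>

/* Assumed values for illustration: 64K guest pages, 4K Hyper-V pages. */
#define PAGE_SIZE         (64UL * 1024)
#define HV_HYP_PAGE_SHIFT 12
#define HV_HYP_PAGE_SIZE  (1UL << HV_HYP_PAGE_SHIFT)
#define HV_HYP_PAGE_MASK  (~(HV_HYP_PAGE_SIZE - 1))

/* Counterparts of the new helpers, expressed over the assumed constants. */
#define NR_HV_HYP_PAGES_IN_PAGE	(PAGE_SIZE / HV_HYP_PAGE_SIZE)
#define offset_in_hvpage(ptr)	((unsigned long)(ptr) & ~HV_HYP_PAGE_MASK)
#define HVPFN_UP(x)		(((x) + HV_HYP_PAGE_SIZE - 1) >> HV_HYP_PAGE_SHIFT)

int main(void)
{
	/* A 64K guest page contains 16 Hyper-V pages. */
	printf("hv pages per guest page: %lu\n", NR_HV_HYP_PAGES_IN_PAGE);

	/* 9000 bytes of data need 3 Hyper-V pages, regardless of PAGE_SIZE. */
	printf("HVPFN_UP(9000) = %lu\n", HVPFN_UP(9000));

	/* A byte 5000 bytes into a guest page is 904 bytes into an hv page. */
	printf("offset_in_hvpage(5000) = %lu\n", offset_in_hvpage(5000));
	return 0;
}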
From patchwork Thu Sep 10 14:34:51 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11769285
From: Boqun Feng
Subject: [PATCH v3 07/11] hv_netvsc: Use HV_HYP_PAGE_SIZE for Hyper-V communication
Date: Thu, 10 Sep 2020 22:34:51 +0800
Message-Id: <20200910143455.109293-8-boqun.feng@gmail.com>
In-Reply-To: <20200910143455.109293-1-boqun.feng@gmail.com>

When communicating with Hyper-V, HV_HYP_PAGE_SIZE should be used, since
that is the page size used by Hyper-V and Hyper-V expects all page-related
data in units of HV_HYP_PAGE_SIZE; for example, the "pfn" in hv_page_buffer
is actually the HV_HYP_PAGE (i.e. the Hyper-V page) number. In order to
support guests whose page size is not 4k, make hv_netvsc always use
HV_HYP_PAGE_SIZE for Hyper-V communication.
Signed-off-by: Boqun Feng
Reviewed-by: Michael Kelley
---
 drivers/net/hyperv/netvsc.c       |  2 +-
 drivers/net/hyperv/netvsc_drv.c   | 46 +++++++++++++++----------------
 drivers/net/hyperv/rndis_filter.c | 13 ++++-----
 3 files changed, 30 insertions(+), 31 deletions(-)

diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index 41f5cf0bb997..1d6f2256da6b 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -794,7 +794,7 @@ static void netvsc_copy_to_send_buf(struct netvsc_device *net_device,
 	}
 
 	for (i = 0; i < page_count; i++) {
-		char *src = phys_to_virt(pb[i].pfn << PAGE_SHIFT);
+		char *src = phys_to_virt(pb[i].pfn << HV_HYP_PAGE_SHIFT);
 		u32 offset = pb[i].offset;
 		u32 len = pb[i].len;
 
diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
index 64b0a74c1523..61ea568e1ddf 100644
--- a/drivers/net/hyperv/netvsc_drv.c
+++ b/drivers/net/hyperv/netvsc_drv.c
@@ -373,32 +373,29 @@ static u16 netvsc_select_queue(struct net_device *ndev, struct sk_buff *skb,
 	return txq;
 }
 
-static u32 fill_pg_buf(struct page *page, u32 offset, u32 len,
+static u32 fill_pg_buf(unsigned long hvpfn, u32 offset, u32 len,
 		       struct hv_page_buffer *pb)
 {
 	int j = 0;
 
-	/* Deal with compound pages by ignoring unused part
-	 * of the page.
-	 */
-	page += (offset >> PAGE_SHIFT);
-	offset &= ~PAGE_MASK;
+	hvpfn += offset >> HV_HYP_PAGE_SHIFT;
+	offset = offset & ~HV_HYP_PAGE_MASK;
 
 	while (len > 0) {
 		unsigned long bytes;
 
-		bytes = PAGE_SIZE - offset;
+		bytes = HV_HYP_PAGE_SIZE - offset;
 		if (bytes > len)
 			bytes = len;
-		pb[j].pfn = page_to_pfn(page);
+		pb[j].pfn = hvpfn;
 		pb[j].offset = offset;
 		pb[j].len = bytes;
 
 		offset += bytes;
 		len -= bytes;
 
-		if (offset == PAGE_SIZE && len) {
-			page++;
+		if (offset == HV_HYP_PAGE_SIZE && len) {
+			hvpfn++;
 			offset = 0;
 			j++;
 		}
@@ -421,23 +418,26 @@ static u32 init_page_array(void *hdr, u32 len, struct sk_buff *skb,
 	 * 2. skb linear data
 	 * 3. skb fragment data
 	 */
-	slots_used += fill_pg_buf(virt_to_page(hdr),
-				  offset_in_page(hdr),
-				  len, &pb[slots_used]);
+	slots_used += fill_pg_buf(virt_to_hvpfn(hdr),
+				  offset_in_hvpage(hdr),
+				  len,
+				  &pb[slots_used]);
 
 	packet->rmsg_size = len;
 	packet->rmsg_pgcnt = slots_used;
 
-	slots_used += fill_pg_buf(virt_to_page(data),
-				  offset_in_page(data),
-				  skb_headlen(skb), &pb[slots_used]);
+	slots_used += fill_pg_buf(virt_to_hvpfn(data),
+				  offset_in_hvpage(data),
+				  skb_headlen(skb),
+				  &pb[slots_used]);
 
 	for (i = 0; i < frags; i++) {
 		skb_frag_t *frag = skb_shinfo(skb)->frags + i;
 
-		slots_used += fill_pg_buf(skb_frag_page(frag),
-					  skb_frag_off(frag),
-					  skb_frag_size(frag), &pb[slots_used]);
+		slots_used += fill_pg_buf(page_to_hvpfn(skb_frag_page(frag)),
+					  skb_frag_off(frag),
+					  skb_frag_size(frag),
+					  &pb[slots_used]);
 	}
 	return slots_used;
 }
@@ -453,8 +453,8 @@ static int count_skb_frag_slots(struct sk_buff *skb)
 		unsigned long offset = skb_frag_off(frag);
 
 		/* Skip unused frames from start of page */
-		offset &= ~PAGE_MASK;
-		pages += PFN_UP(offset + size);
+		offset &= ~HV_HYP_PAGE_MASK;
+		pages += HVPFN_UP(offset + size);
 	}
 	return pages;
 }
@@ -462,12 +462,12 @@ static int netvsc_get_slots(struct sk_buff *skb)
 {
 	char *data = skb->data;
-	unsigned int offset = offset_in_page(data);
+	unsigned int offset = offset_in_hvpage(data);
 	unsigned int len = skb_headlen(skb);
 	int slots;
 	int frag_slots;
 
-	slots = DIV_ROUND_UP(offset + len, PAGE_SIZE);
+	slots = DIV_ROUND_UP(offset + len, HV_HYP_PAGE_SIZE);
 	frag_slots = count_skb_frag_slots(skb);
 	return slots + frag_slots;
 }
diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
index b81ceba38218..1e2de8fb7fec 100644
--- a/drivers/net/hyperv/rndis_filter.c
+++ b/drivers/net/hyperv/rndis_filter.c
@@ -25,7 +25,7 @@
 
 static void rndis_set_multicast(struct work_struct *w);
 
-#define RNDIS_EXT_LEN PAGE_SIZE
+#define RNDIS_EXT_LEN HV_HYP_PAGE_SIZE
 struct rndis_request {
 	struct list_head list_ent;
 	struct completion  wait_event;
@@ -215,18 +215,17 @@ static int rndis_filter_send_request(struct rndis_device *dev,
 	packet->page_buf_cnt = 1;
 
 	pb[0].pfn = virt_to_phys(&req->request_msg) >>
-					PAGE_SHIFT;
+					HV_HYP_PAGE_SHIFT;
 	pb[0].len = req->request_msg.msg_len;
-	pb[0].offset =
-		(unsigned long)&req->request_msg & (PAGE_SIZE - 1);
+	pb[0].offset = offset_in_hvpage(&req->request_msg);
 
 	/* Add one page_buf when request_msg crossing page boundary */
-	if (pb[0].offset + pb[0].len > PAGE_SIZE) {
+	if (pb[0].offset + pb[0].len > HV_HYP_PAGE_SIZE) {
 		packet->page_buf_cnt++;
-		pb[0].len = PAGE_SIZE -
+		pb[0].len = HV_HYP_PAGE_SIZE -
 			pb[0].offset;
 		pb[1].pfn = virt_to_phys((void *)&req->request_msg
-			+ pb[0].len) >> PAGE_SHIFT;
+			+ pb[0].len) >> HV_HYP_PAGE_SHIFT;
 		pb[1].offset = 0;
 		pb[1].len = req->request_msg.msg_len -
 			pb[0].len;
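To make the reworked fill_pg_buf() walk concrete, the sketch below splits a byte range that starts part-way into a Hyper-V page into one entry per hv page touched, the same way the patched helper does. The struct and the example_ names are stand-ins for illustration, not the driver's real types.

#include <stdio.h>

#define HV_HYP_PAGE_SHIFT 12
#define HV_HYP_PAGE_SIZE  (1UL << HV_HYP_PAGE_SHIFT)
#define HV_HYP_PAGE_MASK  (~(HV_HYP_PAGE_SIZE - 1))

/* Simplified stand-in for struct hv_page_buffer. */
struct example_page_buffer {
	unsigned long pfn;	/* Hyper-V page frame number */
	unsigned int  offset;	/* offset within that hv page */
	unsigned int  len;	/* bytes described by this entry */
};

/* Consume 'len' bytes starting at 'offset' into hv page 'hvpfn',
 * emitting one entry per Hyper-V page touched (same walk as the patch). */
static unsigned int example_fill_pg_buf(unsigned long hvpfn, unsigned int offset,
					unsigned int len,
					struct example_page_buffer *pb)
{
	unsigned int j = 0;

	hvpfn += offset >> HV_HYP_PAGE_SHIFT;
	offset &= ~HV_HYP_PAGE_MASK;

	while (len > 0) {
		unsigned long bytes = HV_HYP_PAGE_SIZE - offset;

		if (bytes > len)
			bytes = len;

		pb[j].pfn = hvpfn;
		pb[j].offset = offset;
		pb[j].len = bytes;

		offset += bytes;
		len -= bytes;

		if (offset == HV_HYP_PAGE_SIZE && len) {
			hvpfn++;
			offset = 0;
			j++;
		}
	}
	return j + 1;
}

int main(void)
{
	struct example_page_buffer pb[4];
	/* 6000 bytes starting 1000 bytes into hv page 100: the range
	 * spans hv pages 100 and 101, so two slots are produced. */
	unsigned int used = example_fill_pg_buf(100, 1000, 6000, pb);

	for (unsigned int i = 0; i < used; i++)
		printf("slot %u: pfn=%lu offset=%u len=%u\n",
		       i, pb[i].pfn, pb[i].offset, pb[i].len);
	return 0;
}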
From patchwork Thu Sep 10 14:34:52 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11769287
From: Boqun Feng
Subject: [PATCH v3 08/11] Input: hyperv-keyboard: Make ringbuffer at least take two pages
Date: Thu, 10 Sep 2020 22:34:52 +0800
Message-Id: <20200910143455.109293-9-boqun.feng@gmail.com>
In-Reply-To: <20200910143455.109293-1-boqun.feng@gmail.com>

When PAGE_SIZE > HV_HYP_PAGE_SIZE, the ringbuffer size needs to be at
least 2 * PAGE_SIZE: one page for the header and at least one page for the
data part (because of the alignment requirement for double mapping). So
make sure the ringbuffer sizes are at least 2 * PAGE_SIZE when using
vmbus_open() to establish the vmbus connection.
Signed-off-by: Boqun Feng
Reviewed-by: Michael Kelley
Acked-by: Dmitry Torokhov
---
 drivers/input/serio/hyperv-keyboard.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/input/serio/hyperv-keyboard.c b/drivers/input/serio/hyperv-keyboard.c
index df4e9f6f4529..6ebc61e2db3f 100644
--- a/drivers/input/serio/hyperv-keyboard.c
+++ b/drivers/input/serio/hyperv-keyboard.c
@@ -75,8 +75,8 @@ struct synth_kbd_keystroke {
 
 #define HK_MAXIMUM_MESSAGE_SIZE 256
 
-#define KBD_VSC_SEND_RING_BUFFER_SIZE	(40 * 1024)
-#define KBD_VSC_RECV_RING_BUFFER_SIZE	(40 * 1024)
+#define KBD_VSC_SEND_RING_BUFFER_SIZE	max(40 * 1024, (int)(2 * PAGE_SIZE))
+#define KBD_VSC_RECV_RING_BUFFER_SIZE	max(40 * 1024, (int)(2 * PAGE_SIZE))
 
 #define XTKBD_EMUL0     0xe0
 #define XTKBD_EMUL1     0xe1
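The (int) cast in these macros exists because 40 * 1024 is an int while 2 * PAGE_SIZE is typically an unsigned long, and the kernel's max() warns when the two operand types differ. Below is a userspace sketch of the resulting sizing rule, using a simplified max() and an assumed 64K PAGE_SIZE; the example_ names are inventions for illustration.

#include <stdio.h>

/* Assumed guest page size for illustration (e.g. arm64 with 64K pages). */
#define PAGE_SIZE (64UL * 1024)

/* Simplified stand-in for the kernel's max(): plain ternary, no type check. */
#define example_max(a, b) ((a) > (b) ? (a) : (b))

/*
 * Mirrors KBD_VSC_SEND_RING_BUFFER_SIZE: 40K normally, but never smaller
 * than two guest pages; the (int) cast keeps both operands the same type.
 */
#define EXAMPLE_RING_BUFFER_SIZE example_max(40 * 1024, (int)(2 * PAGE_SIZE))

int main(void)
{
	/* With 64K pages this evaluates to 131072, i.e. 2 * PAGE_SIZE wins;
	 * with 4K pages it would stay at the historical 40960. */
	printf("ring buffer size: %d\n", EXAMPLE_RING_BUFFER_SIZE);
	return 0;
}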
From patchwork Thu Sep 10 14:34:53 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11769281
From: Boqun Feng
Subject: [PATCH v3 09/11] HID: hyperv: Make ringbuffer at least take two pages
Date: Thu, 10 Sep 2020 22:34:53 +0800
Message-Id: <20200910143455.109293-10-boqun.feng@gmail.com>
In-Reply-To: <20200910143455.109293-1-boqun.feng@gmail.com>

When PAGE_SIZE > HV_HYP_PAGE_SIZE, the ringbuffer size needs to be at
least 2 * PAGE_SIZE: one page for the header and at least one page for the
data part (because of the alignment requirement for double mapping). So
make sure the ringbuffer sizes are at least 2 * PAGE_SIZE when using
vmbus_open() to establish the vmbus connection.

Signed-off-by: Boqun Feng
Acked-by: Jiri Kosina
Reviewed-by: Michael Kelley
---
Hi Jiri,

Thanks for your acked-by. I made a small change in this version (casting
2 * PAGE_SIZE into int to avoid compiler warnings), and it makes no
functional change. If the change is inappropriate, please let me know.
 drivers/hid/hid-hyperv.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/hid/hid-hyperv.c b/drivers/hid/hid-hyperv.c
index 0b6ee1dee625..8905559b3882 100644
--- a/drivers/hid/hid-hyperv.c
+++ b/drivers/hid/hid-hyperv.c
@@ -104,8 +104,8 @@ struct synthhid_input_report {
 
 #pragma pack(pop)
 
-#define INPUTVSC_SEND_RING_BUFFER_SIZE	(40 * 1024)
-#define INPUTVSC_RECV_RING_BUFFER_SIZE	(40 * 1024)
+#define INPUTVSC_SEND_RING_BUFFER_SIZE	max(40 * 1024, (int)(2 * PAGE_SIZE))
+#define INPUTVSC_RECV_RING_BUFFER_SIZE	max(40 * 1024, (int)(2 * PAGE_SIZE))
 
 enum pipe_prot_msg_type {
From patchwork Thu Sep 10 14:34:54 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11769279
From: Boqun Feng
Subject: [PATCH v3 10/11] Driver: hv: util: Make ringbuffer at least take two pages
Date: Thu, 10 Sep 2020 22:34:54 +0800
Message-Id: <20200910143455.109293-11-boqun.feng@gmail.com>
In-Reply-To: <20200910143455.109293-1-boqun.feng@gmail.com>

When PAGE_SIZE > HV_HYP_PAGE_SIZE, the ringbuffer size needs to be at
least 2 * PAGE_SIZE: one page for the header and at least one page for the
data part (because of the alignment requirement for double mapping). So
make sure the ringbuffer sizes are at least 2 * PAGE_SIZE when using
vmbus_open() to establish the vmbus connection.
Signed-off-by: Boqun Feng
Reviewed-by: Michael Kelley
---
 drivers/hv/hv_util.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
index a4e8d96513c2..3996c16568a3 100644
--- a/drivers/hv/hv_util.c
+++ b/drivers/hv/hv_util.c
@@ -500,6 +500,14 @@ static void heartbeat_onchannelcallback(void *context)
 	}
 }
 
+/*
+ * The size of each ring should be at least 2 * PAGE_SIZE, because we need one
+ * page for the header and at least another page (because of the alignment
+ * requirement for double mapping) for data part.
+ */
+#define HV_UTIL_RING_SEND_SIZE max(4 * HV_HYP_PAGE_SIZE, 2 * PAGE_SIZE)
+#define HV_UTIL_RING_RECV_SIZE max(4 * HV_HYP_PAGE_SIZE, 2 * PAGE_SIZE)
+
 static int util_probe(struct hv_device *dev,
 		      const struct hv_vmbus_device_id *dev_id)
 {
@@ -530,8 +538,8 @@ static int util_probe(struct hv_device *dev,
 
 	hv_set_drvdata(dev, srv);
 
-	ret = vmbus_open(dev->channel, 4 * HV_HYP_PAGE_SIZE,
-			 4 * HV_HYP_PAGE_SIZE, NULL, 0, srv->util_cb,
+	ret = vmbus_open(dev->channel, HV_UTIL_RING_SEND_SIZE,
+			 HV_UTIL_RING_RECV_SIZE, NULL, 0, srv->util_cb,
 			 dev->channel);
 	if (ret)
 		goto error;
@@ -590,8 +598,8 @@ static int util_resume(struct hv_device *dev)
 		return ret;
 	}
 
-	ret = vmbus_open(dev->channel, 4 * HV_HYP_PAGE_SIZE,
-			 4 * HV_HYP_PAGE_SIZE, NULL, 0, srv->util_cb,
+	ret = vmbus_open(dev->channel, HV_UTIL_RING_SEND_SIZE,
+			 HV_UTIL_RING_RECV_SIZE, NULL, 0, srv->util_cb,
 			 dev->channel);
 	return ret;
 }
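For a sense of the numbers: on a 4K-page guest these macros still evaluate to the historical 4 * HV_HYP_PAGE_SIZE = 16K, and the 2 * PAGE_SIZE term only takes over once PAGE_SIZE exceeds 8K. A minimal sketch, with the guest page size passed in as a plain number purely for illustration:

#include <stdio.h>

#define HV_HYP_PAGE_SIZE 4096UL

/* Mirrors HV_UTIL_RING_SEND_SIZE/RECV_SIZE for an arbitrary guest page size. */
static unsigned long example_util_ring_size(unsigned long page_size)
{
	unsigned long a = 4 * HV_HYP_PAGE_SIZE;	/* historical 16K rings   */
	unsigned long b = 2 * page_size;	/* header + one data page */

	return a > b ? a : b;
}

int main(void)
{
	printf("4K guest pages : %lu bytes\n", example_util_ring_size(4096));	/* 16384  */
	printf("64K guest pages: %lu bytes\n", example_util_ring_size(65536));	/* 131072 */
	return 0;
}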
From patchwork Thu Sep 10 14:34:55 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11769275
From: Boqun Feng
Subject: [PATCH v3 11/11] scsi: storvsc: Support PAGE_SIZE larger than 4K
Date: Thu, 10 Sep 2020 22:34:55 +0800
Message-Id: <20200910143455.109293-12-boqun.feng@gmail.com>
In-Reply-To: <20200910143455.109293-1-boqun.feng@gmail.com>

Hyper-V always uses a 4k page size (HV_HYP_PAGE_SIZE), so when
communicating with Hyper-V, a guest should always use HV_HYP_PAGE_SIZE as
the unit for page-related data. For storvsc, that data is the
vmbus_packet_mpb_array. And since scsi_cmnd uses an sglist of pages (in
units of PAGE_SIZE), we need to convert the pages in the sglist of
scsi_cmnd into Hyper-V pages in vmbus_packet_mpb_array.

This patch does the conversion by dividing pages in the sglist into
Hyper-V pages; offsets and indexes in vmbus_packet_mpb_array are
recalculated accordingly.
Signed-off-by: Boqun Feng
Reviewed-by: Michael Kelley
---
 drivers/scsi/storvsc_drv.c | 54 +++++++++++++++++++++++++++++++++-----
 1 file changed, 47 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index 8f5f5dc863a4..119b76ca24a1 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1739,23 +1739,63 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 	payload_sz = sizeof(cmd_request->mpb);
 
 	if (sg_count) {
-		if (sg_count > MAX_PAGE_BUFFER_COUNT) {
+		unsigned int hvpgoff = 0;
+		unsigned long hvpg_offset = sgl->offset & ~HV_HYP_PAGE_MASK;
+		unsigned int hvpg_count = HVPFN_UP(hvpg_offset + length);
+		u64 hvpfn;
 
-			payload_sz = (sg_count * sizeof(u64) +
+		if (hvpg_count > MAX_PAGE_BUFFER_COUNT) {
+
+			payload_sz = (hvpg_count * sizeof(u64) +
 				      sizeof(struct vmbus_packet_mpb_array));
 			payload = kzalloc(payload_sz, GFP_ATOMIC);
 			if (!payload)
 				return SCSI_MLQUEUE_DEVICE_BUSY;
 		}
 
+		/*
+		 * sgl is a list of PAGEs, and payload->range.pfn_array
+		 * expects the page number in the unit of HV_HYP_PAGE_SIZE
+		 * (the page size that Hyper-V uses), so here we need to
+		 * divide PAGEs into HV_HYP_PAGE in case that
+		 * PAGE_SIZE > HV_HYP_PAGE_SIZE.
+		 */
 		payload->range.len = length;
-		payload->range.offset = sgl[0].offset;
+		payload->range.offset = sgl[0].offset & ~HV_HYP_PAGE_MASK;
+		hvpgoff = sgl[0].offset >> HV_HYP_PAGE_SHIFT;
 
 		cur_sgl = sgl;
-		for (i = 0; i < sg_count; i++) {
-			payload->range.pfn_array[i] =
-				page_to_pfn(sg_page((cur_sgl)));
-			cur_sgl = sg_next(cur_sgl);
+		for (i = 0; i < hvpg_count; i++) {
+			/*
+			 * 'i' is the index of hv pages in the payload and
+			 * 'hvpgoff' is the offset (in hv pages) of the first
+			 * hv page in the first page. The relationship between
+			 * the sum of 'i' and 'hvpgoff' and the offset (in hv
+			 * pages) in a payload page ('hvpgoff_in_page') is as
+			 * follows:
+			 *
+			 * |------------------- PAGE --------------------|
+			 * |   NR_HV_HYP_PAGES_IN_PAGE hvpgs in total    |
+			 * |hvpg|hvpg| ...            |hvpg| ...     |hvpg|
+			 * ^         ^                ^                  ^
+			 * +-hvpgoff-+                +-hvpgoff_in_page--+
+			 *           ^                                   |
+			 *           +------------------ i --------------+
+			 */
+			unsigned int hvpgoff_in_page =
+				(i + hvpgoff) % NR_HV_HYP_PAGES_IN_PAGE;
+
+			/*
+			 * Two cases that we need to fetch a page:
+			 * 1) i == 0, the first step, or
+			 * 2) hvpgoff_in_page == 0, when we reach the boundary
+			 *    of a page.
+			 */
+			if (hvpgoff_in_page == 0 || i == 0) {
+				hvpfn = page_to_hvpfn(sg_page(cur_sgl));
+				cur_sgl = sg_next(cur_sgl);
+			}
+
+			payload->range.pfn_array[i] = hvpfn + hvpgoff_in_page;
 		}
 	}
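To make the index arithmetic in the comment above concrete, the following standalone sketch walks a hypothetical two-page 64K sglist the same way the new loop does and prints which Hyper-V pfn lands in each pfn_array slot. The scatterlist is modelled as a plain array and all constants are assumed values, so none of the names here are the driver's real data structures.

#include <stdio.h>

#define HV_HYP_PAGE_SHIFT 12
#define HV_HYP_PAGE_SIZE  (1UL << HV_HYP_PAGE_SHIFT)
#define HV_HYP_PAGE_MASK  (~(HV_HYP_PAGE_SIZE - 1))
#define PAGE_SIZE         (64UL * 1024)			/* assumed 64K guest pages */
#define NR_HV_HYP_PAGES_IN_PAGE (PAGE_SIZE / HV_HYP_PAGE_SIZE)
#define HVPFN_UP(x) (((x) + HV_HYP_PAGE_SIZE - 1) >> HV_HYP_PAGE_SHIFT)

int main(void)
{
	/* Hypothetical request: 20000 bytes starting 61000 bytes into the
	 * first guest page, so the transfer crosses into the second guest
	 * page. The two guest pages have hv pfns 1600 and 1616 (guest pfns
	 * 100 and 101, with 16 hv pages per guest page). */
	unsigned long page_hvpfn[] = { 100 * NR_HV_HYP_PAGES_IN_PAGE,
				       101 * NR_HV_HYP_PAGES_IN_PAGE };
	unsigned long offset = 61000, length = 20000;

	unsigned long hvpg_offset = offset & ~HV_HYP_PAGE_MASK;
	unsigned int hvpg_count = HVPFN_UP(hvpg_offset + length);
	unsigned int hvpgoff = offset >> HV_HYP_PAGE_SHIFT;
	unsigned int sg_idx = 0;
	unsigned long hvpfn = 0;

	for (unsigned int i = 0; i < hvpg_count; i++) {
		unsigned int hvpgoff_in_page =
			(i + hvpgoff) % NR_HV_HYP_PAGES_IN_PAGE;

		/* Fetch the next guest page on the first step or at a
		 * guest-page boundary, exactly like the patched loop. */
		if (hvpgoff_in_page == 0 || i == 0)
			hvpfn = page_hvpfn[sg_idx++];

		printf("pfn_array[%u] = %lu\n", i, hvpfn + hvpgoff_in_page);
	}
	return 0;
}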