From patchwork Mon May 18 12:11:26 2015
X-Patchwork-Submitter: Julien Grall
X-Patchwork-Id: 6428541
Message-ID: <5559D6EE.3030400@citrix.com>
Date: Mon, 18 May 2015 13:11:26 +0100
From: Julien Grall
To: Wei Liu, Julien Grall
Subject: Re: [Xen-devel] [RFC 21/23] net/xen-netback: Make it running on
 64KB page granularity
References: <1431622863-28575-1-git-send-email-julien.grall@citrix.com>
 <1431622863-28575-22-git-send-email-julien.grall@citrix.com>
 <20150515023534.GE19352@zion.uk.xensource.com>
 <5555E81E.8070803@citrix.com>
 <20150515153143.GA8521@zion.uk.xensource.com>
In-Reply-To: <20150515153143.GA8521@zion.uk.xensource.com>
Cc: ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
 netdev@vger.kernel.org, tim@xen.org, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-arm-kernel@lists.infradead.org

Hi Wei,

On 15/05/15 16:31, Wei Liu wrote:
> On Fri, May 15, 2015 at 01:35:42PM +0100, Julien Grall wrote:
>> On 15/05/15 03:35, Wei Liu wrote:
>>> On Thu, May 14, 2015 at 06:01:01PM +0100, Julien Grall wrote:
>>>> The PV network protocol uses 4KB page granularity. The goal of this
>>>> patch is to allow a Linux kernel using 64KB page granularity to work
>>>> as a network backend on an unmodified Xen.
>>>>
>>>> It is only necessary to adapt the ring size and break skb data into
>>>> small chunks of 4KB. The rest of the code relies on the grant table
>>>> code.
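To give a bit of context for people who have not looked at the rest of
the series: the PV ring keeps its fixed 4KB granularity, and the backend
translates between that and whatever PAGE_SIZE the kernel uses. Roughly
speaking this rests on constants along the following lines (a sketch from
memory, so the exact definitions in the earlier patches may differ
slightly):

	/* Granularity of the PV protocol, independent of PAGE_SIZE */
	#define XEN_PAGE_SHIFT		12
	#define XEN_PAGE_SIZE		(1UL << XEN_PAGE_SHIFT)
	#define XEN_PAGE_MASK		(~(XEN_PAGE_SIZE - 1))

	/* Number of 4KB Xen pages backing one Linux page:
	 * 1 with 4KB pages, 16 with 64KB pages */
	#define XEN_PFN_PER_PAGE	(PAGE_SIZE / XEN_PAGE_SIZE)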
>>>>
>>>> Only simple workloads are working so far (dhcp request, ping). If I
>>>> try to use wget in the guest, it will stall until a tcpdump is started
>>>> on the vif interface in DOM0. I wasn't able to find out why.
>>>>
>>>
>>> I think with the wget workload you're more likely to break down 64K
>>> pages into 4K pages. Some of your calculations of mfn and offset might
>>> be wrong.
>>
>> If so, why would tcpdump on the vif interface suddenly make wget work?
>> Does it make netback use a different path?
>
> No, but it might make the core network components behave differently;
> this is only my suspicion.
>
> Do you see malformed packets with tcpdump?

I don't see any malformed packets with tcpdump. The connection stalls
until tcpdump is started on the vif in dom0.

>>>> I have not modified XEN_NETBK_RX_SLOTS_MAX because I wasn't sure what
>>>> it's used for (I have limited knowledge of the network driver).
>>>>
>>>
>>> This is the maximum number of slots a guest packet can use. AIUI the
>>> protocol still works at 4K granularity (you break a 64K page into a
>>> bunch of 4K pages), so you don't need to change this.
>>
>> 1 slot = 1 grant, right? If so, XEN_NETBK_RX_SLOTS_MAX is based on the
>> number of Linux pages, so we would have to use the number of Xen pages
>> instead.
>>
>
> Yes, 1 slot = 1 grant. I see what you're up to now. Yes, you need to
> change this constant to match the underlying HV page size.
>
>> I gave multiplying by XEN_PFN_PER_PAGE (64KB / 4KB = 16) a try, but it
>> gets stuck in a loop.
>>
>
> I don't follow. What is the new #define? In which loop does it get
> stuck?

The function xenvif_wait_for_rx_work never returns. I guess it's because
there are not enough slots available: with 64KB page granularity we ask
for 16 times more slots than with 4KB page granularity, although it's
very unlikely that all the slots will ever be used.

FWIW, I pointed out the same problem on blkfront.

>>>
>>>>  	queue->tx_copy_ops[*copy_ops].dest.domid = DOMID_SELF;
>>>>  	queue->tx_copy_ops[*copy_ops].dest.offset =
>>>> -		offset_in_page(skb->data);
>>>> +		offset_in_page(skb->data) & ~XEN_PAGE_MASK;
>>>>
>>>>  	queue->tx_copy_ops[*copy_ops].len = data_len;
>>>>  	queue->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref;
>>>> @@ -1366,8 +1367,8 @@ static int xenvif_handle_frag_list(struct xenvif_queue *queue, struct sk_buff *s
>>>
>>> This function is to coalesce the frag_list into a new SKB. It's
>>> completely fine to use the natural granularity of the backend domain.
>>> The way you modified it can lead to a waste of memory, i.e. you only
>>> use the first 4K of a 64K page.
>>
>> Thanks for explaining. I wasn't sure how the function works, so I
>> changed it to be safe. I will redo the change.
>>
>> FWIW, I'm sure there are other places in netback where we waste memory
>> with 64KB page granularity (such as the grant table). I need to track
>> them down.
>>
>> Let me know if you have some places in mind where the memory usage can
>> be improved.
>>
>
> I was about to say that the mmap_pages array is an array of pages. But
> that probably belongs to the grant table driver.

Yes, there is a lot of rework needed in the grant table driver in order
to avoid wasting memory.
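For what it's worth, the "break skb data into small chunks of 4KB" part
boils down to arithmetic like the sketch below. This is only an
illustration (the helper name is made up and the real code goes through
the usual grant-copy operations), but it shows why dest.offset has to be
masked to the 4KB frame as in the hunk quoted above:

	/*
	 * Sketch only, not the actual netback code: walk a linear buffer
	 * that may cross several 4KB Xen frames inside one 64KB Linux
	 * page, emitting one grant-copy sized chunk per frame.
	 */
	static void foreach_xen_chunk(void *data, size_t len)
	{
		unsigned long addr = (unsigned long)data;

		while (len) {
			/* Offset within the current 4KB Xen frame -- exactly
			 * what "& ~XEN_PAGE_MASK" computes above. */
			unsigned int offset = addr & ~XEN_PAGE_MASK;
			unsigned int chunk = min_t(size_t, len,
						   XEN_PAGE_SIZE - offset);

			/* ...the real code would queue one GNTTABOP_copy of
			 * 'chunk' bytes at 'offset' here... */

			addr += chunk;
			len -= chunk;
		}
	}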
Regards,

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 0eda6e9..c2a5402 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -204,7 +204,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
 /* Maximum number of Rx slots a to-guest packet may use, including the
  * slot needed for GSO meta-data.
  */
-#define XEN_NETBK_RX_SLOTS_MAX (MAX_SKB_FRAGS + 1)
+#define XEN_NETBK_RX_SLOTS_MAX ((MAX_SKB_FRAGS + 1) * XEN_PFN_PER_PAGE)
 
 enum state_bit_shift {
 	/* This bit marks that the vif is connected */
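To spell out the reasoning behind the new ceiling (this is my
understanding of the worst case, not something the existing comment
states):

	/* With 64KB Linux pages, a single frag of a to-guest packet may
	 * span a whole Linux page, i.e. up to PAGE_SIZE / XEN_PAGE_SIZE
	 * = 16 grants, so the 4KB-granularity maximum is scaled by
	 * XEN_PFN_PER_PAGE. The GSO meta-data slot does not really need
	 * the scaling, so this slightly over-estimates, but it remains a
	 * safe upper bound. */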