From patchwork Sat Jan 13 04:47:21 2024
X-Patchwork-Submitter: Nikita Yushchenko
X-Patchwork-Id: 13518822
X-Patchwork-Delegate: kuba@kernel.org
From: Nikita Yushchenko
To: "David S. Miller", Jakub Kicinski, Paolo Abeni
Cc: Sergey Shtylyov, Claudiu Beznea, Yoshihiro Shimoda, Wolfram Sang,
 Uwe Kleine-König, netdev@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
 linux-kernel@vger.kernel.org, Nikita Yushchenko
Subject: [PATCH] net: ravb: Fix wrong dma_unmap_single() calls in ring unmapping
Date: Sat, 13 Jan 2024 10:47:21 +0600
Message-Id: <20240113044721.481131-1-nikita.yoush@cogentembedded.com>
X-Mailer: git-send-email 2.39.2
X-Mailing-List: netdev@vger.kernel.org

When unmapping ring entries on Rx ring release, the ravb driver must
unmap only those entries that were mapped successfully. To check
whether an entry needs to be unmapped, the driver currently passes the
address stored in the descriptor to dma_mapping_error(). However, the
address field in the descriptor is 32-bit, while dma_mapping_error()
compares its argument against the DMA_MAPPING_ERROR constant, which is
64-bit when dma_addr_t is 64-bit. The comparison can therefore never
detect a failed mapping, and the driver ends up calling
dma_unmap_single() on the 0xffffffff address.

When the ring entries are mapped, the driver already sets the
descriptor's size field to zero on mapping failure (which tells the
hardware not to use that descriptor). Fix the ring unmapping to detect
whether an entry needs to be unmapped by checking for a zero size field
instead.

Fixes: a47b70ea86bd ("ravb: unmap descriptors when freeing rings")
Signed-off-by: Nikita Yushchenko
Reviewed-by: Sergey Shtylyov
Reviewed-by: Florian Fainelli
---
 drivers/net/ethernet/renesas/ravb_main.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)
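Note for reviewers (not part of the commit): below is a minimal standalone
sketch of the truncation problem described above. The struct fake_rx_desc is
an illustrative stand-in for the ravb descriptor layout (only the dptr and
ds_cc fields are modeled), dma_mapping_error() is reduced to the plain
comparison the commit message describes, and a 64-bit dma_addr_t is assumed.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;                  /* assume 64-bit dma_addr_t */
#define DMA_MAPPING_ERROR ((dma_addr_t)~0ULL) /* 0xffffffffffffffff */

struct fake_rx_desc {
	uint32_t dptr;   /* descriptor address field is only 32 bits wide */
	uint16_t ds_cc;  /* descriptor size; driver sets it to 0 on map failure */
};

int main(void)
{
	struct fake_rx_desc desc;

	/* A failed mapping gets stored into the 32-bit descriptor field. */
	dma_addr_t addr = DMA_MAPPING_ERROR;
	desc.dptr = (uint32_t)addr;   /* truncated to 0xffffffff */
	desc.ds_cc = 0;               /* what the driver records on failure */

	/* Old check: widen the 32-bit value and compare with the sentinel.
	 * 0x00000000ffffffff never equals 0xffffffffffffffff, so the entry
	 * is wrongly treated as mapped and 0xffffffff would be unmapped.
	 */
	int old_check_mapped = ((dma_addr_t)desc.dptr != DMA_MAPPING_ERROR);

	/* New check from this patch: a zero size field marks an unmapped entry. */
	int new_check_mapped = (desc.ds_cc != 0);

	printf("old check considers the entry mapped: %d (wrong)\n",
	       old_check_mapped);
	printf("new check considers the entry mapped: %d (correct)\n",
	       new_check_mapped);
	return 0;
}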
Miller" , Jakub Kicinski , Paolo Abeni Cc: Sergey Shtylyov , Claudiu Beznea , Yoshihiro Shimoda , Wolfram Sang , =?utf-8?q?Uwe_Kleine-K?= =?utf-8?q?=C3=B6nig?= , netdev@vger.kernel.org, linux-renesas-soc@vger.kernel.org, linux-kernel@vger.kernel.org, Nikita Yushchenko Subject: [PATCH] net: ravb: Fix wrong dma_unmap_single() calls in ring unmapping Date: Sat, 13 Jan 2024 10:47:21 +0600 Message-Id: <20240113044721.481131-1-nikita.yoush@cogentembedded.com> X-Mailer: git-send-email 2.39.2 Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Delegate: kuba@kernel.org When unmapping ring entries on Rx ring release, ravb driver needs to unmap only those entries that have been mapped successfully. To check if an entry needs to be unmapped, currently the address stored inside descriptor is passed to dma_mapping_error() call. But, address field inside descriptor is 32-bit, while dma_mapping_error() is implemented by comparing it's argument with DMA_MAPPING_ERROR constant that is 64-bit when dma_addr_t is 64-bit. So the comparison gets wrong, resulting into ravb driver calling dma_unnmap_single() for 0xffffffff address. When the ring entries are mapped, in case of mapping failure the driver sets descriptor's size field to zero (which is a signal to hardware to not use this descriptor). Fix ring unmapping to detect if an entry needs to be unmapped by checking for zero size field. Fixes: a47b70ea86bd ("ravb: unmap descriptors when freeing rings") Signed-off-by: Nikita Yushchenko Reviewed-by: Sergey Shtylyov Reviewed-by: Florian Fainelli --- drivers/net/ethernet/renesas/ravb_main.c | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c index 0e3731f50fc2..4d4b5d44c4e7 100644 --- a/drivers/net/ethernet/renesas/ravb_main.c +++ b/drivers/net/ethernet/renesas/ravb_main.c @@ -256,8 +256,7 @@ static void ravb_rx_ring_free_gbeth(struct net_device *ndev, int q) for (i = 0; i < priv->num_rx_ring[q]; i++) { struct ravb_rx_desc *desc = &priv->gbeth_rx_ring[i]; - if (!dma_mapping_error(ndev->dev.parent, - le32_to_cpu(desc->dptr))) + if (le16_to_cpu(desc->ds_cc) != 0) dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr), GBETH_RX_BUFF_MAX, @@ -281,8 +280,7 @@ static void ravb_rx_ring_free_rcar(struct net_device *ndev, int q) for (i = 0; i < priv->num_rx_ring[q]; i++) { struct ravb_ex_rx_desc *desc = &priv->rx_ring[q][i]; - if (!dma_mapping_error(ndev->dev.parent, - le32_to_cpu(desc->dptr))) + if (le16_to_cpu(desc->ds_cc) != 0) dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr), RX_BUF_SZ,