From patchwork Wed Aug 9 21:02:45 2023
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 13348507
From: Stefan Hajnoczi
To: kvm@vger.kernel.org
Cc: Jason Gunthorpe, "Tian, Kevin", linux-kernel@vger.kernel.org,
    Alex Williamson, Stefan Hajnoczi
Subject: [PATCH 1/4] vfio: trivially use __aligned_u64 for ioctl structs
Date: Wed, 9 Aug 2023 17:02:45 -0400
Message-ID: <20230809210248.2898981-2-stefanha@redhat.com>
In-Reply-To: <20230809210248.2898981-1-stefanha@redhat.com>
References: <20230809210248.2898981-1-stefanha@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

u64 alignment behaves differently depending on the architecture, so
<linux/types.h> offers __aligned_u64 to achieve consistent behavior in
kernel<->userspace ABIs. There are structs in <linux/vfio.h> that can
trivially be updated to __aligned_u64 because their sizes are already
multiples of 8 bytes. There is no change in memory layout on any CPU
architecture, so this change is safe. The commits that follow handle the
trickier cases, where an explanation of possible ABI breakage is
necessary.
Suggested-by: Jason Gunthorpe
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Jason Gunthorpe
---
 include/uapi/linux/vfio.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 20c804bdc09c..b1dfcf3b7665 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -276,8 +276,8 @@ struct vfio_region_info {
 #define VFIO_REGION_INFO_FLAG_CAPS	(1 << 3) /* Info supports caps */
 	__u32	index;		/* Region index */
 	__u32	cap_offset;	/* Offset within info struct of first cap */
-	__u64	size;		/* Region size (bytes) */
-	__u64	offset;		/* Region offset from start of device fd */
+	__aligned_u64	size;		/* Region size (bytes) */
+	__aligned_u64	offset;		/* Region offset from start of device fd */
 };
 #define VFIO_DEVICE_GET_REGION_INFO	_IO(VFIO_TYPE, VFIO_BASE + 8)
 
@@ -293,8 +293,8 @@ struct vfio_region_info {
 #define VFIO_REGION_INFO_CAP_SPARSE_MMAP	1
 
 struct vfio_region_sparse_mmap_area {
-	__u64	offset;	/* Offset of mmap'able area within region */
-	__u64	size;	/* Size of mmap'able area */
+	__aligned_u64	offset;	/* Offset of mmap'able area within region */
+	__aligned_u64	size;	/* Size of mmap'able area */
 };
 
 struct vfio_region_info_cap_sparse_mmap {
@@ -449,9 +449,9 @@ struct vfio_device_migration_info {
 					      VFIO_DEVICE_STATE_V1_RESUMING)
 
 	__u32 reserved;
-	__u64 pending_bytes;
-	__u64 data_offset;
-	__u64 data_size;
+	__aligned_u64 pending_bytes;
+	__aligned_u64 data_offset;
+	__aligned_u64 data_size;
 };
 
 /*
@@ -475,7 +475,7 @@ struct vfio_device_migration_info {
 struct vfio_region_info_cap_nvlink2_ssatgt {
 	struct vfio_info_cap_header header;
-	__u64 tgt;
+	__aligned_u64 tgt;
 };
 
 /*
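To make the alignment difference described in the commit message concrete,
here is a small standalone userspace sketch (not part of the patch series;
the struct and field names are invented for illustration) that prints the
offsets and sizes that __aligned_u64 keeps consistent across architectures:

/* Hypothetical example, not from the patch: compare a plain u64-style
 * field with one carrying the 8-byte alignment that __aligned_u64
 * (__u64 __attribute__((aligned(8))) in <linux/types.h>) provides.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct plain_layout {
	uint32_t argsz;
	uint64_t value;	/* only 4-byte aligned on ABIs such as i386 */
};

struct aligned_layout {
	uint32_t argsz;
	uint64_t value __attribute__((aligned(8)));	/* always 8-byte aligned */
};

int main(void)
{
	/* On x86_64 both structs report offset 8 and size 16.  On 32-bit ABIs
	 * that align u64 to 4 bytes, struct plain_layout shrinks to offset 4
	 * and size 12, while struct aligned_layout keeps the 64-bit layout.
	 */
	printf("plain:   offset=%zu size=%zu\n",
	       offsetof(struct plain_layout, value), sizeof(struct plain_layout));
	printf("aligned: offset=%zu size=%zu\n",
	       offsetof(struct aligned_layout, value), sizeof(struct aligned_layout));
	return 0;
}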
From patchwork Wed Aug 9 21:02:46 2023
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 13348508
From: Stefan Hajnoczi
To: kvm@vger.kernel.org
Cc: Jason Gunthorpe, "Tian, Kevin", linux-kernel@vger.kernel.org,
    Alex Williamson, Stefan Hajnoczi
Subject: [PATCH 2/4] vfio: use __aligned_u64 in struct vfio_device_gfx_plane_info
Date: Wed, 9 Aug 2023 17:02:46 -0400
Message-ID: <20230809210248.2898981-3-stefanha@redhat.com>
In-Reply-To: <20230809210248.2898981-1-stefanha@redhat.com>
References: <20230809210248.2898981-1-stefanha@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

The memory layout of struct vfio_device_gfx_plane_info is
architecture-dependent due to a u64 field and a struct size that is not a
multiple of 8 bytes:

- On x86_64 the struct size is padded to a multiple of 8 bytes.
- On x32 the struct size is only a multiple of 4 bytes, not 8.
- Other architectures may vary.

Use __aligned_u64 to make the memory layout consistent. This reduces the
chance of holes that leak information and the chance of breakage when
32-bit userspace runs on a 64-bit kernel.

This patch increases the struct size on x32, but that is safe because of
the struct's argsz field. The kernel may grow the struct as long as it
still supports smaller argsz values from userspace (e.g. applications
compiled against older kernel headers).

Suggested-by: Jason Gunthorpe
Signed-off-by: Stefan Hajnoczi
---
 include/uapi/linux/vfio.h        | 3 ++-
 drivers/gpu/drm/i915/gvt/kvmgt.c | 4 +++-
 samples/vfio-mdev/mbochs.c       | 6 ++++--
 samples/vfio-mdev/mdpy.c         | 4 +++-
 4 files changed, 12 insertions(+), 5 deletions(-)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index b1dfcf3b7665..45db62d74064 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -746,7 +746,7 @@ struct vfio_device_gfx_plane_info {
 	__u32 drm_plane_type;	/* type of plane: DRM_PLANE_TYPE_* */
 	/* out */
 	__u32 drm_format;	/* drm format of plane */
-	__u64 drm_format_mod;   /* tiled mode */
+	__aligned_u64 drm_format_mod;   /* tiled mode */
 	__u32 width;	/* width of plane */
 	__u32 height;	/* height of plane */
 	__u32 stride;	/* stride of plane */
@@ -759,6 +759,7 @@ struct vfio_device_gfx_plane_info {
 		__u32 region_index;	/* region index */
 		__u32 dmabuf_id;	/* dma-buf id */
 	};
+	__u32 reserved;
 };
 
 #define VFIO_DEVICE_QUERY_GFX_PLANE _IO(VFIO_TYPE, VFIO_BASE + 14)
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index de675d799c7d..ffab3536dc8a 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -1382,7 +1382,7 @@ static long intel_vgpu_ioctl(struct vfio_device *vfio_dev, unsigned int cmd,
 		intel_gvt_reset_vgpu(vgpu);
 		return 0;
 	} else if (cmd == VFIO_DEVICE_QUERY_GFX_PLANE) {
-		struct vfio_device_gfx_plane_info dmabuf;
+		struct vfio_device_gfx_plane_info dmabuf = {};
 		int ret = 0;
 
 		minsz = offsetofend(struct vfio_device_gfx_plane_info,
@@ -1392,6 +1392,8 @@ static long intel_vgpu_ioctl(struct vfio_device *vfio_dev, unsigned int cmd,
 		if (dmabuf.argsz < minsz)
 			return -EINVAL;
 
+		minsz = min(minsz, sizeof(dmabuf));
+
 		ret = intel_vgpu_query_plane(vgpu, &dmabuf);
 		if (ret != 0)
 			return ret;
diff --git a/samples/vfio-mdev/mbochs.c b/samples/vfio-mdev/mbochs.c
index c6c6b5d26670..ee42a780041f 100644
--- a/samples/vfio-mdev/mbochs.c
+++ b/samples/vfio-mdev/mbochs.c
@@ -1262,7 +1262,7 @@ static long mbochs_ioctl(struct vfio_device *vdev, unsigned int cmd,
 	case VFIO_DEVICE_QUERY_GFX_PLANE:
 	{
-		struct vfio_device_gfx_plane_info plane;
+		struct vfio_device_gfx_plane_info plane = {};
 
 		minsz = offsetofend(struct vfio_device_gfx_plane_info,
 				    region_index);
@@ -1273,11 +1273,13 @@ static long mbochs_ioctl(struct vfio_device *vdev, unsigned int cmd,
 		if (plane.argsz < minsz)
 			return -EINVAL;
 
+		outsz = min_t(unsigned long, plane.argsz, sizeof(plane));
+
 		ret = mbochs_query_gfx_plane(mdev_state, &plane);
 		if (ret)
 			return ret;
 
-		if (copy_to_user((void __user *)arg, &plane, minsz))
+		if (copy_to_user((void __user *)arg, &plane, outsz))
 			return -EFAULT;
 
 		return 0;
diff --git a/samples/vfio-mdev/mdpy.c b/samples/vfio-mdev/mdpy.c
index a62ea11e20ec..1500b120de04 100644
--- a/samples/vfio-mdev/mdpy.c
+++ b/samples/vfio-mdev/mdpy.c
@@ -591,7 +591,7 @@ static long mdpy_ioctl(struct vfio_device *vdev, unsigned int cmd,
 	case VFIO_DEVICE_QUERY_GFX_PLANE:
 	{
-		struct vfio_device_gfx_plane_info plane;
+		struct vfio_device_gfx_plane_info plane = {};
 
 		minsz = offsetofend(struct vfio_device_gfx_plane_info,
 				    region_index);
@@ -602,6 +602,8 @@ static long mdpy_ioctl(struct vfio_device *vdev, unsigned int cmd,
 		if (plane.argsz < minsz)
 			return -EINVAL;
 
+		minsz = min_t(unsigned long, plane.argsz, sizeof(plane));
+
 		ret = mdpy_query_gfx_plane(mdev_state, &plane);
 		if (ret)
 			return ret;
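The hunks above all follow the same argsz convention. The condensed
kernel-style sketch below (not part of the patch series; my_ioctl_info and
my_get_info are hypothetical names) shows the shape of that pattern in one
place:

/*
 * Illustrative sketch only, not from the patch.  The kernel reads at most
 * the bytes userspace declared via argsz, zero-fills the rest, and copies
 * back no more than userspace has room for, so old binaries that pass a
 * smaller struct keep working after the UAPI struct grows.
 */
#include <linux/minmax.h>
#include <linux/stddef.h>
#include <linux/types.h>
#include <linux/uaccess.h>

struct my_ioctl_info {
	__u32 argsz;
	__u32 flags;
	__aligned_u64 value;	/* an "out" field filled by the kernel */
};

static long my_get_info(void __user *arg)
{
	struct my_ioctl_info info = {};	/* zero-init: no stack data leaks into new fields */
	unsigned long minsz = offsetofend(struct my_ioctl_info, flags);
	unsigned long outsz;

	if (copy_from_user(&info, arg, minsz))
		return -EFAULT;

	if (info.argsz < minsz)		/* userspace must cover at least the required fields */
		return -EINVAL;

	/* Never copy back more than userspace allocated. */
	outsz = min_t(unsigned long, info.argsz, sizeof(info));

	info.value = 42;		/* fill the out fields */

	return copy_to_user(arg, &info, outsz) ? -EFAULT : 0;
}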
From patchwork Wed Aug 9 21:02:47 2023
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 13348509
From: Stefan Hajnoczi
To: kvm@vger.kernel.org
Cc: Jason Gunthorpe, "Tian, Kevin", linux-kernel@vger.kernel.org,
    Alex Williamson, Stefan Hajnoczi
Subject: [PATCH 3/4] vfio: use __aligned_u64 in struct vfio_iommu_type1_info
Date: Wed, 9 Aug 2023 17:02:47 -0400
Message-ID: <20230809210248.2898981-4-stefanha@redhat.com>
In-Reply-To: <20230809210248.2898981-1-stefanha@redhat.com>
References: <20230809210248.2898981-1-stefanha@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

The memory layout of struct vfio_iommu_type1_info is
architecture-dependent due to a u64 field and a struct size that is not a
multiple of 8 bytes:

- On x86_64 the struct size is padded to a multiple of 8 bytes.
- On x32 the struct size is only a multiple of 4 bytes, not 8.
- Other architectures may vary.

Use __aligned_u64 to make the memory layout consistent. This reduces the
chance of holes that leak information and the chance of breakage when
32-bit userspace runs on a 64-bit kernel.

This patch increases the struct size on x32, but that is safe because of
the struct's argsz field. The kernel may grow the struct as long as it
still supports smaller argsz values from userspace (e.g. applications
compiled against older kernel headers).

Suggested-by: Jason Gunthorpe
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Jason Gunthorpe
---
 include/uapi/linux/vfio.h       |  3 ++-
 drivers/vfio/vfio_iommu_type1.c | 11 ++---------
 2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 45db62d74064..0b5786ec50d8 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1303,8 +1303,9 @@ struct vfio_iommu_type1_info {
 	__u32	flags;
 #define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
 #define VFIO_IOMMU_INFO_CAPS	(1 << 1)	/* Info supports caps */
-	__u64	iova_pgsizes;		/* Bitmap of supported page sizes */
+	__aligned_u64	iova_pgsizes;	/* Bitmap of supported page sizes */
 	__u32   cap_offset;	/* Offset within info struct of first cap */
+	__u32   reserved;
 };
 
 /*
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index ebe0ad31d0b0..f51159a7a4de 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2762,27 +2762,20 @@ static int vfio_iommu_dma_avail_build_caps(struct vfio_iommu *iommu,
 static int vfio_iommu_type1_get_info(struct vfio_iommu *iommu,
 				     unsigned long arg)
 {
-	struct vfio_iommu_type1_info info;
+	struct vfio_iommu_type1_info info = {};
 	unsigned long minsz;
 	struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
-	unsigned long capsz;
 	int ret;
 
 	minsz = offsetofend(struct vfio_iommu_type1_info, iova_pgsizes);
 
-	/* For backward compatibility, cannot require this */
-	capsz = offsetofend(struct vfio_iommu_type1_info, cap_offset);
-
 	if (copy_from_user(&info, (void __user *)arg, minsz))
 		return -EFAULT;
 
 	if (info.argsz < minsz)
 		return -EINVAL;
 
-	if (info.argsz >= capsz) {
-		minsz = capsz;
-		info.cap_offset = 0; /* output, no-recopy necessary */
-	}
+	minsz = min_t(unsigned long, info.argsz, sizeof(info));
 
 	mutex_lock(&iommu->lock);
 	info.flags = VFIO_IOMMU_INFO_PGSIZES;
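Seen from the other side of the ABI, the argsz contract that makes this
growth safe looks roughly like this in userspace (an illustrative sketch,
not part of the patch series; it assumes an already-configured VFIO
container fd with the type1 IOMMU enabled):

/*
 * Illustrative userspace sketch, not from the patch.  Userspace declares
 * how big its struct is via argsz; a binary built against older, smaller
 * headers simply passes a smaller argsz and keeps working after the
 * struct grows.
 */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int query_iommu_info(int container_fd)
{
	struct vfio_iommu_type1_info info;

	memset(&info, 0, sizeof(info));
	info.argsz = sizeof(info);	/* whatever size these headers define */

	if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, &info) < 0) {
		perror("VFIO_IOMMU_GET_INFO");
		return -1;
	}

	printf("supported IOVA page sizes: 0x%llx\n",
	       (unsigned long long)info.iova_pgsizes);
	return 0;
}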
From patchwork Wed Aug 9 21:02:48 2023
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 13348510
From: Stefan Hajnoczi
To: kvm@vger.kernel.org
Cc: Jason Gunthorpe, "Tian, Kevin", linux-kernel@vger.kernel.org,
    Alex Williamson, Stefan Hajnoczi
Subject: [PATCH 4/4] vfio: use __aligned_u64 in struct vfio_device_ioeventfd
Date: Wed, 9 Aug 2023 17:02:48 -0400
Message-ID: <20230809210248.2898981-5-stefanha@redhat.com>
In-Reply-To: <20230809210248.2898981-1-stefanha@redhat.com>
References: <20230809210248.2898981-1-stefanha@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

The memory layout of struct vfio_device_ioeventfd is
architecture-dependent due to a u64 field and a struct size that is not a
multiple of 8 bytes:

- On x86_64 the struct size is padded to a multiple of 8 bytes.
- On x32 the struct size is only a multiple of 4 bytes, not 8.
- Other architectures may vary.

Use __aligned_u64 to make the memory layout consistent. This reduces the
chance of holes that leak information and the chance of breakage when
32-bit userspace runs on a 64-bit kernel.

This patch increases the struct size on x32, but that is safe because of
the struct's argsz field. The kernel may grow the struct as long as it
still supports smaller argsz values from userspace
(e.g. applications compiled against older kernel headers). The code that
uses struct vfio_device_ioeventfd already works correctly when the struct
size grows, so only the struct definition needs to be changed.

Suggested-by: Jason Gunthorpe
Signed-off-by: Stefan Hajnoczi
Reviewed-by: Jason Gunthorpe
---
 include/uapi/linux/vfio.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 0b5786ec50d8..d61269765bf8 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -794,9 +794,10 @@ struct vfio_device_ioeventfd {
 #define VFIO_DEVICE_IOEVENTFD_32	(1 << 2) /* 4-byte write */
 #define VFIO_DEVICE_IOEVENTFD_64	(1 << 3) /* 8-byte write */
 #define VFIO_DEVICE_IOEVENTFD_SIZE_MASK	(0xf)
-	__u64	offset;			/* device fd offset of write */
-	__u64	data;			/* data to be written */
+	__aligned_u64	offset;		/* device fd offset of write */
+	__aligned_u64	data;		/* data to be written */
 	__s32	fd;			/* -1 for de-assignment */
+	__u32	reserved;
 };
 
 #define VFIO_DEVICE_IOEVENTFD		_IO(VFIO_TYPE, VFIO_BASE + 16)
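To close, here is an illustrative userspace sketch (not part of the patch
series) of how the updated struct vfio_device_ioeventfd might be consumed;
the device fd, write offset, and data value are hypothetical inputs:

/*
 * Illustrative sketch, not from the patch.  It assumes an already-opened
 * VFIO device fd.  Once registered, signalling the eventfd makes the
 * kernel perform the described write to the device on userspace's behalf.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int setup_ioeventfd(int device_fd, __u64 write_offset, __u64 write_value)
{
	struct vfio_device_ioeventfd ioeventfd;
	int efd = eventfd(0, EFD_CLOEXEC);

	if (efd < 0)
		return -1;

	memset(&ioeventfd, 0, sizeof(ioeventfd));
	ioeventfd.argsz = sizeof(ioeventfd);		/* includes the new reserved field */
	ioeventfd.flags = VFIO_DEVICE_IOEVENTFD_32;	/* perform a 4-byte write */
	ioeventfd.offset = write_offset;		/* device fd offset to write to */
	ioeventfd.data = write_value;			/* value to write when triggered */
	ioeventfd.fd = efd;				/* -1 would de-assign instead */

	if (ioctl(device_fd, VFIO_DEVICE_IOEVENTFD, &ioeventfd) < 0) {
		perror("VFIO_DEVICE_IOEVENTFD");
		close(efd);
		return -1;
	}

	return efd;
}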