[1/3] drm: fourcc byteorder: drop DRM_FORMAT_BIG_ENDIAN

Message ID 20170502133404.15354-2-kraxel@redhat.com (mailing list archive)
State New, archived

Commit Message

Gerd Hoffmann May 2, 2017, 1:34 p.m. UTC
It's unused.

Suggested-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/uapi/drm/drm_fourcc.h     | 2 --
 drivers/gpu/drm/drm_fourcc.c      | 3 +--
 drivers/gpu/drm/drm_framebuffer.c | 2 +-
 3 files changed, 2 insertions(+), 5 deletions(-)

Comments

Emil Velikov May 2, 2017, 1:53 p.m. UTC | #1
Hi Gerd,

I did not have the chance to follow the discussion.
Pardon if my suggestions have already been discussed.

On 2 May 2017 at 14:34, Gerd Hoffmann <kraxel@redhat.com> wrote:
> It's unused.
>
> Suggested-by: Daniel Vetter <daniel.vetter@intel.com>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
>  include/uapi/drm/drm_fourcc.h     | 2 --
>  drivers/gpu/drm/drm_fourcc.c      | 3 +--
>  drivers/gpu/drm/drm_framebuffer.c | 2 +-
>  3 files changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
> index 995c8f9c69..305bc34be0 100644
> --- a/include/uapi/drm/drm_fourcc.h
> +++ b/include/uapi/drm/drm_fourcc.h
> @@ -33,8 +33,6 @@ extern "C" {
>  #define fourcc_code(a, b, c, d) ((__u32)(a) | ((__u32)(b) << 8) | \
>                                  ((__u32)(c) << 16) | ((__u32)(d) << 24))
>
> -#define DRM_FORMAT_BIG_ENDIAN (1<<31) /* format is big endian instead of little endian */
> -

Weston references DRM_FORMAT_BIG_ENDIAN, thus this patch will lead to
build breakage.
That is never fun, so please carefully coordinate with the Weston devs
to minimise the fireworks.

Personally I would leave the symbol, since it's UAPI, and document that
it should not be used. Not my call, so feel free to ignore.

Thanks
Emil
Gerd Hoffmann May 2, 2017, 2:14 p.m. UTC | #2
> > diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
> > index 995c8f9c69..305bc34be0 100644
> > --- a/include/uapi/drm/drm_fourcc.h
> > +++ b/include/uapi/drm/drm_fourcc.h
> > @@ -33,8 +33,6 @@ extern "C" {
> >  #define fourcc_code(a, b, c, d) ((__u32)(a) | ((__u32)(b) << 8) | \
> >                                  ((__u32)(c) << 16) | ((__u32)(d) << 24))
> >
> > -#define DRM_FORMAT_BIG_ENDIAN (1<<31) /* format is big endian instead of little endian */
> > -
> 
> Weston references DRM_FORMAT_BIG_ENDIAN, thus this patch will lead to
> build breakage.

> Personally I would leave the symbol, since it's UAPI, and document that
> it should not be used.

Fair enough.  I can surely add a deprecated comment instead of just
dropping it.
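
A minimal sketch of what such a deprecation note could look like
(the wording is illustrative only, not from any posted patch):

/* DEPRECATED: no in-kernel driver has ever honoured this flag; it is
 * kept only so existing userspace (e.g. weston) keeps building. */
#define DRM_FORMAT_BIG_ENDIAN (1<<31)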

I'm wondering how it is used though, given that none of the drivers in
the kernel actually support this flag ...

cheers,
  Gerd
Pekka Paalanen May 2, 2017, 2:27 p.m. UTC | #3
On Tue, 2 May 2017 14:53:43 +0100
Emil Velikov <emil.l.velikov@gmail.com> wrote:

> Hi Gerd,
> 
> I did not have the chance to follow the discussion.
> Pardon if my suggestions have already been discussed.
> 
> On 2 May 2017 at 14:34, Gerd Hoffmann <kraxel@redhat.com> wrote:
> > It's unused.
> >
> > Suggested-by: Daniel Vetter <daniel.vetter@intel.com>
> > Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> > ---
> >  include/uapi/drm/drm_fourcc.h     | 2 --
> >  drivers/gpu/drm/drm_fourcc.c      | 3 +--
> >  drivers/gpu/drm/drm_framebuffer.c | 2 +-
> >  3 files changed, 2 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
> > index 995c8f9c69..305bc34be0 100644
> > --- a/include/uapi/drm/drm_fourcc.h
> > +++ b/include/uapi/drm/drm_fourcc.h
> > @@ -33,8 +33,6 @@ extern "C" {
> >  #define fourcc_code(a, b, c, d) ((__u32)(a) | ((__u32)(b) << 8) | \
> >                                  ((__u32)(c) << 16) | ((__u32)(d) << 24))
> >
> > -#define DRM_FORMAT_BIG_ENDIAN (1<<31) /* format is big endian instead of little endian */
> > -  
> 
> Weston references DRM_FORMAT_BIG_ENDIAN, thus this patch will lead to
> build breakage.
> That is never fun, so please carefully coordinate with the Weston devs
> to minimise the fireworks.
> 
> Personally I would leave the symbol, since it's UAPI, and document that
> it should not be used. Not my call, so feel free to ignore.

Hi,

indeed, weston does have one occurrence of it. I don't think it has
ever actually been needed in practice, but removing it will cause a
build failure:
https://cgit.freedesktop.org/wayland/weston/tree/libweston/gl-renderer.c?id=2.0.0#n1820
Funnily enough, it's only ever used to get rid of the bit, "just in
case".

I also think that this patch requires more explanation than the
commit message has at the moment.

Removing the definition also removes the ability to describe a lot
of pixel formats, so that should definitely be mentioned. I think it
would also be good to have some kind of justified claim that no
hardware actually needs the pixel formats this is removing (e.g. RGB565
BE). Maybe this was already in the long discussions, but I feel it
should be summarized in the commit message.


Thanks,
pq
Gerd Hoffmann May 2, 2017, 3:06 p.m. UTC | #4
Hi,

> I also think that this patch requires more explanation than the
> commit message has at the moment.
> 
> Removing the definition also removes the ability to describe a lot
> of pixel formats, so that should definitely be mentioned. I think it
> would also be good to have some kind of justified claim that no
> hardware actually needs the pixel formats this is removing (e.g. RGB565
> BE).

That and RGB2101010 BE are the only candidates I can think of.

Dealing with those in non-native byteorder is a PITA though.  Display
hardware has been little endian (PCI byte order) for quite a while already.

> Maybe this was already in the long discussions, but I feel it
> should be summarized in the commit message.

Radeon and nvidia (nv40) cards were mentioned.  I'll try to summarize
(feel free to correct me if I'm wrong).

nvidia has support for 8 bit-per-color formats only on bigendian hosts.
Not sure whether this is a driver or hardware limitation.

radeon looks different on pre-R600 and R600+ hardware.

pre-R600 can byteswap on cpu access, so the cpu view is bigendian
whereas things are actually stored in little endian byte order.
Hardware supports both 16bit and 32bit swaps.  Used for fbdev emulation
(probably 32bit swaps, but not fully sure).  The xorg radeon driver doesn't
use the byteswapping feature, because it is a PITA when bo's are moved
between vram and system memory.

R600+ supports bigendian framebuffer formats, so no byteswapping on
access is needed.  Not sure whether that includes 16bpp formats or
whether this is limited to the 8 bit-per-color formats (similar to
nvidia).  Discussion focused on the pre-R600 hardware and how this
on-cpu-access byteswapping causes more problems than it solves ...

cheers,
  Gerd
Ilia Mirkin May 2, 2017, 5:57 p.m. UTC | #5
On Tue, May 2, 2017 at 11:06 AM, Gerd Hoffmann <kraxel@redhat.com> wrote:
> Radeon and nvidia (nv40) cards were mentioned.  I'll try to summarize
> (feel free to correct me if I'm wrong).
>
> nvidia has support for 8 bit-per-color formats only on bigendian hosts.
> Not sure whether this is a driver or hardware limitation.

Let me just summarize the NVIDIA situation. First off, pre-nv50 and
nv50+ are entirely different and unrelated beasts.

The (pre-nv50) hardware has (a few) "big endian mode" bits. Those bits
are kind of unrelated to each other and control their own "domains".
One of the domains is reading of the scanout fb. So as a result, the
hardware can scan out XRGB8888, RGB565, and XRGB1555 stored in either
little or big endian packings, irrespective of the "mode" that other
parts of the hardware are in.

However there's the delicate little question of the GPU *generating*
the data. These older GPUs don't have quite the format flexibility
offered by newer hw. So only XRGB8888 is supported, packed in whatever
"mode" the whole PGRAPH unit is in. (I say this because things seem to
work when rendering using the XRGB8888 format while scanning out with
the BE flag set.)

There are no APIs for controlling the endianness of each engine in
nouveau, so it ends up being in "big endian" mode on BE hosts, so the
GPU can only render to big-endian-packed framebuffers.

None of this applies to nv50+ hw. (Although it might in broad strokes.)

Currently the driver is exposing XRGB8888 and ARGB8888 formats as
that's what drm_crtc_init does for it. However, the ARGB8888 format
doesn't work (and shouldn't be exposed; the alpha is meaningless on a
single-plane setup), and the XRGB8888 format is assumed to be packed
in cpu host endian (and the "BE" bit is set accordingly).

Hope this helps!

  -ilia
Michel Dänzer May 3, 2017, 3:05 a.m. UTC | #6
On 03/05/17 12:06 AM, Gerd Hoffmann wrote:
> 
>> Removing the definition also removes the ability to describe a lot
>> of pixel formats, so that should definitely be mentioned. I think it
>> would also be good to have some kind of justified claim that no
>> hardware actually needs the pixel formats this is removing (e.g. RGB565
>> BE).
> 
> That and RGB2101010 BE are the only candidates I can think of.
> 
> Dealing with those in non-native byteorder is a PITA though.  Display
> hardware has been little endian (PCI byte order) for quite a while already.

Maybe by default, but not exclusively.


> radeon looks different on pre-R600 and R600+ hardware.
> 
> pre-R600 can byteswap on cpu access, so the cpu view is bigendian
> whereas things are actually stored in little endian byte order.
> Hardware supports both 16bit and 32bit swaps.  Used for fbdev emulation
> (probably 32bit swaps, but not fully sure).

32-bit swaps for 32 bpp, 16-bit swaps for 16 bpp.
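
To make those swap modes concrete, a small illustration (mine, not
from the thread) of the CPU's view of a 0xAABBCCDD memory word:

#include <stdint.h>

/* 16-bit swap, used for 16 bpp: bytes exchanged within each 16-bit
 * unit, 0xAABBCCDD -> 0xBBAADDCC. */
static uint32_t swap_16bit_units(uint32_t v)
{
	return ((v & 0x00ff00ffu) << 8) | ((v & 0xff00ff00u) >> 8);
}

/* 32-bit swap, used for 32 bpp: all four bytes reversed,
 * 0xAABBCCDD -> 0xDDCCBBAA. */
static uint32_t swap_32bit_unit(uint32_t v)
{
	return __builtin_bswap32(v);
}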


> R600+ supports bigendian framebuffer formats, so no byteswapping on
> access is needed.  Not sure whether that includes 16bpp formats or
> whether this is limited to the 8 bit-per-color formats [...]

It includes 16bpp. Looking at
drivers/gpu/drm/radeon/atombios_crtc.c:dce4_crtc_do_set_base(), it sets
up byte-swapping for all multi-byte formats, so it effectively treats
all those formats as if they had DRM_FORMAT_BIG_ENDIAN set.

If the radeon (and amdgpu) driver were to be changed to use
drm_mode_legacy_fb_format_he for >= R600, that must also handle 16 bpp,
which requires DRM_FORMAT_BIG_ENDIAN. So I still don't see how that can
be removed or even deprecated.

From Ilia's followup it sounds like there's a similar situation with
nouveau.
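
For reference, a condensed sketch of the setup Michel describes; the
EVERGREEN_* names follow the radeon register macros as I recall them,
but treat the exact code as illustrative rather than a quote of the
driver:

/* On big-endian hosts, derive the scanout byte-swap from the format
 * size, treating every multi-byte format as big endian. */
u32 fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_NONE);

#ifdef __BIG_ENDIAN
switch (bpp) {	/* bits per pixel of the framebuffer format */
case 16:
	fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN16);
	break;
case 32:
	fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN32);
	break;
}
#endif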
Gerd Hoffmann May 3, 2017, 9:24 a.m. UTC | #7
Hi,

> > R600+ supports bigendian framebuffer formats, so no byteswapping on
> > access is needed.  Not sure whether that includes 16bpp formats or
> > whether this is limited to the 8 bit-per-color formats [...]
> 
> It includes 16bpp. Looking at
> drivers/gpu/drm/radeon/atombios_crtc.c:dce4_crtc_do_set_base(), it sets
> up byte-swapping for all multi-byte formats, so it effectively treats
> all those formats as if they had DRM_FORMAT_BIG_ENDIAN set.

> If the radeon (and amdgpu) driver were to be changed to use
> drm_mode_legacy_fb_format_he for >= R600, that must also handle 16 bpp,
> which requires DRM_FORMAT_BIG_ENDIAN. So I still don't see how that can
> be removed or even deprecated.

Ok.

Dropped patch #1.

Updated patch #2 to include all formats returned by
drm_mode_legacy_fb_format, and also renamed them to DRM_FORMAT_HOST_*.
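
A rough sketch of the shape such DRM_FORMAT_HOST_* aliases could take
(my guess at the updated patch #2, not the actual posted code):

/* Host-endian aliases: on big-endian machines the multi-byte formats
 * carry the big-endian flag, on little-endian machines they alias the
 * plain little-endian formats. */
#ifdef __BIG_ENDIAN
# define DRM_FORMAT_HOST_RGB565   (DRM_FORMAT_RGB565   | DRM_FORMAT_BIG_ENDIAN)
# define DRM_FORMAT_HOST_XRGB8888 (DRM_FORMAT_XRGB8888 | DRM_FORMAT_BIG_ENDIAN)
#else
# define DRM_FORMAT_HOST_RGB565   DRM_FORMAT_RGB565
# define DRM_FORMAT_HOST_XRGB8888 DRM_FORMAT_XRGB8888
#endif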

The question is how to go forward with patch #3.  I'd prefer not to add
drm_mode_legacy_fb_format_he if possible.  Is there a chance to adapt
the radeon and nvidia drivers to a fixed drm_mode_legacy_fb_format
function (returning be formats on be) without invasive changes?  Given
they both treat formats as if they had DRM_FORMAT_BIG_ENDIAN set, this
could (with the help of the extended patch #2) be a simple
s/DRM_FORMAT_/DRM_FORMAT_HOST_/ at the right places ...

Michel?  Ilia?

cheers,
  Gerd
Michel Dänzer May 8, 2017, 12:38 a.m. UTC | #8
On 03/05/17 06:24 PM, Gerd Hoffmann wrote:
>   Hi,
> 
>>> R600+ supports bigendian framebuffer formats, so no byteswapping on
>>> access is needed.  Not sure whether that includes 16bpp formats or
>>> whether this is limited to the 8 bit-per-color formats [...]
>>
>> It includes 16bpp. Looking at
>> drivers/gpu/drm/radeon/atombios_crtc.c:dce4_crtc_do_set_base(), it sets
>> up byte-swapping for all multi-byte formats, so it effectively treats
>> all those formats as if they had DRM_FORMAT_BIG_ENDIAN set.
> 
>> If the radeon (and amdgpu) driver were to be changed to use
>> drm_mode_legacy_fb_format_he for >= R600, that must also handle 16 bpp,
>> which requires DRM_FORMAT_BIG_ENDIAN. So I still don't see how that can
>> be removed or even deprecated.
> 
> Ok.
> 
> Dropped patch #1.
> 
> Updated patch #2 to include all formats returned by
> drm_mode_legacy_fb_format, and also renamed them to DRM_FORMAT_HOST_*.
> 
> The question is how to go forward with patch #3.  I'd prefer not to add
> drm_mode_legacy_fb_format_he if possible.  Is there a chance to adapt
> the radeon and nvidia drivers to a fixed drm_mode_legacy_fb_format
> function (returning be formats on be) without invasive changes?  Given
> they both treat formats as if they had DRM_FORMAT_BIG_ENDIAN set, this
> could (with the help of the extended patch #2) be a simple
> s/DRM_FORMAT_/DRM_FORMAT_HOST_/ at the right places ...

For radeon this doesn't work with pre-R600 GPUs, which only support
little endian formats for display.

Patch

diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h
index 995c8f9c69..305bc34be0 100644
--- a/include/uapi/drm/drm_fourcc.h
+++ b/include/uapi/drm/drm_fourcc.h
@@ -33,8 +33,6 @@  extern "C" {
 #define fourcc_code(a, b, c, d) ((__u32)(a) | ((__u32)(b) << 8) | \
 				 ((__u32)(c) << 16) | ((__u32)(d) << 24))
 
-#define DRM_FORMAT_BIG_ENDIAN (1<<31) /* format is big endian instead of little endian */
-
 /* color index */
 #define DRM_FORMAT_C8		fourcc_code('C', '8', ' ', ' ') /* [7:0] C */
 
diff --git a/drivers/gpu/drm/drm_fourcc.c b/drivers/gpu/drm/drm_fourcc.c
index 9c0152df45..adb3ff59a4 100644
--- a/drivers/gpu/drm/drm_fourcc.c
+++ b/drivers/gpu/drm/drm_fourcc.c
@@ -86,12 +86,11 @@  EXPORT_SYMBOL(drm_mode_legacy_fb_format);
 const char *drm_get_format_name(uint32_t format, struct drm_format_name_buf *buf)
 {
 	snprintf(buf->str, sizeof(buf->str),
-		 "%c%c%c%c %s-endian (0x%08x)",
+		 "%c%c%c%c (0x%08x)",
 		 printable_char(format & 0xff),
 		 printable_char((format >> 8) & 0xff),
 		 printable_char((format >> 16) & 0xff),
 		 printable_char((format >> 24) & 0x7f),
-		 format & DRM_FORMAT_BIG_ENDIAN ? "big" : "little",
 		 format);
 
 	return buf->str;
diff --git a/drivers/gpu/drm/drm_framebuffer.c b/drivers/gpu/drm/drm_framebuffer.c
index fc8ef42203..efe8b5ece5 100644
--- a/drivers/gpu/drm/drm_framebuffer.c
+++ b/drivers/gpu/drm/drm_framebuffer.c
@@ -152,7 +152,7 @@  static int framebuffer_check(struct drm_device *dev,
 	int i;
 
 	/* check if the format is supported at all */
-	info = __drm_format_info(r->pixel_format & ~DRM_FORMAT_BIG_ENDIAN);
+	info = __drm_format_info(r->pixel_format);
 	if (!info) {
 		struct drm_format_name_buf format_name;
 		DRM_DEBUG_KMS("bad framebuffer format %s\n",
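
As a worked example of the fourcc_code() packing above (standalone;
the macro is copied from the header with __u32 spelled as uint32_t):

#include <stdio.h>
#include <stdint.h>

#define fourcc_code(a, b, c, d) ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
				 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* DRM_FORMAT_RGB565 is fourcc_code('R', 'G', '1', '6') in the header. */
int main(void)
{
	printf("RGB565 = 0x%08x\n", fourcc_code('R', 'G', '1', '6'));
	/* prints: RGB565 = 0x36314752 ('R' in the low byte) */
	return 0;
}

Note that printable_char() in drm_get_format_name() masks the top byte
with 0x7f, so a set DRM_FORMAT_BIG_ENDIAN bit never rendered as a
garbage character even before this change.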