[05/11] x86: remove HIGHMEM64G support

Message ID 20241204103042.1904639-6-arnd@kernel.org (mailing list archive)
State New
Headers show
Series x86: 32-bit cleanups

Commit Message

Arnd Bergmann Dec. 4, 2024, 10:30 a.m. UTC
From: Arnd Bergmann <arnd@arndb.de>

HIGHMEM64G support was added in linux-2.3.25, when high-end Pentium
Pro and Pentium III Xeon servers with more than 4GB of addressing,
NUMA and PCI-X slots started appearing.

I have found no evidence of this ever being used in regular dual-socket
servers or consumer devices; all the users seem obsolete these days,
even by i386 standards:

 - Support for NUMA servers (NUMA-Q, IBM x440, unisys) was already
   removed ten years ago.

 - 4+ socket non-NUMA servers based on Intel 450GX/450NX, HP F8 and
   ServerWorks ServerSet/GrandChampion could theoretically still work
   with 8GB, but these were exceptionally rare even 20 years ago and
   would usually have been equipped with less than the maximum amount
   of RAM.

 - Some SKUs of the Celeron D from 2004 had 64-bit mode fused off but
   could still work in a Socket 775 mainboard designed for the later
   Core 2 Duo, with up to 8GB of RAM. Apparently most BIOSes at the
   time only allowed 64-bit CPUs.

 - In the early days of x86-64 hardware, there was sometimes the need
   to run a 32-bit kernel to work around bugs in the hardware drivers,
   or in the syscall emulation for 32-bit userspace. This likely still
   works but there should never be a need for this any more.

Removing this also drops the need for PHYS_ADDR_T_64BIT and SWIOTLB.
PAE mode is still required to get access to the 'NX' bit on Atom,
'Pentium M' and 'Core Duo' CPUs.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 Documentation/admin-guide/kdump/kdump.rst     |  4 --
 Documentation/arch/x86/usb-legacy-support.rst | 11 +----
 arch/x86/Kconfig                              | 46 +++----------------
 arch/x86/configs/xen.config                   |  2 -
 arch/x86/include/asm/page_32_types.h          |  4 +-
 arch/x86/mm/init_32.c                         |  9 +---
 6 files changed, 11 insertions(+), 65 deletions(-)

Comments

Brian Gerst Dec. 4, 2024, 1:29 p.m. UTC | #1
On Wed, Dec 4, 2024 at 5:34 AM Arnd Bergmann <arnd@kernel.org> wrote:
>
> From: Arnd Bergmann <arnd@arndb.de>
>
> The HIGHMEM64G support was added in linux-2.3.25 to support (then)
> high-end Pentium Pro and Pentium III Xeon servers with more than 4GB of
> addressing, NUMA and PCI-X slots started appearing.
>
> I have found no evidence of this ever being used in regular dual-socket
> servers or consumer devices, all the users seem obsolete these days,
> even by i386 standards:
>
>  - Support for NUMA servers (NUMA-Q, IBM x440, unisys) was already
>    removed ten years ago.
>
>  - 4+ socket non-NUMA servers based on Intel 450GX/450NX, HP F8 and
>    ServerWorks ServerSet/GrandChampion could theoretically still work
>    with 8GB, but these were exceptionally rare even 20 years ago and
>    would have usually been equipped with than the maximum amount of
>    RAM.
>
>  - Some SKUs of the Celeron D from 2004 had 64-bit mode fused off but
>    could still work in a Socket 775 mainboard designed for the later
>    Core 2 Duo and 8GB. Apparently most BIOSes at the time only allowed
>    64-bit CPUs.
>
>  - In the early days of x86-64 hardware, there was sometimes the need
>    to run a 32-bit kernel to work around bugs in the hardware drivers,
>    or in the syscall emulation for 32-bit userspace. This likely still
>    works but there should never be a need for this any more.
>
> Removing this also drops the need for PHYS_ADDR_T_64BIT and SWIOTLB.
> PAE mode is still required to get access to the 'NX' bit on Atom
> 'Pentium M' and 'Core Duo' CPUs.

8GB of memory is still useful for 32-bit guest VMs.


Brian Gerst
Arnd Bergmann Dec. 4, 2024, 1:43 p.m. UTC | #2
On Wed, Dec 4, 2024, at 14:29, Brian Gerst wrote:
> On Wed, Dec 4, 2024 at 5:34 AM Arnd Bergmann <arnd@kernel.org> wrote:
>>
>>  - In the early days of x86-64 hardware, there was sometimes the need
>>    to run a 32-bit kernel to work around bugs in the hardware drivers,
>>    or in the syscall emulation for 32-bit userspace. This likely still
>>    works but there should never be a need for this any more.
>>
>> Removing this also drops the need for PHYS_ADDR_T_64BIT and SWIOTLB.
>> PAE mode is still required to get access to the 'NX' bit on Atom
>> 'Pentium M' and 'Core Duo' CPUs.
>
> 8GB of memory is still useful for 32-bit guest VMs.

Can you give some more background on this?

It's clear that one can run a virtual machine this way and it
currently works, but are you able to construct a case where this
is a good idea, compared to running the same userspace with a
64-bit kernel?

From what I can tell, any practical workload that requires
8GB of total RAM will likely run into either the lowmem
limits or into virtual addressing limits, in addition to the
problems of 32-bit kernels being generally worse than 64-bit
ones in terms of performance, features and testing.

      Arnd
Brian Gerst Dec. 4, 2024, 2:02 p.m. UTC | #3
On Wed, Dec 4, 2024 at 8:43 AM Arnd Bergmann <arnd@arndb.de> wrote:
>
> On Wed, Dec 4, 2024, at 14:29, Brian Gerst wrote:
> > On Wed, Dec 4, 2024 at 5:34 AM Arnd Bergmann <arnd@kernel.org> wrote:
> >>
> >>  - In the early days of x86-64 hardware, there was sometimes the need
> >>    to run a 32-bit kernel to work around bugs in the hardware drivers,
> >>    or in the syscall emulation for 32-bit userspace. This likely still
> >>    works but there should never be a need for this any more.
> >>
> >> Removing this also drops the need for PHYS_ADDR_T_64BIT and SWIOTLB.
> >> PAE mode is still required to get access to the 'NX' bit on Atom
> >> 'Pentium M' and 'Core Duo' CPUs.
> >
> > 8GB of memory is still useful for 32-bit guest VMs.
>
> Can you give some more background on this?
>
> It's clear that one can run a virtual machine this way and it
> currently works, but are you able to construct a case where this
> is a good idea, compared to running the same userspace with a
> 64-bit kernel?
>
> From what I can tell, any practical workload that requires
> 8GB of total RAM will likely run into either the lowmem
> limits or into virtual addressig limits, in addition to the
> problems of 32-bit kernels being generally worse than 64-bit
> ones in terms of performance, features and testing.

I use a 32-bit VM to test 32-bit kernel builds.  I haven't benchmarked
kernel builds with 4GB/8GB yet, but logically more memory would be
better for caching files.


Brian Gerst
Brian Gerst Dec. 4, 2024, 3 p.m. UTC | #4
On Wed, Dec 4, 2024 at 9:02 AM Brian Gerst <brgerst@gmail.com> wrote:
>
> On Wed, Dec 4, 2024 at 8:43 AM Arnd Bergmann <arnd@arndb.de> wrote:
> >
> > On Wed, Dec 4, 2024, at 14:29, Brian Gerst wrote:
> > > On Wed, Dec 4, 2024 at 5:34 AM Arnd Bergmann <arnd@kernel.org> wrote:
> > >>
> > >>  - In the early days of x86-64 hardware, there was sometimes the need
> > >>    to run a 32-bit kernel to work around bugs in the hardware drivers,
> > >>    or in the syscall emulation for 32-bit userspace. This likely still
> > >>    works but there should never be a need for this any more.
> > >>
> > >> Removing this also drops the need for PHYS_ADDR_T_64BIT and SWIOTLB.
> > >> PAE mode is still required to get access to the 'NX' bit on Atom
> > >> 'Pentium M' and 'Core Duo' CPUs.
> > >
> > > 8GB of memory is still useful for 32-bit guest VMs.
> >
> > Can you give some more background on this?
> >
> > It's clear that one can run a virtual machine this way and it
> > currently works, but are you able to construct a case where this
> > is a good idea, compared to running the same userspace with a
> > 64-bit kernel?
> >
> > From what I can tell, any practical workload that requires
> > 8GB of total RAM will likely run into either the lowmem
> > limits or into virtual addressig limits, in addition to the
> > problems of 32-bit kernels being generally worse than 64-bit
> > ones in terms of performance, features and testing.
>
> I use a 32-bit VM to test 32-bit kernel builds.  I haven't benchmarked
> kernel builds with 4GB/8GB yet, but logically more memory would be
> better for caching files.
>
>
> Brian Gerst

After verifying, I only had the VM set to 4GB and CONFIG_HIGHMEM64G
was not set.  So I have no issue with this.


Brian Gerst
H. Peter Anvin Dec. 4, 2024, 3:53 p.m. UTC | #5
On December 4, 2024 5:43:28 AM PST, Arnd Bergmann <arnd@arndb.de> wrote:
>On Wed, Dec 4, 2024, at 14:29, Brian Gerst wrote:
>> On Wed, Dec 4, 2024 at 5:34 AM Arnd Bergmann <arnd@kernel.org> wrote:
>>>
>>>  - In the early days of x86-64 hardware, there was sometimes the need
>>>    to run a 32-bit kernel to work around bugs in the hardware drivers,
>>>    or in the syscall emulation for 32-bit userspace. This likely still
>>>    works but there should never be a need for this any more.
>>>
>>> Removing this also drops the need for PHYS_ADDR_T_64BIT and SWIOTLB.
>>> PAE mode is still required to get access to the 'NX' bit on Atom
>>> 'Pentium M' and 'Core Duo' CPUs.
>>
>> 8GB of memory is still useful for 32-bit guest VMs.
>
>Can you give some more background on this?
>
>It's clear that one can run a virtual machine this way and it
>currently works, but are you able to construct a case where this
>is a good idea, compared to running the same userspace with a
>64-bit kernel?
>
>From what I can tell, any practical workload that requires
>8GB of total RAM will likely run into either the lowmem
>limits or into virtual addressig limits, in addition to the
>problems of 32-bit kernels being generally worse than 64-bit
>ones in terms of performance, features and testing.
>
>      Arnd
>

The biggest problem is that without HIGHMEM you put a limit of just under 1 GB (892 MiB if I recall correctly), *not* 4 GB, on 32-bit kernels. That is *well* below the amount of RAM present in late-era 32-bit legacy systems, which were put in production as "recently" as 20 years ago and may still be in niche production use. Embedded systems may be significantly more recent; I know for a fact that 32-bit systems were put in production in the 2010s.
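
For reference, a rough sketch of where that just-under-1 GB figure comes
from, assuming the default 3G/1G split (CONFIG_VMSPLIT_3G, i.e.
CONFIG_PAGE_OFFSET=0xC0000000) and the usual ~128 MiB vmalloc reserve;
the exact number shifts a little with the fixmap and other config details:

#include <stdio.h>

int main(void)
{
        /* Default 32-bit layout: userspace gets 3 GiB of virtual space,
         * the kernel keeps the top 1 GiB, and part of that 1 GiB is set
         * aside for vmalloc/ioremap rather than the direct RAM mapping. */
        unsigned long long page_offset     = 0xC0000000ULL;   /* 3 GiB   */
        unsigned long long vmalloc_reserve = 128ULL << 20;    /* 128 MiB */

        unsigned long long lowmem = (1ULL << 32) - page_offset - vmalloc_reserve;

        /* Without HIGHMEM, this is all the RAM the kernel can use: ~896 MiB */
        printf("directly mappable lowmem: %llu MiB\n", lowmem >> 20);
        return 0;
}
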
H. Peter Anvin Dec. 4, 2024, 3:58 p.m. UTC | #6
On December 4, 2024 6:02:48 AM PST, Brian Gerst <brgerst@gmail.com> wrote:
>On Wed, Dec 4, 2024 at 8:43 AM Arnd Bergmann <arnd@arndb.de> wrote:
>>
>> On Wed, Dec 4, 2024, at 14:29, Brian Gerst wrote:
>> > On Wed, Dec 4, 2024 at 5:34 AM Arnd Bergmann <arnd@kernel.org> wrote:
>> >>
>> >>  - In the early days of x86-64 hardware, there was sometimes the need
>> >>    to run a 32-bit kernel to work around bugs in the hardware drivers,
>> >>    or in the syscall emulation for 32-bit userspace. This likely still
>> >>    works but there should never be a need for this any more.
>> >>
>> >> Removing this also drops the need for PHYS_ADDR_T_64BIT and SWIOTLB.
>> >> PAE mode is still required to get access to the 'NX' bit on Atom
>> >> 'Pentium M' and 'Core Duo' CPUs.
>> >
>> > 8GB of memory is still useful for 32-bit guest VMs.
>>
>> Can you give some more background on this?
>>
>> It's clear that one can run a virtual machine this way and it
>> currently works, but are you able to construct a case where this
>> is a good idea, compared to running the same userspace with a
>> 64-bit kernel?
>>
>> From what I can tell, any practical workload that requires
>> 8GB of total RAM will likely run into either the lowmem
>> limits or into virtual addressig limits, in addition to the
>> problems of 32-bit kernels being generally worse than 64-bit
>> ones in terms of performance, features and testing.
>
>I use a 32-bit VM to test 32-bit kernel builds.  I haven't benchmarked
>kernel builds with 4GB/8GB yet, but logically more memory would be
>better for caching files.
>
>
>Brian Gerst
>

For the record, back when kernel.org was still a 32-bit machine which, one would have thought, would have been ideal for caching files, it rarely achieved more than 50% memory usage with what I believe was 8 GB of RAM. The next generation was 16 GB x86-64.
H. Peter Anvin Dec. 4, 2024, 4:37 p.m. UTC | #7
On December 4, 2024 5:29:17 AM PST, Brian Gerst <brgerst@gmail.com> wrote:
>On Wed, Dec 4, 2024 at 5:34 AM Arnd Bergmann <arnd@kernel.org> wrote:
>>
>> From: Arnd Bergmann <arnd@arndb.de>
>>
>> The HIGHMEM64G support was added in linux-2.3.25 to support (then)
>> high-end Pentium Pro and Pentium III Xeon servers with more than 4GB of
>> addressing, NUMA and PCI-X slots started appearing.
>>
>> I have found no evidence of this ever being used in regular dual-socket
>> servers or consumer devices, all the users seem obsolete these days,
>> even by i386 standards:
>>
>>  - Support for NUMA servers (NUMA-Q, IBM x440, unisys) was already
>>    removed ten years ago.
>>
>>  - 4+ socket non-NUMA servers based on Intel 450GX/450NX, HP F8 and
>>    ServerWorks ServerSet/GrandChampion could theoretically still work
>>    with 8GB, but these were exceptionally rare even 20 years ago and
>>    would have usually been equipped with than the maximum amount of
>>    RAM.
>>
>>  - Some SKUs of the Celeron D from 2004 had 64-bit mode fused off but
>>    could still work in a Socket 775 mainboard designed for the later
>>    Core 2 Duo and 8GB. Apparently most BIOSes at the time only allowed
>>    64-bit CPUs.
>>
>>  - In the early days of x86-64 hardware, there was sometimes the need
>>    to run a 32-bit kernel to work around bugs in the hardware drivers,
>>    or in the syscall emulation for 32-bit userspace. This likely still
>>    works but there should never be a need for this any more.
>>
>> Removing this also drops the need for PHYS_ADDR_T_64BIT and SWIOTLB.
>> PAE mode is still required to get access to the 'NX' bit on Atom
>> 'Pentium M' and 'Core Duo' CPUs.
>
>8GB of memory is still useful for 32-bit guest VMs.
>
>
>Brian Gerst
>

By the way, there are 64-bit machines which require swiotlb.
Arnd Bergmann Dec. 4, 2024, 4:55 p.m. UTC | #8
On Wed, Dec 4, 2024, at 17:37, H. Peter Anvin wrote:
> On December 4, 2024 5:29:17 AM PST, Brian Gerst <brgerst@gmail.com> wrote:
>>>
>>> Removing this also drops the need for PHYS_ADDR_T_64BIT and SWIOTLB.
>>> PAE mode is still required to get access to the 'NX' bit on Atom
>>> 'Pentium M' and 'Core Duo' CPUs.
>
> By the way, there are 64-bit machines which require swiotlb.

What I meant to write here was that CONFIG_X86_PAE no longer
needs to select PHYS_ADDR_T_64BIT and SWIOTLB. I ended up
splitting that change out to patch 06/11 with a better explanation,
so the sentence above is just wrong now, and I've removed it
in my local copy.

Obviously 64-bit kernels still generally need swiotlb.

       Arnd
Andy Shevchenko Dec. 4, 2024, 6:37 p.m. UTC | #9
On Wed, Dec 4, 2024 at 6:57 PM Arnd Bergmann <arnd@arndb.de> wrote:
> On Wed, Dec 4, 2024, at 17:37, H. Peter Anvin wrote:
> > On December 4, 2024 5:29:17 AM PST, Brian Gerst <brgerst@gmail.com> wrote:
> >>>
> >>> Removing this also drops the need for PHYS_ADDR_T_64BIT and SWIOTLB.
> >>> PAE mode is still required to get access to the 'NX' bit on Atom
> >>> 'Pentium M' and 'Core Duo' CPUs.
> >
> > By the way, there are 64-bit machines which require swiotlb.
>
> What I meant to write here was that CONFIG_X86_PAE no longer
> needs to select PHYS_ADDR_T_64BIT and SWIOTLB. I ended up
> splitting that change out to patch 06/11 with a better explanation,
> so the sentence above is just wrong now and I've removed it
> in my local copy now.
>
> Obviously 64-bit kernels still generally need swiotlb.

Theoretically swiotlb can be useful on 32-bit machines as well, for
DMA controllers that have a < 32-bit DMA mask. Dunno if swiotlb was
designed to run on 32-bit machines at all.
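
The case in question would be something like a legacy controller that
can only generate 24-bit bus addresses. A purely hypothetical driver
fragment (made-up names, just sketching the standard DMA mapping calls)
that would need bouncing for any buffer above 16 MiB:

#include <linux/dma-mapping.h>

/*
 * Hypothetical example (not from this series): a legacy controller that
 * can only generate 24-bit bus addresses.  Any buffer above 16 MiB that
 * gets passed to dma_map_single() then has to be bounced, via swiotlb or
 * otherwise, even on a machine with no RAM above 4 GiB.
 */
static int foo_map_buffer(struct device *dev, void *buf, size_t len,
                          dma_addr_t *dma)
{
        int ret;

        ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24));
        if (ret)
                return ret;

        *dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, *dma))
                return -ENOMEM;

        return 0;
}
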
Arnd Bergmann Dec. 4, 2024, 9:14 p.m. UTC | #10
On Wed, Dec 4, 2024, at 19:37, Andy Shevchenko wrote:
> On Wed, Dec 4, 2024 at 6:57 PM Arnd Bergmann <arnd@arndb.de> wrote:
>> On Wed, Dec 4, 2024, at 17:37, H. Peter Anvin wrote:
>> > On December 4, 2024 5:29:17 AM PST, Brian Gerst <brgerst@gmail.com> wrote:
>> >>>
>> >>> Removing this also drops the need for PHYS_ADDR_T_64BIT and SWIOTLB.
>> >>> PAE mode is still required to get access to the 'NX' bit on Atom
>> >>> 'Pentium M' and 'Core Duo' CPUs.
>> >
>> > By the way, there are 64-bit machines which require swiotlb.
>>
>> What I meant to write here was that CONFIG_X86_PAE no longer
>> needs to select PHYS_ADDR_T_64BIT and SWIOTLB. I ended up
>> splitting that change out to patch 06/11 with a better explanation,
>> so the sentence above is just wrong now and I've removed it
>> in my local copy now.
>>
>> Obviously 64-bit kernels still generally need swiotlb.
>
> Theoretically swiotlb can be useful on 32-bit machines as well for the
> DMA controllers that have < 32-bit mask. Dunno if swiotlb was designed
> to run on 32-bit machines at all.

Right, that is a possibility. However, those machines would
currently be broken on kernels without X86_PAE, since they
don't select swiotlb.

If anyone does rely on the current behavior of X86_PAE to support
broken DMA devices, it's probably best to fix it in a different
way.

      Arnd

Patch

diff --git a/Documentation/admin-guide/kdump/kdump.rst b/Documentation/admin-guide/kdump/kdump.rst
index 5376890adbeb..1f7f14c6e184 100644
--- a/Documentation/admin-guide/kdump/kdump.rst
+++ b/Documentation/admin-guide/kdump/kdump.rst
@@ -180,10 +180,6 @@  Dump-capture kernel config options (Arch Dependent, i386 and x86_64)
 1) On i386, enable high memory support under "Processor type and
    features"::
 
-	CONFIG_HIGHMEM64G=y
-
-   or::
-
 	CONFIG_HIGHMEM4G
 
 2) With CONFIG_SMP=y, usually nr_cpus=1 need specified on the kernel
diff --git a/Documentation/arch/x86/usb-legacy-support.rst b/Documentation/arch/x86/usb-legacy-support.rst
index e01c08b7c981..b17bf122270a 100644
--- a/Documentation/arch/x86/usb-legacy-support.rst
+++ b/Documentation/arch/x86/usb-legacy-support.rst
@@ -20,11 +20,7 @@  It has several drawbacks, though:
    features (wheel, extra buttons, touchpad mode) of the real PS/2 mouse may
    not be available.
 
-2) If CONFIG_HIGHMEM64G is enabled, the PS/2 mouse emulation can cause
-   system crashes, because the SMM BIOS is not expecting to be in PAE mode.
-   The Intel E7505 is a typical machine where this happens.
-
-3) If AMD64 64-bit mode is enabled, again system crashes often happen,
+2) If AMD64 64-bit mode is enabled, again system crashes often happen,
    because the SMM BIOS isn't expecting the CPU to be in 64-bit mode.  The
    BIOS manufacturers only test with Windows, and Windows doesn't do 64-bit
    yet.
@@ -38,11 +34,6 @@  Problem 1)
   compiled-in, too.
 
 Problem 2)
-  can currently only be solved by either disabling HIGHMEM64G
-  in the kernel config or USB Legacy support in the BIOS. A BIOS update
-  could help, but so far no such update exists.
-
-Problem 3)
   is usually fixed by a BIOS update. Check the board
   manufacturers web site. If an update is not available, disable USB
   Legacy support in the BIOS. If this alone doesn't help, try also adding
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 42494739344d..b373db8a8176 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1383,15 +1383,11 @@  config X86_CPUID
 	  with major 203 and minors 0 to 31 for /dev/cpu/0/cpuid to
 	  /dev/cpu/31/cpuid.
 
-choice
-	prompt "High Memory Support"
-	default HIGHMEM4G
+config HIGHMEM4G
+	bool "High Memory Support"
 	depends on X86_32
-
-config NOHIGHMEM
-	bool "off"
 	help
-	  Linux can use up to 64 Gigabytes of physical memory on x86 systems.
+	  Linux can use up to 4 Gigabytes of physical memory on x86 systems.
 	  However, the address space of 32-bit x86 processors is only 4
 	  Gigabytes large. That means that, if you have a large amount of
 	  physical memory, not all of it can be "permanently mapped" by the
@@ -1407,38 +1403,9 @@  config NOHIGHMEM
 	  possible.
 
 	  If the machine has between 1 and 4 Gigabytes physical RAM, then
-	  answer "4GB" here.
+	  answer "Y" here.
 
-	  If more than 4 Gigabytes is used then answer "64GB" here. This
-	  selection turns Intel PAE (Physical Address Extension) mode on.
-	  PAE implements 3-level paging on IA32 processors. PAE is fully
-	  supported by Linux, PAE mode is implemented on all recent Intel
-	  processors (Pentium Pro and better). NOTE: If you say "64GB" here,
-	  then the kernel will not boot on CPUs that don't support PAE!
-
-	  The actual amount of total physical memory will either be
-	  auto detected or can be forced by using a kernel command line option
-	  such as "mem=256M". (Try "man bootparam" or see the documentation of
-	  your boot loader (lilo or loadlin) about how to pass options to the
-	  kernel at boot time.)
-
-	  If unsure, say "off".
-
-config HIGHMEM4G
-	bool "4GB"
-	help
-	  Select this if you have a 32-bit processor and between 1 and 4
-	  gigabytes of physical RAM.
-
-config HIGHMEM64G
-	bool "64GB"
-	depends on X86_HAVE_PAE
-	select X86_PAE
-	help
-	  Select this if you have a 32-bit processor and more than 4
-	  gigabytes of physical RAM.
-
-endchoice
+	  If unsure, say N.
 
 choice
 	prompt "Memory split" if EXPERT
@@ -1484,8 +1451,7 @@  config PAGE_OFFSET
 	depends on X86_32
 
 config HIGHMEM
-	def_bool y
-	depends on X86_32 && (HIGHMEM64G || HIGHMEM4G)
+	def_bool HIGHMEM4G
 
 config X86_PAE
 	bool "PAE (Physical Address Extension) Support"
diff --git a/arch/x86/configs/xen.config b/arch/x86/configs/xen.config
index 581296255b39..d5d091e03bd3 100644
--- a/arch/x86/configs/xen.config
+++ b/arch/x86/configs/xen.config
@@ -1,6 +1,4 @@ 
 # global x86 required specific stuff
-# On 32-bit HIGHMEM4G is not allowed
-CONFIG_HIGHMEM64G=y
 CONFIG_64BIT=y
 
 # These enable us to allow some of the
diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
index faf9cc1c14bb..25c32652f404 100644
--- a/arch/x86/include/asm/page_32_types.h
+++ b/arch/x86/include/asm/page_32_types.h
@@ -11,8 +11,8 @@ 
  * a virtual address space of one gigabyte, which limits the
  * amount of physical memory you can use to about 950MB.
  *
- * If you want more physical memory than this then see the CONFIG_HIGHMEM4G
- * and CONFIG_HIGHMEM64G options in the kernel configuration.
+ * If you want more physical memory than this then see the CONFIG_VMSPLIT_2G
+ * and CONFIG_HIGHMEM4G options in the kernel configuration.
  */
 #define __PAGE_OFFSET_BASE	_AC(CONFIG_PAGE_OFFSET, UL)
 #define __PAGE_OFFSET		__PAGE_OFFSET_BASE
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index ac41b1e0940d..f288aad8dc74 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -582,7 +582,7 @@  static void __init lowmem_pfn_init(void)
 	"only %luMB highmem pages available, ignoring highmem size of %luMB!\n"
 
 #define MSG_HIGHMEM_TRIMMED \
-	"Warning: only 4GB will be used. Use a HIGHMEM64G enabled kernel!\n"
+	"Warning: only 4GB will be used. Support for CONFIG_HIGHMEM64G was removed!\n"
 /*
  * We have more RAM than fits into lowmem - we try to put it into
  * highmem, also taking the highmem=x boot parameter into account:
@@ -606,18 +606,13 @@  static void __init highmem_pfn_init(void)
 #ifndef CONFIG_HIGHMEM
 	/* Maximum memory usable is what is directly addressable */
 	printk(KERN_WARNING "Warning only %ldMB will be used.\n", MAXMEM>>20);
-	if (max_pfn > MAX_NONPAE_PFN)
-		printk(KERN_WARNING "Use a HIGHMEM64G enabled kernel.\n");
-	else
-		printk(KERN_WARNING "Use a HIGHMEM enabled kernel.\n");
+	printk(KERN_WARNING "Use a HIGHMEM enabled kernel.\n");
 	max_pfn = MAXMEM_PFN;
 #else /* !CONFIG_HIGHMEM */
-#ifndef CONFIG_HIGHMEM64G
 	if (max_pfn > MAX_NONPAE_PFN) {
 		max_pfn = MAX_NONPAE_PFN;
 		printk(KERN_WARNING MSG_HIGHMEM_TRIMMED);
 	}
-#endif /* !CONFIG_HIGHMEM64G */
 #endif /* !CONFIG_HIGHMEM */
 }