
[v5,2/4] x86/vmemmap: Drop handling of 1GB vmemmap ranges

Message ID: 20210309174113.5597-3-osalvador@suse.de (mailing list archive)
State: New, archived
Series: Cleanup and fixups for vmemmap handling

Commit Message

Oscar Salvador March 9, 2021, 5:41 p.m. UTC
We never get to allocate 1GB pages when mapping the vmemmap range.
Drop the dead code both for the aligned and unaligned cases and leave
only the direct map handling.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
Suggested-by: David Hildenbrand <david@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/mm/init_64.c | 35 +++++++----------------------------
 1 file changed, 7 insertions(+), 28 deletions(-)
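
For context on why PUD-sized vmemmap mappings never show up in practice: with 4 KiB base pages and a 64-byte struct page, the vmemmap for one 128 MiB memory section is only 2 MiB, and a single 1 GiB vmemmap mapping would have to describe 64 GiB of RAM populated in one go. A minimal user-space sketch of that arithmetic (the constants below are assumed typical x86_64 values, not taken from this patch):

#include <stdio.h>

/* Assumed typical x86_64 values, hard-coded here for illustration only. */
#define PAGE_SIZE        (4UL << 10)    /* 4 KiB base page                 */
#define STRUCT_PAGE_SIZE 64UL           /* assumed sizeof(struct page)     */
#define SECTION_SIZE     (128UL << 20)  /* memory section granularity      */
#define PUD_SIZE         (1UL << 30)    /* 1 GiB, size of a PUD-level leaf */

int main(void)
{
	/* vmemmap needed to describe one memory section */
	unsigned long pages_per_section = SECTION_SIZE / PAGE_SIZE;
	unsigned long vmemmap_per_section = pages_per_section * STRUCT_PAGE_SIZE;

	/* RAM that one PUD-sized vmemmap mapping would have to describe */
	unsigned long ram_per_pud_vmemmap = PUD_SIZE / STRUCT_PAGE_SIZE * PAGE_SIZE;

	printf("vmemmap per 128 MiB section: %lu MiB\n", vmemmap_per_section >> 20);
	printf("RAM behind one 1 GiB vmemmap mapping: %lu GiB\n", ram_per_pud_vmemmap >> 30);
	return 0;
}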

Comments

Dave Hansen March 9, 2021, 6:34 p.m. UTC | #1
On 3/9/21 9:41 AM, Oscar Salvador wrote:
> We never get to allocate 1GB pages when mapping the vmemmap range.
> Drop the dead code both for the aligned and unaligned cases and leave
> only the direct map handling.

I was hoping to see some more meat in this changelog, possibly some of
what David Hildenbrand said in the v4 thread about this patch.
Basically, we don't have code to allocate 1G mappings because it isn't
clear that it would be worth the complexity, and it might also waste memory.

I'm fine with the code, but I would appreciate a beefed-up changelog:

Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Oscar Salvador March 9, 2021, 9:27 p.m. UTC | #2
On Tue, Mar 09, 2021 at 10:34:51AM -0800, Dave Hansen wrote:
> On 3/9/21 9:41 AM, Oscar Salvador wrote:
> > We never get to allocate 1GB pages when mapping the vmemmap range.
> > Drop the dead code both for the aligned and unaligned cases and leave
> > only the direct map handling.
> 
> I was hoping to see some more meat in this changelog, possibly some of
> what David Hildenbrand said in the v4 thread about this patch.
> Basically, we don't have code to allocate 1G mappings because it isn't
> clear that it would be worth the complexity, and it might also waste memory.
> 
> I'm fine with the code, but I would appreciate a beefed-up changelog:
> 
> Acked-by: Dave Hansen <dave.hansen@linux.intel.com>

Since I had to do another pass to fix up some compilation errors,
I added a bit more explanation in that regard.

Thanks!

Patch

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b0e1d215c83e..9ecb3c488ac8 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1062,7 +1062,6 @@  remove_pud_table(pud_t *pud_start, unsigned long addr, unsigned long end,
 	unsigned long next, pages = 0;
 	pmd_t *pmd_base;
 	pud_t *pud;
-	void *page_addr;
 
 	pud = pud_start + pud_index(addr);
 	for (; addr < end; addr = next, pud++) {
@@ -1071,33 +1070,13 @@  remove_pud_table(pud_t *pud_start, unsigned long addr, unsigned long end,
 		if (!pud_present(*pud))
 			continue;
 
-		if (pud_large(*pud)) {
-			if (IS_ALIGNED(addr, PUD_SIZE) &&
-			    IS_ALIGNED(next, PUD_SIZE)) {
-				if (!direct)
-					free_pagetable(pud_page(*pud),
-						       get_order(PUD_SIZE));
-
-				spin_lock(&init_mm.page_table_lock);
-				pud_clear(pud);
-				spin_unlock(&init_mm.page_table_lock);
-				pages++;
-			} else {
-				/* If here, we are freeing vmemmap pages. */
-				memset((void *)addr, PAGE_INUSE, next - addr);
-
-				page_addr = page_address(pud_page(*pud));
-				if (!memchr_inv(page_addr, PAGE_INUSE,
-						PUD_SIZE)) {
-					free_pagetable(pud_page(*pud),
-						       get_order(PUD_SIZE));
-
-					spin_lock(&init_mm.page_table_lock);
-					pud_clear(pud);
-					spin_unlock(&init_mm.page_table_lock);
-				}
-			}
-
+		if (pud_large(*pud) &&
+		    IS_ALIGNED(addr, PUD_SIZE) &&
+		    IS_ALIGNED(next, PUD_SIZE)) {
+			spin_lock(&init_mm.page_table_lock);
+			pud_clear(pud);
+			spin_unlock(&init_mm.page_table_lock);
+			pages++;
 			continue;
 		}
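
After the patch, the only handling left for a PUD-sized (1 GiB) leaf is clearing it, and only when the removal range covers it completely. A small stand-alone sketch of that alignment condition (IS_ALIGNED and PUD_SIZE are redefined locally here purely for illustration):

#include <stdio.h>
#include <stdbool.h>

#define PUD_SIZE (1UL << 30)                       /* 1 GiB */
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

/* Mirrors the surviving check: a PUD-sized leaf is only torn down when
 * both ends of the removal range sit on 1 GiB boundaries. */
static bool covers_whole_pud(unsigned long addr, unsigned long next)
{
	return IS_ALIGNED(addr, PUD_SIZE) && IS_ALIGNED(next, PUD_SIZE);
}

int main(void)
{
	printf("%d\n", covers_whole_pud(0x40000000UL, 0x80000000UL)); /* 1: whole PUD */
	printf("%d\n", covers_whole_pud(0x40000000UL, 0x60000000UL)); /* 0: partial   */
	return 0;
}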