
[v7,04/38] x86/boot: convert mod refs to boot_module mod

Message ID 20241021004613.18793-5-dpsmith@apertussolutions.com (mailing list archive)
State Superseded
Series Boot modules for Hyperlaunch

Commit Message

Daniel P. Smith Oct. 21, 2024, 12:45 a.m. UTC
To allow a gradual conversion of x86 over to struct boot_module, start by
replacing all references to module_t mod, in setup.c only, with the mod
element of struct boot_module. This serves two purposes: first, it allows an
incremental transition from module_t fields to struct boot_module fields;
second, it allows function definitions to be converted from taking module_t
parameters to accepting struct boot_module as the transitioned fields are
accessed.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
Changes since v6:
- code style
- switched to a local ref

Changes since v5:
- rewrote commit message
- coding style changes
- added comment for initial_images assignment
---
 xen/arch/x86/setup.c | 62 +++++++++++++++++++++++++-------------------
 1 file changed, 35 insertions(+), 27 deletions(-)
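
In concrete terms, the conversion means that an access such as mod[i].mod_end
becomes bi->mods[i].mod->mod_end. The self-contained sketch below shows the
assumed relationship between the two structures; it is illustrative only, with
simplified shapes (the real struct boot_module and struct boot_info in the
series carry more fields, and the array size here is arbitrary):

#include <stdint.h>

/* Simplified Multiboot module descriptor (the real one is multiboot.h's). */
typedef struct {
    uint32_t mod_start;   /* start address; later reused as a PFN         */
    uint32_t mod_end;     /* end address; later reused as a byte size     */
    uint32_t string;      /* module command line                          */
    uint32_t reserved;    /* reused by setup.c as an "already moved" flag */
} module_t;

/* Simplified boot module/info shapes, assumed for illustration. */
struct boot_module {
    module_t *mod;            /* points into the bootloader's module_t array */
    unsigned long headroom;   /* extra space wanted in front of mods[0]      */
};

struct boot_info {
    unsigned int nr_modules;
    struct boot_module mods[64];
};

/* Old style: index the module_t array directly. */
static uint32_t module_size_old(const module_t *mod, unsigned int i)
{
    return mod[i].mod_end;
}

/* New style after this patch: reach the same module_t via boot_info. */
static uint32_t module_size_new(const struct boot_info *bi, unsigned int i)
{
    return bi->mods[i].mod->mod_end;
}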

Comments

Andrew Cooper Oct. 21, 2024, 6 p.m. UTC | #1
On 21/10/2024 1:45 am, Daniel P. Smith wrote:
> To allow a gradual conversion of x86 over to struct boot_module, start by
> replacing all references to module_t mod, in setup.c only, with the mod
> element of struct boot_module. This serves two purposes: first, it allows an
> incremental transition from module_t fields to struct boot_module fields;
> second, it allows function definitions to be converted from taking module_t
> parameters to accepting struct boot_module as the transitioned fields are
> accessed.
>
> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
> ---
> Changes since v6:
> - code style
> - switched to a local ref
>
> Changes since v5:
> - rewrote commit message
> - coding style changes
> - added comment for initial_images assignment
> ---
>  xen/arch/x86/setup.c | 62 +++++++++++++++++++++++++-------------------
>  1 file changed, 35 insertions(+), 27 deletions(-)
>
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 48809aa94451..b6d688f8fe5e 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1364,15 +1364,19 @@ void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
>      set_kexec_crash_area_size((u64)nr_pages << PAGE_SHIFT);
>      kexec_reserve_area();
>  
> -    initial_images = mod;
> +    /*
> +     * The field bi->mods[0].mod points to the first element of the module_t
> +     * array.
> +     */
> +    initial_images = bi->mods[0].mod;

This looks actively dodgy.  It might be correct, but it's also not necessary.

bi->mods[] is populated and both initial_images_nrpages() and
discard_initial_images() have a local bi-> pointer which they already
consume nr_modules from, so you really can drop initial_images here in
the series.

i.e. you want to pull patch 28 forward, ahead of this one, and it
will reduce the churn through the series.

But mostly, it removes a transient-WTF construct from the series, making
it easier to review.

~Andrew
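
To illustrate the direction being suggested, here is a rough sketch of a
page-accounting loop that walks bi->mods[] and bi->nr_modules directly, so
that nothing needs to be cached in initial_images. It reuses the simplified
types from the sketch above, omits the real function's per-node filtering, and
the boot_info parameter is an assumption rather than how the later patches
actually plumb it through:

/* Hypothetical sketch only; not the series' initial_images_nrpages(). */
#define SKETCH_PFN_UP(x) (((x) + 4095) >> 12)   /* mirrors Xen's PFN_UP */

static unsigned long sketch_initial_images_nrpages(const struct boot_info *bi)
{
    unsigned long nr = 0;
    unsigned int i;

    /* At this point in __start_xen(), mod_end holds a byte size. */
    for ( i = 0; i < bi->nr_modules; i++ )
        nr += SKETCH_PFN_UP(bi->mods[i].mod->mod_end);

    return nr;
}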
Andrew Cooper Oct. 21, 2024, 6:11 p.m. UTC | #2
On 21/10/2024 7:00 pm, Andrew Cooper wrote:
> On 21/10/2024 1:45 am, Daniel P. Smith wrote:
>> To allow a gradual conversion of x86 over to struct boot_module, start by
>> replacing all references to module_t mod, in setup.c only, with the mod
>> element of struct boot_module. This serves two purposes: first, it allows an
>> incremental transition from module_t fields to struct boot_module fields;
>> second, it allows function definitions to be converted from taking module_t
>> parameters to accepting struct boot_module as the transitioned fields are
>> accessed.
>>
>> Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
>> ---
>> Changes since v6:
>> - code style
>> - switched to a local ref
>>
>> Changes since v5:
>> - rewrote commit message
>> - coding style changes
>> - added comment for initial_images assignment
>> ---
>>  xen/arch/x86/setup.c | 62 +++++++++++++++++++++++++-------------------
>>  1 file changed, 35 insertions(+), 27 deletions(-)
>>
>> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
>> index 48809aa94451..b6d688f8fe5e 100644
>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -1364,15 +1364,19 @@ void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
>>      set_kexec_crash_area_size((u64)nr_pages << PAGE_SHIFT);
>>      kexec_reserve_area();
>>  
>> -    initial_images = mod;
>> +    /*
>> +     * The field bi->mods[0].mod points to the first element of the module_t
>> +     * array.
>> +     */
>> +    initial_images = bi->mods[0].mod;
> This looks actively dodgy.  It might be correct, but it's also not necessary.
>
> bi->mods[] is populated and both initial_images_nrpages() and
> discard_initial_images() have a local bi-> pointer which they already
> consume nr_modules from, so you really can drop initial_images here in
> the series.
>
> i.e. you want to pull patch 28 forward, ahead of this one, and it
> will reduce the churn through the series.
>
> But mostly, it removes a transient-WTF construct from the series, making
> it easier to review.

In fact, having done this locally, both patches 27 and 28 disappear. 
Patches 29 and 30 are the ones which swap these two functions away from
using the mod-> pointer, and they're basically the same.

So yes, far less churn.

~Andrew
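
One stylistic detail worth noting before the full patch below ("switched to a
local ref" in the v6->v7 changelog): where a loop body touches the same entry
repeatedly, the patch takes a local struct boot_module pointer instead of
repeating the bi->mods[j].mod-> chain. A minimal sketch of the pattern, again
using the simplified types from the first sketch:

/* Local-reference pattern, as used in the relocation loop in the patch. */
static void sketch_mark_reserved(struct boot_info *bi, unsigned int j)
{
    struct boot_module *bm = &bi->mods[j];      /* local ref */

    bm->mod->mod_end += bm->headroom;
    bm->mod->reserved = 1;
}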

Patch

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 48809aa94451..b6d688f8fe5e 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1364,15 +1364,19 @@  void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
     set_kexec_crash_area_size((u64)nr_pages << PAGE_SHIFT);
     kexec_reserve_area();
 
-    initial_images = mod;
+    /*
+     * The field bi->mods[0].mod points to the first element of the module_t
+     * array.
+     */
+    initial_images = bi->mods[0].mod;
 
     for ( i = 0; !efi_enabled(EFI_LOADER) && i < bi->nr_modules; i++ )
     {
-        if ( mod[i].mod_start & (PAGE_SIZE - 1) )
+        if ( bi->mods[i].mod->mod_start & (PAGE_SIZE - 1) )
             panic("Bootloader didn't honor module alignment request\n");
-        mod[i].mod_end -= mod[i].mod_start;
-        mod[i].mod_start >>= PAGE_SHIFT;
-        mod[i].reserved = 0;
+        bi->mods[i].mod->mod_end -= bi->mods[i].mod->mod_start;
+        bi->mods[i].mod->mod_start >>= PAGE_SHIFT;
+        bi->mods[i].mod->reserved = 0;
     }
 
     /*
@@ -1383,6 +1387,7 @@  void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
 
     if ( xen_phys_start )
     {
+        unsigned int xen = bi->nr_modules;
         relocated = true;
 
         /*
@@ -1390,8 +1395,8 @@  void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
          * respective reserve_e820_ram() invocation below. No need to
          * query efi_boot_mem_unused() here, though.
          */
-        mod[bi->nr_modules].mod_start = virt_to_mfn(_stext);
-        mod[bi->nr_modules].mod_end = __2M_rwdata_end - _stext;
+        bi->mods[xen].mod->mod_start = virt_to_mfn(_stext);
+        bi->mods[xen].mod->mod_end = __2M_rwdata_end - _stext;
     }
 
     bi->mods[0].headroom =
@@ -1483,9 +1488,9 @@  void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
              * move mod[0], we incorporate this as extra space at the start.
              */
             struct boot_module *bm = &bi->mods[j];
-            unsigned long size = PAGE_ALIGN(bm->headroom + mod[j].mod_end);
+            unsigned long size = PAGE_ALIGN(bm->headroom + bm->mod->mod_end);
 
-            if ( mod[j].reserved )
+            if ( bm->mod->reserved )
                 continue;
 
             /* Don't overlap with other modules (or Xen itself). */
@@ -1497,14 +1502,14 @@  void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
 
             if ( s < end &&
                  (bm->headroom ||
-                  ((end - size) >> PAGE_SHIFT) > mod[j].mod_start) )
+                  ((end - size) >> PAGE_SHIFT) > bm->mod->mod_start) )
             {
                 move_memory(end - size + bm->headroom,
-                            (uint64_t)mod[j].mod_start << PAGE_SHIFT,
-                            mod[j].mod_end);
-                mod[j].mod_start = (end - size) >> PAGE_SHIFT;
-                mod[j].mod_end += bm->headroom;
-                mod[j].reserved = 1;
+                            (uint64_t)bm->mod->mod_start << PAGE_SHIFT,
+                            bm->mod->mod_end);
+                bm->mod->mod_start = (end - size) >> PAGE_SHIFT;
+                bm->mod->mod_end += bm->headroom;
+                bm->mod->reserved = 1;
             }
         }
 
@@ -1530,13 +1535,14 @@  void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
 #endif
     }
 
-    if ( bi->mods[0].headroom && !mod->reserved )
+    if ( bi->mods[0].headroom && !bi->mods[0].mod->reserved )
         panic("Not enough memory to relocate the dom0 kernel image\n");
     for ( i = 0; i < bi->nr_modules; ++i )
     {
-        uint64_t s = (uint64_t)mod[i].mod_start << PAGE_SHIFT;
+        uint64_t s = (uint64_t)bi->mods[i].mod->mod_start << PAGE_SHIFT;
 
-        reserve_e820_ram(&boot_e820, s, s + PAGE_ALIGN(mod[i].mod_end));
+        reserve_e820_ram(&boot_e820, s,
+                         s + PAGE_ALIGN(bi->mods[i].mod->mod_end));
     }
 
     if ( !xen_phys_start )
@@ -1614,8 +1620,8 @@  void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
                 map_e = boot_e820.map[j].addr + boot_e820.map[j].size;
                 for ( j = 0; j < bi->nr_modules; ++j )
                 {
-                    uint64_t end = pfn_to_paddr(mod[j].mod_start) +
-                                   mod[j].mod_end;
+                    uint64_t end = pfn_to_paddr(bi->mods[j].mod->mod_start) +
+                                   bi->mods[j].mod->mod_end;
 
                     if ( map_e < end )
                         map_e = end;
@@ -1689,11 +1695,12 @@  void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
 
     for ( i = 0; i < bi->nr_modules; ++i )
     {
-        set_pdx_range(mod[i].mod_start,
-                      mod[i].mod_start + PFN_UP(mod[i].mod_end));
-        map_pages_to_xen((unsigned long)mfn_to_virt(mod[i].mod_start),
-                         _mfn(mod[i].mod_start),
-                         PFN_UP(mod[i].mod_end), PAGE_HYPERVISOR);
+        set_pdx_range(bi->mods[i].mod->mod_start,
+                      bi->mods[i].mod->mod_start +
+                          PFN_UP(bi->mods[i].mod->mod_end));
+        map_pages_to_xen((unsigned long)mfn_to_virt(bi->mods[i].mod->mod_start),
+                         _mfn(bi->mods[i].mod->mod_start),
+                         PFN_UP(bi->mods[i].mod->mod_end), PAGE_HYPERVISOR);
     }
 
 #ifdef CONFIG_KEXEC
@@ -2082,8 +2089,9 @@  void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
      * We're going to setup domain0 using the module(s) that we stashed safely
      * above our heap. The second module, if present, is an initrd ramdisk.
      */
-    dom0 = create_dom0(mod, bi->mods[0].headroom,
-                       initrdidx < bi->nr_modules ? mod + initrdidx : NULL,
+    dom0 = create_dom0(bi->mods[0].mod, bi->mods[0].headroom,
+                       initrdidx < bi->nr_modules ? bi->mods[initrdidx].mod
+                                                  : NULL,
                        kextra, bi->loader);
     if ( !dom0 )
         panic("Could not set up DOM0 guest OS\n");