
[v2,0/3] Yocto Gitlab CI

Message ID cover.1665561024.git.bertrand.marquis@arm.com (mailing list archive)
Series Yocto Gitlab CI

Message

Bertrand Marquis Oct. 12, 2022, 8:02 a.m. UTC
This patch series is a first attempt to check if we could use Yocto in
gitlab ci to build and run xen on qemu for arm, arm64 and x86.

The first patch creates a container with all elements required to
build Yocto, a checkout of the required yocto layers and a helper
script to build and run xen on qemu with yocto.

The second patch creates containers with a first build of yocto already
done, so that subsequent builds with those containers only rebuild what
was changed and take the rest from the cache.

The third patch adds a way to easily clean locally created
containers.

This is mainly for discussion and sharing as there are still some
issues/problems to solve:
- building the qemu* containers can take several hours depending on the
  network bandwidth and computing power of the machine where they are
  created
- produced containers containing the cache have a size between 8 and
  12GB depending on the architecture. We might need to store the build
  cache somewhere else to reduce the size. If we choose to have one
  single image, the needed size is around 20GB and we need up to 40GB
  during the build, which is why I split them.
- during the build and run, we use a bit more than 20GB of disk, which is
  over the allowed size in gitlab

Once all problems are solved, this can be used to build and run dom0 on qemu
with a modified Xen on the 3 archs in less than 10 minutes.

This has been tested on an x86 host machine and on an arm host machine
(with the mk_dsdt.c fix).
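
As a reference, a minimal sketch of how the helper script can be invoked
locally (option names are taken from the dockerfiles and the gitlab-ci job
discussed later in this thread; targets match the dockerfile names):

  # build the arm64 target, dumping the Yocto error logs on failure
  # (as done during container creation)
  ./automation/build/yocto/build-yocto.sh --dump-log qemuarm64

  # verbose build and run using the local Xen checkout, keeping logs
  # under ./logs (as in the gitlab-ci job discussed below)
  ./automation/build/yocto/build-yocto.sh -v --log-dir=./logs --xen-dir=`pwd` qemuarm64

Available targets are qemuarm, qemuarm64 and qemux86-64.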

Changes in v2:
- remove gitignore patch which was merged
- add a --dump-log support in build-yocto.sh script and use it during
  container creation to see the error logs.

Bertrand Marquis (3):
  automation: Add elements for Yocto test and run
  automation: Add yocto containers with cache
  automation: Add a clean rule for containers

 automation/build/Makefile                     |  19 +-
 automation/build/yocto/build-yocto.sh         | 340 ++++++++++++++++++
 .../build/yocto/kirkstone-qemuarm.dockerfile  |  28 ++
 .../yocto/kirkstone-qemuarm64.dockerfile      |  28 ++
 .../yocto/kirkstone-qemux86-64.dockerfile     |  28 ++
 automation/build/yocto/kirkstone.dockerfile   | 100 ++++++
 6 files changed, 542 insertions(+), 1 deletion(-)
 create mode 100755 automation/build/yocto/build-yocto.sh
 create mode 100644 automation/build/yocto/kirkstone-qemuarm.dockerfile
 create mode 100644 automation/build/yocto/kirkstone-qemuarm64.dockerfile
 create mode 100644 automation/build/yocto/kirkstone-qemux86-64.dockerfile
 create mode 100644 automation/build/yocto/kirkstone.dockerfile

Comments

Stefano Stabellini Oct. 14, 2022, 8:27 p.m. UTC | #1
On Wed, 12 Oct 2022, Bertrand Marquis wrote:
> This patch series is a first attempt to check if we could use Yocto in
> gitlab ci to build and run xen on qemu for arm, arm64 and x86.
> 
> The first patch is creating a container with all elements required to
> build Yocto, a checkout of the yocto layers required and an helper
> script to build and run xen on qemu with yocto.
> 
> The second patch is creating containers with a first build of yocto done
> so that susbsequent build with those containers would only rebuild what
> was changed and take the rest from the cache.
> 
> The third patch is adding a way to easily clean locally created
> containers.
> 
> This is is mainly for discussion and sharing as there are still some
> issues/problem to solve:
> - building the qemu* containers can take several hours depending on the
>   network bandwith and computing power of the machine where those are
>   created
> - produced containers containing the cache have a size between 8 and
>   12GB depending on the architecture. We might need to store the build
>   cache somewhere else to reduce the size. If we choose to have one
>   single image, the needed size is around 20GB and we need up to 40GB
>   during the build, which is why I splitted them.
> - during the build and run, we use a bit more then 20GB of disk which is
>   over the allowed size in gitlab
> 
> Once all problems passed, this can be used to build and run dom0 on qemu
> with a modified Xen on the 3 archs in less than 10 minutes.

The build still doesn't work for me. I found the reason:

  create archive failed: cpio: write failed - Cannot allocate memory

It is a "silly" out of memory error. I tried to solve the problem by
adding:

  export RPM_BUILD_NCPUS=8

at the beginning of build-yocto.sh but it didn't work. I realize that
this error might be considered a workstation configuration error at my
end but I cannot find a way past it. Any suggestions?
Bertrand Marquis Oct. 17, 2022, 9:21 a.m. UTC | #2
Hi Stefano,

> On 14 Oct 2022, at 21:27, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Wed, 12 Oct 2022, Bertrand Marquis wrote:
>> This patch series is a first attempt to check if we could use Yocto in
>> gitlab ci to build and run xen on qemu for arm, arm64 and x86.
>> 
>> The first patch is creating a container with all elements required to
>> build Yocto, a checkout of the yocto layers required and an helper
>> script to build and run xen on qemu with yocto.
>> 
>> The second patch is creating containers with a first build of yocto done
>> so that susbsequent build with those containers would only rebuild what
>> was changed and take the rest from the cache.
>> 
>> The third patch is adding a way to easily clean locally created
>> containers.
>> 
>> This is is mainly for discussion and sharing as there are still some
>> issues/problem to solve:
>> - building the qemu* containers can take several hours depending on the
>>  network bandwith and computing power of the machine where those are
>>  created
>> - produced containers containing the cache have a size between 8 and
>>  12GB depending on the architecture. We might need to store the build
>>  cache somewhere else to reduce the size. If we choose to have one
>>  single image, the needed size is around 20GB and we need up to 40GB
>>  during the build, which is why I splitted them.
>> - during the build and run, we use a bit more then 20GB of disk which is
>>  over the allowed size in gitlab
>> 
>> Once all problems passed, this can be used to build and run dom0 on qemu
>> with a modified Xen on the 3 archs in less than 10 minutes.
> 
> The build still doesn't work for me. I found the reason:
> 
>  create archive failed: cpio: write failed - Cannot allocate memory
> 
> It is a "silly" out of memory error. I tried to solve the problem by
> adding:
> 
>  export RPM_BUILD_NCPUS=8
> 
> at the beginning of build-yocto.sh but it didn't work. I realize that
> this error might be considered a workstation configuration error at my
> end but I cannot find a way past it. Any suggestions?


Can you give me more details on when this is happening? I.e. the full logs.

Can you try to apply the following:
--- a/automation/build/yocto/build-yocto.sh
+++ b/automation/build/yocto/build-yocto.sh
@@ -107,6 +107,9 @@ IMAGE_INSTALL:append:pn-xen-image-minimal = " ssh-pregen-hostkeys"
 # Save some disk space
 INHERIT += "rm_work"

+# Reduce number of jobs
+BB_NUMBER_THREADS=2
+
 EOF

     if [ "${do_localsrc}" = "y" ]; then

This should reduce the number of parallel jobs during Yocto build.
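
For context, the fragment appended to the generated local.conf then looks
roughly like the following (a sketch only; see the follow-up below about the
value needing to be a quoted string, and PARALLEL_MAKE is shown merely as the
usual companion knob limiting per-task make jobs, it is not part of this
change):

  # Reduce number of parallel bitbake tasks
  BB_NUMBER_THREADS = "2"
  # usual companion variable limiting make -j inside each task (not part of this change)
  #PARALLEL_MAKE = "-j 2"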

Cheers
Bertrand
Stefano Stabellini Oct. 18, 2022, 1:27 a.m. UTC | #3
On Mon, 17 Oct 2022, Bertrand Marquis wrote:
> Hi Stefano,
> 
> > On 14 Oct 2022, at 21:27, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Wed, 12 Oct 2022, Bertrand Marquis wrote:
> >> This patch series is a first attempt to check if we could use Yocto in
> >> gitlab ci to build and run xen on qemu for arm, arm64 and x86.
> >> 
> >> The first patch is creating a container with all elements required to
> >> build Yocto, a checkout of the yocto layers required and an helper
> >> script to build and run xen on qemu with yocto.
> >> 
> >> The second patch is creating containers with a first build of yocto done
> >> so that susbsequent build with those containers would only rebuild what
> >> was changed and take the rest from the cache.
> >> 
> >> The third patch is adding a way to easily clean locally created
> >> containers.
> >> 
> >> This is is mainly for discussion and sharing as there are still some
> >> issues/problem to solve:
> >> - building the qemu* containers can take several hours depending on the
> >>  network bandwith and computing power of the machine where those are
> >>  created
> >> - produced containers containing the cache have a size between 8 and
> >>  12GB depending on the architecture. We might need to store the build
> >>  cache somewhere else to reduce the size. If we choose to have one
> >>  single image, the needed size is around 20GB and we need up to 40GB
> >>  during the build, which is why I splitted them.
> >> - during the build and run, we use a bit more then 20GB of disk which is
> >>  over the allowed size in gitlab
> >> 
> >> Once all problems passed, this can be used to build and run dom0 on qemu
> >> with a modified Xen on the 3 archs in less than 10 minutes.
> > 
> > The build still doesn't work for me. I found the reason:
> > 
> >  create archive failed: cpio: write failed - Cannot allocate memory
> > 
> > It is a "silly" out of memory error. I tried to solve the problem by
> > adding:
> > 
> >  export RPM_BUILD_NCPUS=8
> > 
> > at the beginning of build-yocto.sh but it didn't work. I realize that
> > this error might be considered a workstation configuration error at my
> > end but I cannot find a way past it. Any suggestions?
> 
> 
> Can you give me more details on when this is happening ? Ie the full logs.
> 
> Can you try to apply the following:
> --- a/automation/build/yocto/build-yocto.sh
> +++ b/automation/build/yocto/build-yocto.sh
> @@ -107,6 +107,9 @@ IMAGE_INSTALL:append:pn-xen-image-minimal = " ssh-pregen-hostkeys"
>  # Save some disk space
>  INHERIT += "rm_work"
> 
> +# Reduce number of jobs
> +BB_NUMBER_THREADS=2
> +
>  EOF
> 
>      if [ "${do_localsrc}" = "y" ]; then
> 
> This should reduce the number of parallel jobs during Yocto build.

It should be

BB_NUMBER_THREADS="2"

but that worked! Let me run a couple more tests.
Stefano Stabellini Oct. 19, 2022, 12:02 a.m. UTC | #4
On Mon, 17 Oct 2022, Stefano Stabellini wrote:
> It should be
> 
> BB_NUMBER_THREADS="2"
> 
> but that worked! Let me a couple of more tests.

I could successfully run a Yocto build test with qemuarm64 as the target
in gitlab-ci, hurray! No size issues, no build time issues, everything was
fine. See:
https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193051236
https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193083119

I made the appended changes on top of this series.

- I pushed registry.gitlab.com/xen-project/xen/yocto:kirkstone and
  registry.gitlab.com/xen-project/xen/yocto:kirkstone-qemuarm64
- for the gitlab-ci runs, we need to run build-yocto.sh from the copy in
  xen.git, not from a copy stored inside a container
- when building the kirkstone-qemuarm64 container the first time
  (outside of gitlab-ci) I used COPY and took the script from the local
  xen.git tree
- after a number of tests, I settled on BB_NUMBER_THREADS="8"; more than
  this and it breaks on some workstations, please add it
- I am running the yocto build on arm64 so that we can use the arm64
  hardware to do it in gitlab-ci

Please feel free to incorporate these changes in your series, and add
corresponding changes for the qemuarm32 and qemux86 targets.

I am looking forward to it! Almost there!

Cheers,

Stefano


diff --git a/automation/build/yocto/build-yocto.sh b/automation/build/yocto/build-yocto.sh
index 0d31dad607..16f1dcc0a5 100755
--- a/automation/build/yocto/build-yocto.sh
+++ b/automation/build/yocto/build-yocto.sh
@@ -107,6 +107,9 @@ IMAGE_INSTALL:append:pn-xen-image-minimal = " ssh-pregen-hostkeys"
 # Save some disk space
 INHERIT += "rm_work"
 
+# Reduce number of jobs
+BB_NUMBER_THREADS="8"
+
 EOF
 
     if [ "${do_localsrc}" = "y" ]; then
diff --git a/automation/build/yocto/kirkstone-qemuarm64.dockerfile b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
index f279a7af92..aea3fc1f3e 100644
--- a/automation/build/yocto/kirkstone-qemuarm64.dockerfile
+++ b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
@@ -16,7 +16,8 @@ ARG target=qemuarm64
 
 # This step can take one to several hours depending on your download bandwith
 # and the speed of your computer
-RUN /home/$USER_NAME/bin/build-yocto.sh --dump-log $target
+COPY ./build-yocto.sh /
+RUN /build-yocto.sh --dump-log $target
 
 FROM $from_image
 
diff --git a/automation/build/yocto/kirkstone.dockerfile b/automation/build/yocto/kirkstone.dockerfile
index 367a7863b6..ffbd91aa90 100644
--- a/automation/build/yocto/kirkstone.dockerfile
+++ b/automation/build/yocto/kirkstone.dockerfile
@@ -84,9 +84,6 @@ RUN mkdir -p /home/$USER_NAME/yocto-layers \
              /home/$USER_NAME/xen && \
     chown $USER_NAME.$USER_NAME /home/$USER_NAME/*
 
-# Copy the build script
-COPY build-yocto.sh /home/$USER_NAME/bin/
-
 # clone yocto repositories we need
 ARG yocto_version="kirkstone"
 RUN for rep in \
diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index ddc2234faf..4b8bcde252 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -584,6 +584,22 @@ alpine-3.12-gcc-arm64-boot-cpupools:
     EXTRA_XEN_CONFIG: |
       CONFIG_BOOT_TIME_CPUPOOLS=y
 
+yocto-kirkstone-qemuarm64:
+  stage: build
+  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
+  script:
+    - ./automation/build/yocto/build-yocto.sh -v --log-dir=./logs --xen-dir=`pwd` qemuarm64
+  variables:
+    CONTAINER: yocto:kirkstone-qemuarm64
+  artifacts:
+    paths:
+      - '*.log'
+      - '*/*.log'
+      - 'logs/*'
+    when: always
+  tags:
+    - arm64
+
 ## Test artifacts common
 
 .test-jobs-artifact-common:
Bertrand Marquis Oct. 19, 2022, 8:10 a.m. UTC | #5
Hi Stefano,

> On 19 Oct 2022, at 01:02, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Mon, 17 Oct 2022, Stefano Stabellini wrote:
>> It should be
>> 
>> BB_NUMBER_THREADS="2"
>> 
>> but that worked! Let me a couple of more tests.
> 
> I could run successfully a Yocto build test with qemuarm64 as target in
> gitlab-ci, hurray! No size issues, no build time issues, everything was
> fine. See:
> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193051236
> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193083119

Awesome, this is quite fast :-)

> 
> I made the appended changes in top of this series.
> 
> - I pushed registry.gitlab.com/xen-project/xen/yocto:kirkstone and
>  registry.gitlab.com/xen-project/xen/yocto:kirkstone-qemuarm64

This should already be handled by the Makefile using PUSH, or did
you have to modify something?

> - for the gitlab-ci runs, we need to run build-yocto.sh from the copy in
>  xen.git, not from a copy stored inside a container

Ok

> - when building the kirkstone-qemuarm64 container the first time
>  (outside of gitlab-ci) I used COPY and took the script from the local
>  xen.git tree

Ok

> - after a number of tests, I settled on: BB_NUMBER_THREADS="8" more than
>  this and it breaks on some workstations, please add it

I will set this as the default and add a command line argument to allow changing it.
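
A possible shape for such an option in build-yocto.sh (a sketch only; the
flag name and parsing style are hypothetical and may differ from the actual
implementation):

  # default number of bitbake threads, overridable from the command line
  num_threads="8"

  for OPTION in "$@"; do
      case ${OPTION} in
          --num-threads=*)
              num_threads="${OPTION#*=}"
              ;;
      esac
  done

  # later, when generating local.conf:
  #   BB_NUMBER_THREADS="${num_threads}"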

> - I am running the yocto build on arm64 so that we can use the arm64
>  hardware to do it in gitlab-ci

I tested this when I made the patches and it works for arm64, arm32 and x86 targets on an arm64 machine, so go for it.

> 
> Please feel free to incorporate these changes in your series, and add
> corresponding changes for the qemuarm32 and qemux86 targets.

Will do and I will also add a patch to create the build.yaml entries.

> 
> I am looking forward to it! Almost there!

Me too :-)

Thanks a lot for the testing and the review.

Cheers
Bertrand

> 
> Cheers,
> 
> Stefano
> 
> 
> diff --git a/automation/build/yocto/build-yocto.sh b/automation/build/yocto/build-yocto.sh
> index 0d31dad607..16f1dcc0a5 100755
> --- a/automation/build/yocto/build-yocto.sh
> +++ b/automation/build/yocto/build-yocto.sh
> @@ -107,6 +107,9 @@ IMAGE_INSTALL:append:pn-xen-image-minimal = " ssh-pregen-hostkeys"
> # Save some disk space
> INHERIT += "rm_work"
> 
> +# Reduce number of jobs
> +BB_NUMBER_THREADS="8"
> +
> EOF
> 
>     if [ "${do_localsrc}" = "y" ]; then
> diff --git a/automation/build/yocto/kirkstone-qemuarm64.dockerfile b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> index f279a7af92..aea3fc1f3e 100644
> --- a/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> +++ b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> @@ -16,7 +16,8 @@ ARG target=qemuarm64
> 
> # This step can take one to several hours depending on your download bandwith
> # and the speed of your computer
> -RUN /home/$USER_NAME/bin/build-yocto.sh --dump-log $target
> +COPY ./build-yocto.sh /
> +RUN /build-yocto.sh --dump-log $target
> 
> FROM $from_image
> 
> diff --git a/automation/build/yocto/kirkstone.dockerfile b/automation/build/yocto/kirkstone.dockerfile
> index 367a7863b6..ffbd91aa90 100644
> --- a/automation/build/yocto/kirkstone.dockerfile
> +++ b/automation/build/yocto/kirkstone.dockerfile
> @@ -84,9 +84,6 @@ RUN mkdir -p /home/$USER_NAME/yocto-layers \
>              /home/$USER_NAME/xen && \
>     chown $USER_NAME.$USER_NAME /home/$USER_NAME/*
> 
> -# Copy the build script
> -COPY build-yocto.sh /home/$USER_NAME/bin/
> -
> # clone yocto repositories we need
> ARG yocto_version="kirkstone"
> RUN for rep in \
> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> index ddc2234faf..4b8bcde252 100644
> --- a/automation/gitlab-ci/build.yaml
> +++ b/automation/gitlab-ci/build.yaml
> @@ -584,6 +584,22 @@ alpine-3.12-gcc-arm64-boot-cpupools:
>     EXTRA_XEN_CONFIG: |
>       CONFIG_BOOT_TIME_CPUPOOLS=y
> 
> +yocto-kirkstone-qemuarm64:
> +  stage: build
> +  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
> +  script:
> +    - ./automation/build/yocto/build-yocto.sh -v --log-dir=./logs --xen-dir=`pwd` qemuarm64
> +  variables:
> +    CONTAINER: yocto:kirkstone-qemuarm64
> +  artifacts:
> +    paths:
> +      - '*.log'
> +      - '*/*.log'
> +      - 'logs/*'
> +    when: always
> +  tags:
> +    - arm64
> +
> ## Test artifacts common
> 
> .test-jobs-artifact-common:
Michal Orzel Oct. 19, 2022, 9:06 a.m. UTC | #6
Hi Stefano,

On 19/10/2022 02:02, Stefano Stabellini wrote:
> 
> 
> On Mon, 17 Oct 2022, Stefano Stabellini wrote:
>> It should be
>>
>> BB_NUMBER_THREADS="2"
>>
>> but that worked! Let me a couple of more tests.
> 
> I could run successfully a Yocto build test with qemuarm64 as target in
> gitlab-ci, hurray! No size issues, no build time issues, everything was
> fine. See:
> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193051236
> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193083119
> 
> I made the appended changes in top of this series.
> 
> - I pushed registry.gitlab.com/xen-project/xen/yocto:kirkstone and
>   registry.gitlab.com/xen-project/xen/yocto:kirkstone-qemuarm64
> - for the gitlab-ci runs, we need to run build-yocto.sh from the copy in
>   xen.git, not from a copy stored inside a container
> - when building the kirkstone-qemuarm64 container the first time
>   (outside of gitlab-ci) I used COPY and took the script from the local
>   xen.git tree
> - after a number of tests, I settled on: BB_NUMBER_THREADS="8" more than
>   this and it breaks on some workstations, please add it
> - I am running the yocto build on arm64 so that we can use the arm64
>   hardware to do it in gitlab-ci
> 
> Please feel free to incorporate these changes in your series, and add
> corresponding changes for the qemuarm32 and qemux86 targets.
> 
> I am looking forward to it! Almost there!
> 
> Cheers,
> 
> Stefano
> 
> 
> diff --git a/automation/build/yocto/build-yocto.sh b/automation/build/yocto/build-yocto.sh
> index 0d31dad607..16f1dcc0a5 100755
> --- a/automation/build/yocto/build-yocto.sh
> +++ b/automation/build/yocto/build-yocto.sh
> @@ -107,6 +107,9 @@ IMAGE_INSTALL:append:pn-xen-image-minimal = " ssh-pregen-hostkeys"
>  # Save some disk space
>  INHERIT += "rm_work"
> 
> +# Reduce number of jobs
> +BB_NUMBER_THREADS="8"
> +
>  EOF
> 
>      if [ "${do_localsrc}" = "y" ]; then
> diff --git a/automation/build/yocto/kirkstone-qemuarm64.dockerfile b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> index f279a7af92..aea3fc1f3e 100644
> --- a/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> +++ b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> @@ -16,7 +16,8 @@ ARG target=qemuarm64
> 
>  # This step can take one to several hours depending on your download bandwith
>  # and the speed of your computer
> -RUN /home/$USER_NAME/bin/build-yocto.sh --dump-log $target
> +COPY ./build-yocto.sh /
> +RUN /build-yocto.sh --dump-log $target
> 
>  FROM $from_image
> 
> diff --git a/automation/build/yocto/kirkstone.dockerfile b/automation/build/yocto/kirkstone.dockerfile
> index 367a7863b6..ffbd91aa90 100644
> --- a/automation/build/yocto/kirkstone.dockerfile
> +++ b/automation/build/yocto/kirkstone.dockerfile
> @@ -84,9 +84,6 @@ RUN mkdir -p /home/$USER_NAME/yocto-layers \
>               /home/$USER_NAME/xen && \
>      chown $USER_NAME.$USER_NAME /home/$USER_NAME/*
> 
> -# Copy the build script
> -COPY build-yocto.sh /home/$USER_NAME/bin/
> -
>  # clone yocto repositories we need
>  ARG yocto_version="kirkstone"
>  RUN for rep in \
> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> index ddc2234faf..4b8bcde252 100644
> --- a/automation/gitlab-ci/build.yaml
> +++ b/automation/gitlab-ci/build.yaml
> @@ -584,6 +584,22 @@ alpine-3.12-gcc-arm64-boot-cpupools:
>      EXTRA_XEN_CONFIG: |
>        CONFIG_BOOT_TIME_CPUPOOLS=y
> 
> +yocto-kirkstone-qemuarm64:
> +  stage: build
> +  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
> +  script:
> +    - ./automation/build/yocto/build-yocto.sh -v --log-dir=./logs --xen-dir=`pwd` qemuarm64
> +  variables:
> +    CONTAINER: yocto:kirkstone-qemuarm64
> +  artifacts:
> +    paths:
> +      - '*.log'
> +      - '*/*.log'
The above lines are not needed as the logs/* below will handle them all (logs are only stored in logs/).

> +      - 'logs/*'
> +    when: always
> +  tags:
> +    - arm64
> +
build-yocto.sh performs both build and run actions. I think it'd be better to move this into test.yaml in that case.
The best would be to create one build job (specifying --no-run) in build.yaml and one test job (specifying --no-build) in test.yaml.
This however would probably require marking the path build/tmp/deploy/***/qemuarm64 as a build artifact. The question then is
whether having this path would be enough for runqemu (Bertrand's opinion needed).
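
A rough sketch of what such a split could look like (illustrative only; the
job names and artifact paths are hypothetical, and see Bertrand's reply below
on why the artifacts passed between the two stages would be too large):

  # build.yaml
  yocto-kirkstone-qemuarm64-build:
    stage: build
    script:
      - ./automation/build/yocto/build-yocto.sh --no-run -v --log-dir=./logs qemuarm64
    artifacts:
      paths:
        - 'build/tmp/deploy/'
        - 'logs/*'

  # test.yaml
  yocto-kirkstone-qemuarm64-test:
    stage: test
    needs:
      - yocto-kirkstone-qemuarm64-build
    script:
      - ./automation/build/yocto/build-yocto.sh --no-build -v --log-dir=./logs qemuarm64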

Apart from that, there is an aspect of Yocto releases and the container/test names.
Yocto needs to be up-to-date in order to properly build Xen+tools.
This basically means that we will need to update the containers once
per Yocto release. The old containers would still need to be stored in our CI container registry
so that we can use CI for older versions of Xen. However, updating the containers would also require
modifying the existing tests (for now we have e.g. yocto-kirkstone-qemuarm64 but in a month we will have
to change them to yocto-langdale-qemuarm64). In a few years time this will result in several CI jobs
that are the same but differ only in name/container. I would thus suggest naming the CI jobs like this:
yocto-qemuarm64 (without the yocto release name) and defining a top-level YOCTO_CONTAINER variable to store
the current yocto release container. This will solve the issue I described above.


~Michal
Bertrand Marquis Oct. 19, 2022, 10:40 a.m. UTC | #7
Hi Michal,

> On 19 Oct 2022, at 10:06, Michal Orzel <michal.orzel@amd.com> wrote:
> 
> Hi Stefano,
> 
> On 19/10/2022 02:02, Stefano Stabellini wrote:
>> 
>> 
>> On Mon, 17 Oct 2022, Stefano Stabellini wrote:
>>> It should be
>>> 
>>> BB_NUMBER_THREADS="2"
>>> 
>>> but that worked! Let me a couple of more tests.
>> 
>> I could run successfully a Yocto build test with qemuarm64 as target in
>> gitlab-ci, hurray! No size issues, no build time issues, everything was
>> fine. See:
>> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193051236
>> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193083119
>> 
>> I made the appended changes in top of this series.
>> 
>> - I pushed registry.gitlab.com/xen-project/xen/yocto:kirkstone and
>>  registry.gitlab.com/xen-project/xen/yocto:kirkstone-qemuarm64
>> - for the gitlab-ci runs, we need to run build-yocto.sh from the copy in
>>  xen.git, not from a copy stored inside a container
>> - when building the kirkstone-qemuarm64 container the first time
>>  (outside of gitlab-ci) I used COPY and took the script from the local
>>  xen.git tree
>> - after a number of tests, I settled on: BB_NUMBER_THREADS="8" more than
>>  this and it breaks on some workstations, please add it
>> - I am running the yocto build on arm64 so that we can use the arm64
>>  hardware to do it in gitlab-ci
>> 
>> Please feel free to incorporate these changes in your series, and add
>> corresponding changes for the qemuarm32 and qemux86 targets.
>> 
>> I am looking forward to it! Almost there!
>> 
>> Cheers,
>> 
>> Stefano
>> 
>> 
>> diff --git a/automation/build/yocto/build-yocto.sh b/automation/build/yocto/build-yocto.sh
>> index 0d31dad607..16f1dcc0a5 100755
>> --- a/automation/build/yocto/build-yocto.sh
>> +++ b/automation/build/yocto/build-yocto.sh
>> @@ -107,6 +107,9 @@ IMAGE_INSTALL:append:pn-xen-image-minimal = " ssh-pregen-hostkeys"
>> # Save some disk space
>> INHERIT += "rm_work"
>> 
>> +# Reduce number of jobs
>> +BB_NUMBER_THREADS="8"
>> +
>> EOF
>> 
>>     if [ "${do_localsrc}" = "y" ]; then
>> diff --git a/automation/build/yocto/kirkstone-qemuarm64.dockerfile b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
>> index f279a7af92..aea3fc1f3e 100644
>> --- a/automation/build/yocto/kirkstone-qemuarm64.dockerfile
>> +++ b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
>> @@ -16,7 +16,8 @@ ARG target=qemuarm64
>> 
>> # This step can take one to several hours depending on your download bandwith
>> # and the speed of your computer
>> -RUN /home/$USER_NAME/bin/build-yocto.sh --dump-log $target
>> +COPY ./build-yocto.sh /
>> +RUN /build-yocto.sh --dump-log $target
>> 
>> FROM $from_image
>> 
>> diff --git a/automation/build/yocto/kirkstone.dockerfile b/automation/build/yocto/kirkstone.dockerfile
>> index 367a7863b6..ffbd91aa90 100644
>> --- a/automation/build/yocto/kirkstone.dockerfile
>> +++ b/automation/build/yocto/kirkstone.dockerfile
>> @@ -84,9 +84,6 @@ RUN mkdir -p /home/$USER_NAME/yocto-layers \
>>              /home/$USER_NAME/xen && \
>>     chown $USER_NAME.$USER_NAME /home/$USER_NAME/*
>> 
>> -# Copy the build script
>> -COPY build-yocto.sh /home/$USER_NAME/bin/
>> -
>> # clone yocto repositories we need
>> ARG yocto_version="kirkstone"
>> RUN for rep in \
>> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
>> index ddc2234faf..4b8bcde252 100644
>> --- a/automation/gitlab-ci/build.yaml
>> +++ b/automation/gitlab-ci/build.yaml
>> @@ -584,6 +584,22 @@ alpine-3.12-gcc-arm64-boot-cpupools:
>>     EXTRA_XEN_CONFIG: |
>>       CONFIG_BOOT_TIME_CPUPOOLS=y
>> 
>> +yocto-kirkstone-qemuarm64:
>> +  stage: build
>> +  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
>> +  script:
>> +    - ./automation/build/yocto/build-yocto.sh -v --log-dir=./logs --xen-dir=`pwd` qemuarm64
>> +  variables:
>> +    CONTAINER: yocto:kirkstone-qemuarm64
>> +  artifacts:
>> +    paths:
>> +      - '*.log'
>> +      - '*/*.log'
> The above lines are not needed as the logs/* below will handle them all (logs are only stored in logs/).

Ack

> 
>> +      - 'logs/*'
>> +    when: always
>> +  tags:
>> +    - arm64
>> +
> build-yocto.sh performs both build and run actions. I think it'd be better to move this into test.yaml in that case.
> The best would be to create one build job (specifying --no-run) in build.yaml and one test job (specifying --no-build) in test.yaml.
> This however would probably require marking path build/tmp/deploy/***/qemuarm64 as an build artifact. The question then is
> whether having this path would be enough for runqemu (Bertrand's opinion needed).

This will not be enough to run qemu as the qemu binary and its dependencies are in the build artifacts and not in deploy.
Splitting the build and run is not a good idea because the size of the artifacts passed between the two would be huge.

> 
> Apart from that there is an aspect of Yocto releases and the containers/tests names.
> Yocto needs to be up-to-date in order to properly build Xen+tools.
> This basically means that we will need to update the containers once
> per Yocto release. The old containers would still need to be stored in our CI container registry
> so that we can use CI for older versions of Xen. However, updating the containers would also require
> modifying the existing tests (for now we have e.g. yocto-kirkstone-qemuarm64 but in a month we will have
> to change them to yocto-langdale-qemuarm64). In a few years time this will result in several CI jobs
> that are the same but differ only in name/container. I would thus suggest to name the CI jobs like this:
> yocto-qemuarm64 (without yocto release name) and define the top-level YOCTO_CONTAINER variable to store
> the current yocto release container. This will solve the issue I described above.

I think we have no other way around this: we will need one Yocto release officially supported by Xen, so
we will have to keep old docker images for old releases of Xen and move to newer versions of Yocto in staging
when needed.

We have to find a way for gitlab-ci to use the build.yaml contained inside the tree being tested, so that gitlab automatically takes the right one.
This means that build.yaml will differ between branches and contain the right version for the current branch.

Regards
Bertrand

> 
> 
> ~Michal
Michal Orzel Oct. 19, 2022, 10:53 a.m. UTC | #8
Hi Bertrand,

On 19/10/2022 12:40, Bertrand Marquis wrote:
> 
> 
> Hi Michal,
> 
>> On 19 Oct 2022, at 10:06, Michal Orzel <michal.orzel@amd.com> wrote:
>>
>> Hi Stefano,
>>
>> On 19/10/2022 02:02, Stefano Stabellini wrote:
>>>
>>>
>>> On Mon, 17 Oct 2022, Stefano Stabellini wrote:
>>>> It should be
>>>>
>>>> BB_NUMBER_THREADS="2"
>>>>
>>>> but that worked! Let me a couple of more tests.
>>>
>>> I could run successfully a Yocto build test with qemuarm64 as target in
>>> gitlab-ci, hurray! No size issues, no build time issues, everything was
>>> fine. See:
>>> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193051236
>>> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193083119
>>>
>>> I made the appended changes in top of this series.
>>>
>>> - I pushed registry.gitlab.com/xen-project/xen/yocto:kirkstone and
>>>  registry.gitlab.com/xen-project/xen/yocto:kirkstone-qemuarm64
>>> - for the gitlab-ci runs, we need to run build-yocto.sh from the copy in
>>>  xen.git, not from a copy stored inside a container
>>> - when building the kirkstone-qemuarm64 container the first time
>>>  (outside of gitlab-ci) I used COPY and took the script from the local
>>>  xen.git tree
>>> - after a number of tests, I settled on: BB_NUMBER_THREADS="8" more than
>>>  this and it breaks on some workstations, please add it
>>> - I am running the yocto build on arm64 so that we can use the arm64
>>>  hardware to do it in gitlab-ci
>>>
>>> Please feel free to incorporate these changes in your series, and add
>>> corresponding changes for the qemuarm32 and qemux86 targets.
>>>
>>> I am looking forward to it! Almost there!
>>>
>>> Cheers,
>>>
>>> Stefano
>>>
>>>
>>> diff --git a/automation/build/yocto/build-yocto.sh b/automation/build/yocto/build-yocto.sh
>>> index 0d31dad607..16f1dcc0a5 100755
>>> --- a/automation/build/yocto/build-yocto.sh
>>> +++ b/automation/build/yocto/build-yocto.sh
>>> @@ -107,6 +107,9 @@ IMAGE_INSTALL:append:pn-xen-image-minimal = " ssh-pregen-hostkeys"
>>> # Save some disk space
>>> INHERIT += "rm_work"
>>>
>>> +# Reduce number of jobs
>>> +BB_NUMBER_THREADS="8"
>>> +
>>> EOF
>>>
>>>     if [ "${do_localsrc}" = "y" ]; then
>>> diff --git a/automation/build/yocto/kirkstone-qemuarm64.dockerfile b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
>>> index f279a7af92..aea3fc1f3e 100644
>>> --- a/automation/build/yocto/kirkstone-qemuarm64.dockerfile
>>> +++ b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
>>> @@ -16,7 +16,8 @@ ARG target=qemuarm64
>>>
>>> # This step can take one to several hours depending on your download bandwith
>>> # and the speed of your computer
>>> -RUN /home/$USER_NAME/bin/build-yocto.sh --dump-log $target
>>> +COPY ./build-yocto.sh /
>>> +RUN /build-yocto.sh --dump-log $target
>>>
>>> FROM $from_image
>>>
>>> diff --git a/automation/build/yocto/kirkstone.dockerfile b/automation/build/yocto/kirkstone.dockerfile
>>> index 367a7863b6..ffbd91aa90 100644
>>> --- a/automation/build/yocto/kirkstone.dockerfile
>>> +++ b/automation/build/yocto/kirkstone.dockerfile
>>> @@ -84,9 +84,6 @@ RUN mkdir -p /home/$USER_NAME/yocto-layers \
>>>              /home/$USER_NAME/xen && \
>>>     chown $USER_NAME.$USER_NAME /home/$USER_NAME/*
>>>
>>> -# Copy the build script
>>> -COPY build-yocto.sh /home/$USER_NAME/bin/
>>> -
>>> # clone yocto repositories we need
>>> ARG yocto_version="kirkstone"
>>> RUN for rep in \
>>> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
>>> index ddc2234faf..4b8bcde252 100644
>>> --- a/automation/gitlab-ci/build.yaml
>>> +++ b/automation/gitlab-ci/build.yaml
>>> @@ -584,6 +584,22 @@ alpine-3.12-gcc-arm64-boot-cpupools:
>>>     EXTRA_XEN_CONFIG: |
>>>       CONFIG_BOOT_TIME_CPUPOOLS=y
>>>
>>> +yocto-kirkstone-qemuarm64:
>>> +  stage: build
>>> +  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
>>> +  script:
>>> +    - ./automation/build/yocto/build-yocto.sh -v --log-dir=./logs --xen-dir=`pwd` qemuarm64
>>> +  variables:
>>> +    CONTAINER: yocto:kirkstone-qemuarm64
>>> +  artifacts:
>>> +    paths:
>>> +      - '*.log'
>>> +      - '*/*.log'
>> The above lines are not needed as the logs/* below will handle them all (logs are only stored in logs/).
> 
> Ack
> 
>>
>>> +      - 'logs/*'
>>> +    when: always
>>> +  tags:
>>> +    - arm64
>>> +
>> build-yocto.sh performs both build and run actions. I think it'd be better to move this into test.yaml in that case.
>> The best would be to create one build job (specifying --no-run) in build.yaml and one test job (specifying --no-build) in test.yaml.
>> This however would probably require marking path build/tmp/deploy/***/qemuarm64 as an build artifact. The question then is
>> whether having this path would be enough for runqemu (Bertrand's opinion needed).
> 
> This will not be enough to run qemu as the qemu binary and its dependencies are in the build artifacts and not in deploy.
> Splitting the build and run is not a good idea because the size of the artifact between the 2 will be huge.
> 
>>
>> Apart from that there is an aspect of Yocto releases and the containers/tests names.
>> Yocto needs to be up-to-date in order to properly build Xen+tools.
>> This basically means that we will need to update the containers once
>> per Yocto release. The old containers would still need to be stored in our CI container registry
>> so that we can use CI for older versions of Xen. However, updating the containers would also require
>> modifying the existing tests (for now we have e.g. yocto-kirkstone-qemuarm64 but in a month we will have
>> to change them to yocto-langdale-qemuarm64). In a few years time this will result in several CI jobs
>> that are the same but differ only in name/container. I would thus suggest to name the CI jobs like this:
>> yocto-qemuarm64 (without yocto release name) and define the top-level YOCTO_CONTAINER variable to store
>> the current yocto release container. This will solve the issue I described above.
> 
> I think we have no other way around this and we will need to have one Yocto release supported by Xen officially so
> we will have to keep old docker images for old releases of Xen and move to newer versions of Yocto in staging when
> it is needed.
> 
> We have to find a way for gitlab-ci to use the build.yaml contained inside the tree that is to be tested somehow so that gitlab would automatically take the right one.
> Which means that build.yaml will be different between branches and contain the right version for the current branch.
> 

What I suggest is that with each new yocto release, we add new docker container files and push them to the registry.
So we will end up with a registry containing e.g. (arm64 as an example):
- kirkstone-qemuarm64
- langdale-qemuarm64
We maintain only one group of CI jobs whose names are generic (yocto-qemuarm64).
After adding new containers for a new Yocto release, we modify the YOCTO_RELEASE variable
to point to the latest yocto release containers.

test.yaml:
...
# Yocto test jobs
variables:
  YOCTO_RELEASE: "kirkstone"

yocto-qemuarm64:
  extends: .test-jobs-common
  script:
    - ./automation/build/yocto/build-yocto.sh -v --log-dir=./logs --xen-dir=`pwd` qemuarm64
  variables:
    CONTAINER: yocto:${YOCTO_RELEASE}-qemuarm64
  artifacts:
    paths:
      - 'logs/*'
    when: always
  tags:
    - arm64

This means that:
- on the current staging branch the YOCTO_RELEASE points to the latest containers (for the latest yocto release)
- on the old stable branches the YOCTO_RELEASE points to the old containers (for the old yocto release).

~Michal
Stefano Stabellini Oct. 19, 2022, 10:11 p.m. UTC | #9
On Wed, 19 Oct 2022, Bertrand Marquis wrote:
> Hi Michal,
> 
> > On 19 Oct 2022, at 10:06, Michal Orzel <michal.orzel@amd.com> wrote:
> > 
> > Hi Stefano,
> > 
> > On 19/10/2022 02:02, Stefano Stabellini wrote:
> >> 
> >> 
> >> On Mon, 17 Oct 2022, Stefano Stabellini wrote:
> >>> It should be
> >>> 
> >>> BB_NUMBER_THREADS="2"
> >>> 
> >>> but that worked! Let me a couple of more tests.
> >> 
> >> I could run successfully a Yocto build test with qemuarm64 as target in
> >> gitlab-ci, hurray! No size issues, no build time issues, everything was
> >> fine. See:
> >> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193051236
> >> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193083119
> >> 
> >> I made the appended changes in top of this series.
> >> 
> >> - I pushed registry.gitlab.com/xen-project/xen/yocto:kirkstone and
> >>  registry.gitlab.com/xen-project/xen/yocto:kirkstone-qemuarm64
> >> - for the gitlab-ci runs, we need to run build-yocto.sh from the copy in
> >>  xen.git, not from a copy stored inside a container
> >> - when building the kirkstone-qemuarm64 container the first time
> >>  (outside of gitlab-ci) I used COPY and took the script from the local
> >>  xen.git tree
> >> - after a number of tests, I settled on: BB_NUMBER_THREADS="8" more than
> >>  this and it breaks on some workstations, please add it
> >> - I am running the yocto build on arm64 so that we can use the arm64
> >>  hardware to do it in gitlab-ci
> >> 
> >> Please feel free to incorporate these changes in your series, and add
> >> corresponding changes for the qemuarm32 and qemux86 targets.
> >> 
> >> I am looking forward to it! Almost there!
> >> 
> >> Cheers,
> >> 
> >> Stefano
> >> 
> >> 
> >> diff --git a/automation/build/yocto/build-yocto.sh b/automation/build/yocto/build-yocto.sh
> >> index 0d31dad607..16f1dcc0a5 100755
> >> --- a/automation/build/yocto/build-yocto.sh
> >> +++ b/automation/build/yocto/build-yocto.sh
> >> @@ -107,6 +107,9 @@ IMAGE_INSTALL:append:pn-xen-image-minimal = " ssh-pregen-hostkeys"
> >> # Save some disk space
> >> INHERIT += "rm_work"
> >> 
> >> +# Reduce number of jobs
> >> +BB_NUMBER_THREADS="8"
> >> +
> >> EOF
> >> 
> >>     if [ "${do_localsrc}" = "y" ]; then
> >> diff --git a/automation/build/yocto/kirkstone-qemuarm64.dockerfile b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> >> index f279a7af92..aea3fc1f3e 100644
> >> --- a/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> >> +++ b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> >> @@ -16,7 +16,8 @@ ARG target=qemuarm64
> >> 
> >> # This step can take one to several hours depending on your download bandwith
> >> # and the speed of your computer
> >> -RUN /home/$USER_NAME/bin/build-yocto.sh --dump-log $target
> >> +COPY ./build-yocto.sh /
> >> +RUN /build-yocto.sh --dump-log $target
> >> 
> >> FROM $from_image
> >> 
> >> diff --git a/automation/build/yocto/kirkstone.dockerfile b/automation/build/yocto/kirkstone.dockerfile
> >> index 367a7863b6..ffbd91aa90 100644
> >> --- a/automation/build/yocto/kirkstone.dockerfile
> >> +++ b/automation/build/yocto/kirkstone.dockerfile
> >> @@ -84,9 +84,6 @@ RUN mkdir -p /home/$USER_NAME/yocto-layers \
> >>              /home/$USER_NAME/xen && \
> >>     chown $USER_NAME.$USER_NAME /home/$USER_NAME/*
> >> 
> >> -# Copy the build script
> >> -COPY build-yocto.sh /home/$USER_NAME/bin/
> >> -
> >> # clone yocto repositories we need
> >> ARG yocto_version="kirkstone"
> >> RUN for rep in \
> >> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> >> index ddc2234faf..4b8bcde252 100644
> >> --- a/automation/gitlab-ci/build.yaml
> >> +++ b/automation/gitlab-ci/build.yaml
> >> @@ -584,6 +584,22 @@ alpine-3.12-gcc-arm64-boot-cpupools:
> >>     EXTRA_XEN_CONFIG: |
> >>       CONFIG_BOOT_TIME_CPUPOOLS=y
> >> 
> >> +yocto-kirkstone-qemuarm64:
> >> +  stage: build
> >> +  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
> >> +  script:
> >> +    - ./automation/build/yocto/build-yocto.sh -v --log-dir=./logs --xen-dir=`pwd` qemuarm64
> >> +  variables:
> >> +    CONTAINER: yocto:kirkstone-qemuarm64
> >> +  artifacts:
> >> +    paths:
> >> +      - '*.log'
> >> +      - '*/*.log'
> > The above lines are not needed as the logs/* below will handle them all (logs are only stored in logs/).
> 
> Ack
> 
> > 
> >> +      - 'logs/*'
> >> +    when: always
> >> +  tags:
> >> +    - arm64
> >> +
> > build-yocto.sh performs both build and run actions. I think it'd be better to move this into test.yaml in that case.
> > The best would be to create one build job (specifying --no-run) in build.yaml and one test job (specifying --no-build) in test.yaml.
> > This however would probably require marking path build/tmp/deploy/***/qemuarm64 as an build artifact. The question then is
> > whether having this path would be enough for runqemu (Bertrand's opinion needed).
> 
> This will not be enough to run qemu as the qemu binary and its dependencies are in the build artifacts and not in deploy.
> Splitting the build and run is not a good idea because the size of the artifact between the 2 will be huge.

Although not ideal, I think it is fine to have a single job that does
both the build and the run.
Stefano Stabellini Oct. 19, 2022, 10:12 p.m. UTC | #10
On Wed, 19 Oct 2022, Michal Orzel wrote:
> Hi Bertrand,
> 
> On 19/10/2022 12:40, Bertrand Marquis wrote:
> > 
> > 
> > Hi Michal,
> > 
> >> On 19 Oct 2022, at 10:06, Michal Orzel <michal.orzel@amd.com> wrote:
> >>
> >> Hi Stefano,
> >>
> >> On 19/10/2022 02:02, Stefano Stabellini wrote:
> >>>
> >>>
> >>> On Mon, 17 Oct 2022, Stefano Stabellini wrote:
> >>>> It should be
> >>>>
> >>>> BB_NUMBER_THREADS="2"
> >>>>
> >>>> but that worked! Let me a couple of more tests.
> >>>
> >>> I could run successfully a Yocto build test with qemuarm64 as target in
> >>> gitlab-ci, hurray! No size issues, no build time issues, everything was
> >>> fine. See:
> >>> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193051236
> >>> https://gitlab.com/xen-project/people/sstabellini/xen/-/jobs/3193083119
> >>>
> >>> I made the appended changes in top of this series.
> >>>
> >>> - I pushed registry.gitlab.com/xen-project/xen/yocto:kirkstone and
> >>>  registry.gitlab.com/xen-project/xen/yocto:kirkstone-qemuarm64
> >>> - for the gitlab-ci runs, we need to run build-yocto.sh from the copy in
> >>>  xen.git, not from a copy stored inside a container
> >>> - when building the kirkstone-qemuarm64 container the first time
> >>>  (outside of gitlab-ci) I used COPY and took the script from the local
> >>>  xen.git tree
> >>> - after a number of tests, I settled on: BB_NUMBER_THREADS="8" more than
> >>>  this and it breaks on some workstations, please add it
> >>> - I am running the yocto build on arm64 so that we can use the arm64
> >>>  hardware to do it in gitlab-ci
> >>>
> >>> Please feel free to incorporate these changes in your series, and add
> >>> corresponding changes for the qemuarm32 and qemux86 targets.
> >>>
> >>> I am looking forward to it! Almost there!
> >>>
> >>> Cheers,
> >>>
> >>> Stefano
> >>>
> >>>
> >>> diff --git a/automation/build/yocto/build-yocto.sh b/automation/build/yocto/build-yocto.sh
> >>> index 0d31dad607..16f1dcc0a5 100755
> >>> --- a/automation/build/yocto/build-yocto.sh
> >>> +++ b/automation/build/yocto/build-yocto.sh
> >>> @@ -107,6 +107,9 @@ IMAGE_INSTALL:append:pn-xen-image-minimal = " ssh-pregen-hostkeys"
> >>> # Save some disk space
> >>> INHERIT += "rm_work"
> >>>
> >>> +# Reduce number of jobs
> >>> +BB_NUMBER_THREADS="8"
> >>> +
> >>> EOF
> >>>
> >>>     if [ "${do_localsrc}" = "y" ]; then
> >>> diff --git a/automation/build/yocto/kirkstone-qemuarm64.dockerfile b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> >>> index f279a7af92..aea3fc1f3e 100644
> >>> --- a/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> >>> +++ b/automation/build/yocto/kirkstone-qemuarm64.dockerfile
> >>> @@ -16,7 +16,8 @@ ARG target=qemuarm64
> >>>
> >>> # This step can take one to several hours depending on your download bandwith
> >>> # and the speed of your computer
> >>> -RUN /home/$USER_NAME/bin/build-yocto.sh --dump-log $target
> >>> +COPY ./build-yocto.sh /
> >>> +RUN /build-yocto.sh --dump-log $target
> >>>
> >>> FROM $from_image
> >>>
> >>> diff --git a/automation/build/yocto/kirkstone.dockerfile b/automation/build/yocto/kirkstone.dockerfile
> >>> index 367a7863b6..ffbd91aa90 100644
> >>> --- a/automation/build/yocto/kirkstone.dockerfile
> >>> +++ b/automation/build/yocto/kirkstone.dockerfile
> >>> @@ -84,9 +84,6 @@ RUN mkdir -p /home/$USER_NAME/yocto-layers \
> >>>              /home/$USER_NAME/xen && \
> >>>     chown $USER_NAME.$USER_NAME /home/$USER_NAME/*
> >>>
> >>> -# Copy the build script
> >>> -COPY build-yocto.sh /home/$USER_NAME/bin/
> >>> -
> >>> # clone yocto repositories we need
> >>> ARG yocto_version="kirkstone"
> >>> RUN for rep in \
> >>> diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
> >>> index ddc2234faf..4b8bcde252 100644
> >>> --- a/automation/gitlab-ci/build.yaml
> >>> +++ b/automation/gitlab-ci/build.yaml
> >>> @@ -584,6 +584,22 @@ alpine-3.12-gcc-arm64-boot-cpupools:
> >>>     EXTRA_XEN_CONFIG: |
> >>>       CONFIG_BOOT_TIME_CPUPOOLS=y
> >>>
> >>> +yocto-kirkstone-qemuarm64:
> >>> +  stage: build
> >>> +  image: registry.gitlab.com/xen-project/xen/${CONTAINER}
> >>> +  script:
> >>> +    - ./automation/build/yocto/build-yocto.sh -v --log-dir=./logs --xen-dir=`pwd` qemuarm64
> >>> +  variables:
> >>> +    CONTAINER: yocto:kirkstone-qemuarm64
> >>> +  artifacts:
> >>> +    paths:
> >>> +      - '*.log'
> >>> +      - '*/*.log'
> >> The above lines are not needed as the logs/* below will handle them all (logs are only stored in logs/).
> > 
> > Ack
> > 
> >>
> >>> +      - 'logs/*'
> >>> +    when: always
> >>> +  tags:
> >>> +    - arm64
> >>> +
> >> build-yocto.sh performs both build and run actions. I think it'd be better to move this into test.yaml in that case.
> >> The best would be to create one build job (specifying --no-run) in build.yaml and one test job (specifying --no-build) in test.yaml.
> >> This however would probably require marking path build/tmp/deploy/***/qemuarm64 as an build artifact. The question then is
> >> whether having this path would be enough for runqemu (Bertrand's opinion needed).
> > 
> > This will not be enough to run qemu as the qemu binary and its dependencies are in the build artifacts and not in deploy.
> > Splitting the build and run is not a good idea because the size of the artifact between the 2 will be huge.
> > 
> >>
> >> Apart from that there is an aspect of Yocto releases and the containers/tests names.
> >> Yocto needs to be up-to-date in order to properly build Xen+tools.
> >> This basically means that we will need to update the containers once
> >> per Yocto release. The old containers would still need to be stored in our CI container registry
> >> so that we can use CI for older versions of Xen. However, updating the containers would also require
> >> modifying the existing tests (for now we have e.g. yocto-kirkstone-qemuarm64 but in a month we will have
> >> to change them to yocto-langdale-qemuarm64). In a few years time this will result in several CI jobs
> >> that are the same but differ only in name/container. I would thus suggest to name the CI jobs like this:
> >> yocto-qemuarm64 (without yocto release name) and define the top-level YOCTO_CONTAINER variable to store
> >> the current yocto release container. This will solve the issue I described above.
> > 
> > I think we have no other way around this and we will need to have one Yocto release supported by Xen officially so
> > we will have to keep old docker images for old releases of Xen and move to newer versions of Yocto in staging when
> > it is needed.
> > 
> > We have to find a way for gitlab-ci to use the build.yaml contained inside the tree that is to be tested somehow so that gitlab would automatically take the right one.
> > Which means that build.yaml will be different between branches and contain the right version for the current branch.
> > 
> 
> What I suggest is that with each new yocto release, we add new docker container files and push them to registry.
> So we will end up in a registry having e.g. (arm64 as an example):
> - kirkstone-qemuarm64
> - langdale-qemuarm64
> We maintain only the one group of CI jobs whose names are generic (yocto-qemuarm64).
> After adding new containers for a new Yocto release, we modify the YOCTO_RELEASE variable
> to point to the latest yocto release containers.
> 
> test.yaml:
> ...
> # Yocto test jobs
> variables:
>   YOCTO_RELEASE: "kirkstone"
> 
> yocto-qemuarm64:
>   extends: .test-jobs-common
>   script:
>     - ./automation/build/yocto/build-yocto.sh -v --log-dir=./logs --xen-dir=`pwd` qemuarm64
>   variables:
>     CONTAINER: yocto:${YOCTO_RELEASE}-qemuarm64
>   artifacts:
>     paths:
>       - 'logs/*'
>     when: always
>   tags:
>     - arm64
> 
> This means that:
> - on the current staging branch the YOCTO_RELEASE points to the latest containers (for the latest yocto release)
> - on the old stable branches the YOCTO_RELEASE points to the old containers (for the old yocto release).
 
I think that's a good idea.