Message ID: 20210219215838.752547-3-crosa@redhat.com (mailing list archive)
State: New, archived
Series: GitLab Custom Runners and Jobs (was: QEMU Gating CI)
Hi, On 2/19/21 6:58 PM, Cleber Rosa wrote: > To run basic jobs on custom runners, the environment needs to be > properly set up. The most common requirement is having the right > packages installed. > > The playbook introduced here covers the QEMU's project s390x and > aarch64 machines. At the time this is being proposed, those machines > have already had this playbook applied to them. > > Signed-off-by: Cleber Rosa <crosa@redhat.com> > --- > docs/devel/ci.rst | 30 ++++++++++ > scripts/ci/setup/build-environment.yml | 76 ++++++++++++++++++++++++++ > scripts/ci/setup/inventory | 1 + > 3 files changed, 107 insertions(+) > create mode 100644 scripts/ci/setup/build-environment.yml > create mode 100644 scripts/ci/setup/inventory

I tested the playbook on a Fedora 32 (aarch64) machine using containers, as:

    dnf install -y podman podman-docker ansible
    echo "" > inventory
    for ver in 18.04 20.04; do
        name="ubuntu_$(echo $ver | sed 's/\./_/')_runner"
        podman run --rm -d --name "$name" docker.io/library/ubuntu:$ver tail -f /dev/null
        podman exec "$name" sh -c 'apt-get update && apt-get install -y python3'
        echo "$name ansible_connection=docker ansible_python_interpreter=/usr/bin/python3" >> inventory
    done
    ansible-playbook -i inventory build-environment.yml

So, Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>

> > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst > index 585b7bf4b8..a556558435 100644 > --- a/docs/devel/ci.rst > +++ b/docs/devel/ci.rst > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner". > The GitLab CI jobs definition for the custom runners are located under:: > > .gitlab-ci.d/custom-runners.yml > + > +Machine Setup Howto > +------------------- > + > +For all Linux based systems, the setup can be mostly automated by the > +execution of two Ansible playbooks. Start by adding your machines to > +the ``inventory`` file under ``scripts/ci/setup``, such as this:: > + > + fully.qualified.domain > + other.machine.hostname > + > +You may need to set some variables in the inventory file itself. One > +very common need is to tell Ansible to use a Python 3 interpreter on > +those hosts. This would look like:: > + > + fully.qualified.domain ansible_python_interpreter=/usr/bin/python3 > + other.machine.hostname ansible_python_interpreter=/usr/bin/python3 > + > +Build environment > +~~~~~~~~~~~~~~~~~ > + > +The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will > +set up machines with the environment needed to perform builds and run > +QEMU tests. It covers a number of different Linux distributions and > +FreeBSD.
> + > +To run the playbook, execute:: > + > + cd scripts/ci/setup > + ansible-playbook -i inventory build-environment.yml > diff --git a/scripts/ci/setup/build-environment.yml b/scripts/ci/setup/build-environment.yml > new file mode 100644 > index 0000000000..0197e0a48b > --- /dev/null > +++ b/scripts/ci/setup/build-environment.yml > @@ -0,0 +1,76 @@ > +--- > +- name: Installation of basic packages to build QEMU > + hosts: all > + tasks: > + - name: Update apt cache > + apt: > + update_cache: yes > + when: > + - ansible_facts['distribution'] == 'Ubuntu' > + > + - name: Install basic packages to build QEMU on Ubuntu 18.04/20.04 > + package: > + name: > + # Originally from tests/docker/dockerfiles/ubuntu1804.docker > + - ccache > + - clang > + - gcc > + - gettext > + - git > + - glusterfs-common > + - libaio-dev > + - libattr1-dev > + - libbrlapi-dev > + - libbz2-dev > + - libcacard-dev > + - libcap-ng-dev > + - libcurl4-gnutls-dev > + - libdrm-dev > + - libepoxy-dev > + - libfdt-dev > + - libgbm-dev > + - libgtk-3-dev > + - libibverbs-dev > + - libiscsi-dev > + - libjemalloc-dev > + - libjpeg-turbo8-dev > + - liblzo2-dev > + - libncurses5-dev > + - libncursesw5-dev > + - libnfs-dev > + - libnss3-dev > + - libnuma-dev > + - libpixman-1-dev > + - librados-dev > + - librbd-dev > + - librdmacm-dev > + - libsasl2-dev > + - libsdl2-dev > + - libseccomp-dev > + - libsnappy-dev > + - libspice-protocol-dev > + - libssh-dev > + - libusb-1.0-0-dev > + - libusbredirhost-dev > + - libvdeplug-dev > + - libvte-2.91-dev > + - libzstd-dev > + - make > + - ninja-build > + - python3-yaml > + - python3-sphinx > + - sparse > + - xfslibs-dev > + state: present > + when: > + - ansible_facts['distribution'] == 'Ubuntu' > + > + - name: Install packages to build QEMU on Ubuntu 18.04/20.04 on non-s390x > + package: > + name: > + - libspice-server-dev > + - libxen-dev > + state: present > + when: > + - ansible_facts['distribution'] == 'Ubuntu' > + - ansible_facts['architecture'] != 's390x' > diff --git a/scripts/ci/setup/inventory b/scripts/ci/setup/inventory > new file mode 100644 > index 0000000000..2fbb50c4a8 > --- /dev/null > +++ b/scripts/ci/setup/inventory > @@ -0,0 +1 @@ > +localhost
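Before pointing the playbook at real machines, it can also be exercised without touching any host. A minimal sketch using standard ansible-playbook options (this dry-run step is a suggestion, not part of the patch or the thread):

    cd scripts/ci/setup
    # Parse and validate the playbook without contacting any host
    ansible-playbook -i inventory --syntax-check build-environment.yml
    # Check mode: report what would change on each host, changing nothing
    ansible-playbook -i inventory --check build-environment.yml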
Cleber Rosa <crosa@redhat.com> writes: > To run basic jobs on custom runners, the environment needs to be > properly set up. The most common requirement is having the right > packages installed. > > The playbook introduced here covers the QEMU's project s390x and > aarch64 machines. At the time this is being proposed, those machines > have already had this playbook applied to them. > > Signed-off-by: Cleber Rosa <crosa@redhat.com> > --- > docs/devel/ci.rst | 30 ++++++++++ > scripts/ci/setup/build-environment.yml | 76 ++++++++++++++++++++++++++ > scripts/ci/setup/inventory | 1 + > 3 files changed, 107 insertions(+) > create mode 100644 scripts/ci/setup/build-environment.yml > create mode 100644 scripts/ci/setup/inventory > > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst > index 585b7bf4b8..a556558435 100644 > --- a/docs/devel/ci.rst > +++ b/docs/devel/ci.rst > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner". > The GitLab CI jobs definition for the custom runners are located under:: > > .gitlab-ci.d/custom-runners.yml > + > +Machine Setup Howto > +------------------- > + > +For all Linux based systems, the setup can be mostly automated by the > +execution of two Ansible playbooks. Start by adding your machines to > +the ``inventory`` file under ``scripts/ci/setup``, such as this:: > + > + fully.qualified.domain > + other.machine.hostname Is this really needed? Can't the host list be passed in the command line? I find it odd to imagine users wanting to configure whole fleets of runners. > + > +You may need to set some variables in the inventory file itself. One > +very common need is to tell Ansible to use a Python 3 interpreter on > +those hosts. This would look like:: > + > + fully.qualified.domain ansible_python_interpreter=/usr/bin/python3 > + other.machine.hostname ansible_python_interpreter=/usr/bin/python3 > + > +Build environment > +~~~~~~~~~~~~~~~~~ > + > +The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will > +set up machines with the environment needed to perform builds and run > +QEMU tests. It covers a number of different Linux distributions and > +FreeBSD. > + > +To run the playbook, execute:: > + > + cd scripts/ci/setup > + ansible-playbook -i inventory build-environment.yml So I got somewhat there with a direct command line invocation: ansible-playbook -u root -i 192.168.122.24,192.168.122.45 scripts/ci/setup/build-environment.yml -e 'ansible_python_interpreter=/usr/bin/python3' although for some reason a single host -i fails... > diff --git a/scripts/ci/setup/build-environment.yml b/scripts/ci/setup/build-environment.yml > new file mode 100644 > index 0000000000..0197e0a48b > --- /dev/null > +++ b/scripts/ci/setup/build-environment.yml > @@ -0,0 +1,76 @@ > +--- > +- name: Installation of basic packages to build QEMU > + hosts: all > + tasks: > + - name: Update apt cache > + apt: > + update_cache: yes > + when: > + - ansible_facts['distribution'] == 'Ubuntu' So are we limiting to Ubuntu here rather than say a Debian base?
> + > + - name: Install basic packages to build QEMU on Ubuntu 18.04/20.04 > + package: > + name: > + # Originally from tests/docker/dockerfiles/ubuntu1804.docker > + - ccache > + - clang > + - gcc > + - gettext > + - git > + - glusterfs-common > + - libaio-dev > + - libattr1-dev > + - libbrlapi-dev > + - libbz2-dev > + - libcacard-dev > + - libcap-ng-dev > + - libcurl4-gnutls-dev > + - libdrm-dev > + - libepoxy-dev > + - libfdt-dev > + - libgbm-dev > + - libgtk-3-dev > + - libibverbs-dev > + - libiscsi-dev > + - libjemalloc-dev > + - libjpeg-turbo8-dev > + - liblzo2-dev > + - libncurses5-dev > + - libncursesw5-dev > + - libnfs-dev > + - libnss3-dev > + - libnuma-dev > + - libpixman-1-dev > + - librados-dev > + - librbd-dev > + - librdmacm-dev > + - libsasl2-dev > + - libsdl2-dev > + - libseccomp-dev > + - libsnappy-dev > + - libspice-protocol-dev > + - libssh-dev > + - libusb-1.0-0-dev > + - libusbredirhost-dev > + - libvdeplug-dev > + - libvte-2.91-dev > + - libzstd-dev > + - make > + - ninja-build > + - python3-yaml > + - python3-sphinx > + - sparse > + - xfslibs-dev > + state: present > + when: > + - ansible_facts['distribution'] == 'Ubuntu' > + > + - name: Install packages to build QEMU on Ubuntu 18.04/20.04 on non-s390x > + package: > + name: > + - libspice-server-dev > + - libxen-dev > + state: present > + when: > + - ansible_facts['distribution'] == 'Ubuntu' > + - ansible_facts['architecture'] != 's390x' > diff --git a/scripts/ci/setup/inventory b/scripts/ci/setup/inventory > new file mode 100644 > index 0000000000..2fbb50c4a8 > --- /dev/null > +++ b/scripts/ci/setup/inventory > @@ -0,0 +1 @@ > +localhost I'm not sure we should have a default here because it will inevitably cause someone to do something to their machine when trying to setup a runner.
On Tue, Feb 23, 2021 at 02:01:53PM +0000, Alex Bennée wrote: > > Cleber Rosa <crosa@redhat.com> writes: > > > To run basic jobs on custom runners, the environment needs to be > > properly set up. The most common requirement is having the right > > packages installed. > > > > The playbook introduced here covers the QEMU's project s390x and > > aarch64 machines. At the time this is being proposed, those machines > > have already had this playbook applied to them. > > > > Signed-off-by: Cleber Rosa <crosa@redhat.com> > > --- > > docs/devel/ci.rst | 30 ++++++++++ > > scripts/ci/setup/build-environment.yml | 76 ++++++++++++++++++++++++++ > > scripts/ci/setup/inventory | 1 + > > 3 files changed, 107 insertions(+) > > create mode 100644 scripts/ci/setup/build-environment.yml > > create mode 100644 scripts/ci/setup/inventory > > > > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst > > index 585b7bf4b8..a556558435 100644 > > --- a/docs/devel/ci.rst > > +++ b/docs/devel/ci.rst > > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner". > > The GitLab CI jobs definition for the custom runners are located under:: > > > > .gitlab-ci.d/custom-runners.yml > > + > > +Machine Setup Howto > > +------------------- > > + > > +For all Linux based systems, the setup can be mostly automated by the > > +execution of two Ansible playbooks. Start by adding your machines to > > +the ``inventory`` file under ``scripts/ci/setup``, such as this:: > > + > > + fully.qualified.domain > > + other.machine.hostname > > Is this really needed? Can't the host list be passed in the command > line? I find it odd to imagine users wanting to configure whole fleets > of runners. Why not support both, since the playbook execution is not wrapped by anything, giving the option of using either an inventory or direct cmdline invocation seems like the proper way to do it. > > > + > > +You may need to set some variables in the inventory file itself. One > > +very common need is to tell Ansible to use a Python 3 interpreter on > > +those hosts. This would look like:: > > + > > + fully.qualified.domain ansible_python_interpreter=/usr/bin/python3 > > + other.machine.hostname ansible_python_interpreter=/usr/bin/python3 > > + > > +Build environment > > +~~~~~~~~~~~~~~~~~ > > + > > +The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will > > +set up machines with the environment needed to perform builds and run > > +QEMU tests. It covers a number of different Linux distributions and > > +FreeBSD. > > + > > +To run the playbook, execute:: > > + > > + cd scripts/ci/setup > > + ansible-playbook -i inventory build-environment.yml > > So I got somewhat there with a direct command line invocation: > > ansible-playbook -u root -i 192.168.122.24,192.168.122.45 scripts/ci/setup/build-environment.yml -e 'ansible_python_interpreter=/usr/bin/python3' > > although for some reason a single host -i fails... The trick is to end it with a ',' like "-i host1," Erik
Alex Bennée <alex.bennee@linaro.org> writes: > Cleber Rosa <crosa@redhat.com> writes: > >> To run basic jobs on custom runners, the environment needs to be >> properly set up. The most common requirement is having the right >> packages installed. >> <snip> > > So I got somewhat there with a direct command line invocation: > > ansible-playbook -u root -i 192.168.122.24,192.168.122.45 scripts/ci/setup/build-environment.yml -e 'ansible_python_interpreter=/usr/bin/python3' > > although for some reason a single host -i fails... > >> diff --git a/scripts/ci/setup/build-environment.yml b/scripts/ci/setup/build-environment.yml >> new file mode 100644 >> index 0000000000..0197e0a48b >> --- /dev/null >> +++ b/scripts/ci/setup/build-environment.yml >> @@ -0,0 +1,76 @@ >> +--- >> +- name: Installation of basic packages to build QEMU >> + hosts: all >> + tasks: >> + - name: Update apt cache >> + apt: >> + update_cache: yes >> + when: >> + - ansible_facts['distribution'] == 'Ubuntu' > > So are we limiting to Ubuntu here rather than say a Debian base? Also I'm getting: TASK [Update apt cache] ***************************************************************************************************************************************************** fatal: [hackbox-ubuntu-2004]: FAILED! => {"msg": "The conditional check 'ansible_facts['distribution'] == 'Ubuntu'' failed. The error was: error while evaluating conditional (ansible_facts['distribution'] == 'Ubuntu'): 'dict object' has no attribute 'distribution'\n\nThe error appears to have been in '/home/alex/lsrc/qemu.git/scripts/ci/setup/build-environment.yml': line 5, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: Update apt cache\n ^ here\n"} which is odd given that machine is definitely an Ubuntu one.
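The failure above shows the conditional erroring out when the 'distribution' fact is absent, rather than evaluating to false. A guard inside the expression is possible; this is a minimal sketch (the default filter form is an illustration, not something proposed in the thread, and it only masks the error rather than fixing the missing fact):

    # Hypothetical defensive variant of the task: if fact gathering did not
    # populate ansible_facts['distribution'], the comparison becomes false
    # instead of raising "'dict object' has no attribute 'distribution'".
    - name: Update apt cache
      apt:
        update_cache: yes
      when:
        - ansible_facts['distribution'] | default('') == 'Ubuntu'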
Erik Skultety <eskultet@redhat.com> writes: > On Tue, Feb 23, 2021 at 02:01:53PM +0000, Alex Bennée wrote: >> >> Cleber Rosa <crosa@redhat.com> writes: >> >> > To run basic jobs on custom runners, the environment needs to be >> > properly set up. The most common requirement is having the right >> > packages installed. >> > >> > The playbook introduced here covers the QEMU's project s390x and >> > aarch64 machines. At the time this is being proposed, those machines >> > have already had this playbook applied to them. >> > >> > Signed-off-by: Cleber Rosa <crosa@redhat.com> >> > --- >> > docs/devel/ci.rst | 30 ++++++++++ >> > scripts/ci/setup/build-environment.yml | 76 ++++++++++++++++++++++++++ >> > scripts/ci/setup/inventory | 1 + >> > 3 files changed, 107 insertions(+) >> > create mode 100644 scripts/ci/setup/build-environment.yml >> > create mode 100644 scripts/ci/setup/inventory >> > >> > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst >> > index 585b7bf4b8..a556558435 100644 >> > --- a/docs/devel/ci.rst >> > +++ b/docs/devel/ci.rst >> > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner". >> > The GitLab CI jobs definition for the custom runners are located under:: >> > >> > .gitlab-ci.d/custom-runners.yml >> > + >> > +Machine Setup Howto >> > +------------------- >> > + >> > +For all Linux based systems, the setup can be mostly automated by the >> > +execution of two Ansible playbooks. Start by adding your machines to >> > +the ``inventory`` file under ``scripts/ci/setup``, such as this:: >> > + >> > + fully.qualified.domain >> > + other.machine.hostname >> >> Is this really needed? Can't the host list be passed in the command >> line? I find it odd to imagine users wanting to configure whole fleets >> of runners. > > Why not support both, since the playbook execution is not wrapped by anything, > giving the option of using either an inventory or direct cmdline invocation > seems like the proper way to do it. Sure - and I dare say people used to managing fleets of servers will want to do it properly but in the first instance let's provide the simple command line option so a user can get up and running without also ensuring files are in the correct format. > >> >> > + >> > +You may need to set some variables in the inventory file itself. One >> > +very common need is to tell Ansible to use a Python 3 interpreter on >> > +those hosts. This would look like:: >> > + >> > + fully.qualified.domain ansible_python_interpreter=/usr/bin/python3 >> > + other.machine.hostname ansible_python_interpreter=/usr/bin/python3 >> > + >> > +Build environment >> > +~~~~~~~~~~~~~~~~~ >> > + >> > +The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will >> > +set up machines with the environment needed to perform builds and run >> > +QEMU tests. It covers a number of different Linux distributions and >> > +FreeBSD. >> > + >> > +To run the playbook, execute:: >> > + >> > + cd scripts/ci/setup >> > + ansible-playbook -i inventory build-environment.yml >> >> So I got somewhat there with a direct command line invocation: >> >> ansible-playbook -u root -i 192.168.122.24,192.168.122.45 scripts/ci/setup/build-environment.yml -e 'ansible_python_interpreter=/usr/bin/python3' >> >> although for some reason a single host -i fails... > > The trick is to end it with a ',' like "-i host1," Ahh found it thanks.
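For quick reference, the working single-host form of the invocation discussed above looks like this (user, host address and interpreter setting taken from Alex's example; a sketch rather than documented project usage):

    # The trailing comma makes ansible-playbook treat the -i argument as an
    # inline host list instead of a path to an inventory file.
    ansible-playbook -u root -i 192.168.122.24, \
        -e 'ansible_python_interpreter=/usr/bin/python3' \
        scripts/ci/setup/build-environment.yml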
On Tue, Feb 23, 2021 at 02:01:53PM +0000, Alex Bennée wrote: > > Cleber Rosa <crosa@redhat.com> writes: > > > To run basic jobs on custom runners, the environment needs to be > > properly set up. The most common requirement is having the right > > packages installed. > > > > The playbook introduced here covers the QEMU's project s390x and > > aarch64 machines. At the time this is being proposed, those machines > > have already had this playbook applied to them. > > > > Signed-off-by: Cleber Rosa <crosa@redhat.com> > > --- > > docs/devel/ci.rst | 30 ++++++++++ > > scripts/ci/setup/build-environment.yml | 76 ++++++++++++++++++++++++++ > > scripts/ci/setup/inventory | 1 + > > 3 files changed, 107 insertions(+) > > create mode 100644 scripts/ci/setup/build-environment.yml > > create mode 100644 scripts/ci/setup/inventory > > > > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst > > index 585b7bf4b8..a556558435 100644 > > --- a/docs/devel/ci.rst > > +++ b/docs/devel/ci.rst > > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner". > > The GitLab CI jobs definition for the custom runners are located under:: > > > > .gitlab-ci.d/custom-runners.yml > > + > > +Machine Setup Howto > > +------------------- > > + > > +For all Linux based systems, the setup can be mostly automated by the > > +execution of two Ansible playbooks. Start by adding your machines to > > +the ``inventory`` file under ``scripts/ci/setup``, such as this:: > > + > > + fully.qualified.domain > > + other.machine.hostname > > Is this really needed? Can't the host list be passed in the command > line? I find it odd to imagine users wanting to configure whole fleets > of runners. > No, it's not needed. But, in my experience, it's the most common way people use ansible-playbook. As with most tools QEMU relies on, there are many different ways of using them. IMO documenting more than one way to perform the same task makes the documentation unclear. > > + > > +You may need to set some variables in the inventory file itself. One > > +very common need is to tell Ansible to use a Python 3 interpreter on > > +those hosts. This would look like:: > > + > > + fully.qualified.domain ansible_python_interpreter=/usr/bin/python3 > > + other.machine.hostname ansible_python_interpreter=/usr/bin/python3 > > + > > +Build environment > > +~~~~~~~~~~~~~~~~~ > > + > > +The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will > > +set up machines with the environment needed to perform builds and run > > +QEMU tests. It covers a number of different Linux distributions and > > +FreeBSD. > > + > > +To run the playbook, execute:: > > + > > + cd scripts/ci/setup > > + ansible-playbook -i inventory build-environment.yml > > So I got somewhat there with a direct command line invocation: > > ansible-playbook -u root -i 192.168.122.24,192.168.122.45 scripts/ci/setup/build-environment.yml -e 'ansible_python_interpreter=/usr/bin/python3' > Yes, and the "-e" is another example of the multiple ways to achieve the same task. > although for some reason a single host -i fails...
> > > diff --git a/scripts/ci/setup/build-environment.yml b/scripts/ci/setup/build-environment.yml It requires a comma separated list, even if it's a list with a single item: https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html#cmdoption-ansible-playbook-i > > new file mode 100644 > > index 0000000000..0197e0a48b > > --- /dev/null > > +++ b/scripts/ci/setup/build-environment.yml > > @@ -0,0 +1,76 @@ > > +--- > > +- name: Installation of basic packages to build QEMU > > + hosts: all > > + tasks: > > + - name: Update apt cache > > + apt: > > + update_cache: yes > > + when: > > + - ansible_facts['distribution'] == 'Ubuntu' > > So are we limiting to Ubuntu here rather than say a Debian base? > You have a point, because this would certainly work and be applicable to Debian systems too. But, this is a new addition on v5, and I'm limiting this patch to the machines that are available/connected right now to the QEMU project on GitLab. I can change that to "distribution_family == Debian" if you think it's a good idea. But IMO it'd make more sense for a patch introducing the package list for Debian systems to change that. > > + > > + - name: Install basic packages to build QEMU on Ubuntu 18.04/20.04 > > + package: > > + name: > > + # Originally from tests/docker/dockerfiles/ubuntu1804.docker > > + - ccache > > + - clang > > + - gcc > > + - gettext > > + - git > > + - glusterfs-common > > + - libaio-dev > > + - libattr1-dev > > + - libbrlapi-dev > > + - libbz2-dev > > + - libcacard-dev > > + - libcap-ng-dev > > + - libcurl4-gnutls-dev > > + - libdrm-dev > > + - libepoxy-dev > > + - libfdt-dev > > + - libgbm-dev > > + - libgtk-3-dev > > + - libibverbs-dev > > + - libiscsi-dev > > + - libjemalloc-dev > > + - libjpeg-turbo8-dev > > + - liblzo2-dev > > + - libncurses5-dev > > + - libncursesw5-dev > > + - libnfs-dev > > + - libnss3-dev > > + - libnuma-dev > > + - libpixman-1-dev > > + - librados-dev > > + - librbd-dev > > + - librdmacm-dev > > + - libsasl2-dev > > + - libsdl2-dev > > + - libseccomp-dev > > + - libsnappy-dev > > + - libspice-protocol-dev > > + - libssh-dev > > + - libusb-1.0-0-dev > > + - libusbredirhost-dev > > + - libvdeplug-dev > > + - libvte-2.91-dev > > + - libzstd-dev > > + - make > > + - ninja-build > > + - python3-yaml > > + - python3-sphinx > > + - sparse > > + - xfslibs-dev > > + state: present > > + when: > > + - ansible_facts['distribution'] == 'Ubuntu' > > + > > + - name: Install packages to build QEMU on Ubuntu 18.04/20.04 on non-s390x > > + package: > > + name: > > + - libspice-server-dev > > + - libxen-dev > > + state: present > > + when: > > + - ansible_facts['distribution'] == 'Ubuntu' > > + - ansible_facts['architecture'] != 's390x' > > diff --git a/scripts/ci/setup/inventory b/scripts/ci/setup/inventory > > new file mode 100644 > > index 0000000000..2fbb50c4a8 > > --- /dev/null > > +++ b/scripts/ci/setup/inventory > > @@ -0,0 +1 @@ > > +localhost > > I'm not sure we should have a default here because it will inevitably > cause someone to do something to their machine when trying to setup a > runner. > Fair enough. Then I see two options: 1) follow the vars.yml.template example and only ship a inventory.template file 2) use a placeholder with an impossible hostname such as "my-qemu-runner.example.org" or "your-host-name-here" > -- > Alex Bennée > Let me know what you think is more reasonable, and thanks for the review! Regards, - Cleber.
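For concreteness, the family-wide conditional being discussed would presumably look like the sketch below. Note that the actual Ansible fact is os_family (whose value is 'Debian' on both Debian and Ubuntu hosts), not 'distribution_family'; this widening is hypothetical and untested against the Ubuntu-specific package list:

    # Hypothetical: match any Debian-family host instead of Ubuntu only
    - name: Update apt cache
      apt:
        update_cache: yes
      when:
        - ansible_facts['os_family'] == 'Debian'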
On Tue, Feb 23, 2021 at 03:51:33PM +0100, Erik Skultety wrote: > On Tue, Feb 23, 2021 at 02:01:53PM +0000, Alex Bennée wrote: > > > > Cleber Rosa <crosa@redhat.com> writes: > > > > > To run basic jobs on custom runners, the environment needs to be > > > properly set up. The most common requirement is having the right > > > packages installed. > > > > > > The playbook introduced here covers the QEMU's project s390x and > > > aarch64 machines. At the time this is being proposed, those machines > > > have already had this playbook applied to them. > > > > > > Signed-off-by: Cleber Rosa <crosa@redhat.com> > > > --- > > > docs/devel/ci.rst | 30 ++++++++++ > > > scripts/ci/setup/build-environment.yml | 76 ++++++++++++++++++++++++++ > > > scripts/ci/setup/inventory | 1 + > > > 3 files changed, 107 insertions(+) > > > create mode 100644 scripts/ci/setup/build-environment.yml > > > create mode 100644 scripts/ci/setup/inventory > > > > > > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst > > > index 585b7bf4b8..a556558435 100644 > > > --- a/docs/devel/ci.rst > > > +++ b/docs/devel/ci.rst > > > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner". > > > The GitLab CI jobs definition for the custom runners are located under:: > > > > > > .gitlab-ci.d/custom-runners.yml > > > + > > > +Machine Setup Howto > > > +------------------- > > > + > > > +For all Linux based systems, the setup can be mostly automated by the > > > +execution of two Ansible playbooks. Start by adding your machines to > > > +the ``inventory`` file under ``scripts/ci/setup``, such as this:: > > > + > > > + fully.qualified.domain > > > + other.machine.hostname > > > > Is this really needed? Can't the host list be passed in the command > > line? I find it odd to imagine users wanting to configure whole fleets > > of runners. > > Why not support both, since the playbook execution is not wrapped by anything, > giving the option of using either an inventory or direct cmdline invocation > seems like the proper way to do it. > Well, these two (and possibly many others) are supported by ansible-playbook. I don't think we should document more than one though, as it leads to more confusing documentation. > > > > > > + > > > +You may need to set some variables in the inventory file itself. One > > > +very common need is to tell Ansible to use a Python 3 interpreter on > > > +those hosts. This would look like:: > > > + > > > + fully.qualified.domain ansible_python_interpreter=/usr/bin/python3 > > > + other.machine.hostname ansible_python_interpreter=/usr/bin/python3 > > > + > > > +Build environment > > > +~~~~~~~~~~~~~~~~~ > > > + > > > +The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will > > > +set up machines with the environment needed to perform builds and run > > > +QEMU tests. It covers a number of different Linux distributions and > > > +FreeBSD. > > > + > > > +To run the playbook, execute:: > > > + > > > + cd scripts/ci/setup > > > + ansible-playbook -i inventory build-environment.yml > > > > So I got somewhat there with a direct command line invocation: > > > > ansible-playbook -u root -i 192.168.122.24,192.168.122.45 scripts/ci/setup/build-environment.yml -e 'ansible_python_interpreter=/usr/bin/python3' > > > > although for some reason a single host -i fails... > > The trick is to end it with a ',' like "-i host1," > Yep, that is the trick! A weird one nevertheless... :) > Erik Thanks for the review and comments so far Erik! Best, - Cleber.
On Tue, Feb 23, 2021 at 03:17:24PM +0000, Alex Bennée wrote: > > Erik Skultety <eskultet@redhat.com> writes: > > > On Tue, Feb 23, 2021 at 02:01:53PM +0000, Alex Bennée wrote: > >> > >> Cleber Rosa <crosa@redhat.com> writes: > >> > >> > To run basic jobs on custom runners, the environment needs to be > >> > properly set up. The most common requirement is having the right > >> > packages installed. > >> > > >> > The playbook introduced here covers the QEMU's project s390x and > >> > aarch64 machines. At the time this is being proposed, those machines > >> > have already had this playbook applied to them. > >> > > >> > Signed-off-by: Cleber Rosa <crosa@redhat.com> > >> > --- > >> > docs/devel/ci.rst | 30 ++++++++++ > >> > scripts/ci/setup/build-environment.yml | 76 ++++++++++++++++++++++++++ > >> > scripts/ci/setup/inventory | 1 + > >> > 3 files changed, 107 insertions(+) > >> > create mode 100644 scripts/ci/setup/build-environment.yml > >> > create mode 100644 scripts/ci/setup/inventory > >> > > >> > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst > >> > index 585b7bf4b8..a556558435 100644 > >> > --- a/docs/devel/ci.rst > >> > +++ b/docs/devel/ci.rst > >> > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner". > >> > The GitLab CI jobs definition for the custom runners are located under:: > >> > > >> > .gitlab-ci.d/custom-runners.yml > >> > + > >> > +Machine Setup Howto > >> > +------------------- > >> > + > >> > +For all Linux based systems, the setup can be mostly automated by the > >> > +execution of two Ansible playbooks. Start by adding your machines to > >> > +the ``inventory`` file under ``scripts/ci/setup``, such as this:: > >> > + > >> > + fully.qualified.domain > >> > + other.machine.hostname > >> > >> Is this really needed? Can't the host list be passed in the command > >> line? I find it odd to imagine users wanting to configure whole fleets > >> of runners. > > > > Why not support both, since the playbook execution is not wrapped by anything, > > giving the option of using either an inventory or direct cmdline invocation > > seems like the proper way to do it. > > Sure - and I dare say people used to managing fleets of servers will > want to do it properly but in the first instance let's provide the simple > command line option so a user can get up and running without also > ensuring files are in the correct format. > Like I said before, I'm strongly in favor of more straightforward documentation, instead of documenting multiple ways to perform the same task. I clearly believe that writing the inventory file (which will later be used for the second gitlab-runner playbook) is the best choice here. Do you think the command line approach is clearer? Should we switch? Regards, Cleber.
On Tue, Feb 23, 2021 at 03:01:50PM +0000, Alex Bennée wrote: > > Alex Bennée <alex.bennee@linaro.org> writes: > > > Cleber Rosa <crosa@redhat.com> writes: > > > >> To run basic jobs on custom runners, the environment needs to be > >> properly set up. The most common requirement is having the right > >> packages installed. > >> > <snip> > > > > So I got somewhat there with a direct command line invocation: > > > > ansible-playbook -u root -i 192.168.122.24,192.168.122.45 scripts/ci/setup/build-environment.yml -e 'ansible_python_interpreter=/usr/bin/python3' > > > > although for some reason a single host -i fails... > > > >> diff --git a/scripts/ci/setup/build-environment.yml b/scripts/ci/setup/build-environment.yml > >> new file mode 100644 > >> index 0000000000..0197e0a48b > >> --- /dev/null > >> +++ b/scripts/ci/setup/build-environment.yml > >> @@ -0,0 +1,76 @@ > >> +--- > >> +- name: Installation of basic packages to build QEMU > >> + hosts: all > >> + tasks: > >> + - name: Update apt cache > >> + apt: > >> + update_cache: yes > >> + when: > >> + - ansible_facts['distribution'] == 'Ubuntu' > > > > So are we limiting to Ubuntu here rather than say a Debian base? > > Also I'm getting: > > TASK [Update apt cache] ***************************************************************************************************************************************************** > fatal: [hackbox-ubuntu-2004]: FAILED! => {"msg": "The conditional check 'ansible_facts['distribution'] == 'Ubuntu'' failed. The error was: error while evaluating conditional (ansible_facts['distribution'] == 'Ubuntu'): 'dict object' has no attribute 'distribution'\n\nThe error appears to have been in '/home/alex/lsrc/qemu.git/scripts/ci/setup/build-environment.yml': line 5, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: Update apt cache\n ^ here\n"} > > which is odd given that machine is definitely an Ubuntu one. > It's definitely odd. This is what I get on a fresh machine: TASK [Update apt cache] ************************************************************************************************************************* [WARNING]: Updating cache and auto-installing missing dependency: python3-apt ok: [localhost] Could you please let me know the output of: $ ansible -m setup -u $YOUR_USERNAME -i $HOSTNAME, all | grep ansible_distribution Thanks, - Cleber.
Cleber Rosa <crosa@redhat.com> writes: > On Tue, Feb 23, 2021 at 02:01:53PM +0000, Alex Bennée wrote: >> >> Cleber Rosa <crosa@redhat.com> writes: >> >> > To run basic jobs on custom runners, the environment needs to be >> > properly set up. The most common requirement is having the right >> > packages installed. >> > >> > The playbook introduced here covers the QEMU's project s390x and >> > aarch64 machines. At the time this is being proposed, those machines >> > have already had this playbook applied to them. >> > >> > Signed-off-by: Cleber Rosa <crosa@redhat.com> >> > --- >> > docs/devel/ci.rst | 30 ++++++++++ >> > scripts/ci/setup/build-environment.yml | 76 ++++++++++++++++++++++++++ >> > scripts/ci/setup/inventory | 1 + >> > 3 files changed, 107 insertions(+) >> > create mode 100644 scripts/ci/setup/build-environment.yml >> > create mode 100644 scripts/ci/setup/inventory >> > >> > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst >> > index 585b7bf4b8..a556558435 100644 >> > --- a/docs/devel/ci.rst >> > +++ b/docs/devel/ci.rst >> > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner". >> > The GitLab CI jobs definition for the custom runners are located under:: >> > >> > .gitlab-ci.d/custom-runners.yml >> > + >> > +Machine Setup Howto >> > +------------------- >> > + >> > +For all Linux based systems, the setup can be mostly automated by the >> > +execution of two Ansible playbooks. Start by adding your machines to >> > +the ``inventory`` file under ``scripts/ci/setup``, such as this:: >> > + >> > + fully.qualified.domain >> > + other.machine.hostname >> >> Is this really needed? Can't the host list be passed in the command >> line? I find it off to imagine users wanting to configure whole fleets >> of runners. >> > > No, it's not needed. > > But, in my experience, it's the most common way people use > ansible-playbook. As with all most tools QEMU relies on, that are > many different ways of using them. IMO documenting more than one way > to perform the same task makes the documentation unclear. > >> > + >> > +You may need to set some variables in the inventory file itself. One >> > +very common need is to tell Ansible to use a Python 3 interpreter on >> > +those hosts. This would look like:: >> > + >> > + fully.qualified.domain ansible_python_interpreter=/usr/bin/python3 >> > + other.machine.hostname ansible_python_interpreter=/usr/bin/python3 >> > + >> > +Build environment >> > +~~~~~~~~~~~~~~~~~ >> > + >> > +The ``scripts/ci/setup/build-environment.yml`` Ansible playbook will >> > +set up machines with the environment needed to perform builds and run >> > +QEMU tests. It covers a number of different Linux distributions and >> > +FreeBSD. >> > + >> > +To run the playbook, execute:: >> > + >> > + cd scripts/ci/setup >> > + ansible-playbook -i inventory build-environment.yml >> >> So I got somewhat there with a direct command line invocation: >> >> ansible-playbook -u root -i 192.168.122.24,192.168.122.45 scripts/ci/setup/build-environment.yml -e 'ansible_python_interpreter=/usr/bin/python3' >> > > Yes, and the "-e" is another example of the multiple ways to achieve > the same task. > >> although for some reason a single host -i fails... 
>> >> > diff --git a/scripts/ci/setup/build-environment.yml b/scripts/ci/setup/build-environment.yml > > It requires a comma separated list, even if it's a list with a single > item: > > https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html#cmdoption-ansible-playbook-i > >> > new file mode 100644 >> > index 0000000000..0197e0a48b >> > --- /dev/null >> > +++ b/scripts/ci/setup/build-environment.yml >> > @@ -0,0 +1,76 @@ >> > +--- >> > +- name: Installation of basic packages to build QEMU >> > + hosts: all >> > + tasks: >> > + - name: Update apt cache >> > + apt: >> > + update_cache: yes >> > + when: >> > + - ansible_facts['distribution'] == 'Ubuntu' >> >> So are we limiting to Ubuntu here rather than say a Debian base? >> > > You have a point, because this would certainly work and be applicable > to Debian systems too. But, this is a new addition on v5, and I'm > limiting this patch to the machines that are available/connected right > now to the QEMU project on GitLab. > > I can change that to "distribution_family == Debian" if you think > it's a good idea. But IMO it'd make more sense for a patch > introducing the package list for Debian systems to change that. > >> > + >> > + - name: Install basic packages to build QEMU on Ubuntu 18.04/20.04 >> > + package: >> > + name: >> > + # Originally from tests/docker/dockerfiles/ubuntu1804.docker >> > + - ccache >> > + - clang >> > + - gcc >> > + - gettext >> > + - git >> > + - glusterfs-common >> > + - libaio-dev >> > + - libattr1-dev >> > + - libbrlapi-dev >> > + - libbz2-dev >> > + - libcacard-dev >> > + - libcap-ng-dev >> > + - libcurl4-gnutls-dev >> > + - libdrm-dev >> > + - libepoxy-dev >> > + - libfdt-dev >> > + - libgbm-dev >> > + - libgtk-3-dev >> > + - libibverbs-dev >> > + - libiscsi-dev >> > + - libjemalloc-dev >> > + - libjpeg-turbo8-dev >> > + - liblzo2-dev >> > + - libncurses5-dev >> > + - libncursesw5-dev >> > + - libnfs-dev >> > + - libnss3-dev >> > + - libnuma-dev >> > + - libpixman-1-dev >> > + - librados-dev >> > + - librbd-dev >> > + - librdmacm-dev >> > + - libsasl2-dev >> > + - libsdl2-dev >> > + - libseccomp-dev >> > + - libsnappy-dev >> > + - libspice-protocol-dev >> > + - libssh-dev >> > + - libusb-1.0-0-dev >> > + - libusbredirhost-dev >> > + - libvdeplug-dev >> > + - libvte-2.91-dev >> > + - libzstd-dev >> > + - make >> > + - ninja-build >> > + - python3-yaml >> > + - python3-sphinx >> > + - sparse >> > + - xfslibs-dev >> > + state: present >> > + when: >> > + - ansible_facts['distribution'] == 'Ubuntu' >> > + >> > + - name: Install packages to build QEMU on Ubuntu 18.04/20.04 on non-s390x >> > + package: >> > + name: >> > + - libspice-server-dev >> > + - libxen-dev >> > + state: present >> > + when: >> > + - ansible_facts['distribution'] == 'Ubuntu' >> > + - ansible_facts['architecture'] != 's390x' >> > diff --git a/scripts/ci/setup/inventory b/scripts/ci/setup/inventory >> > new file mode 100644 >> > index 0000000000..2fbb50c4a8 >> > --- /dev/null >> > +++ b/scripts/ci/setup/inventory >> > @@ -0,0 +1 @@ >> > +localhost >> >> I'm not sure we should have a default here because it will inevitably >> cause someone to do something to their machine when trying to setup a >> runner. >> > > Fair enough. Then I see two options: > > 1) follow the vars.yml.template example and only ship a > inventory.template file I'd go with the template approach (see the sketch after this message), that way someone's local hacks can at least live in their source tree without being overly bothered by checkouts and updates.
> > 2) use a placeholder with an impossible hostname such as > "my-qemu-runner.example.org" or "your-host-name-here" > >> -- >> Alex Bennée >> > > Let me know what you think is more reasonable, and thanks for the > review! > > Regards, > - Cleber.
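A plausible shape for such a template, extrapolated from the documentation hunk in the patch and the vars.yml.template convention mentioned above (the file name and placeholder hosts are illustrative only):

    # Hypothetical scripts/ci/setup/inventory.template: copy it to "inventory"
    # and replace the placeholders with real machines before running
    # "ansible-playbook -i inventory build-environment.yml".
    fully.qualified.domain ansible_python_interpreter=/usr/bin/python3
    other.machine.hostname ansible_python_interpreter=/usr/bin/python3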
Cleber Rosa <crosa@redhat.com> writes: > On Tue, Feb 23, 2021 at 03:17:24PM +0000, Alex Bennée wrote: >> >> Erik Skultety <eskultet@redhat.com> writes: >> >> > On Tue, Feb 23, 2021 at 02:01:53PM +0000, Alex Bennée wrote: >> >> >> >> Cleber Rosa <crosa@redhat.com> writes: >> >> >> >> > To run basic jobs on custom runners, the environment needs to be >> >> > properly set up. The most common requirement is having the right >> >> > packages installed. >> >> > >> >> > The playbook introduced here covers the QEMU's project s390x and >> >> > aarch64 machines. At the time this is being proposed, those machines >> >> > have already had this playbook applied to them. >> >> > >> >> > Signed-off-by: Cleber Rosa <crosa@redhat.com> >> >> > --- >> >> > docs/devel/ci.rst | 30 ++++++++++ >> >> > scripts/ci/setup/build-environment.yml | 76 ++++++++++++++++++++++++++ >> >> > scripts/ci/setup/inventory | 1 + >> >> > 3 files changed, 107 insertions(+) >> >> > create mode 100644 scripts/ci/setup/build-environment.yml >> >> > create mode 100644 scripts/ci/setup/inventory >> >> > >> >> > diff --git a/docs/devel/ci.rst b/docs/devel/ci.rst >> >> > index 585b7bf4b8..a556558435 100644 >> >> > --- a/docs/devel/ci.rst >> >> > +++ b/docs/devel/ci.rst >> >> > @@ -26,3 +26,33 @@ gitlab-runner, is called a "custom runner". >> >> > The GitLab CI jobs definition for the custom runners are located under:: >> >> > >> >> > .gitlab-ci.d/custom-runners.yml >> >> > + >> >> > +Machine Setup Howto >> >> > +------------------- >> >> > + >> >> > +For all Linux based systems, the setup can be mostly automated by the >> >> > +execution of two Ansible playbooks. Start by adding your machines to >> >> > +the ``inventory`` file under ``scripts/ci/setup``, such as this:: >> >> > + >> >> > + fully.qualified.domain >> >> > + other.machine.hostname >> >> >> >> Is this really needed? Can't the host list be passed in the command >> >> line? I find it odd to imagine users wanting to configure whole fleets >> >> of runners. >> > >> > Why not support both, since the playbook execution is not wrapped by anything, >> > giving the option of using either an inventory or direct cmdline invocation >> > seems like the proper way to do it. >> >> Sure - and I dare say people used to managing fleets of servers will >> want to do it properly but in the first instance let's provide the simple >> command line option so a user can get up and running without also >> ensuring files are in the correct format. >> > > Like I said before, I'm strongly in favor of more straightforward > documentation, instead of documenting multiple ways to perform the > same task. I clearly believe that writing the inventory file (which > will later be used for the second gitlab-runner playbook) is the best > choice here. > > Do you think the command line approach is clearer? Should we switch? I think the command line is $LESS_STEPS for a user to follow but I'm happy to defer to the inventory approach if it's more idiomatic. I'd rather avoid users having their pristine source trees being polluted with local customisations they have to keep stashing or losing. > > Regards, > Cleber.
Cleber Rosa <crosa@redhat.com> writes: > On Tue, Feb 23, 2021 at 03:01:50PM +0000, Alex Bennée wrote: >> >> Alex Bennée <alex.bennee@linaro.org> writes: >> >> > Cleber Rosa <crosa@redhat.com> writes: >> > >> >> To run basic jobs on custom runners, the environment needs to be >> >> properly set up. The most common requirement is having the right >> >> packages installed. >> >> >> <snip> >> > >> > So I got somewhat there with a direct command line invocation: >> > >> > ansible-playbook -u root -i 192.168.122.24,192.168.122.45 scripts/ci/setup/build-environment.yml -e 'ansible_python_interpreter=/usr/bin/python3' >> > >> > although for some reason a single host -i fails... >> > >> >> diff --git a/scripts/ci/setup/build-environment.yml b/scripts/ci/setup/build-environment.yml >> >> new file mode 100644 >> >> index 0000000000..0197e0a48b >> >> --- /dev/null >> >> +++ b/scripts/ci/setup/build-environment.yml >> >> @@ -0,0 +1,76 @@ >> >> +--- >> >> +- name: Installation of basic packages to build QEMU >> >> + hosts: all >> >> + tasks: >> >> + - name: Update apt cache >> >> + apt: >> >> + update_cache: yes >> >> + when: >> >> + - ansible_facts['distribution'] == 'Ubuntu' >> > >> > So are we limiting to Ubuntu here rather than say a Debian base? >> >> Also I'm getting: >> >> TASK [Update apt cache] ***************************************************************************************************************************************************** >> fatal: [hackbox-ubuntu-2004]: FAILED! => {"msg": "The conditional check 'ansible_facts['distribution'] == 'Ubuntu'' failed. The error was: error while evaluating conditional (ansible_facts['distribution'] == 'Ubuntu'): 'dict object' has no attribute 'distribution'\n\nThe error appears to have been in '/home/alex/lsrc/qemu.git/scripts/ci/setup/build-environment.yml': line 5, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: Update apt cache\n ^ here\n"} >> >> which is odd given that machine is definitely an Ubuntu one. >> > > It's defintely odd. 
This is what I get on a fresh machine: > > TASK [Update apt cache] ************************************************************************************************************************* > [WARNING]: Updating cache and auto-installing missing dependency: python3-apt > ok: [localhost] > > Could you please let me know the output of: > > $ ansible -m setup -u $YOUR_USERNAME -i $HOSTNAME, all | grep > ansible_distribution The key doesn't exist: hackbox-ubuntu-2004 | SUCCESS => { "ansible_facts": { "ansible_all_ipv4_addresses": [ "192.168.122.170" ], "ansible_all_ipv6_addresses": [ "fe80::5054:ff:fe54:7cfe" ], "ansible_apparmor": { "status": "enabled" }, "ansible_architecture": "x86_64", "ansible_bios_date": "04/01/2014", "ansible_bios_version": "1.10.2-1ubuntu1", "ansible_cmdline": { "BOOT_IMAGE": "/vmlinuz-5.4.0-65-generic", "maybe-ubiquity": true, "ro": true, "root": "/dev/mapper/ubuntu--vg-ubuntu--lv" }, "ansible_date_time": { "date": "2021-02-23", "day": "23", "epoch": "1614104601", "hour": "18", "iso8601": "2021-02-23T18:23:21Z", "iso8601_basic": "20210223T182321857461", "iso8601_basic_short": "20210223T182321", "iso8601_micro": "2021-02-23T18:23:21.857529Z", "minute": "23", "month": "02", "second": "21", "time": "18:23:21", "tz": "UTC", "tz_offset": "+0000", "weekday": "Tuesday", "weekday_number": "2", "weeknumber": "08", "year": "2021" }, "ansible_default_ipv4": { "address": "192.168.122.170", "alias": "enp1s0", "broadcast": "192.168.122.255", "gateway": "192.168.122.1", "interface": "enp1s0", "macaddress": "52:54:00:54:7c:fe", "mtu": 1500, "netmask": "255.255.255.0", "network": "192.168.122.0", "type": "ether" }, "ansible_default_ipv6": {}, "ansible_device_links": { "ids": { "dm-0": [ "dm-name-ubuntu--vg-ubuntu--lv", "dm-uuid-LVM-filR1BfuX6Mpp9J7CP9cbVsTT2ICh7Apc9qZsFohnsqycocacS0Sm6HAhjTBEAkq" ], "sda": [ "scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1" ], "sda1": [ "scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part1" ], "sda2": [ "scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part2" ], "sda3": [ "lvm-pv-uuid-agDdyQ-V5gQ-aaov-933l-SFAL-0rmD-SlOkYy", "scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part3" ], "sr0": [ "ata-QEMU_DVD-ROM_QM00001" ] }, "labels": {}, "masters": { "sda3": [ "dm-0" ] }, "uuids": { "dm-0": [ "291656fe-bd87-484c-b4a9-4453471a17e8" ], "sda2": [ "45018994-9625-44ad-877a-3980bcf943a3" ] } }, "ansible_devices": { "dm-0": { "holders": [], "host": "", "links": { "ids": [ "dm-name-ubuntu--vg-ubuntu--lv", "dm-uuid-LVM-filR1BfuX6Mpp9J7CP9cbVsTT2ICh7Apc9qZsFohnsqycocacS0Sm6HAhjTBEAkq" ], "labels": [], "masters": [], "uuids": [ "291656fe-bd87-484c-b4a9-4453471a17e8" ] }, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "4096", "vendor": null, "virtual": 1 }, "loop0": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "143120", "sectorsize": "512", "size": "69.88 MB", "support_discard": "4096", "vendor": null, "virtual": 1 }, "loop1": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "0", "sectorsize": "512", "size": 
"0.00 Bytes", "support_discard": "4096", "vendor": null, "virtual": 1 }, "loop2": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "0", "sectorsize": "512", "size": "0.00 Bytes", "support_discard": "4096", "vendor": null, "virtual": 1 }, "loop3": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "113424", "sectorsize": "512", "size": "55.38 MB", "support_discard": "4096", "vendor": null, "virtual": 1 }, "loop4": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "63672", "sectorsize": "512", "size": "31.09 MB", "support_discard": "4096", "vendor": null, "virtual": 1 }, "loop5": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "142872", "sectorsize": "512", "size": "69.76 MB", "support_discard": "4096", "vendor": null, "virtual": 1 }, "loop6": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "113592", "sectorsize": "512", "size": "55.46 MB", "support_discard": "4096", "vendor": null, "virtual": 1 }, "loop7": { "holders": [], "host": "", "links": { "ids": [], "labels": [], "masters": [], "uuids": [] }, "model": null, "partitions": {}, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "63664", "sectorsize": "512", "size": "31.09 MB", "support_discard": "4096", "vendor": null, "virtual": 1 }, "sda": { "holders": [], "host": "SCSI storage controller: Broadcom / LSI 53c895a", "links": { "ids": [ "scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1" ], "labels": [], "masters": [], "uuids": [] }, "model": "QEMU HARDDISK", "partitions": { "sda1": { "holders": [], "links": { "ids": [ "scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part1" ], "labels": [], "masters": [], "uuids": [] }, "sectors": "2048", "sectorsize": 512, "size": "1.00 MB", "start": "2048", "uuid": null }, "sda2": { "holders": [], "links": { "ids": [ "scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part2" ], "labels": [], "masters": [], "uuids": [ "45018994-9625-44ad-877a-3980bcf943a3" ] }, "sectors": "2097152", "sectorsize": 512, "size": "1.00 GB", "start": "4096", "uuid": "45018994-9625-44ad-877a-3980bcf943a3" }, "sda3": { "holders": [ "ubuntu--vg-ubuntu--lv" ], "links": { "ids": [ "lvm-pv-uuid-agDdyQ-V5gQ-aaov-933l-SFAL-0rmD-SlOkYy", "scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part3" ], "labels": [], "masters": [ "dm-0" ], "uuids": [] }, "sectors": "81782784", "sectorsize": 512, "size": "39.00 GB", "start": "2101248", "uuid": null } }, "removable": "0", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": 
"83886080", "sectorsize": "512", "size": "40.00 GB", "support_discard": "4096", "vendor": "QEMU", "virtual": 1 }, "sr0": { "holders": [], "host": "SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)", "links": { "ids": [ "ata-QEMU_DVD-ROM_QM00001" ], "labels": [], "masters": [], "uuids": [] }, "model": "QEMU DVD-ROM", "partitions": {}, "removable": "1", "rotational": "1", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "mq-deadline", "sectors": "2097151", "sectorsize": "512", "size": "1024.00 MB", "support_discard": "0", "vendor": "QEMU", "virtual": 1 } }, "ansible_dns": { "nameservers": [ "127.0.0.53" ], "options": { "edns0": true, "trust-ad": true } }, "ansible_domain": "", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_enp1s0": { "active": true, "device": "enp1s0", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "on", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_segmentation": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "off [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "192.168.122.170", "broadcast": "192.168.122.255", "netmask": "255.255.255.0", "network": "192.168.122.0" }, "ipv6": [ { "address": "fe80::5054:ff:fe54:7cfe", "prefix": "64", "scope": "link" } ], "macaddress": "52:54:00:54:7c:fe", "module": "virtio_net", "mtu": 1500, "pciid": "virtio0", "promisc": false, "speed": -1, "timestamping": [ "tx_software", "rx_software", "software" ], "type": "ether" }, "ansible_env": { "DBUS_SESSION_BUS_ADDRESS": "unix:path=/run/user/0/bus", "HOME": "/root", "LANG": "en_GB.UTF-8", "LOGNAME": "root", "MOTD_SHOWN": "pam", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin", "PWD": "/root", "SHELL": 
"/bin/bash", "SHLVL": "0", "SSH_AUTH_SOCK": "/tmp/ssh-xGhYmKCci1/agent.4096", "SSH_CLIENT": "192.168.122.1 40374 22", "SSH_CONNECTION": "192.168.122.1 40374 192.168.122.170 22", "SSH_TTY": "/dev/pts/0", "TERM": "screen-256color", "USER": "root", "XDG_RUNTIME_DIR": "/run/user/0", "XDG_SESSION_CLASS": "user", "XDG_SESSION_ID": "17", "XDG_SESSION_TYPE": "tty", "_": "/bin/sh" }, "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "ubuntu2004", "ansible_hostname": "ubuntu2004", "ansible_interfaces": [ "enp1s0", "lo" ], "ansible_is_chroot": false, "ansible_iscsi_iqn": "iqn.1993-08.org.debian:01:af5bf2af245", "ansible_kernel": "5.4.0-65-generic", "ansible_lo": { "active": true, "device": "lo", "features": { "esp_hw_offload": "off [fixed]", "esp_tx_csum_hw_offload": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_gro_hw": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tls_hw_record": "off [fixed]", "tls_hw_rx_offload": "off [fixed]", "tls_hw_tx_offload": "off [fixed]", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_esp_segmentation": "off [fixed]", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipxip4_segmentation": "off [fixed]", "tx_ipxip6_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_segmentation": "off [fixed]", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "vlan_challenged": "on [fixed]" }, "hw_timestamp_filters": [], "ipv4": { "address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0" }, "ipv6": [ { "address": "::1", "prefix": "128", "scope": "host" } ], "mtu": 65536, "promisc": false, "timestamping": [ "tx_software", "rx_software", "software" ], "type": "loopback" }, "ansible_local": {}, "ansible_lsb": { "codename": "focal", "description": "Ubuntu 20.04.2 LTS", "id": "Ubuntu", "major_release": "20", "release": "20.04" }, "ansible_lvm": { "lvs": { "ubuntu-lv": { "size_g": "20.00", "vg": "ubuntu-vg" } }, "pvs": { "/dev/sda3": { "free_g": "19.00", "size_g": "39.00", "vg": "ubuntu-vg" } }, "vgs": { "ubuntu-vg": { "free_g": "19.00", "num_lvs": "1", "num_pvs": "1", "size_g": "39.00" } } }, "ansible_machine": "x86_64", "ansible_machine_id": "64d7747e869a45b09d0aae9a6d463611", "ansible_memfree_mb": 2765, "ansible_memory_mb": { 
"nocache": { "free": 3687, "used": 248 }, "real": { "free": 2765, "total": 3935, "used": 1170 }, "swap": { "cached": 0, "free": 3934, "total": 3934, "used": 0 } }, "ansible_memtotal_mb": 3935, "ansible_mounts": [ { "block_available": 2357334, "block_size": 4096, "block_total": 5127828, "block_used": 2770494, "device": "/dev/mapper/ubuntu--vg-ubuntu--lv", "fstype": "ext4", "inode_available": 1130751, "inode_total": 1310720, "inode_used": 179969, "mount": "/", "options": "rw,relatime", "size_available": 9655640064, "size_total": 21003583488, "uuid": "291656fe-bd87-484c-b4a9-4453471a17e8" }, { "block_available": 181527, "block_size": 4096, "block_total": 249830, "block_used": 68303, "device": "/dev/sda2", "fstype": "ext4", "inode_available": 65220, "inode_total": 65536, "inode_used": 316, "mount": "/boot", "options": "rw,relatime", "size_available": 743534592, "size_total": 1023303680, "uuid": "45018994-9625-44ad-877a-3980bcf943a3" }, { "block_available": 0, "block_size": 131072, "block_total": 444, "block_used": 444, "device": "/dev/loop3", "fstype": "squashfs", "inode_available": 0, "inode_total": 10809, "inode_used": 10809, "mount": "/snap/core18/1944", "options": "ro,nodev,relatime", "size_available": 0, "size_total": 58195968, "uuid": "N/A" }, { "block_available": 0, "block_size": 131072, "block_total": 249, "block_used": 249, "device": "/dev/loop4", "fstype": "squashfs", "inode_available": 0, "inode_total": 472, "inode_used": 472, "mount": "/snap/snapd/10707", "options": "ro,nodev,relatime", "size_available": 0, "size_total": 32636928, "uuid": "N/A" }, { "block_available": 0, "block_size": 131072, "block_total": 559, "block_used": 559, "device": "/dev/loop5", "fstype": "squashfs", "inode_available": 0, "inode_total": 1578, "inode_used": 1578, "mount": "/snap/lxd/19032", "options": "ro,nodev,relatime", "size_available": 0, "size_total": 73269248, "uuid": "N/A" }, { "block_available": 0, "block_size": 131072, "block_total": 444, "block_used": 444, "device": "/dev/loop6", "fstype": "squashfs", "inode_available": 0, "inode_total": 10817, "inode_used": 10817, "mount": "/snap/core18/1988", "options": "ro,nodev,relatime", "size_available": 0, "size_total": 58195968, "uuid": "N/A" }, { "block_available": 0, "block_size": 131072, "block_total": 249, "block_used": 249, "device": "/dev/loop7", "fstype": "squashfs", "inode_available": 0, "inode_total": 470, "inode_used": 470, "mount": "/snap/snapd/11036", "options": "ro,nodev,relatime", "size_available": 0, "size_total": 32636928, "uuid": "N/A" }, { "block_available": 0, "block_size": 131072, "block_total": 560, "block_used": 560, "device": "/dev/loop0", "fstype": "squashfs", "inode_available": 0, "inode_total": 1578, "inode_used": 1578, "mount": "/snap/lxd/19188", "options": "ro,nodev,relatime", "size_available": 0, "size_total": 73400320, "uuid": "N/A" } ], "ansible_nodename": "ubuntu2004", "ansible_processor": [ "0", "GenuineIntel", "Intel Xeon Processor (Skylake, IBRS)", "1", "GenuineIntel", "Intel Xeon Processor (Skylake, IBRS)", "2", "GenuineIntel", "Intel Xeon Processor (Skylake, IBRS)", "3", "GenuineIntel", "Intel Xeon Processor (Skylake, IBRS)" ], "ansible_processor_cores": 1, "ansible_processor_count": 4, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 4, "ansible_product_name": "Standard PC (Q35 + ICH9, 2009)", "ansible_product_serial": "NA", "ansible_product_uuid": "64d7747e-869a-45b0-9d0a-ae9a6d463611", "ansible_product_version": "pc-q35-2.11", "ansible_python": { "executable": "/usr/bin/python3", "has_sslcontext": 
true, "type": "cpython", "version": { "major": 3, "micro": 5, "minor": 8, "releaselevel": "final", "serial": 0 }, "version_info": [ 3, 8, 5, "final", 0 ] }, "ansible_python_version": "3.8.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": { "status": "Missing selinux Python library" }, "ansible_selinux_python_present": false, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_dsa_public": "AAAAB3NzaC1kc3MAAACBANcblFlURYNVXrHiZ2ozUgS6NQWkL9q6dKRvhFV75WjqBQfZs4wAAd9qYdT/fAJfT+MHdaeKgAzIgCCEH0lwEOVJY5go1u3AOEuq6S2b9D1Tr6VufAjuVYuDbPqYCoYPMDepsgKfJIxLGcfs0SgaeJyCzKOh5prQrDfYPHIP3NRjAAAAFQDc3uAPrXGgg7q74VaX8yMC5evKjwAAAIEAy+8TP/tI05oSLikv6L5es6J/iIXouuCSSlpzYT+ZcA64PaXB7X9ziRUOF79fbWkVYGmCRutjayucFHfsnICwm17vLaA5Pdc18hKqgO64HLhX1fBf8BE3KKQFY2nqcop0ShRHsLHWoL5E8SJ0Jrjd+wqw/0SQ4EnxxdmW7mrf+KUAAACAeWRshM/sCGP/DDifYusYkhZ85d5vgeXK/h9d4V3WhnXa6TlNPTo7Y21rX842UJ8npSf+ZVZb9iRJMRxGJiGgQK3GPRvdopQPFM9Y+kTf4GfLS5Bmd4RdZXF0POEpe10xc0ewg5is9NsFnJI+mJFcEB9FH+TtS0T7PmP+l9ADkTs=", "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGumKEwBRlMPpWemu3oyScRXbm4/dH+q5iCKvhB4EsehElxsVTQXbNjQyv5Ei38yG34N2q5DvZSus+tD8LJEZW4=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIBJpeIq8MEf3YBN7NLxd/ss/iqbvH9q34eLjYP0tubup", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABgQC8hyqeCETLm6kd/kG9lRy9HWIBFFRQTlsIUYdBDmb8dcA0Ye6JwcGhFbEJD5KaWKmyul0OP0dmV4BfLdf1dzDvilh0vTfgTTklbsPpEEjlfstqLHpKDZ4wL+Gj8eF54xW00oFwSR68CWNomyR0YrTczsN/CUb5HSejvYS48OzRP+it4iTyrlwVp8Lb7O7m/TnQFbys8uTaFFNpXFm4WrBtK0HlqVI/9LASXnuYqudCOgwkGlKamVnSwCO3Bt8MXFdhkgvXqoEp0sCdGZIM207jrN42hy6stXyjvn/43YbfTAiXwJDPUhllpbuUSTRF3zzlIvHbC0JRwq0wGd+eXS5kb9RS6v5QLptn0pA8kxQYg2uqO4I+Uc0R7akmgPu1S85jobS7MJIpZmNj57fGmUC7ZUvYTQ97lXcWrfNzk4pwl9TG85U/tQNwN6X5TmaFuSkqGSVRb+a86Z62//BH6lY8sOPEn+Ou883l3QBXjSkgQIjpWKy30GlUcd8Mn6nsgU0=", "ansible_swapfree_mb": 3934, "ansible_swaptotal_mb": 3934, "ansible_system": "Linux", "ansible_system_capabilities": [], "ansible_system_capabilities_enforced": "False", "ansible_system_vendor": "QEMU", "ansible_uptime_seconds": 12500, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "kvm", "gather_subset": [ "all" ], "module_setup": true }, "changed": false } > > Thanks, > - Cleber.
On Tue, Feb 23, 2021 at 06:23:25PM +0000, Alex Bennée wrote:
> 
> Cleber Rosa <crosa@redhat.com> writes:
> 
> > On Tue, Feb 23, 2021 at 03:01:50PM +0000, Alex Bennée wrote:
> >> 
> >> Alex Bennée <alex.bennee@linaro.org> writes:
> >> 
> >> > Cleber Rosa <crosa@redhat.com> writes:
> >> >
> >> >> To run basic jobs on custom runners, the environment needs to be
> >> >> properly set up.  The most common requirement is having the right
> >> >> packages installed.
> >> >>
> >> <snip>
> >> >
> >> > So I got somewhat there with a direct command line invocation:
> >> >
> >> >   ansible-playbook -u root -i 192.168.122.24,192.168.122.45 scripts/ci/setup/build-environment.yml -e 'ansible_python_interpreter=/usr/bin/python3'
> >> >
> >> > although for some reason a single host -i fails...
> >> >
> >> >> diff --git a/scripts/ci/setup/build-environment.yml b/scripts/ci/setup/build-environment.yml
> >> >> new file mode 100644
> >> >> index 0000000000..0197e0a48b
> >> >> --- /dev/null
> >> >> +++ b/scripts/ci/setup/build-environment.yml
> >> >> @@ -0,0 +1,76 @@
> >> >> +---
> >> >> +- name: Installation of basic packages to build QEMU
> >> >> +  hosts: all
> >> >> +  tasks:
> >> >> +    - name: Update apt cache
> >> >> +      apt:
> >> >> +        update_cache: yes
> >> >> +      when:
> >> >> +        - ansible_facts['distribution'] == 'Ubuntu'
> >> >
> >> > So are we limiting to Ubuntu here rather than say a Debian base?
> >> 
> >> Also I'm getting:
> >> 
> >> TASK [Update apt cache] *****************************************************************************************************************************************************
> >> fatal: [hackbox-ubuntu-2004]: FAILED! => {"msg": "The conditional check 'ansible_facts['distribution'] == 'Ubuntu'' failed. The error was: error while evaluating conditional (ansible_facts['distribution'] == 'Ubuntu'): 'dict object' has no attribute 'distribution'\n\nThe error appears to have been in '/home/alex/lsrc/qemu.git/scripts/ci/setup/build-environment.yml': line 5, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: Update apt cache\n ^ here\n"}
> >> 
> >> which is odd given that machine is definitely an Ubuntu one.
> >> 
> > 
> > It's definitely odd.
> > This is what I get on a fresh machine:
> > 
> > TASK [Update apt cache] *************************************************************************************************************************
> > [WARNING]: Updating cache and auto-installing missing dependency: python3-apt
> > ok: [localhost]
> > 
> > Could you please let me know the output of:
> > 
> > $ ansible -m setup -u $YOUR_USERNAME -i $HOSTNAME, all | grep
> > ansible_distribution
> 
> The key doesn't exist:
> 
> hackbox-ubuntu-2004 | SUCCESS => {
>     "ansible_facts": {
>         "ansible_all_ipv4_addresses": [
>             "192.168.122.170"
>         ],
>         "ansible_all_ipv6_addresses": [
>             "fe80::5054:ff:fe54:7cfe"
>         ],
>         "ansible_apparmor": {
>             "status": "enabled"
>         },
>         "ansible_architecture": "x86_64",
>         "ansible_bios_date": "04/01/2014",
>         "ansible_bios_version": "1.10.2-1ubuntu1",
> <snip>
>         "ansible_lsb": {
>             "codename": "focal",
>             "description": "Ubuntu 20.04.2 LTS",
>             "id": "Ubuntu",
>             "major_release": "20",
>             "release": "20.04"
>         },
> <snip>
>         "ansible_python_version": "3.8.5",
> <snip>
>         "ansible_virtualization_role": "guest",
>         "ansible_virtualization_type": "kvm",
>         "gather_subset": [
>             "all"
>         ],
>         "module_setup": true
>     },
>     "changed": false
> }
> 
"AAAAB3NzaC1yc2EAAAADAQABAAABgQC8hyqeCETLm6kd/kG9lRy9HWIBFFRQTlsIUYdBDmb8dcA0Ye6JwcGhFbEJD5KaWKmyul0OP0dmV4BfLdf1dzDvilh0vTfgTTklbsPpEEjlfstqLHpKDZ4wL+Gj8eF54xW00oFwSR68CWNomyR0YrTczsN/CUb5HSejvYS48OzRP+it4iTyrlwVp8Lb7O7m/TnQFbys8uTaFFNpXFm4WrBtK0HlqVI/9LASXnuYqudCOgwkGlKamVnSwCO3Bt8MXFdhkgvXqoEp0sCdGZIM207jrN42hy6stXyjvn/43YbfTAiXwJDPUhllpbuUSTRF3zzlIvHbC0JRwq0wGd+eXS5kb9RS6v5QLptn0pA8kxQYg2uqO4I+Uc0R7akmgPu1S85jobS7MJIpZmNj57fGmUC7ZUvYTQ97lXcWrfNzk4pwl9TG85U/tQNwN6X5TmaFuSkqGSVRb+a86Z62//BH6lY8sOPEn+Ou883l3QBXjSkgQIjpWKy30GlUcd8Mn6nsgU0=", > "ansible_swapfree_mb": 3934, > "ansible_swaptotal_mb": 3934, > "ansible_system": "Linux", > "ansible_system_capabilities": [], > "ansible_system_capabilities_enforced": "False", > "ansible_system_vendor": "QEMU", > "ansible_uptime_seconds": 12500, > "ansible_user_dir": "/root", > "ansible_user_gecos": "root", > "ansible_user_gid": 0, > "ansible_user_id": "root", > "ansible_user_shell": "/bin/bash", > "ansible_user_uid": 0, > "ansible_userspace_architecture": "x86_64", > "ansible_userspace_bits": "64", > "ansible_virtualization_role": "guest", > "ansible_virtualization_type": "kvm", > "gather_subset": [ > "all" > ], > "module_setup": true > }, > "changed": false > } > > Hi Alex, Thanks! I've compared this to the output I get when running "ansible" and connecting to an Ubuntu 20.04 VM, and it looks pretty much the same... *but* the distribution related fields. Thinking it could be an issue with the *ansible* code itself (maybe a bug in a given version) I then went on and tried the following on both x86_64 and aarch64: podman run --rm -ti ubuntu:20.04 /bin/sh -c 'apt update && apt -y install ansible && ansible -c local -i 127.0.0.1, -m setup all' And the distribution keys are properly reported. So both the ansible code I'm running, and ansible shipped with Ubuntu 20.04 seem fine. The next target for testing is the exact image you're using. Ansible probes the distribution largely based on the a "/etc/*release*" like file[1], so I'd like to know how to replicate your machine. Are you using a cloud image? Installing a given profile? Is the actual image something you could share? Thanks, - Cleber. [1] - https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/facts/system/distribution.py