| Message ID | 4590e1381feb8962cadf2b40b22086531d662ef8.1692675172.git.me@ttaylorr.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | .github/workflows: add coverity action |
On Mon, Aug 21, 2023 at 11:34:32PM -0400, Taylor Blau wrote:

> Coverity is a static analysis tool that detects and generates reports on
> various security and code quality issues.
>
> It is particularly useful when diagnosing memory safety issues which may
> be used as part of exploiting a security vulnerability.

I've been running a variant of this on my private repo for the last year
or two. The biggest issue IMHO is that we don't have a good way to
surface problems publicly. The CI "passes" in the sense that the build
is uploaded to coverity, but the interesting bit is whether any new
defects were found (because there are a ton of existing false positives
that I haven't figured out how to silence).

So my flow is something like:

  1. Push to my fork.
  2. Eventually get an email that says "N new defects found".
  3. If N > 0, log into the coverity site and look at them.

What's the plan for adding this to the main git.git repo? Is it to run
on Junio's tree, and then have interested folks sign up for coverity
access and look at it there? If so, how does step 2 work? Or is it just
to have it, so interested parties can do their own coverity analysis?
I'd be happy enough with that, as I am carrying a similar patch locally.

> Coverity's website provides a service accepts "builds" (which is more or
> less the collection of '*.o' files generated during a standard build
> with "make") as input and generates reports as output. In order to
> generate a report, we have to first compile Git and then upload the
> build archive to Coverity.

I don't think this is quite right. You run a build with their
"cov-build" wrapper, which does some magic. It _does_ invoke the actual
compiler, but does some extra magic, too (I don't know the details, but
it is running cov-translate and cov-emit on the actual source file).

This maybe seems like nit-picking, but it will play into some questions
I have on the patch itself.
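[For readers unfamiliar with the wrapper: the usual Coverity Scan flow
looks roughly like the sketch below. The command names follow Coverity's
documentation; the archive name is arbitrary.]

```shell
# Run the build under Coverity's wrapper; it intercepts compiler
# invocations and writes its intermediate analysis data into cov-int/.
cov-build --dir cov-int make -j8

# Bundle the intermediate directory; this tarball is the "build"
# that gets uploaded to scan.coverity.com.
tar czf cov-int.tgz cov-int
```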
> - We could upload the build archive to Coverity directly with a
>   straightforward curl request. But using the vapier/coverity-scan
>   Action comes with some additional niceties, such as caching the
>   (rather large) Coverity tool download between runs.

Yeah, that is a nice bonus over what I've been using. The tool download
is 800MB!

> diff --git a/.github/workflows/coverity.yml b/.github/workflows/coverity.yml
> new file mode 100644
> index 0000000000..26b9145d9e
> --- /dev/null
> +++ b/.github/workflows/coverity.yml
> @@ -0,0 +1,35 @@
> +name: Coverity
> +
> +on:
> +  push:
> +    branches:
> +      - master
> +    tags:
> +      - '*'

It's probably a good idea not to run this against every single topic
branch. But it's nice if we can catch bugs _before_ they hit master.
Since this isn't a blocking CI job, that's aspirational (it depends on
somebody looking at the results and patching in a timely manner). But I
routinely do my runs against "next" (actually an integration branch
that I base on "next") and investigate the results within a day or two.

Running against "seen" might be even better (though personally I
usually avoid looking at it too closely, as sometimes it has too much
of other people's broken stuff. I have plenty of my own). I'm not sure
what workflow you're envisioning here.

> +jobs:
> +  coverity:
> +    runs-on: ubuntu-latest
> +    env:
> +      HAVE_COVERITY_TOKEN: ${{ secrets.COVERITY_SCAN_EMAIL != '' && secrets.COVERITY_SCAN_TOKEN != '' }}
> +    steps:
> +      - id: check-coverity
> +        name: check whether Coverity token is configured
> +        run: |
> +          echo "enabled=$HAVE_COVERITY_TOKEN" >>$GITHUB_OUTPUT

This check-coverity step is good, as obviously most people's forks
won't be set up to run it. It is a shame we'll have to kick off an
Actions VM just to echo "enabled=0". Is there any way to embed that
logic directly in the yaml? (I won't be surprised if the answer is
"no"; I've been looking for years for something similar for
check-config in the main workflow.)
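[As a quick illustration of the step-output mechanism being discussed:
GitHub Actions points `$GITHUB_OUTPUT` at a per-step file, and
`key=value` lines written to it become available to later steps as
`steps.<id>.outputs.<key>`. A minimal local simulation:]

```shell
# Simulate the check-coverity step outside of CI: use a temp file in
# place of the per-step file GitHub Actions provides.
GITHUB_OUTPUT=$(mktemp)

# In the real job this comes from the job-level 'env:' block, computed
# from the two repository secrets.
HAVE_COVERITY_TOKEN=true

echo "enabled=$HAVE_COVERITY_TOKEN" >>"$GITHUB_OUTPUT"
cat "$GITHUB_OUTPUT"   # prints "enabled=true"
```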
> +      - uses: actions/checkout@v3
> +        if: steps.check-coverity.outputs.enabled == 'true'
> +      - run: ci/install-dependencies.sh
> +        env:
> +          CC: gcc
> +          CC_PACKAGE: gcc-9
> +          jobname: linux-gcc-default
> +          runs_on_pool: ubuntu-latest
> +        if: steps.check-coverity.outputs.enabled == 'true'

In the workflow file I've been using, I didn't need to do any of this
CC magic. I do run install-dependencies, but it seems to work fine
without extra variables. I do set runs_on_pool=ubuntu-latest in mine.
Maybe that matters (it does hit UBUNTU_COMMON_PKGS)?

> +      - uses: vapier/coverity-scan-action@v1
> +        if: steps.check-coverity.outputs.enabled == 'true'
> +        with:
> +          email: ${{ secrets.COVERITY_SCAN_EMAIL }}
> +          token: ${{ secrets.COVERITY_SCAN_TOKEN }}
> +          command: make -j8

OK, this is presumably doing the cov-build stuff behind the scenes, as
well as the curl upload. In my file, we have to feed a "project" field.
I guess the action just figures this out from whichever repo we're
using. Do GitHub repo names always correspond to coverity project
names? (It does seem kind of superfluous; I think the SCAN_TOKEN is
unique to the project.)

-Peff
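[For comparison, the "straightforward curl request" alternative that the
commit message mentions would look roughly like this sketch. The project
name and version/description fields here are illustrative, not taken
from the patch:]

```shell
# Upload a previously created cov-int.tgz to Coverity Scan directly.
# Token and email come from the same repository secrets the Action uses.
curl "https://scan.coverity.com/builds?project=git" \
	--form token="$COVERITY_SCAN_TOKEN" \
	--form email="$COVERITY_SCAN_EMAIL" \
	--form file=@cov-int.tgz \
	--form version="$(git rev-parse HEAD)" \
	--form description="CI build"
```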
On Fri, Aug 25, 2023 at 05:32:01PM -0400, Jeff King wrote:

> > Coverity's website provides a service accepts "builds" (which is more or
> > less the collection of '*.o' files generated during a standard build
> > with "make") as input and generates reports as output. In order to
> > generate a report, we have to first compile Git and then upload the
> > build archive to Coverity.
>
> I don't think this is quite right. You run a build with their
> "cov-build" wrapper, which does some magic. It _does_ invoke the actual
> compiler, but does some extra magic, too (I don't know the details, but
> it is running cov-translate and cov-emit on the actual source file).
>
> This maybe seems like nit-picking, but it will play into some questions
> I have on the patch itself.

I guess I never actually asked those questions, but they were: why are
you asking for some specific version of gcc, since we don't really care
much about the actual compiler? But I think you don't care about the
compiler version; you were just trying to feed enough data to
install-dependencies to get it to run.

-Peff
Hi Taylor,

On Mon, 21 Aug 2023, Taylor Blau wrote:

> diff --git a/.github/workflows/coverity.yml b/.github/workflows/coverity.yml
> new file mode 100644
> index 0000000000..26b9145d9e
> --- /dev/null
> +++ b/.github/workflows/coverity.yml
> @@ -0,0 +1,35 @@
> +name: Coverity
> +
> +on:
> +  push:
> +    branches:
> +      - master
> +    tags:
> +      - '*'

I would consider it too late to do a Coverity analysis when the version
is already tagged. Therefore, I would like to see this `tags: *` part
go.

And I agree with Peff that we should activate this build for `next`,
too. And for good measure for `maint` (and maybe `maint-*`, although
those branches are typically not pushed to `git/git`).

> +
> +jobs:
> +  coverity:
> +    runs-on: ubuntu-latest
> +    env:
> +      HAVE_COVERITY_TOKEN: ${{ secrets.COVERITY_SCAN_EMAIL != '' && secrets.COVERITY_SCAN_TOKEN != '' }}
> +    steps:
> +      - id: check-coverity
> +        name: check whether Coverity token is configured
> +        run: |
> +          echo "enabled=$HAVE_COVERITY_TOKEN" >>$GITHUB_OUTPUT

The canonical output for a step that has no other outputs is called
`result`. That output name is handled by linters such as
https://rhysd.github.io/actionlint/, too, in contrast to outputs with
different names that are set in a free-form `run:` step, which
`actionlint` cannot parse.

But I see a bigger problem: contrary to what the commit message claims,
the design makes this workflow far from a no-op. There will always be
the need for a VM to be spun up, to receive the orchestration script,
to download the necessary Actions (in this instance,
`vapier/coverity-scan-action`), only to check environment variables,
see that actually nothing needs to be done, update the job's and the
run's status (claiming "success" even if nothing was sent to Coverity),
and then shut down.
We discussed this in the past; it is a great waste of resources (even
if other people pay for it, it still costs money [*1*] and
electricity), and, for some of us even more relevant, a great waste of
time. When you are already waiting for 20 queued jobs to be picked up,
you then have to wait for the next build agent to become available only
to spend, I don't know, half a minute (wall clock time)? on getting
this workflow run queued, the job set up, the first step run, the
others skipped, and then to wrap it all up by updating the run as
completed. I would not fault anyone for being frustrated with this.

Therefore, I would strongly suggest looking for alternative ways to
skip this workflow run before applying this patch. I see these viable
alternatives, all pulling the `if:` directive from the step level to
the job level, just like the `l10n.yml` workflow does [*2*], to avoid
even starting the job unless really needed:

- Limit it to the primary repository, `git/git`:

      if: github.event.repository.fork == false

  This is what Git for Windows does in the build that validates the
  revision to use in the `git-for-windows/setup-git-for-windows-sdk`
  Action that is used in Git's CI runs to build and test Git on
  Windows:
  https://github.com/git-for-windows/git-sdk-64/blob/894191270a78/.github/workflows/ci-artifacts.yml#L14

  While this still clutters the list of workflow runs with plenty of
  skipped runs (see https://github.com/git/git/actions/workflows/l10n.yml
  for an example of how this looks), at least it avoids wasting a build
  agent on essentially only checking for the presence of two secrets.
  Developers wishing to run this workflow in their fork could always
  create a branch, edit the workflow to remove this `if:` directive,
  and push the branch to their fork.
- Limit it to `git/git` explicitly, by name:

      if: github.event.repository.owner.login == 'git'

  This is how we run the `vs-build`/`vs-test` combo only in the
  `git-for-windows` fork:
  https://github.com/git/git/blob/v2.42.0/.github/workflows/main.yml#L150

  The advantage here is that it would be slightly more obvious what we
  are doing than checking `fork == false`, as the org name is mentioned
  explicitly.

- Limit it to `git/git` but also allow it to be triggered manually:

      if: github.event.repository.fork == false || github.event_name == 'workflow_dispatch'

  This is what Git for Windows does in the test verifying that it works
  in the "Nano Server" container image, which allows a kind of minimal
  Windows system to be run via Docker:
  https://github.com/git-for-windows/git-sdk-64/blob/894191270a78/.github/workflows/nano-server.yml#L7

  The advantage of allowing the workflow to be triggered via a
  `workflow_dispatch` is that developers wishing to run this workflow
  in their fork would not need to edit the GitHub workflow but could
  run it unmodified. For details, see
  https://docs.github.com/en/actions/using-workflows/manually-running-a-workflow

- Limit it by repository "topics" (think: "repository tags"):

      if: contains(github.event.repository.topics, 'has-coverity-secrets')

  On GitHub, every repository can be annotated with "topics", i.e. a
  list of labels (which in other scenarios would be called "tags").
  These topics can be seen by the GitHub workflow run, _before_ even
  starting a job, via the `github.event.repository` attribute. This is
  how InnoSetup does it:
  https://github.com/jrsoftware/issrc/blob/main/.github/workflows/build.yml#L12

  The primary repository does not have the secrets required to run this
  workflow, and hence does not have the "has-issrc-build-env" topic, as
  can be seen here: https://github.com/jrsoftware/issrc (look below the
  "About" section on the right side, where you see the topics
  "installer" and "inno-setup").
  My personal fork does have those secrets, and the topic
  "has-issrc-build-env" says as much: https://github.com/dscho/issrc

  The advantage of this solution is that it is very easy to "turn on
  and off" the workflow at the repository level, simply by adding or
  removing the "has-coverity-secrets" topic. See:
  https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/classifying-your-repository-with-topics

  The downside is that this method is somewhat unintuitive and needs
  the developer to read documentation (*shudder*) to know what they
  need to do.

I have no strong preference, as long as the job is skipped whenever the
Coverity build (read: the `coverity` GitHub workflow run) should be
skipped.

> +      - uses: actions/checkout@v3
> +        if: steps.check-coverity.outputs.enabled == 'true'
> +      - run: ci/install-dependencies.sh
> +        env:
> +          CC: gcc
> +          CC_PACKAGE: gcc-9
> +          jobname: linux-gcc-default

How about modifying `ci/install-dependencies.sh` to introduce a new job
name, `coverity`, that is handled like `linux-gcc-default`? I.e.
something like this:

-- snip --
diff --git a/ci/install-dependencies.sh b/ci/install-dependencies.sh
index 107757a1fea..e6a894e507d 100755
--- a/ci/install-dependencies.sh
+++ b/ci/install-dependencies.sh
@@ -74,7 +74,7 @@ Documentation)
 	test -n "$ALREADY_HAVE_ASCIIDOCTOR" ||
 	sudo gem install --version 1.5.8 asciidoctor
 	;;
-linux-gcc-default)
+linux-gcc-default|coverity)
 	sudo apt-get -q update
 	sudo apt-get -q -y install $UBUNTU_COMMON_PKGS
 	;;
diff --git a/ci/lib.sh b/ci/lib.sh
index 1b0cc2b57db..7a196aa0d17 100755
--- a/ci/lib.sh
+++ b/ci/lib.sh
@@ -227,7 +227,7 @@ export SKIP_DASHED_BUILT_INS=YesPlease

 case "$runs_on_pool" in
 ubuntu-latest)
-	if test "$jobname" = "linux-gcc-default"
+	if test "$jobname" = "linux-gcc-default" || test "$jobname" = "coverity"
 	then
 		break
 	fi
-- snap --

That would be a lot cleaner and would also allow for Coverity-specific
things in this step that won't affect `linux-gcc-default` at the same
time.

> +        runs_on_pool: ubuntu-latest
> +        if: steps.check-coverity.outputs.enabled == 'true'
> +      - uses: vapier/coverity-scan-action@v1

In contrast to the other Actions we use in Git, this one is neither
provided by GitHub nor by the Git (or Git for Windows) project [*3*].
The recommended way to handle such third-party Actions is to pin them
by full-length commit SHA:
https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-third-party-actions

This is particularly important because we're passing secret values to
that Action.

> +        if: steps.check-coverity.outputs.enabled == 'true'
> +        with:
> +          email: ${{ secrets.COVERITY_SCAN_EMAIL }}
> +          token: ${{ secrets.COVERITY_SCAN_TOKEN }}
> +          command: make -j8
> +

The commit message would make for a fine place to describe the role of
these `COVERITY_SCAN_*` secrets, how to obtain them, and how to
configure them (i.e. providing a link to
https://docs.github.com/en/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository).
The reason I am suggesting that is that many a time, I have authored
commits like this one, only to puzzle about the exact steps to
configure the thing properly when I needed it six months later, and
regretting dearly not having had the foresight to help my future self
while writing the commit message.

Another important issue that should be discussed in the commit message
is the matter of Coverity's false positives. I regularly dread going
through the new reports in Coverity (I am doing this for
git-for-windows) because of the many falsely pointed-out "buffer
overruns" when working e.g. with `strvec`, where Coverity assumes that
its `empty_strvec` is ever written to. In Git for Windows, I have not
had any luck with suppressing these false reports.

For what it's worth, this is the "model file" I am currently using:

-- snip --
/* modelfile for git */
char strbuf_slopbuf[64];

void *malloc(size_t);
void *calloc(size_t, size_t);
void *realloc(void *, size_t);
void free(void *);

void *xrealloc(void *ptr, size_t size)
{
	void *ret = realloc(ptr, size);
	if (!ret)
		__coverity_panic__();
	return ret;
}

void *xmalloc(size_t size)
{
	void *mem = malloc(size);
	if (!mem)
		__coverity_panic__();
	return mem;
}

void *xcalloc(size_t num, size_t size)
{
	void *ret = calloc(num, size);
	if (!ret)
		__coverity_panic__();
	return ret;
}

void usage(const char *err)
{
	__coverity_panic__();
}

void usagef(const char *err, ...)
{
	__coverity_panic__();
}

void die(const char *err, ...)
{
	__coverity_panic__();
}

void die_errno(const char *err, ...)
{
	__coverity_panic__();
}

void BUG_fl(const char *file, int line, const char *fmt, ...)
{
	__coverity_panic__();
}

struct strbuf {
	size_t alloc;
	size_t len;
	char *buf;
};

void strbuf_setlen(struct strbuf *sb, size_t len)
{
	sb->len = len;
	sb->buf[len] = '\0';
}

void strbuf_grow(struct strbuf *sb, size_t amount);
void *memcpy(void *dest, const void *src, size_t n);

void strbuf_add(struct strbuf *sb, const void *data, size_t len)
{
	strbuf_grow(sb, len);
	memcpy(sb->buf + sb->len, data, len);
	sb->len += len;
	sb->buf[sb->len] = '\0';
}

void *memmove(void *dest, const void *src, size_t n);

void strbuf_splice(struct strbuf *sb, size_t pos, size_t len,
		   const void *data, size_t dlen)
{
	if (pos > sb->len)
		die("`pos' is too far after the end of the buffer");
	if (pos + len > sb->len)
		die("`pos + len' is too far after the end of the buffer");

	if (dlen >= len)
		strbuf_grow(sb, dlen - len);
	memmove(sb->buf + pos + dlen, sb->buf + pos + len,
		sb->len - pos - len);
	memcpy(sb->buf + pos, data, dlen);
	sb->len = sb->len + dlen - len;
	sb->buf[sb->len] = '\0';
}
-- snap --

It is getting a bit ridiculous at this point, the size of it and the
meager benefit it reaps. But it is better than nothing, I guess.

The reason I would really like to see an extensive discussion about
this in the commit message is that this place would be _the_ most
logical habitat for such a description, to help developers who want to
set up their own Coverity coverage (either in their own Git fork, or
even in a separate project) trying to learn from the Git project.

Ciao,
Johannes

Footnote *1*: In case it was unclear: build agents are essentially
virtual machines. Those virtual machines are destroyed after every run,
and recreated to a pristine state before picking up the next job. To
avoid having to wait for a virtual machine to be created whenever a job
needs to be run, there is a pool of virtual machines ready to run.
There have been times when we ran so many jobs in the Git project that
we drained that pool and had to wait, painfully, for the virtual
machines to be created, which, let me tell you, is not a matter of a
mere couple of seconds. This complexity is reflected in the pricing you
can see at
https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions

Just to put things into perspective: in our security work, we have to
run our GitHub workflow in a private repository, where GitHub Actions
are not provided free of cost by GitHub. We have a very nice donation
from GitHub in the form of 50,000 build minutes we can use every month.
If we spent all of those minutes running on Linux (i.e. the cheapest
option), GitHub would essentially donate $400 to us. That's what we
usually spend in a matter of days preparing for security releases: the
workflow run to validate v2.39.3, for example, while taking "only" 36
minutes and 1 second of wall clock time, used up over 12 hours of those
50,000 build minutes, and every time we iterated, we pushed 11 branches
and 11 tags. Three iterations in, we were out of build minutes.

All this is to say: we, the Git project, are not exactly cautious when
it comes to spending GitHub Actions build minutes. I really would love
to see us all become a lot more mindful on this matter.

Footnote *2*:
https://github.com/git/git/blob/v2.42.0/.github/workflows/l10n.yml#L13

Footnote *3*: We do already use a third-party Action in `l10n.yml`:
`mshick/add-pr-comment@v1`. As you can verify easily, this is not
pinned by full-length commit SHA. But then, this step only gets the
`GITHUB_TOKEN` secret, which only has the permission to write to the
Pull Request (i.e. add a comment), not to write to the repository.
Still, we will probably want to pin that Action at some stage; better
safe than sorry. #leftoverbits
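[Any of the job-level alternatives listed above would hang off the job
definition itself, so a skipped repository never spins up a VM. With the
"topics" variant, for instance, the workflow's job might start like the
following sketch (not part of the actual patch):]

```yaml
jobs:
  coverity:
    # Skip the whole job -- no VM spin-up -- unless the repository has
    # been opted in via the "has-coverity-secrets" topic.
    if: contains(github.event.repository.topics, 'has-coverity-secrets')
    runs-on: ubuntu-latest
```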
On Tue, Aug 29, 2023 at 10:18:24AM +0200, Johannes Schindelin wrote:

> - Limit it by repository "topics" (think: "repository tags"):
>
>       if: contains(github.event.repository.topics, 'has-coverity-secrets')

FWIW, I like this approach the most. As you note, it's not exactly
obvious to discover, but I think it is the most flexible. And setting
up Coverity runs already requires a lot of non-obvious steps (like
creating an account with them and getting a token to paste into the
GitHub secrets area).

My gut feeling is that we should be able to do something with env
variables to avoid the (ab)use of repository tags, but when I looked
into this in the past, it wasn't possible. However, poking at it again
now, it seems that the "vars" context (but not "secrets") is available
to "jobs.*.if". I'm not sure if I missed it before, or if that's a new
feature since the last time I checked. At any rate, it seems to work to
do:

  if: vars.ENABLE_COVERITY != ''

or even make COVERITY_SCAN_EMAIL a "var" instead of a "secret", and use
that. Likewise, I think we could do:

  if: contains(vars.COVERITY_BRANCHES, github.ref_name)

to allow individual repositories to do their own branch selection (that
is matching by substring, which is probably good enough, but if you
want to get fancy, I think we can use fromJSON to interpret the
contents of the variable).

(I had mostly looked into this in the context of branch selection for
our ci-config job, and I think we could do something similar there.)

-Peff
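[To make the substring caveat concrete: `contains()` on a plain string
matches substrings, so a short ref name can match a longer branch name
stored in the variable. The variable contents below are made up; the
`contains`/`exact_match` helpers just emulate the two behaviors in
shell:]

```shell
# A hypothetical COVERITY_BRANCHES repository variable, as a plain string.
branches="master maint-2.39"

# Emulate GitHub's contains(string, needle): substring matching.
contains () {
	case "$1" in *"$2"*) echo true ;; *) echo false ;; esac
}

contains "$branches" "maint"   # prints "true" -- over-matches "maint-2.39"
contains "$branches" "next"    # prints "false"

# Emulate the fromJSON(list) approach: whole-name matching only.
exact_match () {
	for b in $1
	do
		test "$b" = "$2" && { echo true; return; }
	done
	echo false
}

exact_match "$branches" "maint"   # prints "false" -- exact match only
```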
Hi Peff,

On Tue, 29 Aug 2023, Jeff King wrote:

> On Tue, Aug 29, 2023 at 10:18:24AM +0200, Johannes Schindelin wrote:
>
> > - Limit it by repository "topics" (think: "repository tags"):
> >
> >       if: contains(github.event.repository.topics, 'has-coverity-secrets')
>
> FWIW, I like this approach the most. [...]
>
> My gut feeling is that we should be able to do something with env
> variables [...]

Environment variables need an environment, i.e. a running build agent.
That's why they aren't available in our use case, but only inside a
step (which is too late for our purposes). I am unsure why secrets
aren't available in job-level `if:` expressions, but they aren't, and
that's that, for now.

> [...] it seems that the "vars" context (but not "secrets") is available
> to "jobs.*.if". I'm not sure if I missed before, or if that's a new
> feature since the last time I checked.

I had missed that, too. It was announced here:
https://github.blog/2023-01-10-introducing-required-workflows-and-configuration-variables-to-github-actions/#configuration-variables
(I must have glanced over that post when I saw it because it talked
about required workflows, which are currently irrelevant to my
interests).

FWIW the feature is documented here:
https://docs.github.com/en/actions/learn-github-actions/variables

And
https://docs.github.com/en/actions/learn-github-actions/variables#using-the-vars-context-to-access-configuration-variable-values
specifically says:

  Configuration variables can be accessed across the workflow using
  `vars` context.

I.e. it suggests that the context can be used even in the `run-name`
attribute of any workflow. Nice.
FWIW I was unable to deduce any authoritative information as to where
the `secrets` context can be accessed from
https://docs.github.com/en/actions/learn-github-actions/contexts#secrets-context,
but I must assume that access to that context is highly restricted and
probably cannot be used outside the `steps:` attribute, explaining why
a job-level (and in my previous tests, even step-level) `if:` condition
cannot access them.

> (I had mostly looked into this in the context of branch selection for
> our ci-config job, and I think we could do something similar there.)

FWIW I concur.

Ciao,
Johannes
On Wed, Aug 30, 2023 at 10:07:35AM +0200, Johannes Schindelin wrote:

> > My gut feeling is that we should be able to do something with env
> > variables [...]
>
> Environment variables need an environment, i.e. a running build agent.
> That's why they aren't available in our use case, but only inside a
> step (which is too late for our purposes).

Yeah, sorry, I was a bit loose with my language. I meant "variables set
from the GitHub UI that are often used as env variables". :)

> > [...] it seems that the "vars" context (but not "secrets") is
> > available to "jobs.*.if". I'm not sure if I missed before, or if
> > that's a new feature since the last time I checked.
>
> I had missed that, too. It was announced here:
> https://github.blog/2023-01-10-introducing-required-workflows-and-configuration-variables-to-github-actions/#configuration-variables
> (I must have glanced over that post when I saw it because it talked
> about required workflows, which are currently irrelevant to my
> interests).

Yeah, I found that, too (after sending my email). So it looks like this
is a good, supported path going forward.

I have a working patch that can replace the ci/config/allow-ref script.
We can't totally ditch the "config" job with it, as there are other
things it does, but it's possible those can be migrated, too. IMHO the
allow-ref one is the _most_ important, though, because it addresses the
case of "everything should be skipped". Taking that from spinning up
one VM per ref to zero is very good. I'll post that patch later today.

> FWIW I was unable to deduce any authoritative information as to where
> the `secrets` context can be accessed from
> https://docs.github.com/en/actions/learn-github-actions/contexts#secrets-context,
> but I must assume that access to that context is highly restricted and
> probably cannot be used outside the `steps:` attribute, explaining why
> a job-level (and in my previous tests, even step-level) `if:`
> condition cannot access them.
I found the "vars" context through this table:
https://docs.github.com/en/actions/learn-github-actions/contexts#context-availability
which does show where "secrets" is available.

-Peff
diff --git a/.github/workflows/coverity.yml b/.github/workflows/coverity.yml
new file mode 100644
index 0000000000..26b9145d9e
--- /dev/null
+++ b/.github/workflows/coverity.yml
@@ -0,0 +1,35 @@
+name: Coverity
+
+on:
+  push:
+    branches:
+      - master
+    tags:
+      - '*'
+
+jobs:
+  coverity:
+    runs-on: ubuntu-latest
+    env:
+      HAVE_COVERITY_TOKEN: ${{ secrets.COVERITY_SCAN_EMAIL != '' && secrets.COVERITY_SCAN_TOKEN != '' }}
+    steps:
+      - id: check-coverity
+        name: check whether Coverity token is configured
+        run: |
+          echo "enabled=$HAVE_COVERITY_TOKEN" >>$GITHUB_OUTPUT
+      - uses: actions/checkout@v3
+        if: steps.check-coverity.outputs.enabled == 'true'
+      - run: ci/install-dependencies.sh
+        env:
+          CC: gcc
+          CC_PACKAGE: gcc-9
+          jobname: linux-gcc-default
+          runs_on_pool: ubuntu-latest
+        if: steps.check-coverity.outputs.enabled == 'true'
+      - uses: vapier/coverity-scan-action@v1
+        if: steps.check-coverity.outputs.enabled == 'true'
+        with:
+          email: ${{ secrets.COVERITY_SCAN_EMAIL }}
+          token: ${{ secrets.COVERITY_SCAN_TOKEN }}
+          command: make -j8
+
Coverity is a static analysis tool that detects and generates reports
on various security and code quality issues.

It is particularly useful when diagnosing memory safety issues which
may be used as part of exploiting a security vulnerability.

Coverity's website provides a service that accepts "builds" (which is
more or less the collection of '*.o' files generated during a standard
build with "make") as input and generates reports as output. In order
to generate a report, we have to first compile Git and then upload the
build archive to Coverity.

This Action generates and uploads a build archive to Coverity when a
GitHub repository has configured the "COVERITY_SCAN_EMAIL" and
"COVERITY_SCAN_TOKEN" repository secrets, respectively. This enables
Coverity to automatically report on new changes pushed to 'master', as
well as any newly created tags.

A couple of implementation notes:

  - In order to successfully build 'git', we (ab-)use the
    ci/install-dependencies.sh script by faking in some of the
    environment variables to take on values specific to the main.yml
    workflow file in order to install the correct set of dependencies.

  - We could upload the build archive to Coverity directly with a
    straightforward curl request. But using the vapier/coverity-scan
    Action comes with some additional niceties, such as caching the
    (rather large) Coverity tool download between runs.

If the repository is missing either of the two secret tokens mentioned
earlier, this Action is a no-op.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
---
 .github/workflows/coverity.yml | 35 ++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)
 create mode 100644 .github/workflows/coverity.yml