30 Commits

Author SHA1 Message Date
Renovate Bot
bf31b6227c chore(deps): update workflow dependencies 2026-05-14 16:58:49 +00:00
Renovate Bot
f23605c614 chore(deps): update workflow dependencies (major) (#967)
Reviewed-on: https://gitea.com/gitea/runner/pulls/967
Reviewed-by: techknowlogick <9+techknowlogick@noreply.gitea.com>
Co-authored-by: Renovate Bot <renovate-bot@gitea.com>
Co-committed-by: Renovate Bot <renovate-bot@gitea.com>
2026-05-14 16:54:48 +00:00
thisisqasim
00b7fec80f Simplify kubernetes dind example allowing for default docker config in workflows (#709)
With this, Docker clients in workflows can connect on the default socket without needing to change `DOCKER_HOST`. The startup probe also removes the need for a custom shell command.

Co-authored-by: silverwind <me@silverwind.io>
Reviewed-on: https://gitea.com/gitea/runner/pulls/709
Co-authored-by: thisisqasim <40013+thisisqasim@noreply.gitea.com>
Co-committed-by: thisisqasim <40013+thisisqasim@noreply.gitea.com>
2026-05-14 05:52:41 +00:00
silverwind
dda5841af8 chore(deps): bump retry-go, golangci-lint, govulncheck (#965)
Bumps `github.com/avast/retry-go` v4.7.0 -> v5.0.0, `golangci-lint` v2.11.4 -> v2.12.2 (aligns with gitea/gitea), and pins `govulncheck` to v1.3.0.

- `retry-go` v5 replaces the package-level `retry.Do(fn, opts...)` with a builder API `retry.New(opts...).Do(fn)`. The single call site in `internal/pkg/report/reporter.go` was migrated.
- `golangci-lint` v2.12.2 surfaces three new findings in `act/` (modernize/slicesbackward, govet/inline): one backward loop now uses `slices.Backward`, and the deprecated `reflect.Ptr` alias is replaced with `reflect.Pointer`.
- `go.mod`: the two direct-`require` blocks are merged into one, and a stray `gopkg.in/yaml.v3 // indirect` is moved into the indirect block. Purely cosmetic; `go.sum` is unchanged.

---
This PR was written with the help of Claude Opus 4.7

Reviewed-on: https://gitea.com/gitea/runner/pulls/965
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
2026-05-14 05:39:28 +00:00
silverwind
32bed52686 fix(deps): bump docker deps, switch to moby/moby (#943)
Fixes: https://gitea.com/gitea/runner/issues/859

Migration approach mirrors [actions-oss/act-cli#154](https://github.com/actions-oss/act-cli/pull/154).

### Dependency changes

- `github.com/docker/docker` v25.0.15 → **removed** (v29 doesn't exist as docker/docker; the project moved to moby/moby)
- `github.com/docker/cli` v25.0.7 → v29.4.3
- `github.com/docker/go-connections` v0.6.0 → v0.7.0
- `github.com/docker/docker-credential-helpers` v0.9.5 → v0.9.6
- `github.com/moby/go-archive` added at v0.2.0
- `github.com/moby/moby/api` added at v1.54.2
- `github.com/moby/moby/client` added at v0.4.1
- `github.com/moby/buildkit` removed (only used `dockerignore.ReadAll`, swapped for `moby/patternmatcher/ignorefile.ReadAll` directly)
- `github.com/containerd/errdefs` v0.3.0 → v1.0.0

### Migration

- v28: type aliases moved to their subpackages (`types.{Container,Image,Network,Exec}*` → `container/image/network/...`); deprecated APIs replaced (`ImageInspectWithRaw`, `client.IsErrNotFound`, `archive.CanonicalTarNameForPath`, `opts.ValidateMACAddress`, `ListOpts.GetAll`)
- v29: structural client redesign — every `cli.X(ctx, ...)` call switched to options-everywhere/Result-typed signatures, `ContainerExec*` → `Exec*`, `ContainerWait` returns a struct with `Result`/`Error` channels, `Tty`→`TTY`, `Copy*Container` takes options struct, `client.NewClientWithOpts` → `client.New`. `pkg/stdcopy` moved to `moby/moby/api/pkg/stdcopy`. The vendored copy of `cli/command/container/opts.go` was refreshed from cli v29 (now uses `netip.Addr` for IPs, port-set conversion helpers). A small local `parsePlatform` helper centralises the `os/arch[/variant]` parsing previously inlined into multiple call sites.

### Behaviour preservation

The migration introduced several behavioural shifts vs the v25 client; all were caught in review and reverted/fixed in follow-up commits:

- `GetDockerClient`: cli v29's `Ping(NegotiateAPIVersion: true)` returns errors that the old `NegotiateAPIVersion` silently swallowed. Restored best-effort behaviour (warn-log + continue) so daemons with blocked `_ping` or API < 1.40 keep working. The SSH-helper `client.New` call no longer inherits `client.FromEnv`, matching the old `NewClientWithOpts(WithHost, WithDialContext)` so `DOCKER_API_VERSION`/`DOCKER_TLS_VERIFY` don't leak into the SSH-tunneled client
- `parsePlatform`: malformed input now returns an explicit error instead of silently dropping to "no platform constraint" and pulling the host-default architecture. Single-segment (`"linux"`), 4+-segment (`"linux/arm/v7/extra"`), and trailing-slash (`"linux/arm/"`) inputs are all rejected
- `LoadDockerAuthConfig`/`LoadDockerAuthConfigs`: `config.LoadDefaultConfigFile(nil)` panics on a malformed config file (it does `fmt.Fprintln` on the nil `io.Writer`). Switched to `config.Load(config.Dir())` so load errors reach the logger and the panic path is gone. Restored the old behaviour of returning `config.Load` and `GetAuthConfig` errors to the caller (the v29 refactor had silently downgraded them to warn-only). A `reference.ParseNormalizedNamed` failure on the image string falls through to the `docker.io` default rather than aborting, since the old string-based hostname extraction was infallible

Test assertions also updated for two upstream error-message string shifts (`go-connections` port-range parser; `cli/opts` envfile BOM check). Added unit-test coverage for the new `parsePlatform` helper, locking in the intentional limits (single-segment, 4+-segment, and trailing-slash platforms rejected).
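The validation behaviour described above can be sketched as follows. The helper name comes from the PR; the exact signature and return shape are assumptions (the real helper builds a Docker platform value):

```go
package main

import (
	"fmt"
	"strings"
)

// parsePlatform splits "os/arch[/variant]" and rejects malformed input
// instead of silently falling back to the host-default platform.
// Sketch only; signature is an assumption.
func parsePlatform(s string) (osName, arch, variant string, err error) {
	parts := strings.Split(s, "/")
	switch len(parts) {
	case 2:
		osName, arch = parts[0], parts[1]
	case 3:
		osName, arch, variant = parts[0], parts[1], parts[2]
	default: // single segment ("linux") or 4+ segments ("linux/arm/v7/extra")
		return "", "", "", fmt.Errorf("invalid platform %q", s)
	}
	// Trailing-slash input like "linux/arm/" yields an empty segment.
	if osName == "" || arch == "" || (len(parts) == 3 && variant == "") {
		return "", "", "", fmt.Errorf("invalid platform %q", s)
	}
	return osName, arch, variant, nil
}

func main() {
	for _, s := range []string{"linux/arm64", "linux/arm/v7", "linux", "linux/arm/v7/extra", "linux/arm/"} {
		_, _, _, err := parsePlatform(s)
		fmt.Printf("%-20s valid=%v\n", s, err == nil)
	}
}
```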

---
This PR was written with the help of Claude Opus 4.7

Reviewed-on: https://gitea.com/gitea/runner/pulls/943
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
2026-05-14 02:29:05 +00:00
Lunny Xiao
a7e972d8de fix: respect proxy env vars in runner client (#962)
Fixes #957.

## Why
The runner builds a custom `http.Transport` for its RPC client. From first principles, once we stop using the default transport we also stop inheriting its default proxy resolution behavior, so `HTTP_PROXY`/`HTTPS_PROXY` are ignored unless we wire that behavior back explicitly.

## What
- set `Proxy: http.ProxyFromEnvironment` on the custom transport
- add a regression test that verifies `getHTTPClient` honors proxy environment variables

Reviewed-on: https://gitea.com/gitea/runner/pulls/962
Reviewed-by: ChristopherHX <38043+christopherhx@noreply.gitea.com>
2026-05-13 19:10:52 +00:00
Lunny Xiao
763b38ece3 fix: isolate per-task runner envs (#959)
## Summary
- clone the runner environment map for each task before injecting runtime and OIDC tokens
- keep the shared base environment immutable so concurrent jobs cannot hit `concurrent map writes`
- add a unit test covering task-local env cloning

Fixes #958
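The per-task cloning pattern can be sketched with `maps.Clone`; the variable and key names below are illustrative, not the runner's actual identifiers:

```go
package main

import (
	"fmt"
	"maps"
)

// The shared base environment stays immutable; each task gets its own
// copy before runtime/OIDC tokens are injected, so concurrent jobs never
// write to the same map (avoiding "concurrent map writes" panics).
var baseEnv = map[string]string{"GITEA_INSTANCE": "https://gitea.example.com"}

func taskEnv(token string) map[string]string {
	env := maps.Clone(baseEnv) // task-local copy
	env["ACTIONS_RUNTIME_TOKEN"] = token
	return env
}

func main() {
	a := taskEnv("token-a")
	b := taskEnv("token-b")
	fmt.Println(a["ACTIONS_RUNTIME_TOKEN"], b["ACTIONS_RUNTIME_TOKEN"], len(baseEnv))
	// token-a token-b 1  — the base map is never mutated
}
```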

---------

Co-authored-by: Nicolas <bircni@icloud.com>
Reviewed-on: https://gitea.com/gitea/runner/pulls/959
Reviewed-by: Nicolas <bircni@icloud.com>
2026-05-12 18:50:25 +00:00
Renovate Bot
a1f13cb970 fix(deps): update module github.com/opencontainers/selinux to v1.14.1 (#955)
This PR contains the following updates:

| Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) |
|---|---|---|---|
| [github.com/opencontainers/selinux](https://github.com/opencontainers/selinux) | `v1.14.0` → `v1.14.1` | ![age](https://developer.mend.io/api/mc/badges/age/go/github.com%2fopencontainers%2fselinux/v1.14.1?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/go/github.com%2fopencontainers%2fselinux/v1.14.0/v1.14.1?slim=true) |

---

> ⚠️ **Warning**
>
> Some dependencies could not be looked up. Check the [Dependency Dashboard](issues/856) for more information.

---

### Configuration

📅 **Schedule**: (UTC)

- Branch creation
  - At any time (no schedule defined)
- Automerge
  - At any time (no schedule defined)

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

 - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Mend Renovate](https://github.com/renovatebot/renovate).

Reviewed-on: https://gitea.com/gitea/runner/pulls/955
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
Co-authored-by: Renovate Bot <renovate-bot@gitea.com>
Co-committed-by: Renovate Bot <renovate-bot@gitea.com>
2026-05-12 13:34:52 +00:00
silverwind
1e3ab0c40a fix(deps): update mergo to v1.0.2 (now dario.cat/mergo) (#954)
At v1.0.0 the `github.com/imdario/mergo` module was relocated to `dario.cat/mergo`, so a plain version bump (as in https://gitea.com/gitea/runner/pulls/951) leaves the import path pointing at the old, unmaintained location. This PR updates the import in `act/container/docker_run.go` and adjusts `go.mod` accordingly. The public API (`mergo.Merge`, `mergo.WithOverride`, `mergo.WithAppendSlice`) is unchanged.

Supersedes https://gitea.com/gitea/runner/pulls/951.

---
This PR was written with the help of Claude Opus 4.7

Reviewed-on: https://gitea.com/gitea/runner/pulls/954
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
2026-05-10 22:05:05 +00:00
silverwind
295eecb9af fix: ensure dbfs_data is cleaned up after task completion (#952)
Fixes #950.

After #819, the daemon flushes logs eagerly on the job-result entry (via the `stateNotify` path), so `Close()` typically runs `ReportLog(true)` with an empty buffer. Gitea's `UpdateLog` handler short-circuits on `len(Rows)==0` before honoring `NoMore`, so the final request never runs `TransferLogs` and `dbfs_data` rows leak. The server-side short-circuit has been latent since the original Actions implementation in 2023; #819 made it deterministically reachable.

Workaround: inject a sentinel row in `Close()` after the daemon has exited, so the final `UpdateLog` always carries at least one row. Injecting it only after the daemon has finished guarantees the sentinel can't be flushed before `ReportLog(true)` reads it.

https://github.com/go-gitea/gitea/pull/37631 drops the empty-rows short-circuit when `NoMore=true`; that would work with or without this PR.

Reviewed-on: https://gitea.com/gitea/runner/pulls/952
Reviewed-by: Nicolas <bircni@icloud.com>
Reviewed-by: Zettat123 <39446+zettat123@noreply.gitea.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
2026-05-10 20:55:57 +00:00
Zettat123
ef6ca957b5 fix(artifactcache): preserve cache key case to stop redundant uploads (#947)
## Summary

`artifactcache.Handler` was lowercasing cache keys before storing and returning them. This caused actions like `actions/setup-go` to treat every restore as a partial hit and re-upload the cache on every job run.

Similar issue: [act#2497](https://github.com/nektos/act/issues/2497)

## Root Cause

These actions build cache keys that include `RUNNER_OS` (e.g. `setup-go-Linux-x64-...`; see [setup-go/cache-restore.ts](78961f6f84/src/cache-restore.ts (L11-L51))). In `gitea/runner`, `RUNNER_OS` is always `Linux` by default (see https://gitea.com/gitea/runner/search?q=RUNNER_OS).

These actions decide whether to save the cache data in their post step using a **strict** `===` comparison between the primary key and the key returned from the runner; see [setup-go cache-save.ts](78961f6f84/src/cache-save.ts (L44-L86)).

|State | Value|
|--- | ---|
|CachePrimaryKey | `setup-go-Linux-x64-ubuntu-22.04-go-1.24.9-abc123` |
|CacheMatchedKey | `setup-go-linux-x64-ubuntu-22.04-go-1.24.9-abc123` |

Because the runner's cache server lowercased the stored key, the response carried `setup-go-linux-...` while the action's primary key was `setup-go-Linux-...`. Strict equality failed, so the actions re-uploaded the same data on every run, wasting disk and bandwidth; the duplicate blobs accumulate until GC.

https://gitea.com/gitea/runner/actions/runs/462560/jobs/737401#jobstep-2-15
![image.png](/attachments/d3487457-1d09-44b5-9937-a0b8cab1bcc5)

https://gitea.com/gitea/runner/actions/runs/462560/jobs/737401#jobstep-6-22
![image.png](/attachments/9217dc71-cb0c-456b-a516-0017458123c7)

## Fix
Drop the `strings.ToLower` calls in `find` and `reserve` so the original key case is preserved end-to-end. This fix will invalidate existing "case insensitive" keys.

## Notes

The [original act review](https://github.com/nektos/act/pull/1770/changes/BASE..d44b8d15649d9d09d1d891130b8f3962097a81f3#r1177624608) suggested making cache keys case-insensitive because `isExactKeyMatch` compares cache keys ignoring case. However, these actions (`setup-go` / `setup-node` / `setup-ruby`) compare with strict `===` rather than `isExactKeyMatch`.

---------

Co-authored-by: Nicolas <bircni@icloud.com>
Reviewed-on: https://gitea.com/gitea/runner/pulls/947
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-by: Nicolas <bircni@icloud.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2026-05-09 12:27:52 +00:00
Renovate Bot
8088df52b9 fix(deps): update module golang.org/x/term to v0.43.0 (#948)
This PR contains the following updates:

| Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) |
|---|---|---|---|
| [golang.org/x/term](https://pkg.go.dev/golang.org/x/term) | [`v0.42.0` → `v0.43.0`](https://cs.opensource.google/go/x/term/+/refs/tags/v0.42.0...refs/tags/v0.43.0) | ![age](https://developer.mend.io/api/mc/badges/age/go/golang.org%2fx%2fterm/v0.43.0?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/go/golang.org%2fx%2fterm/v0.42.0/v0.43.0?slim=true) |


Reviewed-on: https://gitea.com/gitea/runner/pulls/948
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
Co-authored-by: Renovate Bot <renovate-bot@gitea.com>
Co-committed-by: Renovate Bot <renovate-bot@gitea.com>
2026-05-09 05:28:44 +00:00
Nicolas
3ea7d39690 fix: overwrite read-only files when copying action directories (#942)
## Summary

  - `CopyCollector.WriteFile` now removes any existing destination file
    before writing, handling read-only modes (e.g. git pack files at
    `0444`) that cause `EACCES`/`ERROR_ACCESS_DENIED` on macOS and Windows.
  - Added `O_TRUNC` to the `OpenFile` flags as a safety net.

  ## Root cause

  When a composite action with a post step runs on a host runner,
  `runPostStep` calls `maybeCopyToActionDir`, which re-copies the action
  into `miscpath/act/actions/<name>/`. The first copy (main step) writes
  `.git/objects/pack/*.idx` at the destination with mode `0444` (as set
  by go-git). The second copy (post step) calls
  `os.OpenFile(dest, O_CREATE|O_WRONLY, …)` on that existing `0444` file,
  which fails immediately:

  - macOS: `open <path>: permission denied`
  - Windows: `open <path>: Access is denied`

Fixes: https://gitea.com/gitea/runner/issues/941
Fixes: https://gitea.com/gitea/runner/issues/876

---------

Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: silverwind <2021+silverwind@noreply.gitea.com>
Reviewed-on: https://gitea.com/gitea/runner/pulls/942
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
Co-authored-by: Nicolas <bircni@icloud.com>
Co-committed-by: Nicolas <bircni@icloud.com>
2026-05-08 04:11:42 +00:00
Schallbert
861d351845 add apparmor=rootlesskit in security_opt (#937)
Paste the `depends_on` chain from socket-runner-setup to runner-dind-setup, add `apparmor=rootlesskit` in `security_opt`, and add explanations for the elevated privileges.

---------

Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: silverwind <2021+silverwind@noreply.gitea.com>
Reviewed-on: https://gitea.com/gitea/runner/pulls/937
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
Co-authored-by: Schallbert <schallbert@mailbox.org>
Co-committed-by: Schallbert <schallbert@mailbox.org>
2026-05-07 21:20:33 +00:00
silverwind
cce8543d06 fix: serialize action-cache reads to prevent worktree race (#938)
`NewGitCloneExecutor` holds a per-directory mutex while it `git checkout --force`s a remote action into the shared `<ActionCacheDir>/<UsesHash>`, but four read sites ran unlocked:

- `maybeCopyToActionDir`'s tar walk via `JobContainer.CopyDir`
- `prepareActionExecutor`'s `readAction` parse of `action.yml`
- `newReusableWorkflowExecutor`'s `model.NewWorkflowPlanner` after `cloneRemoteReusableWorkflow` released its lock
- `execAsDocker` when `ActionCache == nil`: `docker build` walks `contextDir` for the daemon-side build context

When two matrix jobs share a `uses:`, a read interleaved with a peer's checkout produces partial state — observed as `Cannot find module .../dist/index.js` and `setup-uv` failing on a half-written `action.yml`.

Exports `acquireCloneLock` as `AcquireCloneLock` and takes it at all four sites. `container.ImageExistsLocally` / `NewDockerBuildExecutor` and `model.NewWorkflowPlanner` are indirected through package-level vars so the docker-action build path and the reusable-workflow read site are testable without a real daemon, mirroring `ContainerNewContainer`. Three regression tests cover the higher-risk sites (`maybeCopyToActionDir`, `execAsDocker`, `newReusableWorkflowExecutor`); each fails if its `AcquireCloneLock` is removed.
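The per-directory lock behind `AcquireCloneLock` can be sketched as a mutex map keyed by cache directory; this is a simplification of the real helper:

```go
package main

import (
	"fmt"
	"sync"
)

// Per-directory locks: the writer (git checkout --force) and all four
// read sites must serialize on the same <ActionCacheDir>/<UsesHash>
// directory, while unrelated directories remain fully concurrent.
var (
	mu    sync.Mutex
	locks = map[string]*sync.Mutex{}
)

// acquireCloneLock returns the matching release func, so call sites can
// simply `defer unlock()`.
func acquireCloneLock(dir string) (unlock func()) {
	mu.Lock()
	l, ok := locks[dir]
	if !ok {
		l = &sync.Mutex{}
		locks[dir] = l
	}
	mu.Unlock()
	l.Lock()
	return l.Unlock
}

func main() {
	var wg sync.WaitGroup
	state := ""
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			unlock := acquireCloneLock("cache/abc123")
			defer unlock()
			state = "partial"  // a peer reading here would see torn state...
			state = "complete" // ...but the lock makes the update atomic per holder
		}()
	}
	wg.Wait()
	fmt.Println(state) // complete
}
```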

Subsumed by https://gitea.com/gitea/runner/pulls/814 once that lands. Related: https://gitea.com/gitea/runner/pulls/930

---
This PR was written with the help of Claude Opus 4.7

---------

Co-authored-by: Nicolas <bircni@icloud.com>
Reviewed-on: https://gitea.com/gitea/runner/pulls/938
Reviewed-by: Nicolas <bircni@icloud.com>
Reviewed-by: Zettat123 <39446+zettat123@noreply.gitea.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
2026-05-07 19:57:04 +00:00
silverwind
75643645f0 feat: remove emojis from runner logging, add Starting job container group (#940)
Aligns runner log output more closely with `actions/runner`:

- Strip the whale, rocket, cloud, construction, chequered-flag, and exclamation-mark glyphs from log lines and drop the now-unused `logPrefix` constant.
- Reword `no outputs used step '%s'` → `No outputs registered for step '%s'` (the original was ungrammatical and inaccurate — it fires when `set-output` references an unknown step ID).
- Wrap the docker pull/network/create/start phase of job container startup in a `::group::Starting job container` / `::endgroup::` collapsible section, mirroring `actions/runner`. Since act drives Docker through the SDK rather than the CLI, we can't echo `##[command]/usr/bin/docker create ...` lines verbatim — instead the helper emits a summary inside the group:

  ```
  ::group::Starting job container
  image: <image>
  name: <container-name>
  network: <network-name>
  ::endgroup::
  ```

- Extracted the emit into a `printStartJobContainerGroup` helper (parallel to `printRunActionHeader` in `step_run.go`) and added a golden-style test `TestPrintStartJobContainerGroupGolden`.
- Drive-by: replace two remaining literal `"raw_output"` strings in `run_context.go` with the existing `rawOutputField` constant.

Closes #935

---
This PR was written with the help of Claude Opus 4.7

Reviewed-on: https://gitea.com/gitea/runner/pulls/940
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
2026-05-07 19:32:53 +00:00
Renovate Bot
dff63b3ecc fix(deps): update module github.com/go-git/go-git/v5 to v5.19.0 (#934)
This PR contains the following updates:

| Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) |
|---|---|---|---|
| [github.com/go-git/go-git/v5](https://github.com/go-git/go-git) | `v5.18.0` → `v5.19.0` | ![age](https://developer.mend.io/api/mc/badges/age/go/github.com%2fgo-git%2fgo-git%2fv5/v5.19.0?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/go/github.com%2fgo-git%2fgo-git%2fv5/v5.18.0/v5.19.0?slim=true) |


Reviewed-on: https://gitea.com/gitea/runner/pulls/934
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Renovate Bot <renovate-bot@gitea.com>
Co-committed-by: Renovate Bot <renovate-bot@gitea.com>
2026-05-07 01:27:20 +00:00
Renovate Bot
a5d9fe9651 fix(deps): update module github.com/opencontainers/selinux to v1.14.0 (#928)
This PR contains the following updates:

| Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) |
|---|---|---|---|
| [github.com/opencontainers/selinux](https://github.com/opencontainers/selinux) | `v1.13.1` → `v1.14.0` | ![age](https://developer.mend.io/api/mc/badges/age/go/github.com%2fopencontainers%2fselinux/v1.14.0?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/go/github.com%2fopencontainers%2fselinux/v1.13.1/v1.14.0?slim=true) |


Reviewed-on: https://gitea.com/gitea/runner/pulls/928
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
Co-authored-by: Renovate Bot <renovate-bot@gitea.com>
Co-committed-by: Renovate Bot <renovate-bot@gitea.com>
2026-05-07 01:27:06 +00:00
silverwind
d607f3b342 test: clean up dead/stale fixtures and bump test container images (#932)
Audit-driven cleanup of `act/` test fixtures. Three commits:

**1. Remove dead fixtures** — 12 fixture directories that no Go test references: `dir with spaces`, `environment-variables`, `issue-104`, `issue-122`, `issue-141`, `localdockerimagetest_`, `node`, `parallel`, `python`, `uses-composite-with-inputs`, `uses-composite-with-pre-and-post-steps`, `shells/custom` (under `act/runner/testdata/`), plus `act/artifactcache/testdata/example`.

**2. Collapse `actions/node{12,16,20}` to a single `actions/node24` fixture** — the trio dispatched through identical `IsNode()` code paths and exercised the container's node binary, not the `using:` string. Bumps bundled deps to current (`@actions/core@^3`, `@actions/github@^9`, `@vercel/ncc@^0.38.4`) — both runtime packages are now ESM-only, so `index.js` is rewritten to ESM and `"type": "module"` added. Drops committed `node_modules/` and `package-lock.json` (now gitignored locally; `dist/` continues to be ignored by the repo-root `.gitignore` as before). Reduces `local-action-js/push.yml` to a single `test-node24` job and bumps four other stale `using: node12/16` references in fixtures.

**3. Bump test container base images** to `node:24-bookworm-slim` / `node:24-bookworm` / `ubuntu:24.04`. Replaces `node:16-buster-slim`, `node:16-buster`, `node:12.20.1-buster-slim`, and the EOL `node:12-buster-slim` / `node:16-buster-slim` / `ubuntu:18.04` base images in `actions/{docker-local,docker-local-noargs,action1}/Dockerfile`.

The runner's model still accepts `using: node12/16/20` for third-party actions in the wild — those constants are untouched.

Fixes: https://gitea.com/gitea/runner/issues/931

---
This PR was written with the help of Claude Opus 4.7

Reviewed-on: https://gitea.com/gitea/runner/pulls/932
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
2026-05-07 00:11:56 +00:00
silverwind
5e59402fb2 fix: re-fetch cached reusable workflow on every run (#930)
`cloneIfRequired` only ran the underlying clone executor when the target directory was missing, so a reusable workflow referenced by a moving ref (`uses: org/repo/.gitea/workflows/wf.yml@master`) was cached forever after the first invocation — edits to the source file never propagated.

Always invoke `git.NewGitCloneExecutor`. It handles existing repositories via fetch + pull + hard-reset, so branch and tag refs are brought up to date on each run, matching GitHub Actions semantics.

Drops the global `executorLock` too: `NewGitCloneExecutor` already takes a per-directory lock via `acquireCloneLock`, so the outer mutex only added unnecessary serialization across unrelated reusable-workflow clones — worse now that every invocation runs the full fetch.

Includes a regression test that drives the wrapper against a local bare repo, pushes a new commit on `master` between two invocations, and asserts the cached workflow file reflects the new tip.

Fixes: https://github.com/go-gitea/gitea/issues/37483
Fixes: https://gitea.com/gitea/runner/issues/726
Related: https://github.com/go-gitea/gitea/issues/30543

Would be subsumed by https://gitea.com/gitea/runner/pulls/814 ("WIP: Introduce new action cache") once that lands.

---
This PR was written with the help of Claude Opus 4.7

Reviewed-on: https://gitea.com/gitea/runner/pulls/930
Reviewed-by: Zettat123 <39446+zettat123@noreply.gitea.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
2026-05-06 16:10:27 +00:00
Renovate Bot
dfeb463904 chore(deps): update docker docker tag to v29 (#924)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| docker | stage | major | `28-dind-rootless` → `29-dind-rootless` |
| docker | stage | major | `28-dind` → `29-dind` |


---------

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-on: https://gitea.com/gitea/runner/pulls/924
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
Co-authored-by: Renovate Bot <renovate-bot@gitea.com>
Co-committed-by: Renovate Bot <renovate-bot@gitea.com>
2026-05-05 23:14:36 +00:00
silverwind
594c9ade7c Align step failure log output with GitHub Actions (#927)
Fixes #926.

Before:

<img src="/attachments/a5ae9221-eee2-410a-964e-6103ce126df4" alt="image.png" width="400">

After:

<img width="400" alt="image.png" src="/attachments/2f2d67c4-6080-4ec3-9ae5-df33e6479920">

Also gets rid of a bunch of emojis in the logging and the obsolete link to `nektos/act`, and aligns some other error messages.

---
This PR was written with the help of Claude Opus 4.7

---------

Co-authored-by: Nicolas <bircni@icloud.com>
Reviewed-on: https://gitea.com/gitea/runner/pulls/927
Reviewed-by: Nicolas <bircni@icloud.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
2026-05-05 20:17:32 +00:00
Nicolas
2a4d56c650 feat: add startup janitor for stale bind-workdir task workspaces (#870)
- Add idle-time cleanup for stale bind-workdir task directories instead of cleaning them on the task execution path.
- Make cleanup behavior configurable with `runner.startup_cleanup_age` as the stale-age threshold (default: `24h`) and `runner.idle_cleanup_interval` as the idle cleanup cadence (default: `10m`).
- Restrict cleanup scope to numeric task directory names only, to avoid touching operator-managed folders.
- Document the cleanup settings in `config.example.yaml` and `README.md`.
- Add tests for stale-directory cleanup, idle cleanup throttling, and config default/override parsing.

## Why

When a runner or host crashes, normal per-task cleanup may not run, leaving stale task directories under the bind-workdir root. Running this cleanup only while the runner is idle recovers that disk space without adding overhead to active job execution.


---------

Co-authored-by: silverwind <me@silverwind.io>
Reviewed-on: https://gitea.com/gitea/runner/pulls/870
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
2026-05-05 20:11:44 +00:00
Nicolas
a22119cf88 fix(host): correct host workspace cleanup on Windows (#883)
## Summary
- Fix host-mode cleanup to remove the job **workspace** directory after a run (instead of leaving checkouts behind).
- On Windows, track step process PIDs and terminate remaining process trees during teardown before attempting workspace deletion (prevents file-lock failures).
- Skip workspace deletion when `bind_workdir` is enabled to avoid conflicting with runner-level task directory cleanup.

## Implementation details
- `HostEnvironment` now records PIDs for started commands and best-effort terminates them on Windows during `Remove()`.
- Workspace removal uses a small retry loop on Windows to handle transient locks.
- `BindWorkdir` is propagated into `HostEnvironment` so cleanup behavior matches runner configuration.

---------

Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: silverwind <2021+silverwind@noreply.gitea.com>
Reviewed-on: https://gitea.com/gitea/runner/pulls/883
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
2026-05-05 18:28:12 +00:00
Renovate Bot
b68ecf2580 chore(deps): update crazy-max/ghaction-import-gpg action to v7 (#923)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [crazy-max/ghaction-import-gpg](https://github.com/crazy-max/ghaction-import-gpg) | action | major | `v6` → `v7` |

---

### Release Notes

<details>
<summary>crazy-max/ghaction-import-gpg (crazy-max/ghaction-import-gpg)</summary>

### [`v7`](https://github.com/crazy-max/ghaction-import-gpg/compare/v7.0.0...v7.0.0)

[Compare Source](https://github.com/crazy-max/ghaction-import-gpg/compare/v7.0.0...v7.0.0)

### [`v7.0.0`](https://github.com/crazy-max/ghaction-import-gpg/releases/tag/v7.0.0)

[Compare Source](https://github.com/crazy-max/ghaction-import-gpg/compare/v6.3.0...v7.0.0)

- Node 24 as default runtime (requires [Actions Runner v2.327.1](https://github.com/actions/runner/releases/tag/v2.327.1) or later) by [@&#8203;crazy-max](https://github.com/crazy-max) in [#&#8203;241](https://github.com/crazy-max/ghaction-import-gpg/pull/241)
- Switch to ESM and update config/test wiring by [@&#8203;crazy-max](https://github.com/crazy-max) in [#&#8203;239](https://github.com/crazy-max/ghaction-import-gpg/pull/239)
- Bump [@&#8203;actions/core](https://github.com/actions/core) from 1.11.1 to 3.0.0 in [#&#8203;232](https://github.com/crazy-max/ghaction-import-gpg/pull/232)
- Bump [@&#8203;actions/exec](https://github.com/actions/exec) from 1.1.1 to 3.0.0 in [#&#8203;242](https://github.com/crazy-max/ghaction-import-gpg/pull/242)
- Bump brace-expansion from 1.1.11 to 1.1.12 in [#&#8203;221](https://github.com/crazy-max/ghaction-import-gpg/pull/221)
- Bump minimatch from 3.1.2 to 3.1.5 in [#&#8203;240](https://github.com/crazy-max/ghaction-import-gpg/pull/240)
- Bump openpgp from 6.1.0 to 6.3.0 in [#&#8203;233](https://github.com/crazy-max/ghaction-import-gpg/pull/233)

**Full Changelog**: <https://github.com/crazy-max/ghaction-import-gpg/compare/v6.3.0...v7.0.0>

### [`v6.3.0`](https://github.com/crazy-max/ghaction-import-gpg/releases/tag/v6.3.0)

[Compare Source](https://github.com/crazy-max/ghaction-import-gpg/compare/v6.2.0...v6.3.0)

- Bump openpgp from 5.11.2 to 6.1.0 in [#&#8203;215](https://github.com/crazy-max/ghaction-import-gpg/pull/215)
- Bump cross-spawn from 7.0.3 to 7.0.6 in [#&#8203;212](https://github.com/crazy-max/ghaction-import-gpg/pull/212)

**Full Changelog**: <https://github.com/crazy-max/ghaction-import-gpg/compare/v6.2.0...v6.3.0>

### [`v6.2.0`](https://github.com/crazy-max/ghaction-import-gpg/releases/tag/v6.2.0)

[Compare Source](https://github.com/crazy-max/ghaction-import-gpg/compare/v6.1.0...v6.2.0)

- Bump [@&#8203;actions/core](https://github.com/actions/core) from 1.10.1 to 1.11.1 in [#&#8203;209](https://github.com/crazy-max/ghaction-import-gpg/pull/209)
- Bump braces from 3.0.2 to 3.0.3 in [#&#8203;203](https://github.com/crazy-max/ghaction-import-gpg/pull/203)
- Bump ip from 2.0.0 to 2.0.1 in [#&#8203;196](https://github.com/crazy-max/ghaction-import-gpg/pull/196)
- Bump micromatch from 4.0.4 to 4.0.8 in [#&#8203;207](https://github.com/crazy-max/ghaction-import-gpg/pull/207)
- Bump openpgp from 5.11.0 to 5.11.2 in [#&#8203;205](https://github.com/crazy-max/ghaction-import-gpg/pull/205)
- Bump tar from 6.1.14 to 6.2.1 in [#&#8203;198](https://github.com/crazy-max/ghaction-import-gpg/pull/198)

**Full Changelog**: <https://github.com/crazy-max/ghaction-import-gpg/compare/v6.1.0...v6.2.0>

### [`v6.1.0`](https://github.com/crazy-max/ghaction-import-gpg/releases/tag/v6.1.0)

[Compare Source](https://github.com/crazy-max/ghaction-import-gpg/compare/v6...v6.1.0)

- Bump [@&#8203;actions/core](https://github.com/actions/core) from 1.10.0 to 1.10.1 in [#&#8203;186](https://github.com/crazy-max/ghaction-import-gpg/pull/186)
- Bump [@&#8203;babel/traverse](https://github.com/babel/traverse) from 7.17.3 to 7.23.2 in [#&#8203;191](https://github.com/crazy-max/ghaction-import-gpg/pull/191)
- Bump debug from 4.1.1 to 4.3.4 in [#&#8203;190](https://github.com/crazy-max/ghaction-import-gpg/pull/190)
- Bump openpgp from 5.10.1 to 5.11.0 in [#&#8203;192](https://github.com/crazy-max/ghaction-import-gpg/pull/192)

**Full Changelog**: <https://github.com/crazy-max/ghaction-import-gpg/compare/v6.0.0...v6.1.0>

</details>

---

### Configuration

📅 **Schedule**: (UTC)

- Branch creation
  - At any time (no schedule defined)
- Automerge
  - At any time (no schedule defined)

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

 - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Mend Renovate](https://github.com/renovatebot/renovate).

Reviewed-on: https://gitea.com/gitea/runner/pulls/923
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
Co-authored-by: Renovate Bot <renovate-bot@gitea.com>
Co-committed-by: Renovate Bot <renovate-bot@gitea.com>
2026-05-04 12:44:49 +00:00
Renovate Bot
d1434237c2 fix(deps): update module golang.org/x/term to v0.42.0 (#920)
This PR contains the following updates:

| Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) |
|---|---|---|---|
| [golang.org/x/term](https://pkg.go.dev/golang.org/x/term) | [`v0.41.0` → `v0.42.0`](https://cs.opensource.google/go/x/term/+/refs/tags/v0.41.0...refs/tags/v0.42.0) | ![age](https://developer.mend.io/api/mc/badges/age/go/golang.org%2fx%2fterm/v0.42.0?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/go/golang.org%2fx%2fterm/v0.41.0/v0.42.0?slim=true) |

---

### Configuration

📅 **Schedule**: (UTC)

- Branch creation
  - At any time (no schedule defined)
- Automerge
  - At any time (no schedule defined)

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

 - [x] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Mend Renovate](https://github.com/renovatebot/renovate).

Reviewed-on: https://gitea.com/gitea/runner/pulls/920
Reviewed-by: Nicolas <bircni@icloud.com>
Co-authored-by: Renovate Bot <renovate-bot@gitea.com>
Co-committed-by: Renovate Bot <renovate-bot@gitea.com>
2026-05-03 11:36:03 +00:00
Renovate Bot
35c65e2b14 chore(deps): update actions/hello-world-docker-action action to v2 (#921)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [actions/hello-world-docker-action](https://github.com/actions/hello-world-docker-action) | action | major | `v1` → `v2` |

---

### Release Notes

<details>
<summary>actions/hello-world-docker-action (actions/hello-world-docker-action)</summary>

### [`v2`](https://github.com/actions/hello-world-docker-action/releases/tag/v2): Version v2

[Compare Source](https://github.com/actions/hello-world-docker-action/compare/v1...v2)

Update action to use the new environment file method for setting outputs.

</details>

---

### Configuration

📅 **Schedule**: (UTC)

- Branch creation
  - At any time (no schedule defined)
- Automerge
  - At any time (no schedule defined)

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

 - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Mend Renovate](https://github.com/renovatebot/renovate).

Reviewed-on: https://gitea.com/gitea/runner/pulls/921
Reviewed-by: Nicolas <bircni@icloud.com>
Co-authored-by: Renovate Bot <renovate-bot@gitea.com>
Co-committed-by: Renovate Bot <renovate-bot@gitea.com>
2026-05-03 04:44:33 +00:00
Nicolas
c45a4e6d32 ci: Fix triggers (#882)
Previously, a workflow on a branch was triggered twice: once for the push and once because the branch is part of a PR.
Now it is triggered only on pull requests and on pushes to main.
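A minimal trigger block matching that behavior might look like the following (a sketch; the exact filters depend on the repository's workflow files):

```yaml
on:
  push:
    branches: [main]  # pushes run only on main
  pull_request:       # every PR update runs once
```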

Reviewed-on: https://gitea.com/gitea/runner/pulls/882
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Nicolas <bircni@icloud.com>
Co-committed-by: Nicolas <bircni@icloud.com>
2026-05-01 16:37:37 +00:00
Renovate Bot
68d9fc45c9 chore(deps): update dependency @vercel/ncc to ^0.38.0 (#881)
This PR contains the following updates:

| Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) |
|---|---|---|---|
| [@vercel/ncc](https://github.com/vercel/ncc) | [`^0.24.1` → `^0.38.0`](https://renovatebot.com/diffs/npm/@vercel%2fncc/0.24.1/0.38.4) | ![age](https://developer.mend.io/api/mc/badges/age/npm/@vercel%2fncc/0.38.4?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/@vercel%2fncc/0.24.1/0.38.4?slim=true) |

---

### Release Notes

<details>
<summary>vercel/ncc (@&#8203;vercel/ncc)</summary>

### [`v0.38.4`](https://github.com/vercel/ncc/releases/tag/0.38.4)

[Compare Source](https://github.com/vercel/ncc/compare/0.38.3...0.38.4)

##### Bug Fixes

- **cjs-build:** enable evaluating import.meta in cjs build ([#&#8203;1236](https://github.com/vercel/ncc/issues/1236)) ([e72d34d](e72d34d97e)), closes [vercel/ncc#897 (comment)](https://github.com/vercel/ncc/pull/897#discussion_r836916315) [#&#8203;1019](https://github.com/vercel/ncc/issues/1019)

### [`v0.38.3`](https://github.com/vercel/ncc/releases/tag/0.38.3)

[Compare Source](https://github.com/vercel/ncc/compare/0.38.2...0.38.3)

##### Bug Fixes

- add missing `--asset-builds` to cli help message ([#&#8203;1228](https://github.com/vercel/ncc/issues/1228)) ([84f8c52](84f8c52872))

### [`v0.38.2`](https://github.com/vercel/ncc/releases/tag/0.38.2)

[Compare Source](https://github.com/vercel/ncc/compare/0.38.1...0.38.2)

##### Bug Fixes

- **deps:** update webpack to v5.94.0, terser to v5.33.0 ([#&#8203;1213](https://github.com/vercel/ncc/issues/1213)) ([158a1fd](158a1fdcbc)), closes [#&#8203;1193](https://github.com/vercel/ncc/issues/1193) [#&#8203;1194](https://github.com/vercel/ncc/issues/1194) [#&#8203;1177](https://github.com/vercel/ncc/issues/1177) [#&#8203;1204](https://github.com/vercel/ncc/issues/1204) [#&#8203;1195](https://github.com/vercel/ncc/issues/1195)

Huge thanks to [@&#8203;theoludwig](https://github.com/theoludwig) 🎉

### [`v0.38.1`](https://github.com/vercel/ncc/releases/tag/0.38.1)

[Compare Source](https://github.com/vercel/ncc/compare/0.38.0...0.38.1)

##### Bug Fixes

- sourcemap sources removes webpack path ([#&#8203;1122](https://github.com/vercel/ncc/issues/1122)) ([ce5984e](ce5984e4b0)), closes [#&#8203;1011](https://github.com/vercel/ncc/issues/1011) [#&#8203;1121](https://github.com/vercel/ncc/issues/1121)

### [`v0.38.0`](https://github.com/vercel/ncc/releases/tag/0.38.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.37.0...0.38.0)

##### Features

- Log minification error when `--debug` ([#&#8203;1102](https://github.com/vercel/ncc/issues/1102)) ([e2779f4](e2779f4203))

### [`v0.37.0`](https://github.com/vercel/ncc/releases/tag/0.37.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.36.1...0.37.0)

##### Features

- add support for TypeScript 5.0's array extends in tsconfig ([#&#8203;1105](https://github.com/vercel/ncc/issues/1105)) ([f898f8e](f898f8ea85))

### [`v0.36.1`](https://github.com/vercel/ncc/releases/tag/0.36.1)

[Compare Source](https://github.com/vercel/ncc/compare/0.36.0...0.36.1)

##### Bug Fixes

- add missing pr title lint action ([#&#8203;1032](https://github.com/vercel/ncc/issues/1032)) ([c2d03cf](c2d03cf6db))
- add `ncc --version` and `ncc --help` ([#&#8203;1030](https://github.com/vercel/ncc/issues/1030)) ([d38b619](d38b619554))

### [`v0.36.0`](https://github.com/vercel/ncc/releases/tag/0.36.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.34.0...0.36.0)

##### Bug Fixes

- gitignore should include release.config.js ([#&#8203;1016](https://github.com/vercel/ncc/issues/1016)) ([44e2eac](44e2eac6c9))
- node 18 by update source-map used by Terser to 0.7.4 ([#&#8203;999](https://github.com/vercel/ncc/issues/999)) ([2f69f83](2f69f838aa))

##### Features

- add semantic-release to autopublish ([#&#8203;1015](https://github.com/vercel/ncc/issues/1015)) ([be3405d](be3405dbc3)), closes [#&#8203;1000](https://github.com/vercel/ncc/issues/1000)

### [`v0.34.0`](https://github.com/vercel/ncc/releases/tag/0.34.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.33.4...0.34.0)

##### Changes

Add support for TS 4.7

- Chore(deps-dev): bump ts-loader from 8.3.0 to 9.3.0: [#&#8203;921](https://github.com/vercel/ncc/issues/921)
- Chore(deps-dev): bump express from 4.17.1 to 4.18.1: [#&#8203;917](https://github.com/vercel/ncc/issues/917)
- Chore: add `memory-fs` to the devDependencies: [#&#8203;927](https://github.com/vercel/ncc/issues/927)

##### Credits

Huge thanks to [@&#8203;stscoundrel](https://github.com/stscoundrel) and [@&#8203;shogo82148](https://github.com/shogo82148) for helping!

### [`v0.33.4`](https://github.com/vercel/ncc/releases/tag/0.33.4)

[Compare Source](https://github.com/vercel/ncc/compare/0.33.3...0.33.4)

##### Changes

- Fix: Add missing variable declaration: [#&#8203;773](https://github.com/vercel/ncc/issues/773)
- Chore: add windows to CI: [#&#8203;896](https://github.com/vercel/ncc/issues/896)
- Chore: bump webpack-asset-relocator-loader to 1.7.2: [#&#8203;912](https://github.com/vercel/ncc/issues/912)
- Chore(deps-dev): bump vm2 from 3.9.4 to 3.9.6: [#&#8203;872](https://github.com/vercel/ncc/issues/872)
- Chore(deps): bump url-parse from 1.5.3 to 1.5.7: [#&#8203;875](https://github.com/vercel/ncc/issues/875)
- Chore(deps): bump url-parse from 1.5.7 to 1.5.10: [#&#8203;879](https://github.com/vercel/ncc/issues/879)
- Chore(deps-dev): bump stripe from 8.167.0 to 8.205.0: [#&#8203;882](https://github.com/vercel/ncc/issues/882)
- Chore(deps-dev): bump typescript from 4.4.2 to 4.6.2: [#&#8203;881](https://github.com/vercel/ncc/issues/881)
- Chore(deps-dev): bump twilio from 3.66.1 to 3.75.0: [#&#8203;884](https://github.com/vercel/ncc/issues/884)
- Chore(deps): bump actions/setup-node from 2 to 3: [#&#8203;880](https://github.com/vercel/ncc/issues/880)
- Chore(deps-dev): bump graphql from 15.5.1 to 15.8.0: [#&#8203;885](https://github.com/vercel/ncc/issues/885)
- Chore: replace deprecated String.prototype.substr(): [#&#8203;894](https://github.com/vercel/ncc/issues/894)
- Chore(deps): bump actions/checkout from 2 to 3: [#&#8203;902](https://github.com/vercel/ncc/issues/902)
- Chore(deps-dev): bump [@&#8203;azure/cosmos](https://github.com/azure/cosmos) from 3.12.3 to 3.15.1: [#&#8203;905](https://github.com/vercel/ncc/issues/905)
- Chore(deps-dev): bump stripe from 8.205.0 to 8.214.0: [#&#8203;906](https://github.com/vercel/ncc/issues/906)
- Chore(deps-dev): bump [@&#8203;google-cloud/bigquery](https://github.com/google-cloud/bigquery) from 5.7.0 to 5.12.0: [#&#8203;903](https://github.com/vercel/ncc/issues/903)
- Chore(deps-dev): bump tsconfig-paths from 3.10.1 to 3.14.1: [#&#8203;904](https://github.com/vercel/ncc/issues/904)

##### Credits

Huge thanks to [@&#8203;CommanderRoot](https://github.com/CommanderRoot) for helping!

### [`v0.33.3`](https://github.com/vercel/ncc/releases/tag/0.33.3)

[Compare Source](https://github.com/vercel/ncc/compare/0.33.2...0.33.3)

##### Patches

- Fix: bump license-webpack-plugin: [#&#8203;871](https://github.com/vercel/ncc/issues/871)
- Chore(deps): bump follow-redirects from 1.14.7 to 1.14.8: [#&#8203;870](https://github.com/vercel/ncc/issues/870)

### [`v0.33.2`](https://github.com/vercel/ncc/releases/tag/0.33.2)

[Compare Source](https://github.com/vercel/ncc/compare/0.33.1...0.33.2)

##### Patches

- Fix: use `sha256` instead of deprecated `md5` for hash algorithm: [#&#8203;868](https://github.com/vercel/ncc/issues/868)
- Fix: typo in build script: [#&#8203;835](https://github.com/vercel/ncc/issues/835)
- Chore(test) Add Node.js 16 to CI: [#&#8203;801](https://github.com/vercel/ncc/issues/801)
- Chore(deps): bump nodemailer from 6.5.0 to 6.7.2: [#&#8203;833](https://github.com/vercel/ncc/issues/833)
- Chore(deps-dev): bump terser from 5.7.1 to 5.10.0: [#&#8203;840](https://github.com/vercel/ncc/issues/840)
- Chore(deps-dev): bump passport from 0.4.1 to 0.5.2: [#&#8203;839](https://github.com/vercel/ncc/issues/839)
- Chore(deps-dev): bump sequelize from 6.6.5 to 6.12.4: [#&#8203;843](https://github.com/vercel/ncc/issues/843)
- Chore(deps-dev): bump analytics-node from 5.0.0 to 6.0.0: [#&#8203;838](https://github.com/vercel/ncc/issues/838)
- Chore(deps): bump follow-redirects from 1.14.5 to 1.14.7: [#&#8203;846](https://github.com/vercel/ncc/issues/846)
- Chore(deps): bump cached-path-relative from 1.0.2 to 1.1.0: [#&#8203;854](https://github.com/vercel/ncc/issues/854)
- Chore(deps-dev): bump license-webpack-plugin from 2.3.20 to 4.0.1: [#&#8203;859](https://github.com/vercel/ncc/issues/859)
- Chore(deps): bump simple-get from 3.1.0 to 3.1.1: [#&#8203;864](https://github.com/vercel/ncc/issues/864)
- Chore(deps-dev): bump aws-sdk from 2.1024.0 to 2.1068.0: [#&#8203;867](https://github.com/vercel/ncc/issues/867)

##### Credits

Huge thanks to [@&#8203;shakefu](https://github.com/shakefu) for helping!

### [`v0.33.1`](https://github.com/vercel/ncc/releases/tag/0.33.1)

[Compare Source](https://github.com/vercel/ncc/compare/0.33.0...0.33.1)

##### Patches

- Allow configuring mainFields for nccing browser modules: [#&#8203;832](https://github.com/vercel/ncc/issues/832)

### [`v0.33.0`](https://github.com/vercel/ncc/releases/tag/0.33.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.32.0...0.33.0)

##### Minor Changes

- Chore(deps-dev): bump [@&#8203;vercel/webpack-asset-relocator-loader](https://github.com/vercel/webpack-asset-relocator-loader): [#&#8203;826](https://github.com/vercel/ncc/issues/826)
- Fix: Fix source maps: [#&#8203;818](https://github.com/vercel/ncc/issues/818)
- Feat: Allow using matches from externals for regex matching: [#&#8203;825](https://github.com/vercel/ncc/issues/825)

##### Patches

- Chore(deps-dev): bump koa from 2.13.1 to 2.13.4: [#&#8203;822](https://github.com/vercel/ncc/issues/822)
- Chore(deps-dev): bump mariadb from 2.5.4 to 2.5.5: [#&#8203;823](https://github.com/vercel/ncc/issues/823)

##### Credits

Huge thanks to [@&#8203;fenix20113](https://github.com/fenix20113) for helping!

### [`v0.32.0`](https://github.com/vercel/ncc/releases/tag/0.32.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.31.1...0.32.0)

##### Changes

- Feat: bump to webpack\@&#8203;5.61.0: [#&#8203;809](https://github.com/vercel/ncc/issues/809)
- Docs: add debug command description: [#&#8203;800](https://github.com/vercel/ncc/issues/800)
- Chore(deps): bump object-path from 0.11.7 to 0.11.8: [#&#8203;778](https://github.com/vercel/ncc/issues/778)
- Chore(deps): bump tmpl from 1.0.4 to 1.0.5: [#&#8203;779](https://github.com/vercel/ncc/issues/779)
- Chore(deps-dev): bump vm2 from 3.9.3 to 3.9.4: [#&#8203;795](https://github.com/vercel/ncc/issues/795)
- Chore(deps-dev): bump axios from 0.21.1 to 0.21.2: [#&#8203;810](https://github.com/vercel/ncc/issues/810)
- Chore(deps-dev): bump aws-sdk from 2.958.0 to 2.1024.0: [#&#8203;812](https://github.com/vercel/ncc/issues/812)
- Chore(deps-dev): bump webpack from 5.61.0 to 5.62.1: [#&#8203;813](https://github.com/vercel/ncc/issues/813)
- Chore(deps): bump passport-oauth2 from 1.5.0 to 1.6.1: [#&#8203;811](https://github.com/vercel/ncc/issues/811)
- Chore(deps): bump url-parse from 1.5.1 to 1.5.3: [#&#8203;815](https://github.com/vercel/ncc/issues/815)

##### Credits

Huge thanks to [@&#8203;fireairforce](https://github.com/fireairforce) and [@&#8203;jesec](https://github.com/jesec) for helping!

### [`v0.31.1`](https://github.com/vercel/ncc/releases/tag/0.31.1)

[Compare Source](https://github.com/vercel/ncc/compare/0.31.0...0.31.1)

##### Patches

- Fix `tsconfig.json` detection: [#&#8203;770](https://github.com/vercel/ncc/issues/770)

### [`v0.31.0`](https://github.com/vercel/ncc/releases/tag/0.31.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.30.0...0.31.0)

##### Changes

- Fix `compilerOptions` from `tsconfig.json`: [#&#8203;766](https://github.com/vercel/ncc/issues/766)
- Bump typescript to 4.4.2: [#&#8203;767](https://github.com/vercel/ncc/issues/767)
- Chore(deps-dev): bump graceful-fs from 4.2.6 to 4.2.8: [#&#8203;761](https://github.com/vercel/ncc/issues/761)
- Chore(deps): bump tar from 4.4.15 to 4.4.19: [#&#8203;763](https://github.com/vercel/ncc/issues/763)
- Chore(deps): bump object-path from 0.11.5 to 0.11.7: [#&#8203;764](https://github.com/vercel/ncc/issues/764)

### [`v0.30.0`](https://github.com/vercel/ncc/releases/tag/0.30.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.29.2...0.30.0)

##### Changes

- Major: Change asset builds to opt-in with new option `--asset-builds`: [#&#8203;756](https://github.com/vercel/ncc/issues/756)
- Chore: bump typescript from 3.9.9 to 4.3.5: [#&#8203;739](https://github.com/vercel/ncc/issues/739)
- Chore: bump codecov to 3.8.3: [#&#8203;752](https://github.com/vercel/ncc/issues/752)

##### Description

Previously, `fs.readFile('./asset.js')` would compile `asset.js` instead of including it as an asset.

With this release, the default behavior has been changed to include `asset.js` as an asset only.

If you want the old behavior, you can use the `--asset-builds` option.

##### Credits

Huge thanks to [@&#8203;guybedford](https://github.com/guybedford) for helping!

### [`v0.29.2`](https://github.com/vercel/ncc/releases/tag/0.29.2)

[Compare Source](https://github.com/vercel/ncc/compare/0.29.1...0.29.2)

##### Patches

- Fix: ensure nested builds of `__nccwpck_require__`: [#&#8203;751](https://github.com/vercel/ncc/issues/751)

##### Credits

Huge thanks to [@&#8203;guybedford](https://github.com/guybedford) for helping!

### [`v0.29.1`](https://github.com/vercel/ncc/releases/tag/0.29.1)

[Compare Source](https://github.com/vercel/ncc/compare/0.29.0...0.29.1)

##### Patches

- Fix: add stringify-loader: [#&#8203;742](https://github.com/vercel/ncc/issues/742)
- Fix: package.json asset type module setting: [#&#8203;733](https://github.com/vercel/ncc/issues/733)
- Chore(deps): update dependencies: [#&#8203;736](https://github.com/vercel/ncc/issues/736)
- Chore(deps): bump tar from 4.4.13 to 4.4.15: [#&#8203;743](https://github.com/vercel/ncc/issues/743)
- Chore(deps): bump path-parse from 1.0.6 to 1.0.7: [#&#8203;745](https://github.com/vercel/ncc/issues/745)
- Chore(deps-dev): bump pdfkit from 0.12.1 to 0.12.3: [#&#8203;740](https://github.com/vercel/ncc/issues/740)

##### Credits

Huge thanks to [@&#8203;guybedford](https://github.com/guybedford), [@&#8203;mmorel-35](https://github.com/mmorel-35), and [@&#8203;jpcloureiro](https://github.com/jpcloureiro) for helping!

### [`v0.29.0`](https://github.com/vercel/ncc/releases/tag/0.29.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.28.6...0.29.0)

##### Changes

- Major: output ESM for `.mjs` or `type=module` builds: [#&#8203;720](https://github.com/vercel/ncc/issues/720)
- Feat: update to webpack-asset-relocator-loader\@&#8203;1.5.0: [#&#8203;718](https://github.com/vercel/ncc/issues/718)
- Feat: update to webpack\@&#8203;5.42.0: [#&#8203;723](https://github.com/vercel/ncc/issues/723)
- Feat: update to webpack\@&#8203;5.43.0: [#&#8203;724](https://github.com/vercel/ncc/issues/724)
- Chore: bump set-getter from 0.1.0 to 0.1.1: [#&#8203;719](https://github.com/vercel/ncc/issues/719)
- Fix: typo in readme: [#&#8203;712](https://github.com/vercel/ncc/issues/712)

##### Credits

Huge thanks to [@&#8203;rethab](https://github.com/rethab) and [@&#8203;guybedford](https://github.com/guybedford) for helping!

### [`v0.28.6`](https://github.com/vercel/ncc/releases/tag/0.28.6)

[Compare Source](https://github.com/vercel/ncc/compare/0.28.5...0.28.6)

##### Patches

- Fix: Update to webpack\@&#8203;5.36.2: [#&#8203;707](https://github.com/vercel/ncc/issues/707)
- Fix: Update ts-loader to fix webpack warning: [#&#8203;710](https://github.com/vercel/ncc/issues/710)
- Deps: Bump hosted-git-info from 2.8.8 to 2.8.9: [#&#8203;708](https://github.com/vercel/ncc/issues/708)

##### Credits

Huge thanks to [@&#8203;adriencohen](https://github.com/adriencohen) and [@&#8203;huozhi](https://github.com/huozhi) for helping!

### [`v0.28.5`](https://github.com/vercel/ncc/releases/tag/0.28.5)

[Compare Source](https://github.com/vercel/ncc/compare/0.28.4...0.28.5)

##### Patches

- Fix: handle terser error: [#&#8203;703](https://github.com/vercel/ncc/issues/703)
- Fix: treat compilation.errors as a set: [#&#8203;705](https://github.com/vercel/ncc/issues/705)
- Fix: unify `target` arg description, add `transpile-only` arg to readme: [#&#8203;702](https://github.com/vercel/ncc/issues/702)

##### Credits

Huge thanks to [@&#8203;guybedford](https://github.com/guybedford) and [@&#8203;Simek](https://github.com/Simek) for helping!

### [`v0.28.4`](https://github.com/vercel/ncc/releases/tag/0.28.4)

[Compare Source](https://github.com/vercel/ncc/compare/0.28.3...0.28.4)

##### Patches

- Fix: Adjust caching to use hashes: [#&#8203;698](https://github.com/vercel/ncc/issues/698)
- Fix: support top-level await: [#&#8203;700](https://github.com/vercel/ncc/issues/700)
- Fix: publish should build without cache: [#&#8203;701](https://github.com/vercel/ncc/issues/701)
- Chore: Bump redis from 2.8.0 to 3.1.1: [#&#8203;699](https://github.com/vercel/ncc/issues/699)
- Chore: Bump ssri from 6.0.1 to 6.0.2: [#&#8203;695](https://github.com/vercel/ncc/issues/695)
- Chore: rename master to main: [#&#8203;694](https://github.com/vercel/ncc/issues/694)

##### Credits

Huge thanks to [@&#8203;guybedford](https://github.com/guybedford) for helping!

### [`v0.28.3`](https://github.com/vercel/ncc/releases/tag/0.28.3)

[Compare Source](https://github.com/vercel/ncc/compare/0.28.2...0.28.3)

##### Patches

- Fix: lock license plugin version: [#&#8203;692](https://github.com/vercel/ncc/issues/692)

##### Credits

Huge thanks to [@&#8203;huozhi](https://github.com/huozhi) for helping!

### [`v0.28.2`](https://github.com/vercel/ncc/releases/tag/0.28.2)

[Compare Source](https://github.com/vercel/ncc/compare/0.28.1...0.28.2)

##### Patches

- Fix: unknown compiler option `incremental`: [#&#8203;685](https://github.com/vercel/ncc/issues/685)
- Fix: replace .npmignore with "files" prop: [#&#8203;688](https://github.com/vercel/ncc/issues/688)

##### Credits

Huge thanks to [@&#8203;Songkeys](https://github.com/Songkeys) for helping!

### [`v0.28.1`](https://github.com/vercel/ncc/releases/tag/0.28.1)

[Compare Source](https://github.com/vercel/ncc/compare/0.28.0...0.28.1)

##### Patches

- Fix: Rebuild bundle to fix [#&#8203;684](https://github.com/vercel/ncc/issues/684)
- Deps: Bump codecov to 3.8.1: [#&#8203;683](https://github.com/vercel/ncc/issues/683)

### [`v0.28.0`](https://github.com/vercel/ncc/releases/tag/0.28.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.27.0...0.28.0)

##### Minor Changes

- Feat: `exports` conditions semantics: [#&#8203;665](https://github.com/vercel/ncc/issues/665)
- Feat: `imports` support, webpack upgrade: [#&#8203;672](https://github.com/vercel/ncc/issues/672)
- Feat: Support Regexp externals: [#&#8203;654](https://github.com/vercel/ncc/issues/654)
- Fix: TS interop test and fix: [#&#8203;671](https://github.com/vercel/ncc/issues/671)
- Fix: Upgrade local TS: [#&#8203;674](https://github.com/vercel/ncc/issues/674)
- Chore: Interop test case: [#&#8203;667](https://github.com/vercel/ncc/issues/667)
- Chore: add console output when minifying fails: [#&#8203;648](https://github.com/vercel/ncc/issues/648)
- Chore: Add Node.js 14 to CI: [#&#8203;659](https://github.com/vercel/ncc/issues/659)
- Docs: Fix out \[file] → out \[dir]: [#&#8203;675](https://github.com/vercel/ncc/issues/675)
- Deps: Update to webpack\@&#8203;5.30.0: [#&#8203;681](https://github.com/vercel/ncc/issues/681)
- Deps: Update to webpack\@&#8203;5.26.3: [#&#8203;664](https://github.com/vercel/ncc/issues/664)
- Deps: Update to webpack\@&#8203;5.24.4: [#&#8203;658](https://github.com/vercel/ncc/issues/658)
- Deps: Update to webpack-asset-relocator-loader\@&#8203;1.3.0: [#&#8203;682](https://github.com/vercel/ncc/issues/682)
- Deps: Upgrade to webpack-asset-relocator-loader\@&#8203;1.2.4: [#&#8203;673](https://github.com/vercel/ncc/issues/673)
- Deps: Update to webpack-asset-relocator\@&#8203;1.2.3: [#&#8203;662](https://github.com/vercel/ncc/issues/662)
- Deps: Upgrade to terser\@&#8203;5.6.1: [#&#8203;669](https://github.com/vercel/ncc/issues/669)
- Deps: Bump socket.io from 2.2.0 to 2.4.0: [#&#8203;645](https://github.com/vercel/ncc/issues/645)
- Deps: Bump pug from 2.0.3 to 3.0.1: [#&#8203;656](https://github.com/vercel/ncc/issues/656)
- Deps: Bump elliptic from 6.5.3 to 6.5.4: [#&#8203;657](https://github.com/vercel/ncc/issues/657)
- Deps: Bump msgpack5 from 4.2.1 to 4.5.1: [#&#8203;660](https://github.com/vercel/ncc/issues/660)
- Deps: Bump y18n from 3.2.1 to 3.2.2: [#&#8203;678](https://github.com/vercel/ncc/issues/678)

##### Credits

Huge thanks to [@&#8203;guybedford](https://github.com/guybedford), [@&#8203;Songkeys](https://github.com/Songkeys), [@&#8203;adriencohen](https://github.com/adriencohen), and [@&#8203;huozhi](https://github.com/huozhi) for helping!

### [`v0.27.0`](https://github.com/vercel/ncc/releases/tag/0.27.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.26.2...0.27.0)

##### Changes

- Feat: support customEmit ncc option: [#&#8203;634](https://github.com/vercel/ncc/issues/634)
- Fix: correct declaration output dir: [#&#8203;627](https://github.com/vercel/ncc/issues/627)
- Update to webpack-asset-relocator\@&#8203;1.2.0: [#&#8203;640](https://github.com/vercel/ncc/issues/640)

##### Credits

Huge thanks to [@&#8203;guybedford](https://github.com/guybedford) and [@&#8203;zeroooooooo](https://github.com/zeroooooooo) for helping!

### [`v0.26.2`](https://github.com/vercel/ncc/releases/tag/0.26.2)

[Compare Source](https://github.com/vercel/ncc/compare/0.26.1...0.26.2)

##### Patches

- Enable minification for sourcemap-register.js: [#&#8203;631](https://github.com/vercel/ncc/issues/631)
- Avoid `__webpack_require__` conflicts: [#&#8203;633](https://github.com/vercel/ncc/issues/633)
- Bump axios from 0.18.1 to 0.21.1: [#&#8203;636](https://github.com/vercel/ncc/issues/636)
- Fix: skip typechecking on sub-builds: [#&#8203;637](https://github.com/vercel/ncc/issues/637)

##### Credits

Huge thanks to [@&#8203;xom9ikk](https://github.com/xom9ikk) and [@&#8203;guybedford](https://github.com/guybedford) for helping!

### [`v0.26.1`](https://github.com/vercel/ncc/releases/tag/0.26.1)

[Compare Source](https://github.com/vercel/ncc/compare/0.26.0...0.26.1)

##### Patches

- Ensure separate asset compilation states in subbuilds: [#&#8203;630](https://github.com/vercel/ncc/issues/630)

##### Credits

Huge thanks to [@&#8203;guybedford](https://github.com/guybedford) for helping!

### [`v0.26.0`](https://github.com/vercel/ncc/releases/tag/0.26.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.25.1...0.26.0)

##### Changes

- Asset subbundle builds: [#&#8203;625](https://github.com/vercel/ncc/issues/625)
- Bump ini from 1.3.5 to 1.3.7: [#&#8203;624](https://github.com/vercel/ncc/issues/624)
- Update example with missing TypeScript dependency: [#&#8203;623](https://github.com/vercel/ncc/issues/623)
- Update readme with missing TS and ES options: [#&#8203;615](https://github.com/vercel/ncc/issues/615)

##### Credits

Huge thanks to [@&#8203;restuwahyu13](https://github.com/restuwahyu13) and [@&#8203;guybedford](https://github.com/guybedford) for helping!

### [`v0.25.1`](https://github.com/vercel/ncc/releases/tag/0.25.1)

[Compare Source](https://github.com/vercel/ncc/compare/0.25.0...0.25.1)

##### Changes

- Allow passing `target`: [#&#8203;614](https://github.com/vercel/ncc/issues/614)

##### Credits

Huge thanks to [@&#8203;ijjk](https://github.com/ijjk) for helping!

### [`v0.25.0`](https://github.com/vercel/ncc/releases/tag/0.25.0)

[Compare Source](https://github.com/vercel/ncc/compare/0.24.1...0.25.0)

##### Changes

- Bump `webpack` from `5.0.0-beta.28` to `5.2.0`: [#&#8203;602](https://github.com/vercel/ncc/issues/602)
- Bump `npm-user-validate` from `1.0.0` to `1.0.1`: [#&#8203;604](https://github.com/vercel/ncc/issues/604)
- Bump `object-path` from `0.11.4` to `0.11.5`: [#&#8203;607](https://github.com/vercel/ncc/issues/607)
- Fix `--quiet` flag on ts build: [#&#8203;605](https://github.com/vercel/ncc/issues/605)
- Add integration test for `@slack/web-api`: [#&#8203;591](https://github.com/vercel/ncc/issues/591)

##### Credits

Huge thanks to [@&#8203;huozhi](https://github.com/huozhi) and [@&#8203;ataylorme](https://github.com/ataylorme) for helping!

</details>

---

### Configuration

📅 **Schedule**: (UTC)

- Branch creation
  - At any time (no schedule defined)
- Automerge
  - At any time (no schedule defined)

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

 - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Mend Renovate](https://github.com/renovatebot/renovate).

---------

Co-authored-by: Nicolas <bircni@icloud.com>
Reviewed-on: https://gitea.com/gitea/runner/pulls/881
Reviewed-by: Nicolas <bircni@icloud.com>
Co-authored-by: Renovate Bot <renovate-bot@gitea.com>
Co-committed-by: Renovate Bot <renovate-bot@gitea.com>
2026-05-01 09:00:34 +00:00
Renovate Bot
b1c873a66b chore(deps): update dependency @actions/core to v1.11.1 (#880)
This PR contains the following updates:

| Package | Change | [Age](https://docs.renovatebot.com/merge-confidence/) | [Confidence](https://docs.renovatebot.com/merge-confidence/) |
|---|---|---|---|
| [@actions/core](https://github.com/actions/toolkit/tree/main/packages/core) ([source](https://github.com/actions/toolkit/tree/HEAD/packages/core)) | [`1.10.0` → `1.11.1`](https://renovatebot.com/diffs/npm/@actions%2fcore/1.10.0/1.11.1) | ![age](https://developer.mend.io/api/mc/badges/age/npm/@actions%2fcore/1.11.1?slim=true) | ![confidence](https://developer.mend.io/api/mc/badges/confidence/npm/@actions%2fcore/1.10.0/1.11.1?slim=true) |

---

### Release Notes

<details>
<summary>actions/toolkit (@&#8203;actions/core)</summary>

### [`v1.11.1`](https://github.com/actions/toolkit/blob/HEAD/packages/core/RELEASES.md#1111)

- Fix uses of `crypto.randomUUID` on Node 18 and earlier [#&#8203;1842](https://github.com/actions/toolkit/pull/1842)

##### 1.11.0

- Add platform info utilities [#&#8203;1551](https://github.com/actions/toolkit/pull/1551)
- Remove dependency on `uuid` package [#&#8203;1824](https://github.com/actions/toolkit/pull/1824)

##### 1.10.1

- Fix error message reference in oidc utils [#&#8203;1511](https://github.com/actions/toolkit/pull/1511)

##### 1.10.0

- `saveState` and `setOutput` now use environment files if available [#&#8203;1178](https://github.com/actions/toolkit/pull/1178)
- `getMultilineInput` now correctly trims whitespace by default [#&#8203;1185](https://github.com/actions/toolkit/pull/1185)

##### 1.9.1

- Randomize delimiter when calling `core.exportVariable`

##### 1.9.0

- Added `toPosixPath`, `toWin32Path` and `toPlatformPath` utilities [#&#8203;1102](https://github.com/actions/toolkit/pull/1102)

##### 1.8.2

- Update to v2.0.1 of `@actions/http-client` [#&#8203;1087](https://github.com/actions/toolkit/pull/1087)

##### 1.8.1

- Update to v2.0.0 of `@actions/http-client`

##### 1.8.0

- Deprecate `markdownSummary` extension export in favor of `summary`
  - [#&#8203;1072](https://github.com/actions/toolkit/pull/1072)
  - [#&#8203;1073](https://github.com/actions/toolkit/pull/1073)

##### 1.7.0

- [Added `markdownSummary` extension](https://github.com/actions/toolkit/pull/1014)

##### 1.6.0

- [Added OIDC Client function `getIDToken`](https://github.com/actions/toolkit/pull/919)
- [Added `file` parameter to `AnnotationProperties`](https://github.com/actions/toolkit/pull/896)

##### 1.5.0

- [Added support for notice annotations and more annotation fields](https://github.com/actions/toolkit/pull/855)

##### 1.4.0

- [Added the `getMultilineInput` function](https://github.com/actions/toolkit/pull/829)

##### 1.3.0

- [Added the trimWhitespace option to getInput](https://github.com/actions/toolkit/pull/802)
- [Added the getBooleanInput function](https://github.com/actions/toolkit/pull/725)

##### 1.2.7

- [Prepend newline for set-output](https://github.com/actions/toolkit/pull/772)

##### 1.2.6

- [Update `exportVariable` and `addPath` to use environment files](https://github.com/actions/toolkit/pull/571)

##### 1.2.5

- [Correctly bundle License File with package](https://github.com/actions/toolkit/pull/548)

##### 1.2.4

- [Be more lenient in accepting non-string command inputs](https://github.com/actions/toolkit/pull/405)
- [Add Echo commands](https://github.com/actions/toolkit/pull/411)

##### 1.2.3

- [IsDebug logging](README.md#logging)

##### 1.2.2

- [Fix escaping for runner commands](https://github.com/actions/toolkit/pull/302)

##### 1.2.1

- [Remove trailing comma from commands](https://github.com/actions/toolkit/pull/263)
- [Add "types" to package.json](https://github.com/actions/toolkit/pull/221)

##### 1.2.0

- saveState and getState functions for wrapper tasks (on finally entry points that run post job)

##### 1.1.3

- setSecret added to register a secret with the runner to be masked from the logs
- `exportSecret`, which was not implemented and never worked, was removed after clarification from product.

##### 1.1.1

- Add support for action input variables with multiple spaces [#&#8203;127](https://github.com/actions/toolkit/issues/127)
- Switched ## commands to :: commands (should have no noticeable impact) ([#&#8203;110](https://github.com/actions/toolkit/pull/110))

##### 1.1.0

- Added helpers for `group` and `endgroup` [#&#8203;98](https://github.com/actions/toolkit/pull/98)

##### 1.0.0

- Initial release

### [`v1.11.0`](https://github.com/actions/toolkit/blob/HEAD/packages/core/RELEASES.md#1110)

- Add platform info utilities [#&#8203;1551](https://github.com/actions/toolkit/pull/1551)
- Remove dependency on `uuid` package [#&#8203;1824](https://github.com/actions/toolkit/pull/1824)

### [`v1.10.1`](https://github.com/actions/toolkit/blob/HEAD/packages/core/RELEASES.md#1101)

- Fix error message reference in oidc utils [#&#8203;1511](https://github.com/actions/toolkit/pull/1511)

</details>

---

### Configuration

📅 **Schedule**: (UTC)

- Branch creation
  - At any time (no schedule defined)
- Automerge
  - At any time (no schedule defined)

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

 - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Mend Renovate](https://github.com/renovatebot/renovate).

Reviewed-on: https://gitea.com/gitea/runner/pulls/880
Reviewed-by: Nicolas <bircni@icloud.com>
Co-authored-by: Renovate Bot <renovate-bot@gitea.com>
Co-committed-by: Renovate Bot <renovate-bot@gitea.com>
2026-05-01 08:48:49 +00:00
1129 changed files with 2611 additions and 260613 deletions

View File

@@ -24,7 +24,7 @@ jobs:
with:
go-version-file: "go.mod"
- name: goreleaser
- uses: goreleaser/goreleaser-action@v6
+ uses: goreleaser/goreleaser-action@v7
with:
distribution: goreleaser-pro
args: release --nightly
@@ -57,13 +57,13 @@ jobs:
fetch-depth: 0 # all history for all branches and tags
- name: Set up QEMU
- uses: docker/setup-qemu-action@v3
+ uses: docker/setup-qemu-action@v4
- name: Set up Docker BuildX
- uses: docker/setup-buildx-action@v3
+ uses: docker/setup-buildx-action@v4
- name: Login to DockerHub
- uses: docker/login-action@v3
+ uses: docker/login-action@v4
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
@@ -72,7 +72,7 @@ jobs:
run: echo "${{ env.DOCKER_ORG }}/runner:nightly${{ matrix.variant.tag_suffix }}"
- name: Build and push
- uses: docker/build-push-action@v6
+ uses: docker/build-push-action@v7
with:
context: .
file: ./Dockerfile

View File

@@ -17,13 +17,13 @@ jobs:
go-version-file: "go.mod"
- name: Import GPG key
id: import_gpg
- uses: crazy-max/ghaction-import-gpg@v6
+ uses: crazy-max/ghaction-import-gpg@v7
with:
gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }}
passphrase: ${{ secrets.PASSPHRASE }}
fingerprint: CC64B1DB67ABBEECAB24B6455FC346329753F4B0
- name: goreleaser
- uses: goreleaser/goreleaser-action@v6
+ uses: goreleaser/goreleaser-action@v7
with:
distribution: goreleaser-pro
args: release
@@ -60,20 +60,20 @@ jobs:
fetch-depth: 0 # all history for all branches and tags
- name: Set up QEMU
- uses: docker/setup-qemu-action@v3
+ uses: docker/setup-qemu-action@v4
- name: Set up Docker BuildX
- uses: docker/setup-buildx-action@v3
+ uses: docker/setup-buildx-action@v4
- name: Login to DockerHub
- uses: docker/login-action@v3
+ uses: docker/login-action@v4
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: "Docker meta"
id: docker_meta
- uses: https://github.com/docker/metadata-action@v5
+ uses: docker/metadata-action@v6
with:
images: |
${{ env.DOCKER_ORG }}/runner
@@ -86,7 +86,7 @@ jobs:
suffix=${{ matrix.variant.tag_suffix }},onlatest=true
- name: Build and push
- uses: docker/build-push-action@v6
+ uses: docker/build-push-action@v7
with:
context: .
file: ./Dockerfile

View File

@@ -1,7 +1,9 @@
name: checks
on:
- - push
- - pull_request
+ push:
+   branches:
+     - main
+ pull_request:
jobs:
lint:
@@ -17,4 +19,4 @@ jobs:
- name: build
run: make build
- name: test
run: make test
run: make test

View File

@@ -17,7 +17,7 @@ RUN make clean && make build
### DIND VARIANT
#
#
- FROM docker:28-dind AS dind
+ FROM docker:29-dind AS dind
RUN apk add --no-cache s6 bash git tzdata
@@ -32,7 +32,7 @@ ENTRYPOINT ["s6-svscan","/etc/s6"]
### DIND-ROOTLESS VARIANT
#
#
- FROM docker:28-dind-rootless AS dind-rootless
+ FROM docker:29-dind-rootless AS dind-rootless
USER root
RUN apk add --no-cache s6 bash git tzdata

View File

@@ -18,8 +18,8 @@ DOCKER_TAG ?= nightly
DOCKER_REF := $(DOCKER_IMAGE):$(DOCKER_TAG)
DOCKER_ROOTLESS_REF := $(DOCKER_IMAGE):$(DOCKER_TAG)-dind-rootless
- GOLANGCI_LINT_PACKAGE ?= github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.11.4
- GOVULNCHECK_PACKAGE ?= golang.org/x/vuln/cmd/govulncheck@v1
+ GOLANGCI_LINT_PACKAGE ?= github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.12.2
+ GOVULNCHECK_PACKAGE ?= golang.org/x/vuln/cmd/govulncheck@v1.3.0
STATIC ?=
EXTLDFLAGS ?=

View File

@@ -132,6 +132,12 @@ Besides `GITEA_INSTANCE_URL` and `GITEA_RUNNER_REGISTRATION_TOKEN`, the image en
For a fuller container-oriented walkthrough, see [examples/docker](examples/docker/README.md).
+ When `container.bind_workdir` is enabled, stale task workspace directories can be cleaned while the runner is idle:
+ - directories older than `runner.workdir_cleanup_age` are removed (default: `24h`; set `0` to disable)
+ - cleanup runs every `runner.idle_cleanup_interval` (default: `10m`; set `0` to disable)
+ - only purely numeric subdirectories under `container.workdir_parent` are treated as task workspaces and may be removed
+ - cleanup assumes `container.workdir_parent` is not shared across multiple runners
### Example Deployments
Check out the [examples](examples) directory for sample deployment types.
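The cleanup knobs described in this hunk map onto the runner's YAML config; a hedged sketch (key paths follow the dotted names in the bullets, values are the stated defaults, and the `workdir_parent` value is illustrative, not a confirmed default):

```yaml
# Hypothetical excerpt of the runner config file.
container:
  bind_workdir: true
  workdir_parent: workspace   # illustrative; numeric subdirs here are treated as task workspaces
runner:
  workdir_cleanup_age: 24h    # remove task workdirs older than this; 0 disables
  idle_cleanup_interval: 10m  # how often cleanup runs while idle; 0 disables
```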

View File

@@ -325,10 +325,6 @@ func (h *Handler) openDB() (*bolthold.Store, error) {
func (h *Handler) find(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
cred := credFromContext(r.Context())
keys := strings.Split(r.URL.Query().Get("keys"), ",")
- // cache keys are case insensitive
- for i, key := range keys {
- keys[i] = strings.ToLower(key)
- }
version := r.URL.Query().Get("version")
db, err := h.openDB()
@@ -371,8 +367,6 @@ func (h *Handler) reserve(w http.ResponseWriter, r *http.Request, _ httprouter.P
h.responseJSON(w, r, 400, err)
return
}
- // cache keys are case insensitive
- api.Key = strings.ToLower(api.Key)
cache := api.ToCache()
cache.Repo = cred.Repo

View File

@@ -459,17 +459,20 @@ func TestHandler(t *testing.T) {
assert.Equal(t, contents[except], content)
})
- t.Run("case insensitive", func(t *testing.T) {
+ t.Run("case preserved", func(t *testing.T) {
+ // Some actions (e.g. actions/setup-go, actions/setup-node) build cache keys that contain mixed-case fragments such as RUNNER_OS=Linux,
+ // then compare the cacheKey returned by the cache server to their original key with case-sensitive equality to decide whether the
+ // cache was a complete hit. The server must therefore preserve the original key case.
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
- key := strings.ToLower(t.Name())
+ key := strings.ToLower(t.Name()) + "_ABC"
content := make([]byte, 100)
_, err := rand.Read(content)
require.NoError(t, err)
- uploadCacheNormally(t, base, key+"_ABC", version, content)
+ uploadCacheNormally(t, base, key, version, content)
{
- reqKey := key + "_aBc"
- resp, err := testClient.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKey, version))
+ resp, err := testClient.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, key, version))
require.NoError(t, err)
defer resp.Body.Close()
require.Equal(t, 200, resp.StatusCode)
@@ -480,7 +483,8 @@ func TestHandler(t *testing.T) {
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, "hit", got.Result)
- assert.Equal(t, key+"_abc", got.CacheKey)
+ assert.Equal(t, key, got.CacheKey)
+ assert.NotEqual(t, strings.ToLower(key), got.CacheKey)
}
})
@@ -643,7 +647,7 @@ func uploadCacheNormally(t *testing.T, base, key, version string, content []byte
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, "hit", got.Result)
- assert.Equal(t, strings.ToLower(key), got.CacheKey)
+ assert.Equal(t, key, got.CacheKey)
archiveLocation = got.ArchiveLocation
}
{

View File

@@ -1,30 +0,0 @@
# Copied from https://github.com/actions/cache#example-cache-workflow
name: Caching Primes
on: push
jobs:
build:
runs-on: ubuntu-latest
steps:
- run: env
- uses: actions/checkout@v3
- name: Cache Primes
id: cache-primes
uses: actions/cache@v3
with:
path: prime-numbers
key: ${{ runner.os }}-primes-${{ github.run_id }}
restore-keys: |
${{ runner.os }}-primes
${{ runner.os }}
- name: Generate Prime Numbers
if: steps.cache-primes.outputs.cache-hit != 'true'
run: cat /proc/sys/kernel/random/uuid > prime-numbers
- name: Use Prime Numbers
run: cat prime-numbers

View File

@@ -260,7 +260,7 @@ func TestArtifactFlow(t *testing.T) {
defer cancel()
platforms := map[string]string{
- "ubuntu-latest": "node:16-buster", // Don't use node:16-buster-slim because it doesn't have curl command, which is used in the tests
+ "ubuntu-latest": "node:24-bookworm", // Don't use node:24-bookworm-slim because it doesn't have curl command, which is used in the tests
}
tables := []TestJobFileInfo{

View File

@@ -4,6 +4,8 @@
package common
+ import "slices"
// CartesianProduct takes map of lists and returns list of unique tuples
func CartesianProduct(mapOfLists map[string][]any) []map[string]any {
listNames := make([]string, 0)
@@ -46,7 +48,7 @@ func cartN(a ...[]any) [][]any {
for j, n := range n {
pi[j] = a[j][n]
}
- for j := len(n) - 1; j >= 0; j-- {
+ for j := range slices.Backward(n) {
n[j]++
if n[j] < len(a[j]) {
break

View File

@@ -38,9 +38,11 @@ var (
ErrNoRepo = errors.New("unable to find git repo")
)
- // acquireCloneLock returns an unlock function after locking the per-directory mutex for dir.
- // Only concurrent operations targeting the same directory are erialized; clones into different directories run in parallel.
- func acquireCloneLock(dir string) func() {
+ // AcquireCloneLock returns an unlock function after locking the per-directory mutex for dir.
+ // Only concurrent operations targeting the same directory are serialized; clones into different directories run in parallel.
+ // Callers reading files inside dir (e.g. tarring a checked-out action into a job container) must hold this lock too,
+ // otherwise a concurrent NewGitCloneExecutor on the same dir can mutate the worktree mid-read.
+ func AcquireCloneLock(dir string) func() {
v, _ := cloneLocks.LoadOrStore(dir, &sync.Mutex{})
mu := v.(*sync.Mutex)
mu.Lock()
@@ -305,10 +307,10 @@ func gitOptions(token string) (fetchOptions git.FetchOptions, pullOptions git.Pu
func NewGitCloneExecutor(input NewGitCloneExecutorInput) common.Executor {
return func(ctx context.Context) error {
logger := common.Logger(ctx)
- logger.Infof(" \u2601 git clone '%s' # ref=%s", input.URL, input.Ref)
+ logger.Infof("git clone '%s' # ref=%s", input.URL, input.Ref)
logger.Debugf(" cloning %s to %s", input.URL, input.Dir)
- defer acquireCloneLock(input.Dir)()
+ defer AcquireCloneLock(input.Dir)()
refName := plumbing.ReferenceName("refs/heads/" + input.Ref)
r, err := CloneIfRequired(ctx, refName, input, logger)

View File

@@ -310,11 +310,11 @@ func TestAcquireCloneLock(t *testing.T) {
t.Run("same directory serializes", func(t *testing.T) {
dir := t.TempDir()
- unlock1 := acquireCloneLock(dir)
+ unlock1 := AcquireCloneLock(dir)
secondAcquired := make(chan struct{})
go func() {
- unlock := acquireCloneLock(dir)
+ unlock := AcquireCloneLock(dir)
close(secondAcquired)
unlock()
}()
@@ -338,12 +338,12 @@ func TestAcquireCloneLock(t *testing.T) {
dirA := t.TempDir()
dirB := t.TempDir()
- unlockA := acquireCloneLock(dirA)
+ unlockA := AcquireCloneLock(dirA)
defer unlockA()
done := make(chan struct{})
go func() {
- unlock := acquireCloneLock(dirB)
+ unlock := AcquireCloneLock(dirB)
unlock()
close(done)
}()

View File

@@ -6,6 +6,7 @@ package container
import (
"context"
"fmt"
"io"
"gitea.com/gitea/runner/act/common"
@@ -13,6 +14,13 @@ import (
"github.com/docker/go-connections/nat"
)
+ // ExitCodeError reports a non-zero process exit code from a container command.
+ type ExitCodeError int
+ func (e ExitCodeError) Error() string {
+ return fmt.Sprintf("Process completed with exit code %d.", int(e))
+ }
// NewContainerInput the input for the New function
type NewContainerInput struct {
Image string

View File

@@ -8,34 +8,38 @@ package container
import (
"context"
"strings"
"gitea.com/gitea/runner/act/common"
"github.com/distribution/reference"
"github.com/docker/cli/cli/config"
"github.com/docker/cli/cli/config/credentials"
- "github.com/docker/docker/api/types/registry"
+ "github.com/moby/moby/api/types/registry"
)
func LoadDockerAuthConfig(ctx context.Context, image string) (registry.AuthConfig, error) {
logger := common.Logger(ctx)
- config, err := config.Load(config.Dir())
+ // config.LoadDefaultConfigFile panics on nil io.Writer when the config
+ // file is malformed; use config.Load to route errors through the logger.
+ cfg, err := config.Load(config.Dir())
if err != nil {
logger.Warnf("Could not load docker config: %v", err)
return registry.AuthConfig{}, err
}
- if !config.ContainsAuth() {
- config.CredentialsStore = credentials.DetectDefaultStore(config.CredentialsStore)
+ if !cfg.ContainsAuth() {
+ cfg.CredentialsStore = credentials.DetectDefaultStore(cfg.CredentialsStore)
}
- hostName := "index.docker.io"
- index := strings.IndexRune(image, '/')
- if index > -1 && (strings.ContainsAny(image[:index], ".:") || image[:index] == "localhost") {
- hostName = image[:index]
+ registryKey := registryAuthConfigKey("docker.io")
+ if image != "" {
+ if registryRef, refErr := reference.ParseNormalizedNamed(image); refErr != nil {
+ logger.Warnf("Could not normalize image reference: %v", refErr)
+ } else {
+ registryKey = registryAuthConfigKey(reference.Domain(registryRef))
+ }
+ }
- authConfig, err := config.GetAuthConfig(hostName)
+ authConfig, err := cfg.GetAuthConfig(registryKey)
if err != nil {
logger.Warnf("Could not get auth config from docker config: %v", err)
return registry.AuthConfig{}, err
@@ -46,17 +50,20 @@ func LoadDockerAuthConfig(ctx context.Context, image string) (registry.AuthConfi
func LoadDockerAuthConfigs(ctx context.Context) map[string]registry.AuthConfig {
logger := common.Logger(ctx)
- config, err := config.Load(config.Dir())
+ cfg, err := config.Load(config.Dir())
if err != nil {
logger.Warnf("Could not load docker config: %v", err)
return nil
}
- if !config.ContainsAuth() {
- config.CredentialsStore = credentials.DetectDefaultStore(config.CredentialsStore)
+ if !cfg.ContainsAuth() {
+ cfg.CredentialsStore = credentials.DetectDefaultStore(cfg.CredentialsStore)
}
- creds, _ := config.GetAllCredentials()
+ creds, err := cfg.GetAllCredentials()
+ if err != nil {
+ logger.Warnf("Could not get docker auth configs: %v", err)
+ return nil
+ }
authConfigs := make(map[string]registry.AuthConfig, len(creds))
for k, v := range creds {
authConfigs[k] = registry.AuthConfig(v)
@@ -64,3 +71,10 @@ func LoadDockerAuthConfigs(ctx context.Context) map[string]registry.AuthConfig {
return authConfigs
}
+ func registryAuthConfigKey(domainName string) string {
+ if domainName == "docker.io" || domainName == "index.docker.io" {
+ return "https://index.docker.io/v1/"
+ }
+ return domainName
+ }

View File

@@ -14,10 +14,12 @@ import (
"gitea.com/gitea/runner/act/common"
"github.com/docker/docker/api/types"
"github.com/docker/docker/pkg/archive"
"github.com/moby/buildkit/frontend/dockerfile/dockerignore"
"github.com/moby/go-archive"
"github.com/moby/go-archive/compression"
"github.com/moby/moby/client"
"github.com/moby/patternmatcher"
"github.com/moby/patternmatcher/ignorefile"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// NewDockerBuildExecutor function to create a run executor for the container
@@ -25,9 +27,9 @@ func NewDockerBuildExecutor(input NewDockerBuildExecutorInput) common.Executor {
return func(ctx context.Context) error {
logger := common.Logger(ctx)
if input.Platform != "" {
- logger.Infof("%sdocker build -t %s --platform %s %s", logPrefix, input.ImageTag, input.Platform, input.ContextDir)
+ logger.Infof("docker build -t %s --platform %s %s", input.ImageTag, input.Platform, input.ContextDir)
} else {
- logger.Infof("%sdocker build -t %s %s", logPrefix, input.ImageTag, input.ContextDir)
+ logger.Infof("docker build -t %s %s", input.ImageTag, input.ContextDir)
}
if common.Dryrun(ctx) {
return nil
@@ -42,13 +44,19 @@ func NewDockerBuildExecutor(input NewDockerBuildExecutorInput) common.Executor {
logger.Debugf("Building image from '%v'", input.ContextDir)
tags := []string{input.ImageTag}
- options := types.ImageBuildOptions{
+ options := client.ImageBuildOptions{
Tags: tags,
Remove: true,
Platform: input.Platform,
AuthConfigs: LoadDockerAuthConfigs(ctx),
Dockerfile: input.Dockerfile,
}
+ platform, err := parsePlatform(input.Platform)
+ if err != nil {
+ return err
+ }
+ if platform != nil {
+ options.Platforms = []specs.Platform{*platform}
+ }
var buildContext io.ReadCloser
if input.BuildContext != nil {
buildContext = io.NopCloser(input.BuildContext)
@@ -76,7 +84,7 @@ func createBuildContext(ctx context.Context, contextDir, relDockerfile string) (
common.Logger(ctx).Debugf("Creating archive for build context dir '%s' with relative dockerfile '%s'", contextDir, relDockerfile)
// And canonicalize dockerfile name to a platform-independent one
- relDockerfile = archive.CanonicalTarNameForPath(relDockerfile)
+ relDockerfile = filepath.ToSlash(relDockerfile)
f, err := os.Open(filepath.Join(contextDir, ".dockerignore"))
if err != nil && !os.IsNotExist(err) {
@@ -86,7 +94,7 @@ func createBuildContext(ctx context.Context, contextDir, relDockerfile string) (
var excludes []string
if err == nil {
- excludes, err = dockerignore.ReadAll(f) //nolint:staticcheck // pre-existing issue from nektos/act
+ excludes, err = ignorefile.ReadAll(f)
if err != nil {
return nil, err
}
@@ -106,9 +114,8 @@ func createBuildContext(ctx context.Context, contextDir, relDockerfile string) (
includes = append(includes, ".dockerignore", relDockerfile)
}
- compression := archive.Uncompressed
buildCtx, err := archive.TarWithOptions(contextDir, &archive.TarOptions{
- Compression: compression,
+ Compression: compression.None,
ExcludePatterns: excludes,
IncludeFiles: includes,
})

File diff suppressed because it is too large Load Diff

View File

@@ -16,15 +16,18 @@ package container
import (
"fmt"
"io"
"net/netip"
"os"
"runtime"
"strings"
"testing"
"time"
- "github.com/docker/docker/api/types/container"
- networktypes "github.com/docker/docker/api/types/network"
"github.com/docker/go-connections/nat"
+ "github.com/google/go-cmp/cmp"
+ "github.com/google/go-cmp/cmp/cmpopts"
+ "github.com/moby/moby/api/types/container"
+ networktypes "github.com/moby/moby/api/types/network"
"github.com/pkg/errors"
"github.com/spf13/pflag"
"gotest.tools/v3/assert"
@@ -77,21 +80,21 @@ func setupRunFlags() (*pflag.FlagSet, *containerOptions) {
return flags, copts
}
- func mustParse(t *testing.T, args string) (*container.Config, *container.HostConfig) {
+ func mustParse(t *testing.T, args string) (*container.Config, *container.HostConfig, *networktypes.NetworkingConfig) {
t.Helper()
- config, hostConfig, _, err := parseRun(append(strings.Split(args, " "), "ubuntu", "bash"))
+ config, hostConfig, networkingConfig, err := parseRun(append(strings.Split(args, " "), "ubuntu", "bash"))
assert.NilError(t, err)
- return config, hostConfig
+ return config, hostConfig, networkingConfig
}
func TestParseRunLinks(t *testing.T) {
- if _, hostConfig := mustParse(t, "--link a:b"); len(hostConfig.Links) == 0 || hostConfig.Links[0] != "a:b" {
+ if _, hostConfig, _ := mustParse(t, "--link a:b"); len(hostConfig.Links) == 0 || hostConfig.Links[0] != "a:b" {
t.Fatalf("Error parsing links. Expected []string{\"a:b\"}, received: %v", hostConfig.Links)
}
- if _, hostConfig := mustParse(t, "--link a:b --link c:d"); len(hostConfig.Links) < 2 || hostConfig.Links[0] != "a:b" || hostConfig.Links[1] != "c:d" {
+ if _, hostConfig, _ := mustParse(t, "--link a:b --link c:d"); len(hostConfig.Links) < 2 || hostConfig.Links[0] != "a:b" || hostConfig.Links[1] != "c:d" {
t.Fatalf("Error parsing links. Expected []string{\"a:b\", \"c:d\"}, received: %v", hostConfig.Links)
}
- if _, hostConfig := mustParse(t, ""); len(hostConfig.Links) != 0 {
+ if _, hostConfig, _ := mustParse(t, ""); len(hostConfig.Links) != 0 {
t.Fatalf("Error parsing links. No link expected, received: %v", hostConfig.Links)
}
}
@@ -140,7 +143,7 @@ func TestParseRunAttach(t *testing.T) {
}
for _, tc := range tests {
t.Run(tc.input, func(t *testing.T) {
- config, _ := mustParse(t, tc.input)
+ config, _, _ := mustParse(t, tc.input)
assert.Equal(t, config.AttachStdin, tc.expected.AttachStdin)
assert.Equal(t, config.AttachStdout, tc.expected.AttachStdout)
assert.Equal(t, config.AttachStderr, tc.expected.AttachStderr)
@@ -194,10 +197,10 @@ func TestParseRunWithInvalidArgs(t *testing.T) {
}
}
- func TestParseWithVolumes(t *testing.T) {
+ func TestParseWithVolumes(t *testing.T) { //nolint:gocyclo // verbatim copy from docker/cli tests
// A single volume
arr, tryit := setupPlatformVolume([]string{`/tmp`}, []string{`c:\tmp`})
- if config, hostConfig := mustParse(t, tryit); hostConfig.Binds != nil {
+ if config, hostConfig, _ := mustParse(t, tryit); hostConfig.Binds != nil {
t.Fatalf("Error parsing volume flags, %q should not mount-bind anything. Received %v", tryit, hostConfig.Binds)
} else if _, exists := config.Volumes[arr[0]]; !exists {
t.Fatalf("Error parsing volume flags, %q is missing from volumes. Received %v", tryit, config.Volumes)
@@ -205,7 +208,7 @@ func TestParseWithVolumes(t *testing.T) {
// Two volumes
arr, tryit = setupPlatformVolume([]string{`/tmp`, `/var`}, []string{`c:\tmp`, `c:\var`})
- if config, hostConfig := mustParse(t, tryit); hostConfig.Binds != nil {
+ if config, hostConfig, _ := mustParse(t, tryit); hostConfig.Binds != nil {
t.Fatalf("Error parsing volume flags, %q should not mount-bind anything. Received %v", tryit, hostConfig.Binds)
} else if _, exists := config.Volumes[arr[0]]; !exists {
t.Fatalf("Error parsing volume flags, %s is missing from volumes. Received %v", arr[0], config.Volumes)
@@ -215,13 +218,13 @@ func TestParseWithVolumes(t *testing.T) {
// A single bind mount
arr, tryit = setupPlatformVolume([]string{`/hostTmp:/containerTmp`}, []string{os.Getenv("TEMP") + `:c:\containerTmp`})
- if config, hostConfig := mustParse(t, tryit); hostConfig.Binds == nil || hostConfig.Binds[0] != arr[0] {
+ if config, hostConfig, _ := mustParse(t, tryit); hostConfig.Binds == nil || hostConfig.Binds[0] != arr[0] {
t.Fatalf("Error parsing volume flags, %q should mount-bind the path before the colon into the path after the colon. Received %v %v", arr[0], hostConfig.Binds, config.Volumes)
}
// Two bind mounts.
arr, tryit = setupPlatformVolume([]string{`/hostTmp:/containerTmp`, `/hostVar:/containerVar`}, []string{os.Getenv("ProgramData") + `:c:\ContainerPD`, os.Getenv("TEMP") + `:c:\containerTmp`})
- if _, hostConfig := mustParse(t, tryit); hostConfig.Binds == nil || compareRandomizedStrings(hostConfig.Binds[0], hostConfig.Binds[1], arr[0], arr[1]) != nil {
+ if _, hostConfig, _ := mustParse(t, tryit); hostConfig.Binds == nil || compareRandomizedStrings(hostConfig.Binds[0], hostConfig.Binds[1], arr[0], arr[1]) != nil {
t.Fatalf("Error parsing volume flags, `%s and %s` did not mount-bind correctly. Received %v", arr[0], arr[1], hostConfig.Binds)
}
@@ -230,26 +233,26 @@ func TestParseWithVolumes(t *testing.T) {
arr, tryit = setupPlatformVolume(
[]string{`/hostTmp:/containerTmp:ro`, `/hostVar:/containerVar:rw`},
[]string{os.Getenv("TEMP") + `:c:\containerTmp:rw`, os.Getenv("ProgramData") + `:c:\ContainerPD:rw`})
- if _, hostConfig := mustParse(t, tryit); hostConfig.Binds == nil || compareRandomizedStrings(hostConfig.Binds[0], hostConfig.Binds[1], arr[0], arr[1]) != nil {
+ if _, hostConfig, _ := mustParse(t, tryit); hostConfig.Binds == nil || compareRandomizedStrings(hostConfig.Binds[0], hostConfig.Binds[1], arr[0], arr[1]) != nil {
t.Fatalf("Error parsing volume flags, `%s and %s` did not mount-bind correctly. Received %v", arr[0], arr[1], hostConfig.Binds)
}
// Similar to previous test but with alternate modes which are only supported by Linux
if runtime.GOOS != "windows" {
arr, tryit = setupPlatformVolume([]string{`/hostTmp:/containerTmp:ro,Z`, `/hostVar:/containerVar:rw,Z`}, []string{})
if _, hostConfig := mustParse(t, tryit); hostConfig.Binds == nil || compareRandomizedStrings(hostConfig.Binds[0], hostConfig.Binds[1], arr[0], arr[1]) != nil {
if _, hostConfig, _ := mustParse(t, tryit); hostConfig.Binds == nil || compareRandomizedStrings(hostConfig.Binds[0], hostConfig.Binds[1], arr[0], arr[1]) != nil {
t.Fatalf("Error parsing volume flags, `%s and %s` did not mount-bind correctly. Received %v", arr[0], arr[1], hostConfig.Binds)
}
arr, tryit = setupPlatformVolume([]string{`/hostTmp:/containerTmp:Z`, `/hostVar:/containerVar:z`}, []string{})
if _, hostConfig := mustParse(t, tryit); hostConfig.Binds == nil || compareRandomizedStrings(hostConfig.Binds[0], hostConfig.Binds[1], arr[0], arr[1]) != nil {
if _, hostConfig, _ := mustParse(t, tryit); hostConfig.Binds == nil || compareRandomizedStrings(hostConfig.Binds[0], hostConfig.Binds[1], arr[0], arr[1]) != nil {
t.Fatalf("Error parsing volume flags, `%s and %s` did not mount-bind correctly. Received %v", arr[0], arr[1], hostConfig.Binds)
}
}
// One bind mount and one volume
arr, tryit = setupPlatformVolume([]string{`/hostTmp:/containerTmp`, `/containerVar`}, []string{os.Getenv("TEMP") + `:c:\containerTmp`, `c:\containerTmp`})
if config, hostConfig := mustParse(t, tryit); hostConfig.Binds == nil || len(hostConfig.Binds) > 1 || hostConfig.Binds[0] != arr[0] {
if config, hostConfig, _ := mustParse(t, tryit); hostConfig.Binds == nil || len(hostConfig.Binds) > 1 || hostConfig.Binds[0] != arr[0] {
t.Fatalf("Error parsing volume flags, %s and %s should have one and only one bind mount %s. Received %s", arr[0], arr[1], arr[0], hostConfig.Binds)
} else if _, exists := config.Volumes[arr[1]]; !exists {
t.Fatalf("Error parsing volume flags %s and %s. %s is missing from volumes. Received %v", arr[0], arr[1], arr[1], config.Volumes)
@@ -258,7 +261,7 @@ func TestParseWithVolumes(t *testing.T) {
// Root to non-c: drive letter (Windows specific)
if runtime.GOOS == "windows" {
arr, tryit = setupPlatformVolume([]string{}, []string{os.Getenv("SystemDrive") + `\:d:`})
if config, hostConfig := mustParse(t, tryit); hostConfig.Binds == nil || len(hostConfig.Binds) > 1 || hostConfig.Binds[0] != arr[0] || len(config.Volumes) != 0 {
if config, hostConfig, _ := mustParse(t, tryit); hostConfig.Binds == nil || len(hostConfig.Binds) > 1 || hostConfig.Binds[0] != arr[0] || len(config.Volumes) != 0 {
t.Fatalf("Error parsing %s. Should have a single bind mount and no volumes", arr[0])
}
}
@@ -294,6 +297,36 @@ func compareRandomizedStrings(a, b, c, d string) error {
return errors.Errorf("strings don't match")
}
func mustNetworkPort(t *testing.T, value string) networktypes.Port {
t.Helper()
port, err := networktypes.ParsePort(value)
if err != nil {
t.Fatalf("failed to parse network port %q: %v", value, err)
}
return port
}
func mustAddr(t *testing.T, value string) netip.Addr {
t.Helper()
addr, err := netip.ParseAddr(value)
if err != nil {
t.Fatalf("failed to parse address %q: %v", value, err)
}
return addr
}
func mustAddrs(t *testing.T, values ...string) []netip.Addr {
t.Helper()
addrs := make([]netip.Addr, 0, len(values))
for _, value := range values {
addrs = append(addrs, mustAddr(t, value))
}
return addrs
}
// Simple parse with MacAddress validation
func TestParseWithMacAddress(t *testing.T) {
invalidMacAddress := "--mac-address=invalidMacAddress"
@@ -301,9 +334,10 @@ func TestParseWithMacAddress(t *testing.T) {
if _, _, _, err := parseRun([]string{invalidMacAddress, "img", "cmd"}); err != nil && err.Error() != "invalidMacAddress is not a valid mac address" {
t.Fatalf("Expected an error with %v mac-address, got %v", invalidMacAddress, err)
}
if config, _ := mustParse(t, validMacAddress); config.MacAddress != "92:d0:c6:0a:29:33" { //nolint:staticcheck // pre-existing issue from nektos/act
t.Fatalf("Expected the config to have '92:d0:c6:0a:29:33' as MacAddress, got '%v'", config.MacAddress) //nolint:staticcheck // pre-existing issue from nektos/act
}
_, hostConfig, networkingConfig := mustParse(t, validMacAddress)
endpoint := networkingConfig.EndpointsConfig[string(hostConfig.NetworkMode)]
assert.Check(t, endpoint != nil)
assert.Equal(t, "92:d0:c6:0a:29:33", endpoint.MacAddress.String())
}
func TestRunFlagsParseWithMemory(t *testing.T) {
@@ -312,7 +346,7 @@ func TestRunFlagsParseWithMemory(t *testing.T) {
err := flags.Parse(args)
assert.ErrorContains(t, err, `invalid argument "invalid" for "-m, --memory" flag`)
_, hostconfig := mustParse(t, "--memory=1G")
_, hostconfig, _ := mustParse(t, "--memory=1G")
assert.Check(t, is.Equal(int64(1073741824), hostconfig.Memory))
}
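The `--memory=1G` assertion above expects 1 GiB = 1073741824 bytes. A minimal sketch of binary-suffix parsing; the real flag goes through docker's size parsing, which also accepts lowercase and decimal forms:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBytes handles plain byte counts plus K/M/G binary suffixes.
func parseBytes(s string) (int64, error) {
	mult := int64(1)
	switch {
	case strings.HasSuffix(s, "G"):
		mult, s = 1<<30, strings.TrimSuffix(s, "G")
	case strings.HasSuffix(s, "M"):
		mult, s = 1<<20, strings.TrimSuffix(s, "M")
	case strings.HasSuffix(s, "K"):
		mult, s = 1<<10, strings.TrimSuffix(s, "K")
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid size %q", s)
	}
	return n * mult, nil
}

func main() {
	n, _ := parseBytes("1G")
	fmt.Println(n) // 1073741824
}
```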
@@ -322,10 +356,10 @@ func TestParseWithMemorySwap(t *testing.T) {
err := flags.Parse(args)
assert.ErrorContains(t, err, `invalid argument "invalid" for "--memory-swap" flag`)
_, hostconfig := mustParse(t, "--memory-swap=1G")
_, hostconfig, _ := mustParse(t, "--memory-swap=1G")
assert.Check(t, is.Equal(int64(1073741824), hostconfig.MemorySwap))
_, hostconfig = mustParse(t, "--memory-swap=-1")
_, hostconfig, _ = mustParse(t, "--memory-swap=-1")
assert.Check(t, is.Equal(int64(-1), hostconfig.MemorySwap))
}
@@ -340,14 +374,14 @@ func TestParseHostname(t *testing.T) {
hostnameWithDomain := "--hostname=hostname.domainname"
hostnameWithDomainTld := "--hostname=hostname.domainname.tld"
for hostname, expectedHostname := range validHostnames {
if config, _ := mustParse(t, "--hostname="+hostname); config.Hostname != expectedHostname {
if config, _, _ := mustParse(t, "--hostname="+hostname); config.Hostname != expectedHostname {
t.Fatalf("Expected the config to have 'hostname' as %q, got %q", expectedHostname, config.Hostname)
}
}
if config, _ := mustParse(t, hostnameWithDomain); config.Hostname != "hostname.domainname" || config.Domainname != "" {
if config, _, _ := mustParse(t, hostnameWithDomain); config.Hostname != "hostname.domainname" || config.Domainname != "" {
t.Fatalf("Expected the config to have 'hostname' as hostname.domainname, got %q", config.Hostname)
}
if config, _ := mustParse(t, hostnameWithDomainTld); config.Hostname != "hostname.domainname.tld" || config.Domainname != "" {
if config, _, _ := mustParse(t, hostnameWithDomainTld); config.Hostname != "hostname.domainname.tld" || config.Domainname != "" {
t.Fatalf("Expected the config to have 'hostname' as hostname.domainname.tld, got %q", config.Hostname)
}
}
@@ -361,26 +395,28 @@ func TestParseHostnameDomainname(t *testing.T) {
"domainname-63-bytes-long-should-be-valid-and-without-any-errors": "domainname-63-bytes-long-should-be-valid-and-without-any-errors",
}
for domainname, expectedDomainname := range validDomainnames {
if config, _ := mustParse(t, "--domainname="+domainname); config.Domainname != expectedDomainname {
if config, _, _ := mustParse(t, "--domainname="+domainname); config.Domainname != expectedDomainname {
t.Fatalf("Expected the config to have 'domainname' as %q, got %q", expectedDomainname, config.Domainname)
}
}
if config, _ := mustParse(t, "--hostname=some.prefix --domainname=domainname"); config.Hostname != "some.prefix" || config.Domainname != "domainname" {
if config, _, _ := mustParse(t, "--hostname=some.prefix --domainname=domainname"); config.Hostname != "some.prefix" || config.Domainname != "domainname" {
t.Fatalf("Expected the config to have 'hostname' as 'some.prefix' and 'domainname' as 'domainname', got %q and %q", config.Hostname, config.Domainname)
}
if config, _ := mustParse(t, "--hostname=another-prefix --domainname=domainname.tld"); config.Hostname != "another-prefix" || config.Domainname != "domainname.tld" {
if config, _, _ := mustParse(t, "--hostname=another-prefix --domainname=domainname.tld"); config.Hostname != "another-prefix" || config.Domainname != "domainname.tld" {
t.Fatalf("Expected the config to have 'hostname' as 'another-prefix' and 'domainname' as 'domainname.tld', got %q and %q", config.Hostname, config.Domainname)
}
}
func TestParseWithExpose(t *testing.T) {
invalids := map[string]string{
":": "invalid port format for --expose: :",
"8080:9090": "invalid port format for --expose: 8080:9090",
"NaN/tcp": `invalid range format for --expose: NaN/tcp, error: strconv.ParseUint: parsing "NaN": invalid syntax`,
"NaN-NaN/tcp": `invalid range format for --expose: NaN-NaN/tcp, error: strconv.ParseUint: parsing "NaN": invalid syntax`,
"8080-NaN/tcp": `invalid range format for --expose: 8080-NaN/tcp, error: strconv.ParseUint: parsing "NaN": invalid syntax`,
"1234567890-8080/tcp": `invalid range format for --expose: 1234567890-8080/tcp, error: strconv.ParseUint: parsing "1234567890": value out of range`,
invalids := []string{
":",
"8080:9090",
"/tcp",
"/udp",
"NaN/tcp",
"NaN-NaN/tcp",
"8080-NaN/tcp",
"1234567890-8080/tcp",
}
valids := map[string][]nat.Port{
"8080/tcp": {"8080/tcp"},
@@ -389,9 +425,9 @@ func TestParseWithExpose(t *testing.T) {
"8080-8080/udp": {"8080/udp"},
"8080-8082/tcp": {"8080/tcp", "8081/tcp", "8082/tcp"},
}
for expose, expectedError := range invalids {
if _, _, _, err := parseRun([]string{fmt.Sprintf("--expose=%v", expose), "img", "cmd"}); err == nil || err.Error() != expectedError {
t.Fatalf("Expected error '%v' with '--expose=%v', got '%v'", expectedError, expose, err)
for _, expose := range invalids {
if _, _, _, err := parseRun([]string{fmt.Sprintf("--expose=%v", expose), "img", "cmd"}); err == nil {
t.Fatalf("Expected error with '--expose=%v', got none", expose)
}
}
for expose, exposedPorts := range valids {
@@ -403,7 +439,7 @@ func TestParseWithExpose(t *testing.T) {
t.Fatalf("Expected %v exposed port, got %v", len(exposedPorts), len(config.ExposedPorts))
}
for _, port := range exposedPorts {
if _, ok := config.ExposedPorts[port]; !ok {
if _, ok := config.ExposedPorts[mustNetworkPort(t, string(port))]; !ok {
t.Fatalf("Expected %v, got %v", exposedPorts, config.ExposedPorts)
}
}
@@ -418,7 +454,7 @@ func TestParseWithExpose(t *testing.T) {
}
ports := []nat.Port{"80/tcp", "81/tcp"}
for _, port := range ports {
if _, ok := config.ExposedPorts[port]; !ok {
if _, ok := config.ExposedPorts[mustNetworkPort(t, string(port))]; !ok {
t.Fatalf("Expected %v, got %v", ports, config.ExposedPorts)
}
}
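The `valids` map expands ranges like `8080-8082/tcp` into individual ports. A simplified stdlib sketch of that expansion; the tests themselves rely on the nat/network packages' own parser, which also enforces the 16-bit range seen in the `1234567890-8080/tcp` rejection:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// expandRange turns "8080-8082/tcp" into ["8080/tcp", "8081/tcp", "8082/tcp"].
// Single ports ("8080/tcp") pass through unchanged.
func expandRange(spec string) ([]string, error) {
	portPart, proto, ok := strings.Cut(spec, "/")
	if !ok {
		return nil, fmt.Errorf("missing protocol in %q", spec)
	}
	lo, hi, isRange := strings.Cut(portPart, "-")
	if !isRange {
		hi = lo
	}
	// ParseUint with bitSize 16 rejects both non-numeric and out-of-range ports.
	start, err := strconv.ParseUint(lo, 10, 16)
	if err != nil {
		return nil, err
	}
	end, err := strconv.ParseUint(hi, 10, 16)
	if err != nil {
		return nil, err
	}
	var out []string
	for p := start; p <= end; p++ {
		out = append(out, fmt.Sprintf("%d/%s", p, proto))
	}
	return out, nil
}

func main() {
	ports, _ := expandRange("8080-8082/tcp")
	fmt.Println(ports)
}
```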
@@ -498,9 +534,9 @@ func TestParseNetworkConfig(t *testing.T) {
expected: map[string]*networktypes.EndpointSettings{
"net1": {
IPAMConfig: &networktypes.EndpointIPAMConfig{
IPv4Address: "172.20.88.22",
IPv6Address: "2001:db8::8822",
LinkLocalIPs: []string{"169.254.2.2", "fe80::169:254:2:2"},
IPv4Address: mustAddr(t, "172.20.88.22"),
IPv6Address: mustAddr(t, "2001:db8::8822"),
LinkLocalIPs: mustAddrs(t, "169.254.2.2", "fe80::169:254:2:2"),
},
Links: []string{"foo:bar", "bar:baz"},
Aliases: []string{"web1", "web2"},
@@ -527,9 +563,9 @@ func TestParseNetworkConfig(t *testing.T) {
"net1": {
DriverOpts: map[string]string{"field1": "value1"},
IPAMConfig: &networktypes.EndpointIPAMConfig{
IPv4Address: "172.20.88.22",
IPv6Address: "2001:db8::8822",
LinkLocalIPs: []string{"169.254.2.2", "fe80::169:254:2:2"},
IPv4Address: mustAddr(t, "172.20.88.22"),
IPv6Address: mustAddr(t, "2001:db8::8822"),
LinkLocalIPs: mustAddrs(t, "169.254.2.2", "fe80::169:254:2:2"),
},
Links: []string{"foo:bar", "bar:baz"},
Aliases: []string{"web1", "web2"},
@@ -538,8 +574,8 @@ func TestParseNetworkConfig(t *testing.T) {
"net3": {
DriverOpts: map[string]string{"field3": "value3"},
IPAMConfig: &networktypes.EndpointIPAMConfig{
IPv4Address: "172.20.88.22",
IPv6Address: "2001:db8::8822",
IPv4Address: mustAddr(t, "172.20.88.22"),
IPv6Address: mustAddr(t, "2001:db8::8822"),
},
Aliases: []string{"web3"},
},
@@ -556,8 +592,8 @@ func TestParseNetworkConfig(t *testing.T) {
"field2": "value2",
},
IPAMConfig: &networktypes.EndpointIPAMConfig{
IPv4Address: "172.20.88.22",
IPv6Address: "2001:db8::8822",
IPv4Address: mustAddr(t, "172.20.88.22"),
IPv6Address: mustAddr(t, "2001:db8::8822"),
},
Aliases: []string{"web1", "web2"},
},
@@ -610,7 +646,9 @@ func TestParseNetworkConfig(t *testing.T) {
assert.NilError(t, err)
assert.DeepEqual(t, hConfig.NetworkMode, tc.expectedCfg.NetworkMode)
assert.DeepEqual(t, nwConfig.EndpointsConfig, tc.expected)
if diff := cmp.Diff(tc.expected, nwConfig.EndpointsConfig, cmpopts.EquateComparable(netip.Addr{})); diff != "" {
t.Fatalf("unexpected endpoints (-want +got):\n%s", diff)
}
})
}
}
@@ -631,7 +669,7 @@ func TestParseModes(t *testing.T) {
}
// uts ko
_, _, _, err = parseRun([]string{"--uts=container:", "img", "cmd"})
_, _, _, err = parseRun([]string{"--uts=container:", "img", "cmd"}) //nolint:dogsled // verbatim copy from docker/cli tests
assert.ErrorContains(t, err, "--uts: invalid UTS mode")
// uts ok
@@ -691,10 +729,9 @@ func TestParseRestartPolicy(t *testing.T) {
}
func TestParseRestartPolicyAutoRemove(t *testing.T) {
expected := "Conflicting options: --restart and --rm"
_, _, _, err := parseRun([]string{"--rm", "--restart=always", "img", "cmd"})
if err == nil || err.Error() != expected {
t.Fatalf("Expected error %v, but got none", expected)
_, _, _, err := parseRun([]string{"--rm", "--restart=always", "img", "cmd"}) //nolint:dogsled // verbatim copy from docker/cli tests
if err == nil {
t.Fatal("Expected error for conflicting --restart and --rm, but got none")
}
}
@@ -752,7 +789,7 @@ func TestParseLoggingOpts(t *testing.T) {
}
}
func TestParseEnvfileVariables(t *testing.T) { //nolint:dupl // pre-existing issue from nektos/act
func TestParseEnvfileVariables(t *testing.T) { //nolint:dupl // verbatim copy from docker/cli tests
e := "open nonexistent: no such file or directory"
if runtime.GOOS == "windows" {
e = "open nonexistent: The system cannot find the file specified."
@@ -795,7 +832,7 @@ func TestParseEnvfileVariablesWithBOMUnicode(t *testing.T) {
}
// UTF16 with BOM
e := "contains invalid utf8 bytes at line"
e := "invalid env file"
if _, _, _, err := parseRun([]string{"--env-file=testdata/utf16.env", "img", "cmd"}); err == nil || !strings.Contains(err.Error(), e) {
t.Fatalf("Expected an error with message '%s', got %v", e, err)
}
@@ -805,7 +842,7 @@ func TestParseEnvfileVariablesWithBOMUnicode(t *testing.T) {
}
}
func TestParseLabelfileVariables(t *testing.T) { //nolint:dupl // pre-existing issue from nektos/act
func TestParseLabelfileVariables(t *testing.T) { //nolint:dupl // verbatim copy from docker/cli tests
e := "open nonexistent: no such file or directory"
if runtime.GOOS == "windows" {
e = "open nonexistent: The system cannot find the file specified."

View File

@@ -10,8 +10,8 @@ import (
"context"
"fmt"
"github.com/docker/docker/api/types"
"github.com/docker/docker/client"
cerrdefs "github.com/containerd/errdefs"
"github.com/moby/moby/client"
)
// ImageExistsLocally returns a boolean indicating if an image with the
@@ -23,8 +23,8 @@ func ImageExistsLocally(ctx context.Context, imageName, platform string) (bool,
}
defer cli.Close()
inspectImage, _, err := cli.ImageInspectWithRaw(ctx, imageName)
if client.IsErrNotFound(err) {
inspectImage, err := cli.ImageInspect(ctx, imageName)
if cerrdefs.IsNotFound(err) {
return false, nil
} else if err != nil {
return false, err
@@ -46,14 +46,14 @@ func RemoveImage(ctx context.Context, imageName string, force, pruneChildren boo
}
defer cli.Close()
inspectImage, _, err := cli.ImageInspectWithRaw(ctx, imageName)
if client.IsErrNotFound(err) {
inspectImage, err := cli.ImageInspect(ctx, imageName)
if cerrdefs.IsNotFound(err) {
return false, nil
} else if err != nil {
return false, err
}
if _, err = cli.ImageRemove(ctx, inspectImage.ID, types.ImageRemoveOptions{
if _, err = cli.ImageRemove(ctx, inspectImage.ID, client.ImageRemoveOptions{
Force: force,
PruneChildren: pruneChildren,
}); err != nil {

View File

@@ -9,8 +9,8 @@ import (
"io"
"testing"
"github.com/docker/docker/api/types"
"github.com/docker/docker/client"
"github.com/moby/moby/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
)
@@ -38,34 +38,34 @@ func TestImageExistsLocally(t *testing.T) {
assert.False(t, invalidImagePlatform)
// pull an image
cli, err := client.NewClientWithOpts(client.FromEnv)
cli, err := client.New(client.FromEnv)
assert.NoError(t, err) //nolint:testifylint // pre-existing issue from nektos/act
cli.NegotiateAPIVersion(context.Background())
defer cli.Close()
// Chose a slim node image because it's small
// maybe we should build an image instead so that tests aren't reliant on dockerhub
readerDefault, err := cli.ImagePull(ctx, "node:16-buster-slim", types.ImagePullOptions{
Platform: "linux/amd64",
readerDefault, err := cli.ImagePull(ctx, "node:24-bookworm-slim", client.ImagePullOptions{
Platforms: []specs.Platform{{OS: "linux", Architecture: "amd64"}},
})
assert.NoError(t, err) //nolint:testifylint // pre-existing issue from nektos/act
defer readerDefault.Close()
_, err = io.ReadAll(readerDefault)
assert.NoError(t, err) //nolint:testifylint // pre-existing issue from nektos/act
imageDefaultArchExists, err := ImageExistsLocally(ctx, "node:16-buster-slim", "linux/amd64")
imageDefaultArchExists, err := ImageExistsLocally(ctx, "node:24-bookworm-slim", "linux/amd64")
assert.NoError(t, err) //nolint:testifylint // pre-existing issue from nektos/act
assert.True(t, imageDefaultArchExists)
// Validate if another architecture platform can be pulled
readerArm64, err := cli.ImagePull(ctx, "node:16-buster-slim", types.ImagePullOptions{
Platform: "linux/arm64",
readerArm64, err := cli.ImagePull(ctx, "node:24-bookworm-slim", client.ImagePullOptions{
Platforms: []specs.Platform{{OS: "linux", Architecture: "arm64"}},
})
assert.NoError(t, err) //nolint:testifylint // pre-existing issue from nektos/act
defer readerArm64.Close()
_, err = io.ReadAll(readerArm64)
assert.NoError(t, err) //nolint:testifylint // pre-existing issue from nektos/act
imageArm64Exists, err := ImageExistsLocally(ctx, "node:16-buster-slim", "linux/arm64")
imageArm64Exists, err := ImageExistsLocally(ctx, "node:24-bookworm-slim", "linux/arm64")
assert.NoError(t, err) //nolint:testifylint // pre-existing issue from nektos/act
assert.True(t, imageArm64Exists)
}

View File

@@ -26,8 +26,6 @@ type dockerMessage struct {
Progress string `json:"progress"`
}
const logPrefix = " \U0001F433 "
func logDockerResponse(logger logrus.FieldLogger, dockerResponse io.ReadCloser, isError bool) error {
if dockerResponse == nil {
return nil

View File

@@ -11,7 +11,7 @@ import (
"gitea.com/gitea/runner/act/common"
"github.com/docker/docker/api/types"
"github.com/moby/moby/client"
)
func NewDockerNetworkCreateExecutor(name string) common.Executor {
@@ -23,20 +23,20 @@ func NewDockerNetworkCreateExecutor(name string) common.Executor {
defer cli.Close()
// Only create the network if it doesn't exist
networks, err := cli.NetworkList(ctx, types.NetworkListOptions{})
networks, err := cli.NetworkList(ctx, client.NetworkListOptions{})
if err != nil {
return err
}
// For Gitea, reduce log noise
// common.Logger(ctx).Debugf("%v", networks)
for _, network := range networks {
if network.Name == name {
for _, n := range networks.Items {
if n.Name == name {
common.Logger(ctx).Debugf("Network %v exists", name)
return nil
}
}
_, err = cli.NetworkCreate(ctx, name, types.NetworkCreate{
_, err = cli.NetworkCreate(ctx, name, client.NetworkCreateOptions{
Driver: "bridge",
Scope: "local",
})
@@ -56,23 +56,23 @@ func NewDockerNetworkRemoveExecutor(name string) common.Executor {
}
defer cli.Close()
// Make shure that all network of the specified name are removed
// Make sure that all networks with the specified name are removed
// cli.NetworkRemove refuses to remove a network if there are duplicates
networks, err := cli.NetworkList(ctx, types.NetworkListOptions{})
networks, err := cli.NetworkList(ctx, client.NetworkListOptions{})
if err != nil {
return err
}
// For Gitea, reduce log noise
// common.Logger(ctx).Debugf("%v", networks)
for _, network := range networks {
if network.Name == name {
result, err := cli.NetworkInspect(ctx, network.ID, types.NetworkInspectOptions{})
for _, n := range networks.Items {
if n.Name == name {
result, err := cli.NetworkInspect(ctx, n.ID, client.NetworkInspectOptions{})
if err != nil {
return err
}
if len(result.Containers) == 0 {
if err = cli.NetworkRemove(ctx, network.ID); err != nil {
if len(result.Network.Containers) == 0 {
if _, err = cli.NetworkRemove(ctx, n.ID, client.NetworkRemoveOptions{}); err != nil {
common.Logger(ctx).Debugf("%v", err)
}
} else {

View File

@@ -0,0 +1,39 @@
// Copyright 2026 The Gitea Authors. All rights reserved.
// Copyright 2025 The nektos/act Authors. All rights reserved.
// SPDX-License-Identifier: MIT
//go:build !(WITHOUT_DOCKER || !(linux || darwin || windows || netbsd))
package container
import (
"fmt"
"strings"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// parsePlatform parses an "os/arch[/variant]" string into a Platform. An empty input
// returns (nil, nil), meaning "no platform constraint". A non-empty but malformed
// string is rejected explicitly so it cannot silently fall through to the daemon's
// default architecture.
func parsePlatform(platform string) (*specs.Platform, error) {
if platform == "" {
return nil, nil //nolint:nilnil // no platform constraint requested
}
parts := strings.Split(platform, "/")
if len(parts) < 2 || len(parts) > 3 || parts[0] == "" || parts[1] == "" || (len(parts) == 3 && parts[2] == "") {
return nil, fmt.Errorf("invalid platform %q: expected os/arch[/variant]", platform)
}
spec := &specs.Platform{
OS: strings.ToLower(parts[0]),
Architecture: strings.ToLower(parts[1]),
}
if len(parts) == 3 {
spec.Variant = strings.ToLower(parts[2])
}
return spec, nil
}
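Standalone, the same os/arch[/variant] rules can be exercised without the OCI dependency; a stdlib mirror with a local struct standing in for `specs.Platform`, not the runner's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// platform mirrors the specs.Platform fields parsePlatform fills in.
type platform struct {
	OS, Architecture, Variant string
}

// parse applies the same rules: empty input means "no constraint",
// malformed input is rejected rather than silently ignored.
func parse(s string) (*platform, error) {
	if s == "" {
		return nil, nil
	}
	parts := strings.Split(s, "/")
	if len(parts) < 2 || len(parts) > 3 || parts[0] == "" || parts[1] == "" ||
		(len(parts) == 3 && parts[2] == "") {
		return nil, fmt.Errorf("invalid platform %q: expected os/arch[/variant]", s)
	}
	p := &platform{OS: strings.ToLower(parts[0]), Architecture: strings.ToLower(parts[1])}
	if len(parts) == 3 {
		p.Variant = strings.ToLower(parts[2])
	}
	return p, nil
}

func main() {
	p, err := parse("Linux/ARM/v7")
	fmt.Println(p, err)
}
```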

View File

@@ -0,0 +1,63 @@
// Copyright 2026 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package container
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestParsePlatform(t *testing.T) {
t.Run("empty input returns nil platform without error", func(t *testing.T) {
got, err := parsePlatform("")
require.NoError(t, err)
assert.Nil(t, got)
})
t.Run("os/arch", func(t *testing.T) {
got, err := parsePlatform("linux/amd64")
require.NoError(t, err)
require.NotNil(t, got)
assert.Equal(t, "linux", got.OS)
assert.Equal(t, "amd64", got.Architecture)
assert.Empty(t, got.Variant)
})
t.Run("os/arch/variant", func(t *testing.T) {
got, err := parsePlatform("linux/arm/v7")
require.NoError(t, err)
require.NotNil(t, got)
assert.Equal(t, "linux", got.OS)
assert.Equal(t, "arm", got.Architecture)
assert.Equal(t, "v7", got.Variant)
})
t.Run("input is lowercased", func(t *testing.T) {
got, err := parsePlatform("Linux/AMD64/V8")
require.NoError(t, err)
require.NotNil(t, got)
assert.Equal(t, "linux", got.OS)
assert.Equal(t, "amd64", got.Architecture)
assert.Equal(t, "v8", got.Variant)
})
for _, bad := range []string{
"amd64",
"linux",
"linux/",
"/amd64",
"/",
"//",
"linux/arm/",
"linux/arm/v7/extra",
} {
t.Run("rejects "+bad, func(t *testing.T) {
got, err := parsePlatform(bad)
require.Error(t, err)
assert.Nil(t, got)
})
}
}

View File

@@ -8,23 +8,23 @@ package container
import (
"context"
"encoding/base64"
"encoding/json"
"fmt"
"strings"
"gitea.com/gitea/runner/act/common"
"github.com/distribution/reference"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/registry"
"github.com/moby/moby/api/pkg/authconfig"
"github.com/moby/moby/api/types/registry"
"github.com/moby/moby/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// NewDockerPullExecutor function to create a run executor for the container
func NewDockerPullExecutor(input NewDockerPullExecutorInput) common.Executor {
return func(ctx context.Context) error {
logger := common.Logger(ctx)
logger.Debugf("%sdocker pull %v", logPrefix, input.Image)
logger.Debugf("docker pull %v", input.Image)
if common.Dryrun(ctx) {
return nil
@@ -78,26 +78,29 @@ func NewDockerPullExecutor(input NewDockerPullExecutorInput) common.Executor {
}
}
func getImagePullOptions(ctx context.Context, input NewDockerPullExecutorInput) (types.ImagePullOptions, error) {
imagePullOptions := types.ImagePullOptions{
Platform: input.Platform,
func getImagePullOptions(ctx context.Context, input NewDockerPullExecutorInput) (client.ImagePullOptions, error) {
imagePullOptions := client.ImagePullOptions{}
platform, err := parsePlatform(input.Platform)
if err != nil {
return imagePullOptions, err
}
if platform != nil {
imagePullOptions.Platforms = []specs.Platform{*platform}
}
logger := common.Logger(ctx)
if input.Username != "" && input.Password != "" {
logger.Debugf("using authentication for docker pull")
authConfig := registry.AuthConfig{
encodedAuth, err := authconfig.Encode(registry.AuthConfig{
Username: input.Username,
Password: input.Password,
}
encodedJSON, err := json.Marshal(authConfig)
})
if err != nil {
return imagePullOptions, err
}
imagePullOptions.RegistryAuth = base64.URLEncoding.EncodeToString(encodedJSON)
imagePullOptions.RegistryAuth = encodedAuth
} else {
authConfig, err := LoadDockerAuthConfig(ctx, input.Image)
if err != nil {
@@ -108,19 +111,17 @@ func getImagePullOptions(ctx context.Context, input NewDockerPullExecutorInput)
}
logger.Info("using DockerAuthConfig authentication for docker pull")
encodedJSON, err := json.Marshal(authConfig)
imagePullOptions.RegistryAuth, err = authconfig.Encode(authConfig)
if err != nil {
return imagePullOptions, err
}
imagePullOptions.RegistryAuth = base64.URLEncoding.EncodeToString(encodedJSON)
}
return imagePullOptions, nil
}
func cleanImage(ctx context.Context, image string) string {
ref, err := reference.ParseAnyReference(image)
func cleanImage(ctx context.Context, imageName string) string {
ref, err := reference.ParseAnyReference(imageName)
if err != nil {
common.Logger(ctx).Error(err)
return ""

View File

@@ -23,22 +23,22 @@ import (
"gitea.com/gitea/runner/act/common"
"gitea.com/gitea/runner/act/filecollector"
"dario.cat/mergo"
"github.com/Masterminds/semver"
"github.com/docker/cli/cli/compose/loader"
"github.com/docker/cli/cli/connhelper"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/client"
"github.com/docker/docker/pkg/stdcopy"
"github.com/go-git/go-billy/v5/helper/polyfill"
"github.com/go-git/go-billy/v5/osfs"
"github.com/go-git/go-git/v5/plumbing/format/gitignore"
"github.com/gobwas/glob"
"github.com/imdario/mergo"
"github.com/joho/godotenv"
"github.com/kballard/go-shellquote"
"github.com/moby/moby/api/pkg/stdcopy"
"github.com/moby/moby/api/types/container"
"github.com/moby/moby/api/types/mount"
"github.com/moby/moby/api/types/network"
"github.com/moby/moby/api/types/system"
"github.com/moby/moby/client"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/spf13/pflag"
"golang.org/x/term"
@@ -53,7 +53,7 @@ func NewContainer(input *NewContainerInput) ExecutionsEnvironment {
func (cr *containerReference) ConnectToNetwork(name string) common.Executor {
return common.
NewDebugExecutor("%sdocker network connect %s %s", logPrefix, name, cr.input.Name).
NewDebugExecutor("docker network connect %s %s", name, cr.input.Name).
Then(
common.NewPipelineExecutor(
cr.connect(),
@@ -64,9 +64,13 @@ func (cr *containerReference) ConnectToNetwork(name string) common.Executor {
func (cr *containerReference) connectToNetwork(name string, aliases []string) common.Executor {
return func(ctx context.Context) error {
return cr.cli.NetworkConnect(ctx, name, cr.input.Name, &network.EndpointSettings{
Aliases: aliases,
_, err := cr.cli.NetworkConnect(ctx, name, client.NetworkConnectOptions{
Container: cr.input.Name,
EndpointConfig: &network.EndpointSettings{
Aliases: aliases,
},
})
return err
}
}
@@ -74,7 +78,7 @@ func (cr *containerReference) connectToNetwork(name string, aliases []string) co
// API version is 1.41 and beyond
func supportsContainerImagePlatform(ctx context.Context, cli client.APIClient) bool {
logger := common.Logger(ctx)
ver, err := cli.ServerVersion(ctx)
ver, err := cli.ServerVersion(ctx, client.ServerVersionOptions{})
if err != nil {
logger.Panicf("Failed to get Docker API Version: %s", err)
return false
@@ -90,7 +94,7 @@ func supportsContainerImagePlatform(ctx context.Context, cli client.APIClient) b
func (cr *containerReference) Create(capAdd, capDrop []string) common.Executor {
return common.
NewInfoExecutor("%sdocker create image=%s platform=%s entrypoint=%+q cmd=%+q network=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd, cr.input.NetworkMode).
NewInfoExecutor("docker create image=%s platform=%s entrypoint=%+q cmd=%+q network=%+q", cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd, cr.input.NetworkMode).
Then(
common.NewPipelineExecutor(
cr.connect(),
@@ -102,7 +106,7 @@ func (cr *containerReference) Create(capAdd, capDrop []string) common.Executor {
func (cr *containerReference) Start(attach bool) common.Executor {
return common.
NewInfoExecutor("%sdocker run image=%s platform=%s entrypoint=%+q cmd=%+q network=%+q", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd, cr.input.NetworkMode).
NewInfoExecutor("docker run image=%s platform=%s entrypoint=%+q cmd=%+q network=%+q", cr.input.Image, cr.input.Platform, cr.input.Entrypoint, cr.input.Cmd, cr.input.NetworkMode).
Then(
common.NewPipelineExecutor(
cr.connect(),
@@ -125,7 +129,7 @@ func (cr *containerReference) Start(attach bool) common.Executor {
func (cr *containerReference) Pull(forcePull bool) common.Executor {
return common.
NewInfoExecutor("%sdocker pull image=%s platform=%s username=%s forcePull=%t", logPrefix, cr.input.Image, cr.input.Platform, cr.input.Username, forcePull).
NewInfoExecutor("docker pull image=%s platform=%s username=%s forcePull=%t", cr.input.Image, cr.input.Platform, cr.input.Username, forcePull).
Then(
NewDockerPullExecutor(NewDockerPullExecutorInput{
Image: cr.input.Image,
@@ -147,7 +151,7 @@ func (cr *containerReference) Copy(destPath string, files ...*FileEntry) common.
func (cr *containerReference) CopyDir(destPath, srcPath string, useGitIgnore bool) common.Executor {
return common.NewPipelineExecutor(
common.NewInfoExecutor("%sdocker cp src=%s dst=%s", logPrefix, srcPath, destPath),
common.NewInfoExecutor("docker cp src=%s dst=%s", srcPath, destPath),
cr.copyDir(destPath, srcPath, useGitIgnore),
func(ctx context.Context) error {
// If this fails, then folders have wrong permissions on a non-root container
@@ -163,8 +167,11 @@ func (cr *containerReference) GetContainerArchive(ctx context.Context, srcPath s
if common.Dryrun(ctx) {
return nil, errors.New("DRYRUN is not supported in GetContainerArchive")
}
a, _, err := cr.cli.CopyFromContainer(ctx, cr.id, srcPath)
return a, err
result, err := cr.cli.CopyFromContainer(ctx, cr.id, client.CopyFromContainerOptions{SourcePath: srcPath})
if err != nil {
return nil, err
}
return result.Content, nil
}
func (cr *containerReference) UpdateFromEnv(srcPath string, env *map[string]string) common.Executor {
@@ -177,7 +184,7 @@ func (cr *containerReference) UpdateFromImageEnv(env *map[string]string) common.
func (cr *containerReference) Exec(command []string, env map[string]string, user, workdir string) common.Executor {
return common.NewPipelineExecutor(
common.NewInfoExecutor("%sdocker exec cmd=[%s] user=%s workdir=%s", logPrefix, strings.Join(command, " "), user, workdir),
common.NewInfoExecutor("docker exec cmd=[%s] user=%s workdir=%s", strings.Join(command, " "), user, workdir),
cr.connect(),
cr.find(),
cr.exec(command, env, user, workdir),
@@ -222,22 +229,27 @@ func GetDockerClient(ctx context.Context) (cli client.APIClient, err error) {
if err != nil {
return nil, err
}
cli, err = client.NewClientWithOpts(
cli, err = client.New(
client.WithHost(helper.Host),
client.WithDialContext(helper.Dialer),
)
} else {
cli, err = client.NewClientWithOpts(client.FromEnv)
cli, err = client.New(client.FromEnv)
}
if err != nil {
return nil, fmt.Errorf("failed to connect to docker daemon: %w", err)
}
cli.NegotiateAPIVersion(ctx)
// Best-effort API version negotiation, matching the old client.NegotiateAPIVersion
// behaviour. Ping failures here are non-fatal: the connection is exercised again on
// the first real call, and the client falls back to the daemon's default API version.
if _, err := cli.Ping(ctx, client.PingOptions{NegotiateAPIVersion: true}); err != nil {
common.Logger(ctx).Warnf("docker daemon ping during version negotiation failed, continuing: %v", err)
}
return cli, nil
}
func GetHostInfo(ctx context.Context) (info types.Info, err error) { //nolint:staticcheck // pre-existing issue from nektos/act
func GetHostInfo(ctx context.Context) (info system.Info, err error) {
var cli client.APIClient
cli, err = GetDockerClient(ctx)
if err != nil {
@@ -245,12 +257,12 @@ func GetHostInfo(ctx context.Context) (info types.Info, err error) { //nolint:st
}
defer cli.Close()
info, err = cli.Info(ctx)
result, err := cli.Info(ctx, client.InfoOptions{})
if err != nil {
return info, err
}
return info, nil
return result.Info, nil
}
// Arch fetches values from docker info and translates architecture to
@@ -307,14 +319,14 @@ func (cr *containerReference) find() common.Executor {
if cr.id != "" {
return nil
}
containers, err := cr.cli.ContainerList(ctx, types.ContainerListOptions{ //nolint:staticcheck // pre-existing issue from nektos/act
containers, err := cr.cli.ContainerList(ctx, client.ContainerListOptions{
All: true,
})
if err != nil {
return fmt.Errorf("failed to list containers: %w", err)
}
for _, c := range containers {
for _, c := range containers.Items {
for _, name := range c.Names {
if name[1:] == cr.input.Name {
cr.id = c.ID
@@ -335,7 +347,7 @@ func (cr *containerReference) remove() common.Executor {
}
logger := common.Logger(ctx)
err := cr.cli.ContainerRemove(ctx, cr.id, types.ContainerRemoveOptions{ //nolint:staticcheck // pre-existing issue from nektos/act
_, err := cr.cli.ContainerRemove(ctx, cr.id, client.ContainerRemoveOptions{
RemoveVolumes: true,
Force: true,
})
@@ -440,12 +452,20 @@ func (cr *containerReference) create(capAdd, capDrop []string) common.Executor {
logger := common.Logger(ctx)
isTerminal := term.IsTerminal(int(os.Stdout.Fd()))
input := cr.input
exposedPorts, err := convertPortSet(input.ExposedPorts)
if err != nil {
return err
}
portBindings, err := convertPortMap(input.PortBindings)
if err != nil {
return err
}
config := &container.Config{
Image: input.Image,
WorkingDir: input.WorkingDir,
Env: input.Env,
ExposedPorts: input.ExposedPorts,
ExposedPorts: exposedPorts,
Tty: isTerminal,
}
// For Gitea, reduce log noise
@@ -470,15 +490,9 @@ func (cr *containerReference) create(capAdd, capDrop []string) common.Executor {
var platSpecs *specs.Platform
if supportsContainerImagePlatform(ctx, cr.cli) && cr.input.Platform != "" {
desiredPlatform := strings.SplitN(cr.input.Platform, `/`, 2)
if len(desiredPlatform) != 2 {
return fmt.Errorf("incorrect container platform option '%s'", cr.input.Platform)
}
platSpecs = &specs.Platform{
Architecture: desiredPlatform[1],
OS: desiredPlatform[0],
platSpecs, err = parsePlatform(cr.input.Platform)
if err != nil {
return err
}
}
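The new `parsePlatform` helper is referenced here but not shown in this hunk. A minimal sketch of what it likely does, consistent with the inline code it replaces (a local `Platform` struct stands in for `specs.Platform` so the sketch is self-contained; the repo's actual implementation may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// Platform is a local stand-in for specs.Platform; the real helper
// returns *specs.Platform from the OCI image-spec package.
type Platform struct {
	OS           string
	Architecture string
}

// parsePlatform splits an "os/arch" string, mirroring the inline code
// the helper replaces in this diff.
func parsePlatform(platform string) (*Platform, error) {
	parts := strings.SplitN(platform, "/", 2)
	if len(parts) != 2 {
		return nil, fmt.Errorf("incorrect container platform option '%s'", platform)
	}
	return &Platform{OS: parts[0], Architecture: parts[1]}, nil
}

func main() {
	p, err := parsePlatform("linux/arm64")
	fmt.Println(p.OS, p.Architecture, err) // linux arm64 <nil>
}
```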
@@ -490,13 +504,13 @@ func (cr *containerReference) create(capAdd, capDrop []string) common.Executor {
NetworkMode: container.NetworkMode(input.NetworkMode),
Privileged: input.Privileged,
UsernsMode: container.UsernsMode(input.UsernsMode),
PortBindings: input.PortBindings,
PortBindings: portBindings,
AutoRemove: input.AutoRemove,
}
// For Gitea, reduce log noise
// logger.Debugf("Common container.HostConfig ==> %+v", hostConfig)
config, hostConfig, err := cr.mergeContainerConfigs(ctx, config, hostConfig)
config, hostConfig, err = cr.mergeContainerConfigs(ctx, config, hostConfig)
if err != nil {
return err
}
@@ -520,7 +534,13 @@ func (cr *containerReference) create(capAdd, capDrop []string) common.Executor {
}
}
resp, err := cr.cli.ContainerCreate(ctx, config, hostConfig, networkingConfig, platSpecs, input.Name)
resp, err := cr.cli.ContainerCreate(ctx, client.ContainerCreateOptions{
Config: config,
HostConfig: hostConfig,
NetworkingConfig: networkingConfig,
Platform: platSpecs,
Name: input.Name,
})
if err != nil {
return fmt.Errorf("failed to create container: '%w'", err)
}
@@ -538,7 +558,7 @@ func (cr *containerReference) extractFromImageEnv(env *map[string]string) common
return func(ctx context.Context) error {
logger := common.Logger(ctx)
inspect, _, err := cr.cli.ImageInspectWithRaw(ctx, cr.input.Image)
inspect, err := cr.cli.ImageInspect(ctx, cr.input.Image)
if err != nil {
logger.Error(err)
return fmt.Errorf("inspect image: %w", err)
@@ -602,12 +622,12 @@ func (cr *containerReference) exec(cmd []string, env map[string]string, user, wo
}
logger.Debugf("Working directory '%s'", wd)
idResp, err := cr.cli.ContainerExecCreate(ctx, cr.id, types.ExecConfig{
idResp, err := cr.cli.ExecCreate(ctx, cr.id, client.ExecCreateOptions{
User: user,
Cmd: cmd,
WorkingDir: wd,
Env: envList,
Tty: isTerminal,
TTY: isTerminal,
AttachStderr: true,
AttachStdout: true,
})
@@ -615,38 +635,34 @@ func (cr *containerReference) exec(cmd []string, env map[string]string, user, wo
return fmt.Errorf("failed to create exec: %w", err)
}
resp, err := cr.cli.ContainerExecAttach(ctx, idResp.ID, types.ExecStartCheck{
Tty: isTerminal,
resp, err := cr.cli.ExecAttach(ctx, idResp.ID, client.ExecAttachOptions{
TTY: isTerminal,
})
if err != nil {
return fmt.Errorf("failed to attach to exec: %w", err)
}
defer resp.Close()
err = cr.waitForCommand(ctx, isTerminal, resp, idResp, user, workdir)
err = cr.waitForCommand(ctx, isTerminal, resp.HijackedResponse, idResp, user, workdir)
if err != nil {
return err
}
inspectResp, err := cr.cli.ContainerExecInspect(ctx, idResp.ID)
inspectResp, err := cr.cli.ExecInspect(ctx, idResp.ID, client.ExecInspectOptions{})
if err != nil {
return fmt.Errorf("failed to inspect exec: %w", err)
}
switch inspectResp.ExitCode {
case 0:
if inspectResp.ExitCode == 0 {
return nil
case 127:
return fmt.Errorf("exitcode '%d': command not found, please refer to https://github.com/nektos/act/issues/107 for more information", inspectResp.ExitCode)
default:
return fmt.Errorf("exitcode '%d': failure", inspectResp.ExitCode)
}
return ExitCodeError(inspectResp.ExitCode)
}
}
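The `switch` on exit codes collapses into a single `ExitCodeError(inspectResp.ExitCode)`. The tests later in this diff expect that error to render as `Process completed with exit code N.`; a minimal sketch of a type satisfying that contract (the repo's actual definition may differ):

```go
package main

import "fmt"

// ExitCodeError carries a process exit status as an error. The message
// format matches what the tests in this diff assert.
type ExitCodeError int64

func (e ExitCodeError) Error() string {
	return fmt.Sprintf("Process completed with exit code %d.", int64(e))
}

func main() {
	var err error = ExitCodeError(127)
	fmt.Println(err) // Process completed with exit code 127.
}
```

Because the type is an integer, callers can recover the raw code with `errors.As` plus a comparison, as `TestDockerExecFailure` does.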
func (cr *containerReference) tryReadID(opt string, cbk func(id int)) common.Executor {
return func(ctx context.Context) error {
idResp, err := cr.cli.ContainerExecCreate(ctx, cr.id, types.ExecConfig{
idResp, err := cr.cli.ExecCreate(ctx, cr.id, client.ExecCreateOptions{
Cmd: []string{"id", opt},
AttachStdout: true,
AttachStderr: true,
@@ -655,7 +671,7 @@ func (cr *containerReference) tryReadID(opt string, cbk func(id int)) common.Exe
return nil
}
resp, err := cr.cli.ContainerExecAttach(ctx, idResp.ID, types.ExecStartCheck{})
resp, err := cr.cli.ExecAttach(ctx, idResp.ID, client.ExecAttachOptions{})
if err != nil {
return nil
}
@@ -685,7 +701,7 @@ func (cr *containerReference) tryReadGID() common.Executor {
return cr.tryReadID("-g", func(id int) { cr.GID = id })
}
func (cr *containerReference) waitForCommand(ctx context.Context, isTerminal bool, resp types.HijackedResponse, _ types.IDResponse, _, _ string) error {
func (cr *containerReference) waitForCommand(ctx context.Context, isTerminal bool, resp client.HijackedResponse, _ client.ExecCreateResult, _, _ string) error {
logger := common.Logger(ctx)
cmdResponse := make(chan error)
@@ -740,12 +756,18 @@ func (cr *containerReference) CopyTarStream(ctx context.Context, destPath string
Typeflag: tar.TypeDir,
})
tw.Close()
err := cr.cli.CopyToContainer(ctx, cr.id, "/", buf, types.CopyToContainerOptions{})
_, err := cr.cli.CopyToContainer(ctx, cr.id, client.CopyToContainerOptions{
DestinationPath: "/",
Content: buf,
})
if err != nil {
return fmt.Errorf("failed to mkdir to copy content to container: %w", err)
}
// Copy Content
err = cr.cli.CopyToContainer(ctx, cr.id, destPath, tarStream, types.CopyToContainerOptions{})
_, err = cr.cli.CopyToContainer(ctx, cr.id, client.CopyToContainerOptions{
DestinationPath: destPath,
Content: tarStream,
})
if err != nil {
return fmt.Errorf("failed to copy content to container: %w", err)
}
@@ -819,7 +841,10 @@ func (cr *containerReference) copyDir(dstPath, srcPath string, useGitIgnore bool
if err != nil {
return fmt.Errorf("failed to seek tar archive: %w", err)
}
err = cr.cli.CopyToContainer(ctx, cr.id, "/", tarFile, types.CopyToContainerOptions{})
_, err = cr.cli.CopyToContainer(ctx, cr.id, client.CopyToContainerOptions{
DestinationPath: "/",
Content: tarFile,
})
if err != nil {
return fmt.Errorf("failed to copy content to container: %w", err)
}
@@ -853,7 +878,10 @@ func (cr *containerReference) copyContent(dstPath string, files ...*FileEntry) c
}
logger.Debugf("Extracting content to '%s'", dstPath)
err := cr.cli.CopyToContainer(ctx, cr.id, dstPath, &buf, types.CopyToContainerOptions{})
_, err := cr.cli.CopyToContainer(ctx, cr.id, client.CopyToContainerOptions{
DestinationPath: dstPath,
Content: &buf,
})
if err != nil {
return fmt.Errorf("failed to copy content to container: %w", err)
}
@@ -863,7 +891,7 @@ func (cr *containerReference) copyContent(dstPath string, files ...*FileEntry) c
func (cr *containerReference) attach() common.Executor {
return func(ctx context.Context) error {
out, err := cr.cli.ContainerAttach(ctx, cr.id, types.ContainerAttachOptions{ //nolint:staticcheck // pre-existing issue from nektos/act
out, err := cr.cli.ContainerAttach(ctx, cr.id, client.ContainerAttachOptions{
Stream: true,
Stdout: true,
Stderr: true,
@@ -901,7 +929,7 @@ func (cr *containerReference) start() common.Executor {
logger := common.Logger(ctx)
logger.Debugf("Starting container: %v", cr.id)
if err := cr.cli.ContainerStart(ctx, cr.id, types.ContainerStartOptions{}); err != nil { //nolint:staticcheck // pre-existing issue from nektos/act
if _, err := cr.cli.ContainerStart(ctx, cr.id, client.ContainerStartOptions{}); err != nil {
return fmt.Errorf("failed to start container: %w", err)
}
@@ -913,14 +941,16 @@ func (cr *containerReference) start() common.Executor {
func (cr *containerReference) wait() common.Executor {
return func(ctx context.Context) error {
logger := common.Logger(ctx)
statusCh, errCh := cr.cli.ContainerWait(ctx, cr.id, container.WaitConditionNotRunning)
waitResult := cr.cli.ContainerWait(ctx, cr.id, client.ContainerWaitOptions{
Condition: container.WaitConditionNotRunning,
})
var statusCode int64
select {
case err := <-errCh:
case err := <-waitResult.Error:
if err != nil {
return fmt.Errorf("failed to wait for container: %w", err)
}
case status := <-statusCh:
case status := <-waitResult.Result:
statusCode = status.StatusCode
}
@@ -930,7 +960,7 @@ func (cr *containerReference) wait() common.Executor {
return nil
}
return fmt.Errorf("exit with `FAILURE`: %v", statusCode)
return ExitCodeError(statusCode)
}
}
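The `ContainerWaitResult` shape above (separate result and error channels consumed with `select`) can be sketched with plain stdlib channels. Note the explicit conversion to receive-only channel types, the same cast `TestDockerWaitFailure` uses when stubbing the mock:

```go
package main

import "fmt"

// waitResult mimics the shape of client.ContainerWaitResult: two
// receive-only channels, exactly one of which yields a value.
type waitResult struct {
	Result <-chan int64
	Error  <-chan error
}

func fakeWait(status int64) waitResult {
	statusCh := make(chan int64, 1)
	errCh := make(chan error, 1)
	statusCh <- status
	// Bidirectional channels convert to receive-only with an explicit cast.
	return waitResult{
		Result: (<-chan int64)(statusCh),
		Error:  (<-chan error)(errCh),
	}
}

func main() {
	wr := fakeWait(2)
	select {
	case err := <-wr.Error:
		fmt.Println("error:", err)
	case code := <-wr.Result:
		fmt.Println("exit code:", code) // exit code: 2
	}
}
```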

@@ -17,18 +17,23 @@ import (
"gitea.com/gitea/runner/act/common"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/client"
"github.com/moby/moby/api/types/container"
mobyclient "github.com/moby/moby/client"
"github.com/sirupsen/logrus/hooks/test"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func TestDocker(t *testing.T) {
if testing.Short() {
t.Skip("skipping integration test")
}
ctx := context.Background()
client, err := GetDockerClient(ctx)
assert.NoError(t, err) //nolint:testifylint // pre-existing issue from nektos/act
if err != nil {
t.Skipf("skipping integration test: %v", err)
}
defer client.Close()
dockerBuild := NewDockerBuildExecutor(NewDockerBuildExecutorInput{
@@ -66,28 +71,33 @@ func TestDocker(t *testing.T) {
}
type mockDockerClient struct {
client.APIClient
mobyclient.APIClient
mock.Mock
}
func (m *mockDockerClient) ContainerExecCreate(ctx context.Context, id string, opts types.ExecConfig) (types.IDResponse, error) {
func (m *mockDockerClient) ExecCreate(ctx context.Context, id string, opts mobyclient.ExecCreateOptions) (mobyclient.ExecCreateResult, error) {
args := m.Called(ctx, id, opts)
return args.Get(0).(types.IDResponse), args.Error(1)
return args.Get(0).(mobyclient.ExecCreateResult), args.Error(1)
}
func (m *mockDockerClient) ContainerExecAttach(ctx context.Context, id string, opts types.ExecStartCheck) (types.HijackedResponse, error) {
func (m *mockDockerClient) ExecAttach(ctx context.Context, id string, opts mobyclient.ExecAttachOptions) (mobyclient.ExecAttachResult, error) {
args := m.Called(ctx, id, opts)
return args.Get(0).(types.HijackedResponse), args.Error(1)
return args.Get(0).(mobyclient.ExecAttachResult), args.Error(1)
}
func (m *mockDockerClient) ContainerExecInspect(ctx context.Context, execID string) (types.ContainerExecInspect, error) {
args := m.Called(ctx, execID)
return args.Get(0).(types.ContainerExecInspect), args.Error(1)
func (m *mockDockerClient) ExecInspect(ctx context.Context, execID string, opts mobyclient.ExecInspectOptions) (mobyclient.ExecInspectResult, error) {
args := m.Called(ctx, execID, opts)
return args.Get(0).(mobyclient.ExecInspectResult), args.Error(1)
}
func (m *mockDockerClient) CopyToContainer(ctx context.Context, id, path string, content io.Reader, options types.CopyToContainerOptions) error {
args := m.Called(ctx, id, path, content, options)
return args.Error(0)
func (m *mockDockerClient) ContainerWait(ctx context.Context, containerID string, opts mobyclient.ContainerWaitOptions) mobyclient.ContainerWaitResult {
args := m.Called(ctx, containerID, opts)
return args.Get(0).(mobyclient.ContainerWaitResult)
}
func (m *mockDockerClient) CopyToContainer(ctx context.Context, id string, options mobyclient.CopyToContainerOptions) (mobyclient.CopyToContainerResult, error) {
args := m.Called(ctx, id, options)
return args.Get(0).(mobyclient.CopyToContainerResult), args.Error(1)
}
type endlessReader struct {
@@ -119,10 +129,12 @@ func TestDockerExecAbort(t *testing.T) {
conn.On("Write", mock.AnythingOfType("[]uint8")).Return(1, nil)
client := &mockDockerClient{}
client.On("ContainerExecCreate", ctx, "123", mock.AnythingOfType("types.ExecConfig")).Return(types.IDResponse{ID: "id"}, nil)
client.On("ContainerExecAttach", ctx, "id", mock.AnythingOfType("types.ExecStartCheck")).Return(types.HijackedResponse{
Conn: conn,
Reader: bufio.NewReader(endlessReader{}),
client.On("ExecCreate", ctx, "123", mock.AnythingOfType("client.ExecCreateOptions")).Return(mobyclient.ExecCreateResult{ID: "id"}, nil)
client.On("ExecAttach", ctx, "id", mock.AnythingOfType("client.ExecAttachOptions")).Return(mobyclient.ExecAttachResult{
HijackedResponse: mobyclient.HijackedResponse{
Conn: conn,
Reader: bufio.NewReader(endlessReader{}),
},
}, nil)
cr := &containerReference{
@@ -156,12 +168,14 @@ func TestDockerExecFailure(t *testing.T) {
conn := &mockConn{}
client := &mockDockerClient{}
client.On("ContainerExecCreate", ctx, "123", mock.AnythingOfType("types.ExecConfig")).Return(types.IDResponse{ID: "id"}, nil)
client.On("ContainerExecAttach", ctx, "id", mock.AnythingOfType("types.ExecStartCheck")).Return(types.HijackedResponse{
Conn: conn,
Reader: bufio.NewReader(strings.NewReader("output")),
client.On("ExecCreate", ctx, "123", mock.AnythingOfType("client.ExecCreateOptions")).Return(mobyclient.ExecCreateResult{ID: "id"}, nil)
client.On("ExecAttach", ctx, "id", mock.AnythingOfType("client.ExecAttachOptions")).Return(mobyclient.ExecAttachResult{
HijackedResponse: mobyclient.HijackedResponse{
Conn: conn,
Reader: bufio.NewReader(strings.NewReader("output")),
},
}, nil)
client.On("ContainerExecInspect", ctx, "id").Return(types.ContainerExecInspect{
client.On("ExecInspect", ctx, "id", mobyclient.ExecInspectOptions{}).Return(mobyclient.ExecInspectResult{
ExitCode: 1,
}, nil)
@@ -174,20 +188,56 @@ func TestDockerExecFailure(t *testing.T) {
}
err := cr.exec([]string{""}, map[string]string{}, "user", "workdir")(ctx)
assert.Error(t, err, "exit with `FAILURE`: 1") //nolint:testifylint // pre-existing issue from nektos/act
var exitErr ExitCodeError
require.ErrorAs(t, err, &exitErr)
assert.Equal(t, ExitCodeError(1), exitErr)
assert.Equal(t, "Process completed with exit code 1.", err.Error())
conn.AssertExpectations(t)
client.AssertExpectations(t)
}
func TestDockerWaitFailure(t *testing.T) {
ctx := context.Background()
statusCh := make(chan container.WaitResponse, 1)
statusCh <- container.WaitResponse{StatusCode: 2}
errCh := make(chan error, 1)
client := &mockDockerClient{}
client.On("ContainerWait", ctx, "123", mobyclient.ContainerWaitOptions{Condition: container.WaitConditionNotRunning}).
Return(mobyclient.ContainerWaitResult{
Result: (<-chan container.WaitResponse)(statusCh),
Error: (<-chan error)(errCh),
})
cr := &containerReference{
id: "123",
cli: client,
input: &NewContainerInput{
Image: "image",
},
}
err := cr.wait()(ctx)
var exitErr ExitCodeError
require.ErrorAs(t, err, &exitErr)
assert.Equal(t, ExitCodeError(2), exitErr)
assert.Equal(t, "Process completed with exit code 2.", err.Error())
client.AssertExpectations(t)
}
func TestDockerCopyTarStream(t *testing.T) {
ctx := context.Background()
conn := &mockConn{}
client := &mockDockerClient{}
client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(nil)
client.On("CopyToContainer", ctx, "123", "/var/run/act", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(nil)
client.On("CopyToContainer", ctx, "123", mock.MatchedBy(func(opts mobyclient.CopyToContainerOptions) bool {
return opts.DestinationPath == "/" && opts.Content != nil
})).Return(mobyclient.CopyToContainerResult{}, nil)
client.On("CopyToContainer", ctx, "123", mock.MatchedBy(func(opts mobyclient.CopyToContainerOptions) bool {
return opts.DestinationPath == "/var/run/act" && opts.Content != nil
})).Return(mobyclient.CopyToContainerResult{}, nil)
cr := &containerReference{
id: "123",
cli: client,
@@ -198,20 +248,18 @@ func TestDockerCopyTarStream(t *testing.T) {
_ = cr.CopyTarStream(ctx, "/var/run/act", &bytes.Buffer{})
conn.AssertExpectations(t)
client.AssertExpectations(t)
}
func TestDockerCopyTarStreamErrorInCopyFiles(t *testing.T) {
ctx := context.Background()
conn := &mockConn{}
merr := errors.New("Failure")
client := &mockDockerClient{}
client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(merr)
client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(merr)
client.On("CopyToContainer", ctx, "123", mock.MatchedBy(func(opts mobyclient.CopyToContainerOptions) bool {
return opts.DestinationPath == "/" && opts.Content != nil
})).Return(mobyclient.CopyToContainerResult{}, merr)
cr := &containerReference{
id: "123",
cli: client,
@@ -223,20 +271,21 @@ func TestDockerCopyTarStreamErrorInCopyFiles(t *testing.T) {
err := cr.CopyTarStream(ctx, "/var/run/act", &bytes.Buffer{})
assert.ErrorIs(t, err, merr) //nolint:testifylint // pre-existing issue from nektos/act
conn.AssertExpectations(t)
client.AssertExpectations(t)
}
func TestDockerCopyTarStreamErrorInMkdir(t *testing.T) {
ctx := context.Background()
conn := &mockConn{}
merr := errors.New("Failure")
client := &mockDockerClient{}
client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(nil)
client.On("CopyToContainer", ctx, "123", "/var/run/act", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(merr)
client.On("CopyToContainer", ctx, "123", mock.MatchedBy(func(opts mobyclient.CopyToContainerOptions) bool {
return opts.DestinationPath == "/" && opts.Content != nil
})).Return(mobyclient.CopyToContainerResult{}, nil)
client.On("CopyToContainer", ctx, "123", mock.MatchedBy(func(opts mobyclient.CopyToContainerOptions) bool {
return opts.DestinationPath == "/var/run/act" && opts.Content != nil
})).Return(mobyclient.CopyToContainerResult{}, merr)
cr := &containerReference{
id: "123",
cli: client,
@@ -248,7 +297,6 @@ func TestDockerCopyTarStreamErrorInMkdir(t *testing.T) {
err := cr.CopyTarStream(ctx, "/var/run/act", &bytes.Buffer{})
assert.ErrorIs(t, err, merr) //nolint:testifylint // pre-existing issue from nektos/act
conn.AssertExpectations(t)
client.AssertExpectations(t)
}

@@ -12,7 +12,7 @@ import (
"gitea.com/gitea/runner/act/common"
"github.com/docker/docker/api/types"
"github.com/moby/moby/api/types/system"
"github.com/pkg/errors"
)
@@ -51,8 +51,8 @@ func RunnerArch(ctx context.Context) string {
return runtime.GOOS
}
func GetHostInfo(ctx context.Context) (info types.Info, err error) {
return types.Info{}, nil
func GetHostInfo(ctx context.Context) (info system.Info, err error) {
return system.Info{}, nil
}
func NewDockerVolumeRemoveExecutor(volume string, force bool) common.Executor {

@@ -11,8 +11,7 @@ import (
"gitea.com/gitea/runner/act/common"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/volume"
"github.com/moby/moby/client"
)
func NewDockerVolumeRemoveExecutor(volumeName string, force bool) common.Executor {
@@ -23,12 +22,12 @@ func NewDockerVolumeRemoveExecutor(volumeName string, force bool) common.Executo
}
defer cli.Close()
list, err := cli.VolumeList(ctx, volume.ListOptions{Filters: filters.NewArgs()})
list, err := cli.VolumeList(ctx, client.VolumeListOptions{})
if err != nil {
return err
}
for _, vol := range list.Volumes {
for _, vol := range list.Items {
if vol.Name == volumeName {
return removeExecutor(volumeName, force)(ctx)
}
@@ -42,7 +41,7 @@ func NewDockerVolumeRemoveExecutor(volumeName string, force bool) common.Executo
func removeExecutor(volume string, force bool) common.Executor {
return func(ctx context.Context) error {
logger := common.Logger(ctx)
logger.Debugf("%sdocker volume rm %s", logPrefix, volume)
logger.Debugf("docker volume rm %s", volume)
if common.Dryrun(ctx) {
return nil
@@ -54,6 +53,7 @@ func removeExecutor(volume string, force bool) common.Executor {
}
defer cli.Close()
return cli.VolumeRemove(ctx, volume, force)
_, err = cli.VolumeRemove(ctx, volume, client.VolumeRemoveOptions{Force: force})
return err
}
}

@@ -16,7 +16,9 @@ import (
"os/exec"
"path/filepath"
"runtime"
"strconv"
"strings"
"sync"
"time"
"gitea.com/gitea/runner/act/common"
@@ -34,9 +36,15 @@ type HostEnvironment struct {
TmpDir string
ToolCache string
Workdir string
ActPath string
CleanUp func()
StdOut io.Writer
// BindWorkdir is true when the app runner mounts the workspace on the host and
// deletes the task directory after the job; host teardown must not remove Workdir.
BindWorkdir bool
ActPath string
CleanUp func()
StdOut io.Writer
mu sync.Mutex
runningPIDs map[int]struct{}
}
func (e *HostEnvironment) Create(_, _ []string) common.Executor {
@@ -344,8 +352,30 @@ func (e *HostEnvironment) exec(ctx context.Context, command []string, cmdline st
if ppty != nil {
go writeKeepAlive(ppty)
}
err = cmd.Run()
// Split Start/Wait so the PID can be registered before the process can exit;
// cmd.Run() would block until exit, by which time the PID may have been reused.
if err := cmd.Start(); err != nil {
return err
}
if cmd.Process != nil {
e.mu.Lock()
if e.runningPIDs == nil {
e.runningPIDs = map[int]struct{}{}
}
e.runningPIDs[cmd.Process.Pid] = struct{}{}
e.mu.Unlock()
defer func(pid int) {
e.mu.Lock()
delete(e.runningPIDs, pid)
e.mu.Unlock()
}(cmd.Process.Pid)
}
err = cmd.Wait()
if err != nil {
var exitErr *exec.ExitError
if errors.As(err, &exitErr) {
return ExitCodeError(exitErr.ExitCode())
}
return err
}
if tty != nil {
@@ -385,12 +415,83 @@ func (e *HostEnvironment) UpdateFromEnv(srcPath string, env *map[string]string)
return parseEnvFile(e, srcPath, env)
}
func removePathWithRetry(ctx context.Context, path string) error {
if path == "" {
return nil
}
attempts := 1
delay := time.Duration(0)
if runtime.GOOS == "windows" {
attempts = 5
delay = 200 * time.Millisecond
}
var lastErr error
for i := 0; i < attempts; i++ {
if i > 0 {
select {
case <-ctx.Done():
return ctx.Err()
case <-time.After(delay):
}
}
lastErr = os.RemoveAll(path)
if lastErr == nil {
return nil
}
}
return lastErr
}
func (e *HostEnvironment) terminateRunningProcesses(ctx context.Context) {
if runtime.GOOS != "windows" {
return
}
e.mu.Lock()
pids := make([]int, 0, len(e.runningPIDs))
for pid := range e.runningPIDs {
pids = append(pids, pid)
}
e.mu.Unlock()
if len(pids) == 0 {
return
}
logger := common.Logger(ctx)
for _, pid := range pids {
// Best-effort: forcibly terminate process tree to release file handles
// so that workspace cleanup can succeed on Windows.
cmd := exec.CommandContext(ctx, "taskkill", "/PID", strconv.Itoa(pid), "/T", "/F")
out, err := cmd.CombinedOutput()
if err != nil {
logger.Debugf("taskkill failed for pid=%d: %v output=%s", pid, err, strings.TrimSpace(string(out)))
}
}
}
func (e *HostEnvironment) Remove() common.Executor {
return func(ctx context.Context) error {
// Ensure any lingering child processes are ended before attempting
// to remove the workspace (Windows file locks otherwise prevent cleanup).
e.terminateRunningProcesses(ctx)
// Only removes per-job misc state. Must not remove the cache/toolcache root.
if e.CleanUp != nil {
e.CleanUp()
}
return os.RemoveAll(e.Path)
logger := common.Logger(ctx)
var errs []error
if err := removePathWithRetry(ctx, e.Path); err != nil {
logger.Warnf("failed to remove host misc state %s: %v", e.Path, err)
errs = append(errs, err)
}
if !e.BindWorkdir && e.Workdir != "" {
if err := removePathWithRetry(ctx, e.Workdir); err != nil {
logger.Warnf("failed to remove host workspace %s: %v", e.Workdir, err)
errs = append(errs, err)
}
}
return errors.Join(errs...)
}
}

@@ -11,9 +11,14 @@ import (
"os"
"path"
"path/filepath"
"runtime"
"testing"
"gitea.com/gitea/runner/act/common"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Type assert HostEnvironment implements ExecutionsEnvironment
@@ -69,3 +74,76 @@ func TestGetContainerArchive(t *testing.T) {
_, err = reader.Next()
assert.ErrorIs(t, err, io.EOF)
}
func TestHostEnvironmentExecExitCode(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("uses POSIX shell")
}
dir := t.TempDir()
ctx := context.Background()
e := &HostEnvironment{
Path: filepath.Join(dir, "path"),
TmpDir: filepath.Join(dir, "tmp"),
ToolCache: filepath.Join(dir, "tool_cache"),
ActPath: filepath.Join(dir, "act_path"),
StdOut: io.Discard,
Workdir: filepath.Join(dir, "path"),
}
for _, p := range []string{e.Path, e.TmpDir, e.ToolCache, e.ActPath} {
assert.NoError(t, os.MkdirAll(p, 0o700)) //nolint:testifylint // test setup
}
err := e.Exec([]string{"sh", "-c", "exit 3"}, map[string]string{"PATH": os.Getenv("PATH")}, "", "")(ctx)
var exitErr ExitCodeError
require.ErrorAs(t, err, &exitErr)
assert.Equal(t, ExitCodeError(3), exitErr)
assert.Equal(t, "Process completed with exit code 3.", err.Error())
}
func TestHostEnvironmentRemoveCleansWorkdir(t *testing.T) {
logger := logrus.New()
ctx := common.WithLogger(context.Background(), logrus.NewEntry(logger))
base := t.TempDir()
miscRoot := filepath.Join(base, "misc")
path := filepath.Join(miscRoot, "hostexecutor")
require.NoError(t, os.MkdirAll(path, 0o700))
workdir := filepath.Join(base, "workspace", "owner", "repo")
require.NoError(t, os.MkdirAll(workdir, 0o700))
e := &HostEnvironment{
Path: path,
Workdir: workdir,
BindWorkdir: false,
CleanUp: func() {
_ = os.RemoveAll(miscRoot)
},
StdOut: os.Stdout,
}
require.NoError(t, e.Remove()(ctx))
_, err := os.Stat(workdir)
assert.ErrorIs(t, err, os.ErrNotExist)
}
func TestHostEnvironmentRemoveSkipsWorkdirWhenBindWorkdir(t *testing.T) {
logger := logrus.New()
ctx := common.WithLogger(context.Background(), logrus.NewEntry(logger))
base := t.TempDir()
miscRoot := filepath.Join(base, "misc")
path := filepath.Join(miscRoot, "hostexecutor")
require.NoError(t, os.MkdirAll(path, 0o700))
workdir := filepath.Join(base, "workspace", "123", "owner", "repo")
require.NoError(t, os.MkdirAll(workdir, 0o700))
e := &HostEnvironment{
Path: path,
Workdir: workdir,
BindWorkdir: true,
CleanUp: func() {
_ = os.RemoveAll(miscRoot)
},
StdOut: os.Stdout,
}
require.NoError(t, e.Remove()(ctx))
_, err := os.Stat(workdir)
require.NoError(t, err)
}

@@ -251,7 +251,7 @@ func (impl *interperterImpl) evaluateArrayDeref(arrayDerefNode *actionlint.Array
func (impl *interperterImpl) getPropertyValue(left reflect.Value, property string) (value any, err error) {
switch left.Kind() {
case reflect.Ptr:
case reflect.Pointer:
return impl.getPropertyValue(left.Elem(), property)
case reflect.Struct:
@@ -321,7 +321,7 @@ func (impl *interperterImpl) getPropertyValue(left reflect.Value, property strin
}
func (impl *interperterImpl) getMapValue(value reflect.Value) (any, error) {
if value.Kind() == reflect.Ptr {
if value.Kind() == reflect.Pointer {
return impl.getMapValue(value.Elem())
}

@@ -73,10 +73,16 @@ func (cc *CopyCollector) WriteFile(fpath string, fi fs.FileInfo, linkName string
if err := os.MkdirAll(filepath.Dir(fdestpath), 0o777); err != nil {
return err
}
// Remove any existing destination so we can overwrite read-only files
// (e.g. git pack files at mode 0444 trip EACCES on macOS and "Access is
// denied" on Windows when reopened with O_WRONLY) and so os.Symlink does
// not fail with EEXIST. os.Remove clears the Windows read-only attribute
// internally; on Unix unlink only needs write permission on the parent.
_ = os.Remove(fdestpath)
if f == nil {
return os.Symlink(linkName, fdestpath)
}
df, err := os.OpenFile(fdestpath, os.O_CREATE|os.O_WRONLY, fi.Mode())
df, err := os.OpenFile(fdestpath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, fi.Mode())
if err != nil {
return err
}

@@ -8,7 +8,9 @@ import (
"archive/tar"
"context"
"io"
"os"
"path/filepath"
"runtime"
"strings"
"testing"
@@ -20,6 +22,7 @@ import (
"github.com/go-git/go-git/v5/plumbing/format/index"
"github.com/go-git/go-git/v5/storage/filesystem"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type memoryFs struct {
@@ -174,3 +177,47 @@ func TestSymlinks(t *testing.T) {
assert.Equal(t, ".env", files["test.env"].Linkname)
assert.ErrorIs(t, err, io.EOF, "tar must be read cleanly to EOF")
}
// Regression for https://gitea.com/gitea/runner/issues/876 and /941:
// re-copying an action directory must overwrite a pre-existing read-only
// file (e.g. a git pack .idx at mode 0444) instead of failing with EACCES
// on macOS or "Access is denied" on Windows.
func TestCopyCollectorWriteFileOverwritesReadOnlyFile(t *testing.T) {
dst := t.TempDir()
target := filepath.Join(dst, "sub", "pack.idx")
require.NoError(t, os.MkdirAll(filepath.Dir(target), 0o755))
require.NoError(t, os.WriteFile(target, []byte("old"), 0o444))
src := filepath.Join(t.TempDir(), "pack.idx")
require.NoError(t, os.WriteFile(src, []byte("new"), 0o444))
fi, err := os.Stat(src)
require.NoError(t, err)
cc := &CopyCollector{DstDir: dst}
require.NoError(t, cc.WriteFile("sub/pack.idx", fi, "", strings.NewReader("new")))
got, err := os.ReadFile(target)
require.NoError(t, err)
assert.Equal(t, "new", string(got))
}
// Without the destination removal, os.Symlink fails with EEXIST when the
// path already holds a regular file from an earlier copy of the action.
func TestCopyCollectorWriteFileOverwritesFileWithSymlink(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("creating symlinks requires elevated privileges on Windows")
}
dst := t.TempDir()
target := filepath.Join(dst, "link")
require.NoError(t, os.WriteFile(target, []byte("stale"), 0o644))
fi, err := os.Lstat(target)
require.NoError(t, err)
cc := &CopyCollector{DstDir: dst}
require.NoError(t, cc.WriteFile("link", fi, "target", nil))
resolved, err := os.Readlink(target)
require.NoError(t, err)
assert.Equal(t, "target", resolved)
}

@@ -5,7 +5,7 @@ jobs:
with-volumes:
runs-on: ubuntu-latest
container:
image: node:16-buster-slim
image: node:24-bookworm-slim
volumes:
- my_docker_volume:/path/to/volume
- /path/to/nonexist/directory

@@ -19,6 +19,7 @@ import (
"strings"
"gitea.com/gitea/runner/act/common"
"gitea.com/gitea/runner/act/common/git"
"gitea.com/gitea/runner/act/container"
"gitea.com/gitea/runner/act/model"
@@ -44,6 +45,11 @@ type runAction func(step actionStep, actionDir string, remoteAction *remoteActio
//go:embed res/trampoline.js
var trampoline embed.FS
var (
ContainerImageExistsLocally = container.ImageExistsLocally
ContainerNewDockerBuildExecutor = container.NewDockerBuildExecutor
)
func readActionImpl(ctx context.Context, step *model.Step, actionDir, actionPath string, readFile actionYamlReader, writeFile fileWriter) (*model.Action, error) {
logger := common.Logger(ctx)
allErrors := []error{}
@@ -148,6 +154,8 @@ func maybeCopyToActionDir(ctx context.Context, step actionStep, actionDir, actio
return rc.JobContainer.CopyTarStream(ctx, containerActionDirCopy, ta)
}
defer git.AcquireCloneLock(actionDir)()
if err := removeGitIgnore(ctx, actionDir); err != nil {
return err
}
@@ -197,7 +205,7 @@ func runActionImpl(step actionStep, actionDir string, remoteAction *remoteAction
if remoteAction == nil {
location = containerActionDir
}
return execAsDocker(ctx, step, actionName, location, remoteAction == nil)
return execAsDocker(ctx, step, actionName, actionDir, location, remoteAction == nil)
case x.IsComposite():
if err := maybeCopyToActionDir(ctx, step, actionDir, actionPath, containerActionDir); err != nil {
return err
@@ -265,7 +273,7 @@ func removeGitIgnore(ctx context.Context, directory string) error {
}
// TODO: break out parts of function to reduce complexity
func execAsDocker(ctx context.Context, step actionStep, actionName, basedir string, localAction bool) error {
func execAsDocker(ctx context.Context, step actionStep, actionName, actionDir, basedir string, localAction bool) error {
logger := common.Logger(ctx)
rc := step.getRunContext()
action := step.getActionModel()
@@ -284,12 +292,12 @@ func execAsDocker(ctx context.Context, step actionStep, actionName, basedir stri
image = strings.ToLower(image)
contextDir, fileName := filepath.Split(filepath.Join(basedir, action.Runs.Image))
anyArchExists, err := container.ImageExistsLocally(ctx, image, "any")
anyArchExists, err := ContainerImageExistsLocally(ctx, image, "any")
if err != nil {
return err
}
correctArchExists, err := container.ImageExistsLocally(ctx, image, rc.Config.ContainerArchitecture)
correctArchExists, err := ContainerImageExistsLocally(ctx, image, rc.Config.ContainerArchitecture)
if err != nil {
return err
}
@@ -321,13 +329,21 @@ func execAsDocker(ctx context.Context, step actionStep, actionName, basedir stri
}
defer buildContext.Close()
}
prepImage = container.NewDockerBuildExecutor(container.NewDockerBuildExecutorInput{
prepImage = ContainerNewDockerBuildExecutor(container.NewDockerBuildExecutorInput{
ContextDir: contextDir,
Dockerfile: fileName,
ImageTag: image,
BuildContext: buildContext,
Platform: rc.Config.ContainerArchitecture,
})
if buildContext == nil {
// The clone lock is held across the whole build: the daemon drains contextDir lazily.
inner := prepImage
prepImage = func(ctx context.Context) error {
defer git.AcquireCloneLock(actionDir)()
return inner(ctx)
}
}
} else {
logger.Debugf("image '%s' for architecture '%s' already exists", image, rc.Config.ContainerArchitecture)
}

View File

@@ -9,8 +9,13 @@ import (
"io"
"io/fs"
"strings"
"sync"
"testing"
"time"
"gitea.com/gitea/runner/act/common"
"gitea.com/gitea/runner/act/common/git"
"gitea.com/gitea/runner/act/container"
"gitea.com/gitea/runner/act/model"
"github.com/stretchr/testify/assert"
@@ -252,3 +257,153 @@ func TestActionRunner(t *testing.T) {
})
}
}
func TestMaybeCopyToActionDirHoldsCloneLock(t *testing.T) {
ctx := context.Background()
actionDir := t.TempDir()
releaseCopy := make(chan struct{})
release := sync.OnceFunc(func() { close(releaseCopy) })
defer release()
copyEntered := make(chan struct{})
cm := &containerMock{}
cm.On("CopyDir", "/var/run/act/actions/", actionDir+"/", false).Return(func(ctx context.Context) error {
close(copyEntered)
<-releaseCopy
return nil
})
step := &stepActionRemote{
Step: &model.Step{Uses: "remote/action@v1"},
RunContext: &RunContext{
Config: &Config{},
JobContainer: cm,
},
}
copyDone := make(chan error, 1)
go func() {
copyDone <- maybeCopyToActionDir(ctx, step, actionDir, "", "/var/run/act/actions/")
}()
select {
case <-copyEntered:
case err := <-copyDone:
t.Fatalf("maybeCopyToActionDir returned before CopyDir was entered: %v", err)
case <-time.After(time.Second):
t.Fatal("CopyDir was not entered within 1 second")
}
peerAcquired := make(chan struct{})
go func() {
unlock := git.AcquireCloneLock(actionDir)
close(peerAcquired)
unlock()
}()
select {
case <-peerAcquired:
t.Fatal("peer AcquireCloneLock returned while CopyDir was running")
case <-time.After(50 * time.Millisecond):
}
release()
select {
case err := <-copyDone:
if err != nil {
t.Fatalf("maybeCopyToActionDir returned error: %v", err)
}
case <-time.After(time.Second):
t.Fatal("maybeCopyToActionDir did not return after CopyDir was unblocked")
}
select {
case <-peerAcquired:
case <-time.After(time.Second):
t.Fatal("peer AcquireCloneLock did not proceed after lock released")
}
cm.AssertExpectations(t)
}
func TestExecAsDockerHoldsCloneLockForRemoteUncached(t *testing.T) {
actionDir := t.TempDir()
unlockOnce := sync.OnceFunc(git.AcquireCloneLock(actionDir))
defer unlockOnce()
innerEntered := make(chan struct{})
releaseInner := make(chan struct{})
releaseOnce := sync.OnceFunc(func() { close(releaseInner) })
defer releaseOnce()
origImageExists := ContainerImageExistsLocally
ContainerImageExistsLocally = func(_ context.Context, _, _ string) (bool, error) {
return false, nil
}
defer func() { ContainerImageExistsLocally = origImageExists }()
origBuildExec := ContainerNewDockerBuildExecutor
ContainerNewDockerBuildExecutor = func(_ container.NewDockerBuildExecutorInput) common.Executor {
return func(_ context.Context) error {
close(innerEntered)
<-releaseInner
return nil
}
}
defer func() { ContainerNewDockerBuildExecutor = origBuildExec }()
step := &stepActionRemote{
Step: &model.Step{ID: "1", Uses: "remote/action@v1", With: map[string]string{}},
RunContext: &RunContext{
Config: &Config{},
Run: &model.Run{
JobID: "1",
Workflow: &model.Workflow{
Name: "wf",
Jobs: map[string]*model.Job{"1": {}},
},
},
JobContainer: &containerMock{},
},
action: &model.Action{Runs: model.ActionRuns{Using: "docker", Image: "Dockerfile"}},
env: map[string]string{},
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
done := make(chan error, 1)
go func() { done <- execAsDocker(ctx, step, "test-action", actionDir, actionDir, false) }()
select {
case <-innerEntered:
t.Fatal("inner build executor ran before clone lock was released")
case err := <-done:
t.Fatalf("execAsDocker returned before inner was entered: %v", err)
case <-time.After(50 * time.Millisecond):
}
unlockOnce()
select {
case <-innerEntered:
case err := <-done:
t.Fatalf("execAsDocker returned without entering inner: %v", err)
case <-time.After(time.Second):
t.Fatal("inner build executor not entered after lock released")
}
cancel()
releaseOnce()
select {
case <-done:
case <-time.After(time.Second):
t.Fatal("execAsDocker did not return after inner was released and ctx was canceled")
}
}

View File

@@ -120,7 +120,7 @@ func (rc *RunContext) setOutput(ctx context.Context, kvPairs map[string]string,
result, ok := rc.StepResults[stepID]
if !ok {
logger.Infof(" \U00002757 no outputs used step '%s'", stepID)
logger.Infof("No outputs registered for step '%s'", stepID)
return
}

View File

@@ -24,6 +24,13 @@ type jobInfo interface {
result(result string)
}
// reportStepError emits the GitHub Actions ##[error] annotation and records
// the error against the job so the job is reported as failed.
func reportStepError(ctx context.Context, err error) {
common.Logger(ctx).Errorf("##[error]%v", err)
common.SetJobError(ctx, err)
}
func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executor {
steps := make([]common.Executor, 0)
preSteps := make([]common.Executor, 0)
@@ -32,7 +39,7 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executo
steps = append(steps, func(ctx context.Context) error {
logger := common.Logger(ctx)
if len(info.matrix()) > 0 {
logger.Infof("\U0001F9EA Matrix: %v", info.matrix())
logger.Infof("Matrix: %v", info.matrix())
}
return nil
})
@@ -75,33 +82,36 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executo
preExec := step.pre()
preSteps = append(preSteps, useStepLogger(rc, stepModel, stepStagePre, func(ctx context.Context) error {
logger := common.Logger(ctx)
preErr := preExec(ctx)
if preErr != nil {
logger.Errorf("%v", preErr)
common.SetJobError(ctx, preErr)
reportStepError(ctx, preErr)
} else if ctx.Err() != nil {
logger.Errorf("%v", ctx.Err())
common.SetJobError(ctx, ctx.Err())
reportStepError(ctx, ctx.Err())
}
return preErr
}))
stepExec := step.main()
steps = append(steps, useStepLogger(rc, stepModel, stepStageMain, func(ctx context.Context) error {
logger := common.Logger(ctx)
err := stepExec(ctx)
if err != nil {
logger.Errorf("%v", err)
common.SetJobError(ctx, err)
reportStepError(ctx, err)
} else if ctx.Err() != nil {
logger.Errorf("%v", ctx.Err())
common.SetJobError(ctx, ctx.Err())
reportStepError(ctx, ctx.Err())
}
return nil
}))
postExec := useStepLogger(rc, stepModel, stepStagePost, step.post())
postFn := step.post()
postExec := useStepLogger(rc, stepModel, stepStagePost, func(ctx context.Context) error {
err := postFn(ctx)
if err != nil {
reportStepError(ctx, err)
} else if ctx.Err() != nil {
reportStepError(ctx, ctx.Err())
}
return err
})
if postExecutor != nil {
// run the post executor in reverse order
postExecutor = postExec.Finally(postExecutor)
@@ -196,7 +206,7 @@ func setJobResult(ctx context.Context, info jobInfo, rc *RunContext, success boo
jobResultMessage = "failed"
}
logger.WithField("jobResult", jobResult).Infof("\U0001F3C1 Job %s", jobResultMessage)
logger.WithField("jobResult", jobResult).Infof("Job %s", jobResultMessage)
}
func setJobOutputs(ctx context.Context, rc *RunContext) {

View File

@@ -7,15 +7,11 @@ package runner
import (
"archive/tar"
"context"
"errors"
"fmt"
"io/fs"
"net/url"
"os"
"path"
"regexp"
"strings"
"sync"
"gitea.com/gitea/runner/act/common"
"gitea.com/gitea/runner/act/common/git"
@@ -51,7 +47,7 @@ func newLocalReusableWorkflowExecutor(rc *RunContext) common.Executor {
token := rc.Config.GetToken()
return common.NewPipelineExecutor(
newMutexExecutor(cloneIfRequired(rc, *remoteReusableWorkflow, workflowDir, token)),
cloneRemoteReusableWorkflow(rc, remoteReusableWorkflow.CloneURL(), remoteReusableWorkflow.Ref, workflowDir, token),
newReusableWorkflowExecutor(rc, workflowDir, remoteReusableWorkflow.FilePath()),
)
}
@@ -85,7 +81,7 @@ func newRemoteReusableWorkflowExecutor(rc *RunContext) common.Executor {
token := getGitCloneToken(rc.Config, remoteReusableWorkflow.CloneURL())
return common.NewPipelineExecutor(
newMutexExecutor(cloneIfRequired(rc, *remoteReusableWorkflow, workflowDir, token)),
cloneRemoteReusableWorkflow(rc, remoteReusableWorkflow.CloneURL(), remoteReusableWorkflow.Ref, workflowDir, token),
newReusableWorkflowExecutor(rc, workflowDir, remoteReusableWorkflow.FilePath()),
)
}
@@ -125,46 +121,37 @@ func newActionCacheReusableWorkflowExecutor(rc *RunContext, filename string, rem
}
}
var executorLock sync.Mutex
func newMutexExecutor(executor common.Executor) common.Executor {
// cloneRemoteReusableWorkflow always invokes the clone executor: moving refs
// (branches, tags) must be re-resolved on each run, matching GitHub Actions.
//
// Callers must not change remoteReusableWorkflow.URL, because:
// 1. Gitea doesn't support specifying GithubContext.ServerURL via the GITHUB_SERVER_URL env var
// 2. Gitea already has the full URL (built with rc.Config.GitHubInstance) when calling newRemoteReusableWorkflowWithPlat
//
// remoteReusableWorkflow.URL = rc.getGithubContext(ctx).ServerURL
func cloneRemoteReusableWorkflow(rc *RunContext, cloneURL, ref, targetDirectory, token string) common.Executor {
return func(ctx context.Context) error {
executorLock.Lock()
defer executorLock.Unlock()
return executor(ctx)
cloneURL = rc.NewExpressionEvaluator(ctx).Interpolate(ctx, cloneURL)
return git.NewGitCloneExecutor(git.NewGitCloneExecutorInput{
URL: cloneURL,
Ref: ref,
Dir: targetDirectory,
Token: token,
OfflineMode: rc.Config.ActionOfflineMode,
})(ctx)
}
}
func cloneIfRequired(rc *RunContext, remoteReusableWorkflow remoteReusableWorkflow, targetDirectory, token string) common.Executor {
return common.NewConditionalExecutor(
func(ctx context.Context) bool {
_, err := os.Stat(targetDirectory)
notExists := errors.Is(err, fs.ErrNotExist)
return notExists
},
func(ctx context.Context) error {
// interpolate the cloneURL
cloneURL := rc.NewExpressionEvaluator(ctx).Interpolate(ctx, remoteReusableWorkflow.CloneURL())
// Do not change the remoteReusableWorkflow.URL, because:
// 1. Gitea doesn't support specifying GithubContext.ServerURL by the GITHUB_SERVER_URL env
// 2. Gitea has already full URL with rc.Config.GitHubInstance when calling newRemoteReusableWorkflowWithPlat
// remoteReusableWorkflow.URL = rc.getGithubContext(ctx).ServerURL
return git.NewGitCloneExecutor(git.NewGitCloneExecutorInput{
URL: cloneURL,
Ref: remoteReusableWorkflow.Ref,
Dir: targetDirectory,
Token: token,
OfflineMode: rc.Config.ActionOfflineMode,
})(ctx)
},
nil,
)
}
var modelNewWorkflowPlanner = model.NewWorkflowPlanner
func newReusableWorkflowExecutor(rc *RunContext, directory, workflow string) common.Executor {
return func(ctx context.Context) error {
planner, err := model.NewWorkflowPlanner(path.Join(directory, workflow), true)
// The clone lock is scoped to the YAML read so concurrent
// invocations don't serialize on the whole job run.
planner, err := func() (model.WorkflowPlanner, error) {
defer git.AcquireCloneLock(directory)()
return modelNewWorkflowPlanner(path.Join(directory, workflow), true)
}()
if err != nil {
return err
}
@@ -298,7 +285,7 @@ func setReusedWorkflowCallerResult(rc *RunContext, runner Runner) common.Executo
rc.caller.setReusedWorkflowJobResult(rc.JobName, reusedWorkflowJobResult)
} else {
rc.result(reusedWorkflowJobResult)
logger.WithField("jobResult", reusedWorkflowJobResult).Infof("\U0001F3C1 Job %s", reusedWorkflowJobResultMessage)
logger.WithField("jobResult", reusedWorkflowJobResult).Infof("Job %s", reusedWorkflowJobResultMessage)
}
}
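
The scoped-lock shape used in `newReusableWorkflowExecutor` above can be sketched on its own. In this sketch `acquire` stands in for `git.AcquireCloneLock` (lock, then return the unlock function), and `read` stands in for the workflow-planner construction:

```go
package main

import "sync"

var mu sync.Mutex

// acquire stands in for git.AcquireCloneLock: take the lock and
// return the function that releases it.
func acquire() func() {
	mu.Lock()
	return mu.Unlock
}

// loadPlan holds the lock only for the read itself, via an immediately
// invoked closure, so the long-running work that follows does not
// serialize concurrent callers.
func loadPlan(read func() (string, error)) (string, error) {
	plan, err := func() (string, error) {
		defer acquire()()
		return read()
	}()
	if err != nil {
		return "", err
	}
	// Long-running work here runs with the lock already released.
	return plan, nil
}
```

The `defer acquire()()` idiom evaluates `acquire()` immediately (taking the lock) and defers only the returned unlock, so the closure's return is the release point.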

View File

@@ -0,0 +1,134 @@
// Copyright 2026 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package runner
import (
"context"
"errors"
"os"
"os/exec"
"path/filepath"
"sync"
"testing"
"time"
"gitea.com/gitea/runner/act/common/git"
"gitea.com/gitea/runner/act/model"
"github.com/stretchr/testify/require"
)
// Regression test for go-gitea/gitea#37483: a remote reusable workflow at a moving
// ref (branch/tag) must reflect the new tip on every invocation, not stay pinned
// to the cache populated on the first run.
func TestReusableWorkflowCachedBranchRefRefreshes(t *testing.T) {
if _, err := exec.LookPath("git"); err != nil {
t.Skip("git not available in PATH")
}
remoteDir := t.TempDir()
gitMust(t, "", "init", "--bare", "--initial-branch=master", remoteDir)
workDir := t.TempDir()
gitMust(t, "", "clone", remoteDir, workDir)
gitMust(t, workDir, "config", "user.email", "test@test")
gitMust(t, workDir, "config", "user.name", "test")
gitMust(t, workDir, "checkout", "-b", "master")
const workflowPath = ".gitea/workflows/reusable.yml"
tmpl := func(tag string) string {
return "name: reusable\non:\n workflow_call:\njobs:\n build:\n runs-on: ubuntu-latest\n steps:\n - run: echo " + tag + "\n"
}
require.NoError(t, os.MkdirAll(filepath.Join(workDir, ".gitea/workflows"), 0o755))
require.NoError(t, os.WriteFile(filepath.Join(workDir, workflowPath), []byte(tmpl("v1")), 0o644))
gitMust(t, workDir, "add", workflowPath)
gitMust(t, workDir, "commit", "-m", "v1")
gitMust(t, workDir, "push", "-u", "origin", "master")
rc := &RunContext{
Config: &Config{},
Run: &model.Run{
JobID: "j1",
Workflow: &model.Workflow{
Name: "wf",
Jobs: map[string]*model.Job{"j1": {}},
},
},
}
cacheDir := t.TempDir()
require.NoError(t, cloneRemoteReusableWorkflow(rc, remoteDir, "master", cacheDir, "")(context.Background()))
got, err := os.ReadFile(filepath.Join(cacheDir, workflowPath))
require.NoError(t, err)
require.Equal(t, tmpl("v1"), string(got))
// Branch tip moves; cache key (cacheDir) does not.
require.NoError(t, os.WriteFile(filepath.Join(workDir, workflowPath), []byte(tmpl("v2")), 0o644))
gitMust(t, workDir, "commit", "-am", "v2")
gitMust(t, workDir, "push", "origin", "master")
require.NoError(t, cloneRemoteReusableWorkflow(rc, remoteDir, "master", cacheDir, "")(context.Background()))
got, err = os.ReadFile(filepath.Join(cacheDir, workflowPath))
require.NoError(t, err)
require.Equal(t, tmpl("v2"), string(got), "cached workflow file must reflect the updated branch tip")
}
func TestNewReusableWorkflowExecutorHoldsCloneLock(t *testing.T) {
workflowDir := t.TempDir()
unlockOnce := sync.OnceFunc(git.AcquireCloneLock(workflowDir))
defer unlockOnce()
plannerCalled := make(chan struct{})
origPlanner := modelNewWorkflowPlanner
modelNewWorkflowPlanner = func(string, bool) (model.WorkflowPlanner, error) {
close(plannerCalled)
return nil, errors.New("stop")
}
defer func() { modelNewWorkflowPlanner = origPlanner }()
rc := &RunContext{
Config: &Config{},
Run: &model.Run{Workflow: &model.Workflow{Jobs: map[string]*model.Job{}}},
}
exec := newReusableWorkflowExecutor(rc, workflowDir, "reusable.yml")
done := make(chan error, 1)
go func() { done <- exec(context.Background()) }()
select {
case <-plannerCalled:
t.Fatal("planner ran while clone lock was held")
case err := <-done:
t.Fatalf("executor returned before planner was reached: %v", err)
case <-time.After(50 * time.Millisecond):
}
unlockOnce()
select {
case <-plannerCalled:
case <-time.After(time.Second):
t.Fatal("planner not called after lock was released")
}
select {
case err := <-done:
require.Error(t, err)
case <-time.After(time.Second):
t.Fatal("executor did not return after planner ran")
}
}
func gitMust(t *testing.T, dir string, args ...string) {
t.Helper()
cmd := exec.Command("git", args...)
if dir != "" {
cmd.Dir = dir
}
out, err := cmd.CombinedOutput()
require.NoError(t, err, "git %v: %s", args, string(out))
}

View File

@@ -193,7 +193,7 @@ func (rc *RunContext) GetBindsAndMounts() ([]string, map[string]string) {
func (rc *RunContext) startHostEnvironment() common.Executor {
return func(ctx context.Context) error {
logger := common.Logger(ctx)
rawLogger := logger.WithField("raw_output", true)
rawLogger := logger.WithField(rawOutputField, true)
logWriter := common.NewLineWriter(rc.commandHandler(ctx), func(s string) bool {
if rc.Config.LogOutput {
rawLogger.Infof("%s", s)
@@ -220,11 +220,12 @@ func (rc *RunContext) startHostEnvironment() common.Executor {
}
toolCache := filepath.Join(cacheDir, "tool_cache")
rc.JobContainer = &container.HostEnvironment{
Path: path,
TmpDir: runnerTmp,
ToolCache: toolCache,
Workdir: rc.Config.Workdir,
ActPath: actPath,
Path: path,
TmpDir: runnerTmp,
ToolCache: toolCache,
Workdir: rc.Config.Workdir,
BindWorkdir: rc.Config.BindWorkdir,
ActPath: actPath,
CleanUp: func() {
os.RemoveAll(miscpath)
},
@@ -259,11 +260,24 @@ func (rc *RunContext) startHostEnvironment() common.Executor {
}
}
// printStartJobContainerGroup mirrors actions/runner's "Starting job container"
// section: emit the group header and summary, return a closer for ::endgroup::.
func printStartJobContainerGroup(ctx context.Context, image, name, network string) func() {
rawLogger := common.Logger(ctx).WithField(rawOutputField, true)
rawLogger.Infof("::group::Starting job container")
rawLogger.Infof("image: %s", image)
rawLogger.Infof("name: %s", name)
rawLogger.Infof("network: %s", network)
return func() {
rawLogger.Infof("::endgroup::")
}
}
func (rc *RunContext) startJobContainer() common.Executor {
return func(ctx context.Context) error {
logger := common.Logger(ctx)
image := rc.platformImage(ctx)
rawLogger := logger.WithField("raw_output", true)
rawLogger := logger.WithField(rawOutputField, true)
logWriter := common.NewLineWriter(rc.commandHandler(ctx), func(s string) bool {
if rc.Config.LogOutput {
rawLogger.Infof("%s", s)
@@ -278,7 +292,6 @@ func (rc *RunContext) startJobContainer() common.Executor {
return fmt.Errorf("failed to handle credentials: %s", err)
}
logger.Infof("\U0001f680 Start image=%s", image)
name := rc.jobContainerName()
// For gitea, to support --volumes-from <container_name_or_id> in options.
// We need to set the container name to the environment variable.
@@ -423,6 +436,7 @@ func (rc *RunContext) startJobContainer() common.Executor {
return errors.New("Failed to create job container")
}
defer printStartJobContainerGroup(ctx, image, name, networkName)()
return common.NewPipelineExecutor(
rc.pullServicesImages(rc.Config.ForcePull),
rc.JobContainer.Pull(rc.Config.ForcePull),
@@ -729,7 +743,7 @@ func (rc *RunContext) isEnabled(ctx context.Context) (bool, error) {
jobType, jobTypeErr := job.Type()
if runJobErr != nil {
return false, fmt.Errorf(" \u274C Error in if-expression: \"if: %s\" (%s)", job.If.Value, runJobErr)
return false, fmt.Errorf("if-expression %q evaluation failed: %s", job.If.Value, runJobErr)
}
if jobType == model.JobTypeInvalid {
@@ -752,7 +766,7 @@ func (rc *RunContext) isEnabled(ctx context.Context) (bool, error) {
img := rc.platformImage(ctx)
if img == "" {
for _, platformName := range rc.runsOnPlatformNames(ctx) {
l.Infof("\U0001F6A7 Skipping unsupported platform -- Try running with `-P %+v=...`", platformName)
l.Infof("Skipping unsupported platform -- Try running with `-P %+v=...`", platformName)
}
return false, nil
}

View File

@@ -5,6 +5,7 @@
package runner
import (
"bytes"
"context"
"fmt"
"os"
@@ -12,6 +13,7 @@ import (
"strings"
"testing"
"gitea.com/gitea/runner/act/common"
"gitea.com/gitea/runner/act/exprparser"
"gitea.com/gitea/runner/act/model"
@@ -635,3 +637,25 @@ func TestCreateContainerNameBoundedForLongMatrixInput(t *testing.T) {
assert.LessOrEqual(t, len(name+"-network"), 255)
assert.LessOrEqual(t, len(name+"-job1234567890"), 255)
}
func TestPrintStartJobContainerGroupGolden(t *testing.T) {
buf := &bytes.Buffer{}
logger := log.New()
logger.SetOutput(buf)
logger.SetLevel(log.InfoLevel)
logger.SetFormatter(&jobLogFormatter{color: cyan})
entry := logger.WithFields(log.Fields{"job": "j1"})
ctx := common.WithLogger(context.Background(), entry)
printStartJobContainerGroup(ctx, "node:20", "GITEA-WORKFLOW-build-JOB-test", "gitea-runner-network")()
want := strings.Join([]string{
"[j1] | ::group::Starting job container",
"[j1] | image: node:20",
"[j1] | name: GITEA-WORKFLOW-build-JOB-test",
"[j1] | network: gitea-runner-network",
"[j1] | ::endgroup::",
"",
}, "\n")
assert.Equal(t, want, buf.String())
}
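
The golden output above follows the workflow-command group syntax: a `::group::<title>` line opens a collapsible section and `::endgroup::` closes it. A minimal sketch of producing such a block (`groupLines` is a hypothetical helper, not runner API):

```go
package main

// groupLines brackets body lines with the workflow logging commands that
// make log viewers render them as one collapsible section.
func groupLines(title string, body ...string) []string {
	out := make([]string, 0, len(body)+2)
	out = append(out, "::group::"+title)
	out = append(out, body...)
	return append(out, "::endgroup::")
}
```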

View File

@@ -16,7 +16,7 @@ import (
"gitea.com/gitea/runner/act/common"
"gitea.com/gitea/runner/act/model"
docker_container "github.com/docker/docker/api/types/container"
docker_container "github.com/moby/moby/api/types/container"
log "github.com/sirupsen/logrus"
)

View File

@@ -26,7 +26,7 @@ import (
)
var (
baseImage = "node:16-buster-slim"
baseImage = "node:24-bookworm-slim"
platforms map[string]string
logLevel = log.DebugLevel
workdir = "testdata"
@@ -230,11 +230,9 @@ func TestRunEvent(t *testing.T) {
tables := []TestJobFileInfo{
// Shells
{workdir, "shells/defaults", "push", "", platforms, secrets},
// TODO: figure out why it fails
// {workdir, "shells/custom", "push", "", map[string]string{"ubuntu-latest": "catthehacker/ubuntu:pwsh-latest"}, }, // custom image with pwsh
{workdir, "shells/pwsh", "push", "", map[string]string{"ubuntu-latest": "catthehacker/ubuntu:pwsh-latest"}, secrets}, // custom image with pwsh
{workdir, "shells/bash", "push", "", platforms, secrets},
{workdir, "shells/python", "push", "", map[string]string{"ubuntu-latest": "node:16-buster"}, secrets}, // slim doesn't have python
{workdir, "shells/python", "push", "", map[string]string{"ubuntu-latest": "node:24-bookworm"}, secrets}, // slim doesn't have python
{workdir, "shells/sh", "push", "", platforms, secrets},
// Local action
@@ -463,7 +461,7 @@ func TestDryrunEvent(t *testing.T) {
{workdir, "shells/defaults", "push", "", platforms, secrets},
{workdir, "shells/pwsh", "push", "", map[string]string{"ubuntu-latest": "catthehacker/ubuntu:pwsh-latest"}, secrets}, // custom image with pwsh
{workdir, "shells/bash", "push", "", platforms, secrets},
{workdir, "shells/python", "push", "", map[string]string{"ubuntu-latest": "node:16-buster"}, secrets}, // slim doesn't have python
{workdir, "shells/python", "push", "", map[string]string{"ubuntu-latest": "node:24-bookworm"}, secrets}, // slim doesn't have python
{workdir, "shells/sh", "push", "", platforms, secrets},
// Local action
@@ -593,7 +591,7 @@ func TestRunWithService(t *testing.T) {
ctx := context.Background()
platforms := map[string]string{
"ubuntu-latest": "node:12.20.1-buster-slim",
"ubuntu-latest": "node:24-bookworm-slim",
}
workflowPath := "services"

View File

@@ -107,7 +107,7 @@ func runStepExecutor(step step, stage stepStage, executor common.Executor) commo
if strings.Contains(stepString, "::add-mask::") {
stepString = "add-mask command"
}
logger.Infof("\u2B50 Run %s %s", stage, stepString)
logger.Infof("Run %s %s", stage, stepString)
// Prepare and clean Runner File Commands
actPath := rc.JobContainer.GetActPath()
@@ -158,7 +158,7 @@ func runStepExecutor(step step, stage stepStage, executor common.Executor) commo
err = executor(timeoutctx)
if err == nil {
logger.WithField("stepResult", stepResult.Outcome).Infof(" \u2705 Success - %s %s", stage, stepString)
logger.WithField("stepResult", stepResult.Outcome).Infof("Success - %s %s", stage, stepString)
} else {
stepResult.Outcome = model.StepStatusFailure
@@ -169,6 +169,7 @@ func runStepExecutor(step step, stage stepStage, executor common.Executor) commo
}
if continueOnError {
logger.Errorf("##[error]%v", err)
logger.Infof("Failed but continuing to the next step")
err = nil
stepResult.Conclusion = model.StepStatusSuccess
@@ -176,7 +177,9 @@ func runStepExecutor(step step, stage stepStage, executor common.Executor) commo
stepResult.Conclusion = model.StepStatusFailure
}
logger.WithField("stepResult", stepResult.Outcome).Errorf(" \u274C Failure - %s %s", stage, stepString)
// Use Infof here: Errorf entries are promoted to the user log by the
// reporter, which would duplicate the ##[error] annotation emitted elsewhere.
logger.WithField("stepResult", stepResult.Outcome).Infof("Failure - %s %s", stage, stepString)
}
// Process Runner File Commands
orgerr := err
@@ -268,7 +271,7 @@ func isStepEnabled(ctx context.Context, expr string, step step, stage stepStage)
runStep, err := EvalBool(ctx, rc.NewStepExpressionEvaluator(ctx, step), expr, defaultStatusCheck)
if err != nil {
return false, fmt.Errorf(" \u274C Error in if-expression: \"if: %s\" (%s)", expr, err)
return false, fmt.Errorf("if-expression %q evaluation failed: %s", expr, err)
}
return runStep, nil
@@ -284,7 +287,7 @@ func isContinueOnError(ctx context.Context, expr string, step step, _ stepStage)
continueOnError, err := EvalBool(ctx, rc.NewStepExpressionEvaluator(ctx, step), expr, exprparser.DefaultStatusCheckNone)
if err != nil {
return false, fmt.Errorf(" \u274C Error in continue-on-error-expression: \"continue-on-error: %s\" (%s)", expr, err)
return false, fmt.Errorf("continue-on-error expression %q evaluation failed: %s", expr, err)
}
return continueOnError, nil

View File

@@ -145,6 +145,7 @@ func (sar *stepActionRemote) prepareActionExecutor() common.Executor {
return common.NewPipelineExecutor(
ntErr,
func(ctx context.Context) error {
defer git.AcquireCloneLock(actionDir)()
actionModel, err := sar.readAction(ctx, sar.Step, actionDir, sar.remoteAction.Path, remoteReader(ctx), os.WriteFile)
sar.action = actionModel
return err

View File

@@ -1,5 +1,5 @@
name: 'Test'
description: 'Test'
runs:
using: 'node12'
using: 'node24'
main: 'index.js'

View File

@@ -1 +1 @@
FROM ubuntu:18.04
FROM ubuntu:24.04

View File

@@ -1,5 +1,5 @@
# Container image that runs your code
FROM node:12-buster-slim
FROM node:24-bookworm-slim
# Copies your code file from your action repository to the filesystem path `/` of the container
COPY entrypoint.sh /entrypoint.sh

View File

@@ -1,5 +1,5 @@
# Container image that runs your code
FROM node:16-buster-slim
FROM node:24-bookworm-slim
# Copies your code file from your action repository to the filesystem path `/` of the container
COPY entrypoint.sh /entrypoint.sh

View File

@@ -8,7 +8,7 @@ inputs:
default: World
runs:
using: docker
image: docker://node:16-buster-slim
image: docker://node:24-bookworm-slim
entrypoint: /bin/sh -c
env:
TEST: enabled

View File

@@ -1,13 +0,0 @@
name: 'Hello World'
description: 'Greet someone and record the time'
inputs:
who-to-greet: # id of input
description: 'Who to greet'
required: true
default: 'World'
outputs:
time: # id of output
description: 'The time we greeted you'
runs:
using: 'node12'
main: 'dist/index.js'

View File

@@ -1,15 +0,0 @@
const core = require('@actions/core');
const github = require('@actions/github');
try {
// `who-to-greet` input defined in action metadata file
const nameToGreet = core.getInput('who-to-greet');
console.log(`Hello ${nameToGreet}!`);
const time = (new Date()).toTimeString();
core.setOutput("time", time);
// Get the JSON webhook payload for the event that triggered the workflow
const payload = JSON.stringify(github.context.payload, undefined, 2)
console.log(`The event payload: ${payload}`);
} catch (error) {
core.setFailed(error.message);
}

View File

@@ -1 +0,0 @@
../@vercel/ncc/dist/ncc/cli.js

View File

@@ -1 +0,0 @@
../uuid/dist/bin/uuid

View File

@@ -1,244 +0,0 @@
{
"name": "node12",
"version": "1.0.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"node_modules/@actions/core": {
"version": "1.10.0",
"resolved": "https://registry.npmjs.org/@actions/core/-/core-1.10.0.tgz",
"integrity": "sha512-2aZDDa3zrrZbP5ZYg159sNoLRb61nQ7awl5pSvIq5Qpj81vwDzdMRKzkWJGJuwVvWpvZKx7vspJALyvaaIQyug==",
"dependencies": {
"@actions/http-client": "^2.0.1",
"uuid": "^8.3.2"
}
},
"node_modules/@actions/github": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/@actions/github/-/github-4.0.0.tgz",
"integrity": "sha512-Ej/Y2E+VV6sR9X7pWL5F3VgEWrABaT292DRqRU6R4hnQjPtC/zD3nagxVdXWiRQvYDh8kHXo7IDmG42eJ/dOMA==",
"dependencies": {
"@actions/http-client": "^1.0.8",
"@octokit/core": "^3.0.0",
"@octokit/plugin-paginate-rest": "^2.2.3",
"@octokit/plugin-rest-endpoint-methods": "^4.0.0"
}
},
"node_modules/@actions/github/node_modules/@actions/http-client": {
"version": "1.0.11",
"resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-1.0.11.tgz",
"integrity": "sha512-VRYHGQV1rqnROJqdMvGUbY/Kn8vriQe/F9HR2AlYHzmKuM/p3kjNuXhmdBfcVgsvRWTz5C5XW5xvndZrVBuAYg==",
"dependencies": {
"tunnel": "0.0.6"
}
},
"node_modules/@actions/http-client": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.1.1.tgz",
"integrity": "sha512-qhrkRMB40bbbLo7gF+0vu+X+UawOvQQqNAA/5Unx774RS8poaOhThDOG6BGmxvAnxhQnDp2BG/ZUm65xZILTpw==",
"dependencies": {
"tunnel": "^0.0.6"
}
},
"node_modules/@octokit/auth-token": {
"version": "2.5.0",
"resolved": "https://registry.npmjs.org/@octokit/auth-token/-/auth-token-2.5.0.tgz",
"integrity": "sha512-r5FVUJCOLl19AxiuZD2VRZ/ORjp/4IN98Of6YJoJOkY75CIBuYfmiNHGrDwXr+aLGG55igl9QrxX3hbiXlLb+g==",
"dependencies": {
"@octokit/types": "^6.0.3"
}
},
"node_modules/@octokit/core": {
"version": "3.6.0",
"resolved": "https://registry.npmjs.org/@octokit/core/-/core-3.6.0.tgz",
"integrity": "sha512-7RKRKuA4xTjMhY+eG3jthb3hlZCsOwg3rztWh75Xc+ShDWOfDDATWbeZpAHBNRpm4Tv9WgBMOy1zEJYXG6NJ7Q==",
"dependencies": {
"@octokit/auth-token": "^2.4.4",
"@octokit/graphql": "^4.5.8",
"@octokit/request": "^5.6.3",
"@octokit/request-error": "^2.0.5",
"@octokit/types": "^6.0.3",
"before-after-hook": "^2.2.0",
"universal-user-agent": "^6.0.0"
}
},
"node_modules/@octokit/endpoint": {
"version": "6.0.12",
"resolved": "https://registry.npmjs.org/@octokit/endpoint/-/endpoint-6.0.12.tgz",
"integrity": "sha512-lF3puPwkQWGfkMClXb4k/eUT/nZKQfxinRWJrdZaJO85Dqwo/G0yOC434Jr2ojwafWJMYqFGFa5ms4jJUgujdA==",
"dependencies": {
"@octokit/types": "^6.0.3",
"is-plain-object": "^5.0.0",
"universal-user-agent": "^6.0.0"
}
},
"node_modules/@octokit/graphql": {
"version": "4.8.0",
"resolved": "https://registry.npmjs.org/@octokit/graphql/-/graphql-4.8.0.tgz",
"integrity": "sha512-0gv+qLSBLKF0z8TKaSKTsS39scVKF9dbMxJpj3U0vC7wjNWFuIpL/z76Qe2fiuCbDRcJSavkXsVtMS6/dtQQsg==",
"dependencies": {
"@octokit/request": "^5.6.0",
"@octokit/types": "^6.0.3",
"universal-user-agent": "^6.0.0"
}
},
"node_modules/@octokit/openapi-types": {
"version": "12.11.0",
"resolved": "https://registry.npmjs.org/@octokit/openapi-types/-/openapi-types-12.11.0.tgz",
"integrity": "sha512-VsXyi8peyRq9PqIz/tpqiL2w3w80OgVMwBHltTml3LmVvXiphgeqmY9mvBw9Wu7e0QWk/fqD37ux8yP5uVekyQ=="
},
"node_modules/@octokit/plugin-paginate-rest": {
"version": "2.21.3",
"resolved": "https://registry.npmjs.org/@octokit/plugin-paginate-rest/-/plugin-paginate-rest-2.21.3.tgz",
"integrity": "sha512-aCZTEf0y2h3OLbrgKkrfFdjRL6eSOo8komneVQJnYecAxIej7Bafor2xhuDJOIFau4pk0i/P28/XgtbyPF0ZHw==",
"dependencies": {
"@octokit/types": "^6.40.0"
},
"peerDependencies": {
"@octokit/core": ">=2"
}
},
"node_modules/@octokit/plugin-rest-endpoint-methods": {
"version": "4.15.1",
"resolved": "https://registry.npmjs.org/@octokit/plugin-rest-endpoint-methods/-/plugin-rest-endpoint-methods-4.15.1.tgz",
"integrity": "sha512-4gQg4ySoW7ktKB0Mf38fHzcSffVZd6mT5deJQtpqkuPuAqzlED5AJTeW8Uk7dPRn7KaOlWcXB0MedTFJU1j4qA==",
"dependencies": {
"@octokit/types": "^6.13.0",
"deprecation": "^2.3.1"
},
"peerDependencies": {
"@octokit/core": ">=3"
}
},
"node_modules/@octokit/request": {
"version": "5.6.3",
"resolved": "https://registry.npmjs.org/@octokit/request/-/request-5.6.3.tgz",
"integrity": "sha512-bFJl0I1KVc9jYTe9tdGGpAMPy32dLBXXo1dS/YwSCTL/2nd9XeHsY616RE3HPXDVk+a+dBuzyz5YdlXwcDTr2A==",
"dependencies": {
"@octokit/endpoint": "^6.0.1",
"@octokit/request-error": "^2.1.0",
"@octokit/types": "^6.16.1",
"is-plain-object": "^5.0.0",
"node-fetch": "^2.6.7",
"universal-user-agent": "^6.0.0"
}
},
"node_modules/@octokit/request-error": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/@octokit/request-error/-/request-error-2.1.0.tgz",
"integrity": "sha512-1VIvgXxs9WHSjicsRwq8PlR2LR2x6DwsJAaFgzdi0JfJoGSO8mYI/cHJQ+9FbN21aa+DrgNLnwObmyeSC8Rmpg==",
"dependencies": {
"@octokit/types": "^6.0.3",
"deprecation": "^2.0.0",
"once": "^1.4.0"
}
},
"node_modules/@octokit/types": {
"version": "6.41.0",
"resolved": "https://registry.npmjs.org/@octokit/types/-/types-6.41.0.tgz",
"integrity": "sha512-eJ2jbzjdijiL3B4PrSQaSjuF2sPEQPVCPzBvTHJD9Nz+9dw2SGH4K4xeQJ77YfTq5bRQ+bD8wT11JbeDPmxmGg==",
"dependencies": {
"@octokit/openapi-types": "^12.11.0"
}
},
"node_modules/@vercel/ncc": {
"version": "0.24.1",
"resolved": "https://registry.npmjs.org/@vercel/ncc/-/ncc-0.24.1.tgz",
"integrity": "sha512-r9m7brz2hNmq5TF3sxrK4qR/FhXn44XIMglQUir4sT7Sh5GOaYXlMYikHFwJStf8rmQGTlvOoBXt4yHVonRG8A==",
"dev": true,
"bin": {
"ncc": "dist/ncc/cli.js"
}
},
"node_modules/before-after-hook": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/before-after-hook/-/before-after-hook-2.2.3.tgz",
"integrity": "sha512-NzUnlZexiaH/46WDhANlyR2bXRopNg4F/zuSA3OpZnllCUgRaOF2znDioDWrmbNVsuZk6l9pMquQB38cfBZwkQ=="
},
"node_modules/deprecation": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/deprecation/-/deprecation-2.3.1.tgz",
"integrity": "sha512-xmHIy4F3scKVwMsQ4WnVaS8bHOx0DmVwRywosKhaILI0ywMDWPtBSku2HNxRvF7jtwDRsoEwYQSfbxj8b7RlJQ=="
},
"node_modules/is-plain-object": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/is-plain-object/-/is-plain-object-5.0.0.tgz",
"integrity": "sha512-VRSzKkbMm5jMDoKLbltAkFQ5Qr7VDiTFGXxYFXXowVj387GeGNOCsOH6Msy00SGZ3Fp84b1Naa1psqgcCIEP5Q==",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/node-fetch": {
"version": "2.6.12",
"resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-2.6.12.tgz",
"integrity": "sha512-C/fGU2E8ToujUivIO0H+tpQ6HWo4eEmchoPIoXtxCrVghxdKq+QOHqEZW7tuP3KlV3bC8FRMO5nMCC7Zm1VP6g==",
"dependencies": {
"whatwg-url": "^5.0.0"
},
"engines": {
"node": "4.x || >=6.0.0"
},
"peerDependencies": {
"encoding": "^0.1.0"
},
"peerDependenciesMeta": {
"encoding": {
"optional": true
}
}
},
"node_modules/once": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz",
"integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==",
"dependencies": {
"wrappy": "1"
}
},
"node_modules/tr46": {
"version": "0.0.3",
"resolved": "https://registry.npmjs.org/tr46/-/tr46-0.0.3.tgz",
"integrity": "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw=="
},
"node_modules/tunnel": {
"version": "0.0.6",
"resolved": "https://registry.npmjs.org/tunnel/-/tunnel-0.0.6.tgz",
"integrity": "sha512-1h/Lnq9yajKY2PEbBadPXj3VxsDDu844OnaAo52UVmIzIvwwtBPIuNvkjuzBlTWpfJyUbG3ez0KSBibQkj4ojg==",
"engines": {
"node": ">=0.6.11 <=0.7.0 || >=0.7.3"
}
},
"node_modules/universal-user-agent": {
"version": "6.0.0",
"resolved": "https://registry.npmjs.org/universal-user-agent/-/universal-user-agent-6.0.0.tgz",
"integrity": "sha512-isyNax3wXoKaulPDZWHQqbmIx1k2tb9fb3GGDBRxCscfYV2Ch7WxPArBsFEG8s/safwXTT7H4QGhaIkTp9447w=="
},
"node_modules/uuid": {
"version": "8.3.2",
"resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz",
"integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==",
"bin": {
"uuid": "dist/bin/uuid"
}
},
"node_modules/webidl-conversions": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz",
"integrity": "sha512-2JAn3z8AR6rjK8Sm8orRC0h/bcl/DqL7tRPdGZ4I1CjdF+EaMLmYxBHyXuKL849eucPFhvBoxMsflfOb8kxaeQ=="
},
"node_modules/whatwg-url": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/whatwg-url/-/whatwg-url-5.0.0.tgz",
"integrity": "sha512-saE57nupxk6v3HY35+jzBwYa0rKSy0XR8JSxZPwgLr7ys0IBzhGviA1/TUGJLmSVqs8pb9AnvICXEuOHLprYTw==",
"dependencies": {
"tr46": "~0.0.3",
"webidl-conversions": "^3.0.0"
}
},
"node_modules/wrappy": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz",
"integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ=="
}
}
}

@@ -1,9 +0,0 @@
The MIT License (MIT)
Copyright 2019 GitHub
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

@@ -1,15 +0,0 @@
export interface CommandProperties {
[key: string]: any;
}
/**
* Commands
*
* Command Format:
* ::name key=value,key=value::message
*
* Examples:
* ::warning::This is the message
* ::set-env name=MY_VAR::some value
*/
export declare function issueCommand(command: string, properties: CommandProperties, message: any): void;
export declare function issue(name: string, message?: string): void;

@@ -1,92 +0,0 @@
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });
}) : (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
o["default"] = v;
});
var __importStar = (this && this.__importStar) || function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k in mod) if (k !== "default" && Object.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);
__setModuleDefault(result, mod);
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.issue = exports.issueCommand = void 0;
const os = __importStar(require("os"));
const utils_1 = require("./utils");
/**
* Commands
*
* Command Format:
* ::name key=value,key=value::message
*
* Examples:
* ::warning::This is the message
* ::set-env name=MY_VAR::some value
*/
function issueCommand(command, properties, message) {
const cmd = new Command(command, properties, message);
process.stdout.write(cmd.toString() + os.EOL);
}
exports.issueCommand = issueCommand;
function issue(name, message = '') {
issueCommand(name, {}, message);
}
exports.issue = issue;
const CMD_STRING = '::';
class Command {
constructor(command, properties, message) {
if (!command) {
command = 'missing.command';
}
this.command = command;
this.properties = properties;
this.message = message;
}
toString() {
let cmdStr = CMD_STRING + this.command;
if (this.properties && Object.keys(this.properties).length > 0) {
cmdStr += ' ';
let first = true;
for (const key in this.properties) {
if (this.properties.hasOwnProperty(key)) {
const val = this.properties[key];
if (val) {
if (first) {
first = false;
}
else {
cmdStr += ',';
}
cmdStr += `${key}=${escapeProperty(val)}`;
}
}
}
}
cmdStr += `${CMD_STRING}${escapeData(this.message)}`;
return cmdStr;
}
}
function escapeData(s) {
return utils_1.toCommandValue(s)
.replace(/%/g, '%25')
.replace(/\r/g, '%0D')
.replace(/\n/g, '%0A');
}
function escapeProperty(s) {
return utils_1.toCommandValue(s)
.replace(/%/g, '%25')
.replace(/\r/g, '%0D')
.replace(/\n/g, '%0A')
.replace(/:/g, '%3A')
.replace(/,/g, '%2C');
}
//# sourceMappingURL=command.js.map
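For reference, the escaping rules implemented above can be exercised standalone. This is an illustrative reimplementation, not the vendored module itself, and it simplifies `Command.toString` by not skipping falsy property values:

```javascript
// Illustrative reimplementation of the workflow-command escaping above.
// Not the vendored module; falsy property values are not skipped here.
function escapeData(s) {
  return String(s)
    .replace(/%/g, '%25')
    .replace(/\r/g, '%0D')
    .replace(/\n/g, '%0A');
}

function escapeProperty(s) {
  // Same as escapeData, plus ':' and ',' which delimit the command syntax.
  return escapeData(s).replace(/:/g, '%3A').replace(/,/g, '%2C');
}

function formatCommand(command, properties, message) {
  const props = Object.entries(properties)
    .map(([k, v]) => `${k}=${escapeProperty(v)}`)
    .join(',');
  return `::${command}${props ? ' ' + props : ''}::${escapeData(message)}`;
}

console.log(formatCommand('warning', { file: 'a.ts' }, 'line1\nline2'));
// → ::warning file=a.ts::line1%0Aline2
```

Note that only properties escape `:` and `,`, since those characters delimit the `::name key=value::message` syntax; the message body only needs `%`, `\r`, and `\n` escaped.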

@@ -1 +0,0 @@
{"version":3,"file":"command.js","sourceRoot":"","sources":["../src/command.ts"],"names":[],"mappings":";;;;;;;;;;;;;;;;;;;;;;AAAA,uCAAwB;AACxB,mCAAsC;AAWtC;;;;;;;;;GASG;AACH,SAAgB,YAAY,CAC1B,OAAe,EACf,UAA6B,EAC7B,OAAY;IAEZ,MAAM,GAAG,GAAG,IAAI,OAAO,CAAC,OAAO,EAAE,UAAU,EAAE,OAAO,CAAC,CAAA;IACrD,OAAO,CAAC,MAAM,CAAC,KAAK,CAAC,GAAG,CAAC,QAAQ,EAAE,GAAG,EAAE,CAAC,GAAG,CAAC,CAAA;AAC/C,CAAC;AAPD,oCAOC;AAED,SAAgB,KAAK,CAAC,IAAY,EAAE,OAAO,GAAG,EAAE;IAC9C,YAAY,CAAC,IAAI,EAAE,EAAE,EAAE,OAAO,CAAC,CAAA;AACjC,CAAC;AAFD,sBAEC;AAED,MAAM,UAAU,GAAG,IAAI,CAAA;AAEvB,MAAM,OAAO;IAKX,YAAY,OAAe,EAAE,UAA6B,EAAE,OAAe;QACzE,IAAI,CAAC,OAAO,EAAE;YACZ,OAAO,GAAG,iBAAiB,CAAA;SAC5B;QAED,IAAI,CAAC,OAAO,GAAG,OAAO,CAAA;QACtB,IAAI,CAAC,UAAU,GAAG,UAAU,CAAA;QAC5B,IAAI,CAAC,OAAO,GAAG,OAAO,CAAA;IACxB,CAAC;IAED,QAAQ;QACN,IAAI,MAAM,GAAG,UAAU,GAAG,IAAI,CAAC,OAAO,CAAA;QAEtC,IAAI,IAAI,CAAC,UAAU,IAAI,MAAM,CAAC,IAAI,CAAC,IAAI,CAAC,UAAU,CAAC,CAAC,MAAM,GAAG,CAAC,EAAE;YAC9D,MAAM,IAAI,GAAG,CAAA;YACb,IAAI,KAAK,GAAG,IAAI,CAAA;YAChB,KAAK,MAAM,GAAG,IAAI,IAAI,CAAC,UAAU,EAAE;gBACjC,IAAI,IAAI,CAAC,UAAU,CAAC,cAAc,CAAC,GAAG,CAAC,EAAE;oBACvC,MAAM,GAAG,GAAG,IAAI,CAAC,UAAU,CAAC,GAAG,CAAC,CAAA;oBAChC,IAAI,GAAG,EAAE;wBACP,IAAI,KAAK,EAAE;4BACT,KAAK,GAAG,KAAK,CAAA;yBACd;6BAAM;4BACL,MAAM,IAAI,GAAG,CAAA;yBACd;wBAED,MAAM,IAAI,GAAG,GAAG,IAAI,cAAc,CAAC,GAAG,CAAC,EAAE,CAAA;qBAC1C;iBACF;aACF;SACF;QAED,MAAM,IAAI,GAAG,UAAU,GAAG,UAAU,CAAC,IAAI,CAAC,OAAO,CAAC,EAAE,CAAA;QACpD,OAAO,MAAM,CAAA;IACf,CAAC;CACF;AAED,SAAS,UAAU,CAAC,CAAM;IACxB,OAAO,sBAAc,CAAC,CAAC,CAAC;SACrB,OAAO,CAAC,IAAI,EAAE,KAAK,CAAC;SACpB,OAAO,CAAC,KAAK,EAAE,KAAK,CAAC;SACrB,OAAO,CAAC,KAAK,EAAE,KAAK,CAAC,CAAA;AAC1B,CAAC;AAED,SAAS,cAAc,CAAC,CAAM;IAC5B,OAAO,sBAAc,CAAC,CAAC,CAAC;SACrB,OAAO,CAAC,IAAI,EAAE,KAAK,CAAC;SACpB,OAAO,CAAC,KAAK,EAAE,KAAK,CAAC;SACrB,OAAO,CAAC,KAAK,EAAE,KAAK,CAAC;SACrB,OAAO,CAAC,IAAI,EAAE,KAAK,CAAC;SACpB,OAAO,CAAC,IAAI,EAAE,KAAK,CAAC,CAAA;AACzB,CAAC"}

@@ -1,198 +0,0 @@
/**
* Interface for getInput options
*/
export interface InputOptions {
/** Optional. Whether the input is required. If required and not present, will throw. Defaults to false */
required?: boolean;
/** Optional. Whether leading/trailing whitespace will be trimmed for the input. Defaults to true */
trimWhitespace?: boolean;
}
/**
* The code to exit an action
*/
export declare enum ExitCode {
/**
* A code indicating that the action was successful
*/
Success = 0,
/**
* A code indicating that the action was a failure
*/
Failure = 1
}
/**
 * Optional properties that can be sent with annotation commands (notice, error, and warning)
* See: https://docs.github.com/en/rest/reference/checks#create-a-check-run for more information about annotations.
*/
export interface AnnotationProperties {
/**
* A title for the annotation.
*/
title?: string;
/**
* The path of the file for which the annotation should be created.
*/
file?: string;
/**
* The start line for the annotation.
*/
startLine?: number;
/**
* The end line for the annotation. Defaults to `startLine` when `startLine` is provided.
*/
endLine?: number;
/**
* The start column for the annotation. Cannot be sent when `startLine` and `endLine` are different values.
*/
startColumn?: number;
/**
* The start column for the annotation. Cannot be sent when `startLine` and `endLine` are different values.
* Defaults to `startColumn` when `startColumn` is provided.
*/
endColumn?: number;
}
/**
* Sets env variable for this action and future actions in the job
* @param name the name of the variable to set
* @param val the value of the variable. Non-string values will be converted to a string via JSON.stringify
*/
export declare function exportVariable(name: string, val: any): void;
/**
* Registers a secret which will get masked from logs
* @param secret value of the secret
*/
export declare function setSecret(secret: string): void;
/**
* Prepends inputPath to the PATH (for this action and future actions)
* @param inputPath
*/
export declare function addPath(inputPath: string): void;
/**
* Gets the value of an input.
* Unless trimWhitespace is set to false in InputOptions, the value is also trimmed.
* Returns an empty string if the value is not defined.
*
* @param name name of the input to get
* @param options optional. See InputOptions.
* @returns string
*/
export declare function getInput(name: string, options?: InputOptions): string;
/**
 * Gets the values of a multiline input. Each value is also trimmed.
*
* @param name name of the input to get
* @param options optional. See InputOptions.
* @returns string[]
*
*/
export declare function getMultilineInput(name: string, options?: InputOptions): string[];
/**
* Gets the input value of the boolean type in the YAML 1.2 "core schema" specification.
* Support boolean input list: `true | True | TRUE | false | False | FALSE` .
* The return value is also in boolean type.
* ref: https://yaml.org/spec/1.2/spec.html#id2804923
*
* @param name name of the input to get
* @param options optional. See InputOptions.
* @returns boolean
*/
export declare function getBooleanInput(name: string, options?: InputOptions): boolean;
/**
* Sets the value of an output.
*
* @param name name of the output to set
* @param value value to store. Non-string values will be converted to a string via JSON.stringify
*/
export declare function setOutput(name: string, value: any): void;
/**
* Enables or disables the echoing of commands into stdout for the rest of the step.
* Echoing is disabled by default if ACTIONS_STEP_DEBUG is not set.
*
*/
export declare function setCommandEcho(enabled: boolean): void;
/**
* Sets the action status to failed.
* When the action exits it will be with an exit code of 1
* @param message add error issue message
*/
export declare function setFailed(message: string | Error): void;
/**
* Gets whether Actions Step Debug is on or not
*/
export declare function isDebug(): boolean;
/**
* Writes debug message to user log
* @param message debug message
*/
export declare function debug(message: string): void;
/**
* Adds an error issue
* @param message error issue message. Errors will be converted to string via toString()
* @param properties optional properties to add to the annotation.
*/
export declare function error(message: string | Error, properties?: AnnotationProperties): void;
/**
* Adds a warning issue
* @param message warning issue message. Errors will be converted to string via toString()
* @param properties optional properties to add to the annotation.
*/
export declare function warning(message: string | Error, properties?: AnnotationProperties): void;
/**
* Adds a notice issue
* @param message notice issue message. Errors will be converted to string via toString()
* @param properties optional properties to add to the annotation.
*/
export declare function notice(message: string | Error, properties?: AnnotationProperties): void;
/**
* Writes info to log with console.log.
* @param message info message
*/
export declare function info(message: string): void;
/**
* Begin an output group.
*
* Output until the next `groupEnd` will be foldable in this group
*
* @param name The name of the output group
*/
export declare function startGroup(name: string): void;
/**
* End an output group.
*/
export declare function endGroup(): void;
/**
* Wrap an asynchronous function call in a group.
*
* Returns the same type as the function itself.
*
* @param name The name of the group
* @param fn The function to wrap in the group
*/
export declare function group<T>(name: string, fn: () => Promise<T>): Promise<T>;
/**
* Saves state for current action, the state can only be retrieved by this action's post job execution.
*
* @param name name of the state to store
* @param value value to store. Non-string values will be converted to a string via JSON.stringify
*/
export declare function saveState(name: string, value: any): void;
/**
 * Gets the value of a state set by this action's main execution.
*
* @param name name of the state to get
* @returns string
*/
export declare function getState(name: string): string;
export declare function getIDToken(aud?: string): Promise<string>;
/**
* Summary exports
*/
export { summary } from './summary';
/**
* @deprecated use core.summary
*/
export { markdownSummary } from './summary';
/**
* Path exports
*/
export { toPosixPath, toWin32Path, toPlatformPath } from './path-utils';

@@ -1,336 +0,0 @@
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });
}) : (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
o["default"] = v;
});
var __importStar = (this && this.__importStar) || function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k in mod) if (k !== "default" && Object.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);
__setModuleDefault(result, mod);
return result;
};
var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) {
function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }
return new (P || (P = Promise))(function (resolve, reject) {
function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }
function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } }
function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }
step((generator = generator.apply(thisArg, _arguments || [])).next());
});
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.getIDToken = exports.getState = exports.saveState = exports.group = exports.endGroup = exports.startGroup = exports.info = exports.notice = exports.warning = exports.error = exports.debug = exports.isDebug = exports.setFailed = exports.setCommandEcho = exports.setOutput = exports.getBooleanInput = exports.getMultilineInput = exports.getInput = exports.addPath = exports.setSecret = exports.exportVariable = exports.ExitCode = void 0;
const command_1 = require("./command");
const file_command_1 = require("./file-command");
const utils_1 = require("./utils");
const os = __importStar(require("os"));
const path = __importStar(require("path"));
const oidc_utils_1 = require("./oidc-utils");
/**
* The code to exit an action
*/
var ExitCode;
(function (ExitCode) {
/**
* A code indicating that the action was successful
*/
ExitCode[ExitCode["Success"] = 0] = "Success";
/**
* A code indicating that the action was a failure
*/
ExitCode[ExitCode["Failure"] = 1] = "Failure";
})(ExitCode = exports.ExitCode || (exports.ExitCode = {}));
//-----------------------------------------------------------------------
// Variables
//-----------------------------------------------------------------------
/**
* Sets env variable for this action and future actions in the job
* @param name the name of the variable to set
* @param val the value of the variable. Non-string values will be converted to a string via JSON.stringify
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any
function exportVariable(name, val) {
const convertedVal = utils_1.toCommandValue(val);
process.env[name] = convertedVal;
const filePath = process.env['GITHUB_ENV'] || '';
if (filePath) {
return file_command_1.issueFileCommand('ENV', file_command_1.prepareKeyValueMessage(name, val));
}
command_1.issueCommand('set-env', { name }, convertedVal);
}
exports.exportVariable = exportVariable;
/**
* Registers a secret which will get masked from logs
* @param secret value of the secret
*/
function setSecret(secret) {
command_1.issueCommand('add-mask', {}, secret);
}
exports.setSecret = setSecret;
/**
* Prepends inputPath to the PATH (for this action and future actions)
* @param inputPath
*/
function addPath(inputPath) {
const filePath = process.env['GITHUB_PATH'] || '';
if (filePath) {
file_command_1.issueFileCommand('PATH', inputPath);
}
else {
command_1.issueCommand('add-path', {}, inputPath);
}
process.env['PATH'] = `${inputPath}${path.delimiter}${process.env['PATH']}`;
}
exports.addPath = addPath;
/**
* Gets the value of an input.
* Unless trimWhitespace is set to false in InputOptions, the value is also trimmed.
* Returns an empty string if the value is not defined.
*
* @param name name of the input to get
* @param options optional. See InputOptions.
* @returns string
*/
function getInput(name, options) {
const val = process.env[`INPUT_${name.replace(/ /g, '_').toUpperCase()}`] || '';
if (options && options.required && !val) {
throw new Error(`Input required and not supplied: ${name}`);
}
if (options && options.trimWhitespace === false) {
return val;
}
return val.trim();
}
exports.getInput = getInput;
/**
 * Gets the values of a multiline input. Each value is also trimmed.
*
* @param name name of the input to get
* @param options optional. See InputOptions.
* @returns string[]
*
*/
function getMultilineInput(name, options) {
const inputs = getInput(name, options)
.split('\n')
.filter(x => x !== '');
if (options && options.trimWhitespace === false) {
return inputs;
}
return inputs.map(input => input.trim());
}
exports.getMultilineInput = getMultilineInput;
/**
* Gets the input value of the boolean type in the YAML 1.2 "core schema" specification.
* Support boolean input list: `true | True | TRUE | false | False | FALSE` .
* The return value is also in boolean type.
* ref: https://yaml.org/spec/1.2/spec.html#id2804923
*
* @param name name of the input to get
* @param options optional. See InputOptions.
* @returns boolean
*/
function getBooleanInput(name, options) {
const trueValue = ['true', 'True', 'TRUE'];
const falseValue = ['false', 'False', 'FALSE'];
const val = getInput(name, options);
if (trueValue.includes(val))
return true;
if (falseValue.includes(val))
return false;
throw new TypeError(`Input does not meet YAML 1.2 "Core Schema" specification: ${name}\n` +
`Support boolean input list: \`true | True | TRUE | false | False | FALSE\``);
}
exports.getBooleanInput = getBooleanInput;
/**
* Sets the value of an output.
*
* @param name name of the output to set
* @param value value to store. Non-string values will be converted to a string via JSON.stringify
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any
function setOutput(name, value) {
const filePath = process.env['GITHUB_OUTPUT'] || '';
if (filePath) {
return file_command_1.issueFileCommand('OUTPUT', file_command_1.prepareKeyValueMessage(name, value));
}
process.stdout.write(os.EOL);
command_1.issueCommand('set-output', { name }, utils_1.toCommandValue(value));
}
exports.setOutput = setOutput;
/**
* Enables or disables the echoing of commands into stdout for the rest of the step.
* Echoing is disabled by default if ACTIONS_STEP_DEBUG is not set.
*
*/
function setCommandEcho(enabled) {
command_1.issue('echo', enabled ? 'on' : 'off');
}
exports.setCommandEcho = setCommandEcho;
//-----------------------------------------------------------------------
// Results
//-----------------------------------------------------------------------
/**
* Sets the action status to failed.
* When the action exits it will be with an exit code of 1
* @param message add error issue message
*/
function setFailed(message) {
process.exitCode = ExitCode.Failure;
error(message);
}
exports.setFailed = setFailed;
//-----------------------------------------------------------------------
// Logging Commands
//-----------------------------------------------------------------------
/**
* Gets whether Actions Step Debug is on or not
*/
function isDebug() {
return process.env['RUNNER_DEBUG'] === '1';
}
exports.isDebug = isDebug;
/**
* Writes debug message to user log
* @param message debug message
*/
function debug(message) {
command_1.issueCommand('debug', {}, message);
}
exports.debug = debug;
/**
* Adds an error issue
* @param message error issue message. Errors will be converted to string via toString()
* @param properties optional properties to add to the annotation.
*/
function error(message, properties = {}) {
command_1.issueCommand('error', utils_1.toCommandProperties(properties), message instanceof Error ? message.toString() : message);
}
exports.error = error;
/**
* Adds a warning issue
* @param message warning issue message. Errors will be converted to string via toString()
* @param properties optional properties to add to the annotation.
*/
function warning(message, properties = {}) {
command_1.issueCommand('warning', utils_1.toCommandProperties(properties), message instanceof Error ? message.toString() : message);
}
exports.warning = warning;
/**
* Adds a notice issue
* @param message notice issue message. Errors will be converted to string via toString()
* @param properties optional properties to add to the annotation.
*/
function notice(message, properties = {}) {
command_1.issueCommand('notice', utils_1.toCommandProperties(properties), message instanceof Error ? message.toString() : message);
}
exports.notice = notice;
/**
* Writes info to log with console.log.
* @param message info message
*/
function info(message) {
process.stdout.write(message + os.EOL);
}
exports.info = info;
/**
* Begin an output group.
*
* Output until the next `groupEnd` will be foldable in this group
*
* @param name The name of the output group
*/
function startGroup(name) {
command_1.issue('group', name);
}
exports.startGroup = startGroup;
/**
* End an output group.
*/
function endGroup() {
command_1.issue('endgroup');
}
exports.endGroup = endGroup;
/**
* Wrap an asynchronous function call in a group.
*
* Returns the same type as the function itself.
*
* @param name The name of the group
* @param fn The function to wrap in the group
*/
function group(name, fn) {
return __awaiter(this, void 0, void 0, function* () {
startGroup(name);
let result;
try {
result = yield fn();
}
finally {
endGroup();
}
return result;
});
}
exports.group = group;
//-----------------------------------------------------------------------
// Wrapper action state
//-----------------------------------------------------------------------
/**
* Saves state for current action, the state can only be retrieved by this action's post job execution.
*
* @param name name of the state to store
* @param value value to store. Non-string values will be converted to a string via JSON.stringify
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any
function saveState(name, value) {
const filePath = process.env['GITHUB_STATE'] || '';
if (filePath) {
return file_command_1.issueFileCommand('STATE', file_command_1.prepareKeyValueMessage(name, value));
}
command_1.issueCommand('save-state', { name }, utils_1.toCommandValue(value));
}
exports.saveState = saveState;
/**
 * Gets the value of a state set by this action's main execution.
*
* @param name name of the state to get
* @returns string
*/
function getState(name) {
return process.env[`STATE_${name}`] || '';
}
exports.getState = getState;
function getIDToken(aud) {
return __awaiter(this, void 0, void 0, function* () {
return yield oidc_utils_1.OidcClient.getIDToken(aud);
});
}
exports.getIDToken = getIDToken;
/**
* Summary exports
*/
var summary_1 = require("./summary");
Object.defineProperty(exports, "summary", { enumerable: true, get: function () { return summary_1.summary; } });
/**
* @deprecated use core.summary
*/
var summary_2 = require("./summary");
Object.defineProperty(exports, "markdownSummary", { enumerable: true, get: function () { return summary_2.markdownSummary; } });
/**
* Path exports
*/
var path_utils_1 = require("./path-utils");
Object.defineProperty(exports, "toPosixPath", { enumerable: true, get: function () { return path_utils_1.toPosixPath; } });
Object.defineProperty(exports, "toWin32Path", { enumerable: true, get: function () { return path_utils_1.toWin32Path; } });
Object.defineProperty(exports, "toPlatformPath", { enumerable: true, get: function () { return path_utils_1.toPlatformPath; } });
//# sourceMappingURL=core.js.map
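The input lookup inside `getInput` above relies on a fixed mapping from input name to environment variable. A minimal sketch of that mapping (`inputEnvName` is an illustrative helper name, not part of the package):

```javascript
// How getInput above derives the environment variable for an input name:
// spaces become underscores, the result is upper-cased and prefixed INPUT_.
// inputEnvName is an illustrative helper, not part of @actions/core.
function inputEnvName(name) {
  return `INPUT_${name.replace(/ /g, '_').toUpperCase()}`;
}

console.log(inputEnvName('who to greet'));
// → INPUT_WHO_TO_GREET
```

This is why a workflow input declared as `who to greet` is read from `process.env.INPUT_WHO_TO_GREET` by the code above.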

File diff suppressed because one or more lines are too long

@@ -1,2 +0,0 @@
export declare function issueFileCommand(command: string, message: any): void;
export declare function prepareKeyValueMessage(key: string, value: any): string;

@@ -1,58 +0,0 @@
"use strict";
// For internal use, subject to change.
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });
}) : (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
o["default"] = v;
});
var __importStar = (this && this.__importStar) || function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k in mod) if (k !== "default" && Object.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);
__setModuleDefault(result, mod);
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.prepareKeyValueMessage = exports.issueFileCommand = void 0;
// We use any as a valid input type
/* eslint-disable @typescript-eslint/no-explicit-any */
const fs = __importStar(require("fs"));
const os = __importStar(require("os"));
const uuid_1 = require("uuid");
const utils_1 = require("./utils");
function issueFileCommand(command, message) {
const filePath = process.env[`GITHUB_${command}`];
if (!filePath) {
throw new Error(`Unable to find environment variable for file command ${command}`);
}
if (!fs.existsSync(filePath)) {
throw new Error(`Missing file at path: ${filePath}`);
}
fs.appendFileSync(filePath, `${utils_1.toCommandValue(message)}${os.EOL}`, {
encoding: 'utf8'
});
}
exports.issueFileCommand = issueFileCommand;
function prepareKeyValueMessage(key, value) {
const delimiter = `ghadelimiter_${uuid_1.v4()}`;
const convertedValue = utils_1.toCommandValue(value);
// These should realistically never happen, but just in case someone finds a
// way to exploit uuid generation let's not allow keys or values that contain
// the delimiter.
if (key.includes(delimiter)) {
throw new Error(`Unexpected input: name should not contain the delimiter "${delimiter}"`);
}
if (convertedValue.includes(delimiter)) {
throw new Error(`Unexpected input: value should not contain the delimiter "${delimiter}"`);
}
return `${key}<<${delimiter}${os.EOL}${convertedValue}${os.EOL}${delimiter}`;
}
exports.prepareKeyValueMessage = prepareKeyValueMessage;
//# sourceMappingURL=file-command.js.map

View File

@@ -1 +0,0 @@
{"version":3,"file":"file-command.js","sourceRoot":"","sources":["../src/file-command.ts"],"names":[],"mappings":";AAAA,uCAAuC;;;;;;;;;;;;;;;;;;;;;;AAEvC,mCAAmC;AACnC,uDAAuD;AAEvD,uCAAwB;AACxB,uCAAwB;AACxB,+BAAiC;AACjC,mCAAsC;AAEtC,SAAgB,gBAAgB,CAAC,OAAe,EAAE,OAAY;IAC5D,MAAM,QAAQ,GAAG,OAAO,CAAC,GAAG,CAAC,UAAU,OAAO,EAAE,CAAC,CAAA;IACjD,IAAI,CAAC,QAAQ,EAAE;QACb,MAAM,IAAI,KAAK,CACb,wDAAwD,OAAO,EAAE,CAClE,CAAA;KACF;IACD,IAAI,CAAC,EAAE,CAAC,UAAU,CAAC,QAAQ,CAAC,EAAE;QAC5B,MAAM,IAAI,KAAK,CAAC,yBAAyB,QAAQ,EAAE,CAAC,CAAA;KACrD;IAED,EAAE,CAAC,cAAc,CAAC,QAAQ,EAAE,GAAG,sBAAc,CAAC,OAAO,CAAC,GAAG,EAAE,CAAC,GAAG,EAAE,EAAE;QACjE,QAAQ,EAAE,MAAM;KACjB,CAAC,CAAA;AACJ,CAAC;AAdD,4CAcC;AAED,SAAgB,sBAAsB,CAAC,GAAW,EAAE,KAAU;IAC5D,MAAM,SAAS,GAAG,gBAAgB,SAAM,EAAE,EAAE,CAAA;IAC5C,MAAM,cAAc,GAAG,sBAAc,CAAC,KAAK,CAAC,CAAA;IAE5C,4EAA4E;IAC5E,6EAA6E;IAC7E,iBAAiB;IACjB,IAAI,GAAG,CAAC,QAAQ,CAAC,SAAS,CAAC,EAAE;QAC3B,MAAM,IAAI,KAAK,CACb,4DAA4D,SAAS,GAAG,CACzE,CAAA;KACF;IAED,IAAI,cAAc,CAAC,QAAQ,CAAC,SAAS,CAAC,EAAE;QACtC,MAAM,IAAI,KAAK,CACb,6DAA6D,SAAS,GAAG,CAC1E,CAAA;KACF;IAED,OAAO,GAAG,GAAG,KAAK,SAAS,GAAG,EAAE,CAAC,GAAG,GAAG,cAAc,GAAG,EAAE,CAAC,GAAG,GAAG,SAAS,EAAE,CAAA;AAC9E,CAAC;AApBD,wDAoBC"}

View File

@@ -1,7 +0,0 @@
export declare class OidcClient {
private static createHttpClient;
private static getRequestToken;
private static getIDTokenUrl;
private static getCall;
static getIDToken(audience?: string): Promise<string>;
}

View File

@@ -1,77 +0,0 @@
"use strict";
var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) {
function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }
return new (P || (P = Promise))(function (resolve, reject) {
function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }
function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } }
function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }
step((generator = generator.apply(thisArg, _arguments || [])).next());
});
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.OidcClient = void 0;
const http_client_1 = require("@actions/http-client");
const auth_1 = require("@actions/http-client/lib/auth");
const core_1 = require("./core");
class OidcClient {
static createHttpClient(allowRetry = true, maxRetry = 10) {
const requestOptions = {
allowRetries: allowRetry,
maxRetries: maxRetry
};
return new http_client_1.HttpClient('actions/oidc-client', [new auth_1.BearerCredentialHandler(OidcClient.getRequestToken())], requestOptions);
}
static getRequestToken() {
const token = process.env['ACTIONS_ID_TOKEN_REQUEST_TOKEN'];
if (!token) {
throw new Error('Unable to get ACTIONS_ID_TOKEN_REQUEST_TOKEN env variable');
}
return token;
}
static getIDTokenUrl() {
const runtimeUrl = process.env['ACTIONS_ID_TOKEN_REQUEST_URL'];
if (!runtimeUrl) {
throw new Error('Unable to get ACTIONS_ID_TOKEN_REQUEST_URL env variable');
}
return runtimeUrl;
}
static getCall(id_token_url) {
var _a;
return __awaiter(this, void 0, void 0, function* () {
const httpclient = OidcClient.createHttpClient();
const res = yield httpclient
.getJson(id_token_url)
.catch(error => {
throw new Error(`Failed to get ID Token. \n
Error Code : ${error.statusCode}\n
Error Message: ${error.result.message}`);
});
const id_token = (_a = res.result) === null || _a === void 0 ? void 0 : _a.value;
if (!id_token) {
throw new Error('Response json body does not have ID Token field');
}
return id_token;
});
}
static getIDToken(audience) {
return __awaiter(this, void 0, void 0, function* () {
try {
// New ID Token is requested from action service
let id_token_url = OidcClient.getIDTokenUrl();
if (audience) {
const encodedAudience = encodeURIComponent(audience);
id_token_url = `${id_token_url}&audience=${encodedAudience}`;
}
core_1.debug(`ID token url is ${id_token_url}`);
const id_token = yield OidcClient.getCall(id_token_url);
core_1.setSecret(id_token);
return id_token;
}
catch (error) {
throw new Error(`Error message: ${error.message}`);
}
});
}
}
exports.OidcClient = OidcClient;
//# sourceMappingURL=oidc-utils.js.map
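A small sketch of how `getIDToken` above extends the request URL with an encoded audience. The base URL below is a placeholder; at runtime it comes from `ACTIONS_ID_TOKEN_REQUEST_URL`.

```javascript
// Mirrors the audience-appending step in OidcClient.getIDToken: the
// audience is percent-encoded and appended as a query parameter.
function withAudience(idTokenUrl, audience) {
  if (!audience) return idTokenUrl;
  return `${idTokenUrl}&audience=${encodeURIComponent(audience)}`;
}

console.log(withAudience('https://example.test/token?api-version=2', 'my audience'));
```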

View File

@@ -1 +0,0 @@
{"version":3,"file":"oidc-utils.js","sourceRoot":"","sources":["../src/oidc-utils.ts"],"names":[],"mappings":";;;;;;;;;;;;AAGA,sDAA+C;AAC/C,wDAAqE;AACrE,iCAAuC;AAKvC,MAAa,UAAU;IACb,MAAM,CAAC,gBAAgB,CAC7B,UAAU,GAAG,IAAI,EACjB,QAAQ,GAAG,EAAE;QAEb,MAAM,cAAc,GAAmB;YACrC,YAAY,EAAE,UAAU;YACxB,UAAU,EAAE,QAAQ;SACrB,CAAA;QAED,OAAO,IAAI,wBAAU,CACnB,qBAAqB,EACrB,CAAC,IAAI,8BAAuB,CAAC,UAAU,CAAC,eAAe,EAAE,CAAC,CAAC,EAC3D,cAAc,CACf,CAAA;IACH,CAAC;IAEO,MAAM,CAAC,eAAe;QAC5B,MAAM,KAAK,GAAG,OAAO,CAAC,GAAG,CAAC,gCAAgC,CAAC,CAAA;QAC3D,IAAI,CAAC,KAAK,EAAE;YACV,MAAM,IAAI,KAAK,CACb,2DAA2D,CAC5D,CAAA;SACF;QACD,OAAO,KAAK,CAAA;IACd,CAAC;IAEO,MAAM,CAAC,aAAa;QAC1B,MAAM,UAAU,GAAG,OAAO,CAAC,GAAG,CAAC,8BAA8B,CAAC,CAAA;QAC9D,IAAI,CAAC,UAAU,EAAE;YACf,MAAM,IAAI,KAAK,CAAC,yDAAyD,CAAC,CAAA;SAC3E;QACD,OAAO,UAAU,CAAA;IACnB,CAAC;IAEO,MAAM,CAAO,OAAO,CAAC,YAAoB;;;YAC/C,MAAM,UAAU,GAAG,UAAU,CAAC,gBAAgB,EAAE,CAAA;YAEhD,MAAM,GAAG,GAAG,MAAM,UAAU;iBACzB,OAAO,CAAgB,YAAY,CAAC;iBACpC,KAAK,CAAC,KAAK,CAAC,EAAE;gBACb,MAAM,IAAI,KAAK,CACb;uBACa,KAAK,CAAC,UAAU;yBACd,KAAK,CAAC,MAAM,CAAC,OAAO,EAAE,CACtC,CAAA;YACH,CAAC,CAAC,CAAA;YAEJ,MAAM,QAAQ,SAAG,GAAG,CAAC,MAAM,0CAAE,KAAK,CAAA;YAClC,IAAI,CAAC,QAAQ,EAAE;gBACb,MAAM,IAAI,KAAK,CAAC,+CAA+C,CAAC,CAAA;aACjE;YACD,OAAO,QAAQ,CAAA;;KAChB;IAED,MAAM,CAAO,UAAU,CAAC,QAAiB;;YACvC,IAAI;gBACF,gDAAgD;gBAChD,IAAI,YAAY,GAAW,UAAU,CAAC,aAAa,EAAE,CAAA;gBACrD,IAAI,QAAQ,EAAE;oBACZ,MAAM,eAAe,GAAG,kBAAkB,CAAC,QAAQ,CAAC,CAAA;oBACpD,YAAY,GAAG,GAAG,YAAY,aAAa,eAAe,EAAE,CAAA;iBAC7D;gBAED,YAAK,CAAC,mBAAmB,YAAY,EAAE,CAAC,CAAA;gBAExC,MAAM,QAAQ,GAAG,MAAM,UAAU,CAAC,OAAO,CAAC,YAAY,CAAC,CAAA;gBACvD,gBAAS,CAAC,QAAQ,CAAC,CAAA;gBACnB,OAAO,QAAQ,CAAA;aAChB;YAAC,OAAO,KAAK,EAAE;gBACd,MAAM,IAAI,KAAK,CAAC,kBAAkB,KAAK,CAAC,OAAO,EAAE,CAAC,CAAA;aACnD;QACH,CAAC;KAAA;CACF;AAzED,gCAyEC"}

View File

@@ -1,25 +0,0 @@
/**
* toPosixPath converts the given path to the posix form. On Windows, \\ will be
* replaced with /.
*
* @param pth. Path to transform.
* @return string Posix path.
*/
export declare function toPosixPath(pth: string): string;
/**
* toWin32Path converts the given path to the win32 form. On Linux, / will be
* replaced with \\.
*
* @param pth. Path to transform.
* @return string Win32 path.
*/
export declare function toWin32Path(pth: string): string;
/**
* toPlatformPath converts the given path to a platform-specific path. It does
* this by replacing instances of / and \ with the platform-specific path
* separator.
*
* @param pth The path to platformize.
* @return string The platform-specific path.
*/
export declare function toPlatformPath(pth: string): string;

View File

@@ -1,58 +0,0 @@
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });
}) : (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
o["default"] = v;
});
var __importStar = (this && this.__importStar) || function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k in mod) if (k !== "default" && Object.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);
__setModuleDefault(result, mod);
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.toPlatformPath = exports.toWin32Path = exports.toPosixPath = void 0;
const path = __importStar(require("path"));
/**
* toPosixPath converts the given path to the posix form. On Windows, \\ will be
* replaced with /.
*
* @param pth. Path to transform.
* @return string Posix path.
*/
function toPosixPath(pth) {
return pth.replace(/[\\]/g, '/');
}
exports.toPosixPath = toPosixPath;
/**
* toWin32Path converts the given path to the win32 form. On Linux, / will be
* replaced with \\.
*
* @param pth. Path to transform.
* @return string Win32 path.
*/
function toWin32Path(pth) {
return pth.replace(/[/]/g, '\\');
}
exports.toWin32Path = toWin32Path;
/**
* toPlatformPath converts the given path to a platform-specific path. It does
* this by replacing instances of / and \ with the platform-specific path
* separator.
*
* @param pth The path to platformize.
* @return string The platform-specific path.
*/
function toPlatformPath(pth) {
return pth.replace(/[/\\]/g, path.sep);
}
exports.toPlatformPath = toPlatformPath;
//# sourceMappingURL=path-utils.js.map

View File

@@ -1 +0,0 @@
{"version":3,"file":"path-utils.js","sourceRoot":"","sources":["../src/path-utils.ts"],"names":[],"mappings":";;;;;;;;;;;;;;;;;;;;;;AAAA,2CAA4B;AAE5B;;;;;;GAMG;AACH,SAAgB,WAAW,CAAC,GAAW;IACrC,OAAO,GAAG,CAAC,OAAO,CAAC,OAAO,EAAE,GAAG,CAAC,CAAA;AAClC,CAAC;AAFD,kCAEC;AAED;;;;;;GAMG;AACH,SAAgB,WAAW,CAAC,GAAW;IACrC,OAAO,GAAG,CAAC,OAAO,CAAC,MAAM,EAAE,IAAI,CAAC,CAAA;AAClC,CAAC;AAFD,kCAEC;AAED;;;;;;;GAOG;AACH,SAAgB,cAAc,CAAC,GAAW;IACxC,OAAO,GAAG,CAAC,OAAO,CAAC,QAAQ,EAAE,IAAI,CAAC,GAAG,CAAC,CAAA;AACxC,CAAC;AAFD,wCAEC"}

View File

@@ -1,202 +0,0 @@
export declare const SUMMARY_ENV_VAR = "GITHUB_STEP_SUMMARY";
export declare const SUMMARY_DOCS_URL = "https://docs.github.com/actions/using-workflows/workflow-commands-for-github-actions#adding-a-job-summary";
export declare type SummaryTableRow = (SummaryTableCell | string)[];
export interface SummaryTableCell {
/**
* Cell content
*/
data: string;
/**
* Render cell as header
* (optional) default: false
*/
header?: boolean;
/**
* Number of columns the cell extends
* (optional) default: '1'
*/
colspan?: string;
/**
* Number of rows the cell extends
* (optional) default: '1'
*/
rowspan?: string;
}
export interface SummaryImageOptions {
/**
* The width of the image in pixels. Must be an integer without a unit.
* (optional)
*/
width?: string;
/**
* The height of the image in pixels. Must be an integer without a unit.
* (optional)
*/
height?: string;
}
export interface SummaryWriteOptions {
/**
* Replace all existing content in summary file with buffer contents
* (optional) default: false
*/
overwrite?: boolean;
}
declare class Summary {
private _buffer;
private _filePath?;
constructor();
/**
* Finds the summary file path from the environment, rejects if env var is not found or file does not exist
* Also checks r/w permissions.
*
* @returns step summary file path
*/
private filePath;
/**
* Wraps content in an HTML tag, adding any HTML attributes
*
* @param {string} tag HTML tag to wrap
* @param {string | null} content content within the tag
* @param {[attribute: string]: string} attrs key-value list of HTML attributes to add
*
* @returns {string} content wrapped in HTML element
*/
private wrap;
/**
* Writes text in the buffer to the summary buffer file and empties buffer. Will append by default.
*
* @param {SummaryWriteOptions} [options] (optional) options for write operation
*
* @returns {Promise<Summary>} summary instance
*/
write(options?: SummaryWriteOptions): Promise<Summary>;
/**
* Clears the summary buffer and wipes the summary file
*
* @returns {Summary} summary instance
*/
clear(): Promise<Summary>;
/**
* Returns the current summary buffer as a string
*
* @returns {string} string of summary buffer
*/
stringify(): string;
/**
* If the summary buffer is empty
*
* @returns {boolean} true if the buffer is empty
*/
isEmptyBuffer(): boolean;
/**
* Resets the summary buffer without writing to summary file
*
* @returns {Summary} summary instance
*/
emptyBuffer(): Summary;
/**
* Adds raw text to the summary buffer
*
* @param {string} text content to add
* @param {boolean} [addEOL=false] (optional) append an EOL to the raw text (default: false)
*
* @returns {Summary} summary instance
*/
addRaw(text: string, addEOL?: boolean): Summary;
/**
* Adds the operating system-specific end-of-line marker to the buffer
*
* @returns {Summary} summary instance
*/
addEOL(): Summary;
/**
* Adds an HTML codeblock to the summary buffer
*
* @param {string} code content to render within fenced code block
* @param {string} lang (optional) language to syntax highlight code
*
* @returns {Summary} summary instance
*/
addCodeBlock(code: string, lang?: string): Summary;
/**
* Adds an HTML list to the summary buffer
*
* @param {string[]} items list of items to render
* @param {boolean} [ordered=false] (optional) if the rendered list should be ordered or not (default: false)
*
* @returns {Summary} summary instance
*/
addList(items: string[], ordered?: boolean): Summary;
/**
* Adds an HTML table to the summary buffer
*
* @param {SummaryTableCell[]} rows table rows
*
* @returns {Summary} summary instance
*/
addTable(rows: SummaryTableRow[]): Summary;
/**
* Adds a collapsible HTML details element to the summary buffer
*
* @param {string} label text for the closed state
* @param {string} content collapsible content
*
* @returns {Summary} summary instance
*/
addDetails(label: string, content: string): Summary;
/**
* Adds an HTML image tag to the summary buffer
*
* @param {string} src path to the image to embed
* @param {string} alt text description of the image
* @param {SummaryImageOptions} options (optional) additional image attributes
*
* @returns {Summary} summary instance
*/
addImage(src: string, alt: string, options?: SummaryImageOptions): Summary;
/**
* Adds an HTML section heading element
*
* @param {string} text heading text
* @param {number | string} [level=1] (optional) the heading level, default: 1
*
* @returns {Summary} summary instance
*/
addHeading(text: string, level?: number | string): Summary;
/**
* Adds an HTML thematic break (<hr>) to the summary buffer
*
* @returns {Summary} summary instance
*/
addSeparator(): Summary;
/**
* Adds an HTML line break (<br>) to the summary buffer
*
* @returns {Summary} summary instance
*/
addBreak(): Summary;
/**
* Adds an HTML blockquote to the summary buffer
*
* @param {string} text quote text
* @param {string} cite (optional) citation url
*
* @returns {Summary} summary instance
*/
addQuote(text: string, cite?: string): Summary;
/**
* Adds an HTML anchor tag to the summary buffer
*
* @param {string} text link text/content
* @param {string} href hyperlink
*
* @returns {Summary} summary instance
*/
addLink(text: string, href: string): Summary;
}
/**
* @deprecated use `core.summary`
*/
export declare const markdownSummary: Summary;
export declare const summary: Summary;
export {};

View File

@@ -1,283 +0,0 @@
"use strict";
var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) {
function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }
return new (P || (P = Promise))(function (resolve, reject) {
function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }
function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } }
function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }
step((generator = generator.apply(thisArg, _arguments || [])).next());
});
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.summary = exports.markdownSummary = exports.SUMMARY_DOCS_URL = exports.SUMMARY_ENV_VAR = void 0;
const os_1 = require("os");
const fs_1 = require("fs");
const { access, appendFile, writeFile } = fs_1.promises;
exports.SUMMARY_ENV_VAR = 'GITHUB_STEP_SUMMARY';
exports.SUMMARY_DOCS_URL = 'https://docs.github.com/actions/using-workflows/workflow-commands-for-github-actions#adding-a-job-summary';
class Summary {
constructor() {
this._buffer = '';
}
/**
* Finds the summary file path from the environment, rejects if env var is not found or file does not exist
* Also checks r/w permissions.
*
* @returns step summary file path
*/
filePath() {
return __awaiter(this, void 0, void 0, function* () {
if (this._filePath) {
return this._filePath;
}
const pathFromEnv = process.env[exports.SUMMARY_ENV_VAR];
if (!pathFromEnv) {
throw new Error(`Unable to find environment variable for $${exports.SUMMARY_ENV_VAR}. Check if your runtime environment supports job summaries.`);
}
try {
yield access(pathFromEnv, fs_1.constants.R_OK | fs_1.constants.W_OK);
}
catch (_a) {
throw new Error(`Unable to access summary file: '${pathFromEnv}'. Check if the file has correct read/write permissions.`);
}
this._filePath = pathFromEnv;
return this._filePath;
});
}
/**
* Wraps content in an HTML tag, adding any HTML attributes
*
* @param {string} tag HTML tag to wrap
* @param {string | null} content content within the tag
* @param {[attribute: string]: string} attrs key-value list of HTML attributes to add
*
* @returns {string} content wrapped in HTML element
*/
wrap(tag, content, attrs = {}) {
const htmlAttrs = Object.entries(attrs)
.map(([key, value]) => ` ${key}="${value}"`)
.join('');
if (!content) {
return `<${tag}${htmlAttrs}>`;
}
return `<${tag}${htmlAttrs}>${content}</${tag}>`;
}
/**
* Writes text in the buffer to the summary buffer file and empties buffer. Will append by default.
*
* @param {SummaryWriteOptions} [options] (optional) options for write operation
*
* @returns {Promise<Summary>} summary instance
*/
write(options) {
return __awaiter(this, void 0, void 0, function* () {
const overwrite = !!(options === null || options === void 0 ? void 0 : options.overwrite);
const filePath = yield this.filePath();
const writeFunc = overwrite ? writeFile : appendFile;
yield writeFunc(filePath, this._buffer, { encoding: 'utf8' });
return this.emptyBuffer();
});
}
/**
* Clears the summary buffer and wipes the summary file
*
* @returns {Summary} summary instance
*/
clear() {
return __awaiter(this, void 0, void 0, function* () {
return this.emptyBuffer().write({ overwrite: true });
});
}
/**
* Returns the current summary buffer as a string
*
* @returns {string} string of summary buffer
*/
stringify() {
return this._buffer;
}
/**
* If the summary buffer is empty
*
* @returns {boolean} true if the buffer is empty
*/
isEmptyBuffer() {
return this._buffer.length === 0;
}
/**
* Resets the summary buffer without writing to summary file
*
* @returns {Summary} summary instance
*/
emptyBuffer() {
this._buffer = '';
return this;
}
/**
* Adds raw text to the summary buffer
*
* @param {string} text content to add
* @param {boolean} [addEOL=false] (optional) append an EOL to the raw text (default: false)
*
* @returns {Summary} summary instance
*/
addRaw(text, addEOL = false) {
this._buffer += text;
return addEOL ? this.addEOL() : this;
}
/**
* Adds the operating system-specific end-of-line marker to the buffer
*
* @returns {Summary} summary instance
*/
addEOL() {
return this.addRaw(os_1.EOL);
}
/**
* Adds an HTML codeblock to the summary buffer
*
* @param {string} code content to render within fenced code block
* @param {string} lang (optional) language to syntax highlight code
*
* @returns {Summary} summary instance
*/
addCodeBlock(code, lang) {
const attrs = Object.assign({}, (lang && { lang }));
const element = this.wrap('pre', this.wrap('code', code), attrs);
return this.addRaw(element).addEOL();
}
/**
* Adds an HTML list to the summary buffer
*
* @param {string[]} items list of items to render
* @param {boolean} [ordered=false] (optional) if the rendered list should be ordered or not (default: false)
*
* @returns {Summary} summary instance
*/
addList(items, ordered = false) {
const tag = ordered ? 'ol' : 'ul';
const listItems = items.map(item => this.wrap('li', item)).join('');
const element = this.wrap(tag, listItems);
return this.addRaw(element).addEOL();
}
/**
* Adds an HTML table to the summary buffer
*
* @param {SummaryTableCell[]} rows table rows
*
* @returns {Summary} summary instance
*/
addTable(rows) {
const tableBody = rows
.map(row => {
const cells = row
.map(cell => {
if (typeof cell === 'string') {
return this.wrap('td', cell);
}
const { header, data, colspan, rowspan } = cell;
const tag = header ? 'th' : 'td';
const attrs = Object.assign(Object.assign({}, (colspan && { colspan })), (rowspan && { rowspan }));
return this.wrap(tag, data, attrs);
})
.join('');
return this.wrap('tr', cells);
})
.join('');
const element = this.wrap('table', tableBody);
return this.addRaw(element).addEOL();
}
/**
* Adds a collapsible HTML details element to the summary buffer
*
* @param {string} label text for the closed state
* @param {string} content collapsible content
*
* @returns {Summary} summary instance
*/
addDetails(label, content) {
const element = this.wrap('details', this.wrap('summary', label) + content);
return this.addRaw(element).addEOL();
}
/**
* Adds an HTML image tag to the summary buffer
*
* @param {string} src path to the image to embed
* @param {string} alt text description of the image
* @param {SummaryImageOptions} options (optional) additional image attributes
*
* @returns {Summary} summary instance
*/
addImage(src, alt, options) {
const { width, height } = options || {};
const attrs = Object.assign(Object.assign({}, (width && { width })), (height && { height }));
const element = this.wrap('img', null, Object.assign({ src, alt }, attrs));
return this.addRaw(element).addEOL();
}
/**
* Adds an HTML section heading element
*
* @param {string} text heading text
* @param {number | string} [level=1] (optional) the heading level, default: 1
*
* @returns {Summary} summary instance
*/
addHeading(text, level) {
const tag = `h${level}`;
const allowedTag = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6'].includes(tag)
? tag
: 'h1';
const element = this.wrap(allowedTag, text);
return this.addRaw(element).addEOL();
}
/**
* Adds an HTML thematic break (<hr>) to the summary buffer
*
* @returns {Summary} summary instance
*/
addSeparator() {
const element = this.wrap('hr', null);
return this.addRaw(element).addEOL();
}
/**
* Adds an HTML line break (<br>) to the summary buffer
*
* @returns {Summary} summary instance
*/
addBreak() {
const element = this.wrap('br', null);
return this.addRaw(element).addEOL();
}
/**
* Adds an HTML blockquote to the summary buffer
*
* @param {string} text quote text
* @param {string} cite (optional) citation url
*
* @returns {Summary} summary instance
*/
addQuote(text, cite) {
const attrs = Object.assign({}, (cite && { cite }));
const element = this.wrap('blockquote', text, attrs);
return this.addRaw(element).addEOL();
}
/**
* Adds an HTML anchor tag to the summary buffer
*
* @param {string} text link text/content
* @param {string} href hyperlink
*
* @returns {Summary} summary instance
*/
addLink(text, href) {
const element = this.wrap('a', text, { href });
return this.addRaw(element).addEOL();
}
}
const _summary = new Summary();
/**
* @deprecated use `core.summary`
*/
exports.markdownSummary = _summary;
exports.summary = _summary;
//# sourceMappingURL=summary.js.map
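To show the HTML that the `Summary` builder emits, here is a standalone copy of its private `wrap()` helper, applied to a header row the way `addTable` does. The row content is illustrative.

```javascript
// Copy of Summary's private wrap(): render a tag with optional attributes,
// emitting a void tag when content is empty (as for <br> and <hr>).
function wrap(tag, content, attrs = {}) {
  const htmlAttrs = Object.entries(attrs)
    .map(([key, value]) => ` ${key}="${value}"`)
    .join('');
  if (!content) {
    return `<${tag}${htmlAttrs}>`;
  }
  return `<${tag}${htmlAttrs}>${content}</${tag}>`;
}

// A header row built the same way addTable builds one from header cells.
const row = wrap('tr', ['Name', 'Result'].map(c => wrap('th', c)).join(''));
console.log(wrap('table', row));
// <table><tr><th>Name</th><th>Result</th></tr></table>
```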

File diff suppressed because one or more lines are too long

View File

@@ -1,14 +0,0 @@
import { AnnotationProperties } from './core';
import { CommandProperties } from './command';
/**
* Sanitizes an input into a string so it can be passed into issueCommand safely
* @param input input to sanitize into a string
*/
export declare function toCommandValue(input: any): string;
/**
*
* @param annotationProperties
* @returns The command properties to send with the actual annotation command
* See IssueCommandProperties: https://github.com/actions/runner/blob/main/src/Runner.Worker/ActionCommandManager.cs#L646
*/
export declare function toCommandProperties(annotationProperties: AnnotationProperties): CommandProperties;

View File

@@ -1,40 +0,0 @@
"use strict";
// We use any as a valid input type
/* eslint-disable @typescript-eslint/no-explicit-any */
Object.defineProperty(exports, "__esModule", { value: true });
exports.toCommandProperties = exports.toCommandValue = void 0;
/**
* Sanitizes an input into a string so it can be passed into issueCommand safely
* @param input input to sanitize into a string
*/
function toCommandValue(input) {
if (input === null || input === undefined) {
return '';
}
else if (typeof input === 'string' || input instanceof String) {
return input;
}
return JSON.stringify(input);
}
exports.toCommandValue = toCommandValue;
/**
*
* @param annotationProperties
* @returns The command properties to send with the actual annotation command
* See IssueCommandProperties: https://github.com/actions/runner/blob/main/src/Runner.Worker/ActionCommandManager.cs#L646
*/
function toCommandProperties(annotationProperties) {
if (!Object.keys(annotationProperties).length) {
return {};
}
return {
title: annotationProperties.title,
file: annotationProperties.file,
line: annotationProperties.startLine,
endLine: annotationProperties.endLine,
col: annotationProperties.startColumn,
endColumn: annotationProperties.endColumn
};
}
exports.toCommandProperties = toCommandProperties;
//# sourceMappingURL=utils.js.map
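The serialization rule in `toCommandValue` above is easy to see in isolation: null/undefined become the empty string, strings pass through, and everything else is JSON-stringified. This is a direct standalone copy.

```javascript
// Standalone copy of toCommandValue, showing how non-string values are
// serialized before being written to a command/environment file.
function toCommandValue(input) {
  if (input === null || input === undefined) {
    return '';
  }
  if (typeof input === 'string' || input instanceof String) {
    return input;
  }
  return JSON.stringify(input);
}

console.log(toCommandValue({ ok: true })); // {"ok":true}
```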

View File

@@ -1 +0,0 @@
{"version":3,"file":"utils.js","sourceRoot":"","sources":["../src/utils.ts"],"names":[],"mappings":";AAAA,mCAAmC;AACnC,uDAAuD;;;AAKvD;;;GAGG;AACH,SAAgB,cAAc,CAAC,KAAU;IACvC,IAAI,KAAK,KAAK,IAAI,IAAI,KAAK,KAAK,SAAS,EAAE;QACzC,OAAO,EAAE,CAAA;KACV;SAAM,IAAI,OAAO,KAAK,KAAK,QAAQ,IAAI,KAAK,YAAY,MAAM,EAAE;QAC/D,OAAO,KAAe,CAAA;KACvB;IACD,OAAO,IAAI,CAAC,SAAS,CAAC,KAAK,CAAC,CAAA;AAC9B,CAAC;AAPD,wCAOC;AAED;;;;;GAKG;AACH,SAAgB,mBAAmB,CACjC,oBAA0C;IAE1C,IAAI,CAAC,MAAM,CAAC,IAAI,CAAC,oBAAoB,CAAC,CAAC,MAAM,EAAE;QAC7C,OAAO,EAAE,CAAA;KACV;IAED,OAAO;QACL,KAAK,EAAE,oBAAoB,CAAC,KAAK;QACjC,IAAI,EAAE,oBAAoB,CAAC,IAAI;QAC/B,IAAI,EAAE,oBAAoB,CAAC,SAAS;QACpC,OAAO,EAAE,oBAAoB,CAAC,OAAO;QACrC,GAAG,EAAE,oBAAoB,CAAC,WAAW;QACrC,SAAS,EAAE,oBAAoB,CAAC,SAAS;KAC1C,CAAA;AACH,CAAC;AAfD,kDAeC"}

View File

@@ -1,46 +0,0 @@
{
"name": "@actions/core",
"version": "1.10.0",
"description": "Actions core lib",
"keywords": [
"github",
"actions",
"core"
],
"homepage": "https://github.com/actions/toolkit/tree/main/packages/core",
"license": "MIT",
"main": "lib/core.js",
"types": "lib/core.d.ts",
"directories": {
"lib": "lib",
"test": "__tests__"
},
"files": [
"lib",
"!.DS_Store"
],
"publishConfig": {
"access": "public"
},
"repository": {
"type": "git",
"url": "git+https://github.com/actions/toolkit.git",
"directory": "packages/core"
},
"scripts": {
"audit-moderate": "npm install && npm audit --json --audit-level=moderate > audit.json",
"test": "echo \"Error: run tests from root\" && exit 1",
"tsc": "tsc"
},
"bugs": {
"url": "https://github.com/actions/toolkit/issues"
},
"dependencies": {
"@actions/http-client": "^2.0.1",
"uuid": "^8.3.2"
},
"devDependencies": {
"@types/node": "^12.0.2",
"@types/uuid": "^8.3.4"
}
}

View File

@@ -1,29 +0,0 @@
import { WebhookPayload } from './interfaces';
export declare class Context {
/**
* Webhook payload object that triggered the workflow
*/
payload: WebhookPayload;
eventName: string;
sha: string;
ref: string;
workflow: string;
action: string;
actor: string;
job: string;
runNumber: number;
runId: number;
/**
* Hydrate the context from the environment
*/
constructor();
get issue(): {
owner: string;
repo: string;
number: number;
};
get repo(): {
owner: string;
repo: string;
};
}

View File

@@ -1,50 +0,0 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Context = void 0;
const fs_1 = require("fs");
const os_1 = require("os");
class Context {
/**
* Hydrate the context from the environment
*/
constructor() {
this.payload = {};
if (process.env.GITHUB_EVENT_PATH) {
if (fs_1.existsSync(process.env.GITHUB_EVENT_PATH)) {
this.payload = JSON.parse(fs_1.readFileSync(process.env.GITHUB_EVENT_PATH, { encoding: 'utf8' }));
}
else {
const path = process.env.GITHUB_EVENT_PATH;
process.stdout.write(`GITHUB_EVENT_PATH ${path} does not exist${os_1.EOL}`);
}
}
this.eventName = process.env.GITHUB_EVENT_NAME;
this.sha = process.env.GITHUB_SHA;
this.ref = process.env.GITHUB_REF;
this.workflow = process.env.GITHUB_WORKFLOW;
this.action = process.env.GITHUB_ACTION;
this.actor = process.env.GITHUB_ACTOR;
this.job = process.env.GITHUB_JOB;
this.runNumber = parseInt(process.env.GITHUB_RUN_NUMBER, 10);
this.runId = parseInt(process.env.GITHUB_RUN_ID, 10);
}
get issue() {
const payload = this.payload;
return Object.assign(Object.assign({}, this.repo), { number: (payload.issue || payload.pull_request || payload).number });
}
get repo() {
if (process.env.GITHUB_REPOSITORY) {
const [owner, repo] = process.env.GITHUB_REPOSITORY.split('/');
return { owner, repo };
}
if (this.payload.repository) {
return {
owner: this.payload.repository.owner.login,
repo: this.payload.repository.name
};
}
throw new Error("context.repo requires a GITHUB_REPOSITORY environment variable like 'owner/repo'");
}
}
exports.Context = Context;
//# sourceMappingURL=context.js.map
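The resolution order in `Context.repo` above can be sketched as a pure function: prefer `GITHUB_REPOSITORY` (`"owner/repo"`), fall back to the event payload, otherwise throw. The `env`/`payload` parameters here stand in for `process.env` and the hydrated webhook payload.

```javascript
// Sketch of Context.repo's resolution order, with the environment and
// payload passed in explicitly instead of read from process.env.
function resolveRepo(env, payload) {
  if (env.GITHUB_REPOSITORY) {
    const [owner, repo] = env.GITHUB_REPOSITORY.split('/');
    return { owner, repo };
  }
  if (payload && payload.repository) {
    return {
      owner: payload.repository.owner.login,
      repo: payload.repository.name
    };
  }
  throw new Error("context.repo requires a GITHUB_REPOSITORY environment variable like 'owner/repo'");
}

console.log(resolveRepo({ GITHUB_REPOSITORY: 'gitea/runner' }, {}));
```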

View File

@@ -1 +0,0 @@
{"version":3,"file":"context.js","sourceRoot":"","sources":["../src/context.ts"],"names":[],"mappings":";;;AAEA,2BAA2C;AAC3C,2BAAsB;AAEtB,MAAa,OAAO;IAgBlB;;OAEG;IACH;QACE,IAAI,CAAC,OAAO,GAAG,EAAE,CAAA;QACjB,IAAI,OAAO,CAAC,GAAG,CAAC,iBAAiB,EAAE;YACjC,IAAI,eAAU,CAAC,OAAO,CAAC,GAAG,CAAC,iBAAiB,CAAC,EAAE;gBAC7C,IAAI,CAAC,OAAO,GAAG,IAAI,CAAC,KAAK,CACvB,iBAAY,CAAC,OAAO,CAAC,GAAG,CAAC,iBAAiB,EAAE,EAAC,QAAQ,EAAE,MAAM,EAAC,CAAC,CAChE,CAAA;aACF;iBAAM;gBACL,MAAM,IAAI,GAAG,OAAO,CAAC,GAAG,CAAC,iBAAiB,CAAA;gBAC1C,OAAO,CAAC,MAAM,CAAC,KAAK,CAAC,qBAAqB,IAAI,kBAAkB,QAAG,EAAE,CAAC,CAAA;aACvE;SACF;QACD,IAAI,CAAC,SAAS,GAAG,OAAO,CAAC,GAAG,CAAC,iBAA2B,CAAA;QACxD,IAAI,CAAC,GAAG,GAAG,OAAO,CAAC,GAAG,CAAC,UAAoB,CAAA;QAC3C,IAAI,CAAC,GAAG,GAAG,OAAO,CAAC,GAAG,CAAC,UAAoB,CAAA;QAC3C,IAAI,CAAC,QAAQ,GAAG,OAAO,CAAC,GAAG,CAAC,eAAyB,CAAA;QACrD,IAAI,CAAC,MAAM,GAAG,OAAO,CAAC,GAAG,CAAC,aAAuB,CAAA;QACjD,IAAI,CAAC,KAAK,GAAG,OAAO,CAAC,GAAG,CAAC,YAAsB,CAAA;QAC/C,IAAI,CAAC,GAAG,GAAG,OAAO,CAAC,GAAG,CAAC,UAAoB,CAAA;QAC3C,IAAI,CAAC,SAAS,GAAG,QAAQ,CAAC,OAAO,CAAC,GAAG,CAAC,iBAA2B,EAAE,EAAE,CAAC,CAAA;QACtE,IAAI,CAAC,KAAK,GAAG,QAAQ,CAAC,OAAO,CAAC,GAAG,CAAC,aAAuB,EAAE,EAAE,CAAC,CAAA;IAChE,CAAC;IAED,IAAI,KAAK;QACP,MAAM,OAAO,GAAG,IAAI,CAAC,OAAO,CAAA;QAE5B,uCACK,IAAI,CAAC,IAAI,KACZ,MAAM,EAAE,CAAC,OAAO,CAAC,KAAK,IAAI,OAAO,CAAC,YAAY,IAAI,OAAO,CAAC,CAAC,MAAM,IAClE;IACH,CAAC;IAED,IAAI,IAAI;QACN,IAAI,OAAO,CAAC,GAAG,CAAC,iBAAiB,EAAE;YACjC,MAAM,CAAC,KAAK,EAAE,IAAI,CAAC,GAAG,OAAO,CAAC,GAAG,CAAC,iBAAiB,CAAC,KAAK,CAAC,GAAG,CAAC,CAAA;YAC9D,OAAO,EAAC,KAAK,EAAE,IAAI,EAAC,CAAA;SACrB;QAED,IAAI,IAAI,CAAC,OAAO,CAAC,UAAU,EAAE;YAC3B,OAAO;gBACL,KAAK,EAAE,IAAI,CAAC,OAAO,CAAC,UAAU,CAAC,KAAK,CAAC,KAAK;gBAC1C,IAAI,EAAE,IAAI,CAAC,OAAO,CAAC,UAAU,CAAC,IAAI;aACnC,CAAA;SACF;QAED,MAAM,IAAI,KAAK,CACb,kFAAkF,CACnF,CAAA;IACH,CAAC;CACF;AApED,0BAoEC"}


@@ -1,11 +0,0 @@
import * as Context from './context';
import { GitHub } from './utils';
import { OctokitOptions } from '@octokit/core/dist-types/types';
export declare const context: Context.Context;
/**
* Returns a hydrated octokit ready to use for GitHub Actions
*
* @param token the repo PAT or GITHUB_TOKEN
* @param options other options to set
*/
export declare function getOctokit(token: string, options?: OctokitOptions): InstanceType<typeof GitHub>;


@@ -1,36 +0,0 @@
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });
}) : (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
o["default"] = v;
});
var __importStar = (this && this.__importStar) || function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);
__setModuleDefault(result, mod);
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.getOctokit = exports.context = void 0;
const Context = __importStar(require("./context"));
const utils_1 = require("./utils");
exports.context = new Context.Context();
/**
* Returns a hydrated octokit ready to use for GitHub Actions
*
* @param token the repo PAT or GITHUB_TOKEN
* @param options other options to set
*/
function getOctokit(token, options) {
return new utils_1.GitHub(utils_1.getOctokitOptions(token, options));
}
exports.getOctokit = getOctokit;
//# sourceMappingURL=github.js.map


@@ -1 +0,0 @@
{"version":3,"file":"github.js","sourceRoot":"","sources":["../src/github.ts"],"names":[],"mappings":";;;;;;;;;;;;;;;;;;;;;;AAAA,mDAAoC;AACpC,mCAAiD;AAKpC,QAAA,OAAO,GAAG,IAAI,OAAO,CAAC,OAAO,EAAE,CAAA;AAE5C;;;;;GAKG;AACH,SAAgB,UAAU,CACxB,KAAa,EACb,OAAwB;IAExB,OAAO,IAAI,cAAM,CAAC,yBAAiB,CAAC,KAAK,EAAE,OAAO,CAAC,CAAC,CAAA;AACtD,CAAC;AALD,gCAKC"}


@@ -1,40 +0,0 @@
export interface PayloadRepository {
[key: string]: any;
full_name?: string;
name: string;
owner: {
[key: string]: any;
login: string;
name?: string;
};
html_url?: string;
}
export interface WebhookPayload {
[key: string]: any;
repository?: PayloadRepository;
issue?: {
[key: string]: any;
number: number;
html_url?: string;
body?: string;
};
pull_request?: {
[key: string]: any;
number: number;
html_url?: string;
body?: string;
};
sender?: {
[key: string]: any;
type: string;
};
action?: string;
installation?: {
id: number;
[key: string]: any;
};
comment?: {
id: number;
[key: string]: any;
};
}


@@ -1,4 +0,0 @@
"use strict";
/* eslint-disable @typescript-eslint/no-explicit-any */
Object.defineProperty(exports, "__esModule", { value: true });
//# sourceMappingURL=interfaces.js.map


@@ -1 +0,0 @@
{"version":3,"file":"interfaces.js","sourceRoot":"","sources":["../src/interfaces.ts"],"names":[],"mappings":";AAAA,uDAAuD"}


@@ -1,6 +0,0 @@
/// <reference types="node" />
import * as http from 'http';
import { OctokitOptions } from '@octokit/core/dist-types/types';
export declare function getAuthString(token: string, options: OctokitOptions): string | undefined;
export declare function getProxyAgent(destinationUrl: string): http.Agent;
export declare function getApiBaseUrl(): string;


@@ -1,43 +0,0 @@
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });
}) : (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
o["default"] = v;
});
var __importStar = (this && this.__importStar) || function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);
__setModuleDefault(result, mod);
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.getApiBaseUrl = exports.getProxyAgent = exports.getAuthString = void 0;
const httpClient = __importStar(require("@actions/http-client"));
function getAuthString(token, options) {
if (!token && !options.auth) {
throw new Error('Parameter token or opts.auth is required');
}
else if (token && options.auth) {
throw new Error('Parameters token and opts.auth may not both be specified');
}
return typeof options.auth === 'string' ? options.auth : `token ${token}`;
}
exports.getAuthString = getAuthString;
function getProxyAgent(destinationUrl) {
const hc = new httpClient.HttpClient();
return hc.getAgent(destinationUrl);
}
exports.getProxyAgent = getProxyAgent;
function getApiBaseUrl() {
return process.env['GITHUB_API_URL'] || 'https://api.github.com';
}
exports.getApiBaseUrl = getApiBaseUrl;
//# sourceMappingURL=utils.js.map
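The deleted `internal/utils.js` above enforces that exactly one of `token` and `options.auth` is supplied, and falls back to the public GitHub API URL when `GITHUB_API_URL` is unset. Those two rules, re-stated as a self-contained sketch (env is passed as a parameter here for testability; the library reads `process.env` directly):

```javascript
// Sketch of the precedence rules in the deleted getAuthString helper:
// exactly one of `token` / `options.auth` must be set.
function getAuthString(token, options) {
  if (!token && !options.auth) {
    throw new Error('Parameter token or opts.auth is required');
  } else if (token && options.auth) {
    throw new Error('Parameters token and opts.auth may not both be specified');
  }
  // A string `auth` wins; otherwise the token is wrapped as "token <value>"
  return typeof options.auth === 'string' ? options.auth : `token ${token}`;
}

// Sketch of the deleted getApiBaseUrl fallback.
function getApiBaseUrl(env) {
  return env.GITHUB_API_URL || 'https://api.github.com';
}

console.log(getAuthString('abc123', {})); // → token abc123
console.log(getApiBaseUrl({})); // → https://api.github.com
```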


@@ -1 +0,0 @@
{"version":3,"file":"utils.js","sourceRoot":"","sources":["../../src/internal/utils.ts"],"names":[],"mappings":";;;;;;;;;;;;;;;;;;;;;;AACA,iEAAkD;AAGlD,SAAgB,aAAa,CAC3B,KAAa,EACb,OAAuB;IAEvB,IAAI,CAAC,KAAK,IAAI,CAAC,OAAO,CAAC,IAAI,EAAE;QAC3B,MAAM,IAAI,KAAK,CAAC,0CAA0C,CAAC,CAAA;KAC5D;SAAM,IAAI,KAAK,IAAI,OAAO,CAAC,IAAI,EAAE;QAChC,MAAM,IAAI,KAAK,CAAC,0DAA0D,CAAC,CAAA;KAC5E;IAED,OAAO,OAAO,OAAO,CAAC,IAAI,KAAK,QAAQ,CAAC,CAAC,CAAC,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,SAAS,KAAK,EAAE,CAAA;AAC3E,CAAC;AAXD,sCAWC;AAED,SAAgB,aAAa,CAAC,cAAsB;IAClD,MAAM,EAAE,GAAG,IAAI,UAAU,CAAC,UAAU,EAAE,CAAA;IACtC,OAAO,EAAE,CAAC,QAAQ,CAAC,cAAc,CAAC,CAAA;AACpC,CAAC;AAHD,sCAGC;AAED,SAAgB,aAAa;IAC3B,OAAO,OAAO,CAAC,GAAG,CAAC,gBAAgB,CAAC,IAAI,wBAAwB,CAAA;AAClE,CAAC;AAFD,sCAEC"}


@@ -1,21 +0,0 @@
import * as Context from './context';
import { Octokit } from '@octokit/core';
import { OctokitOptions } from '@octokit/core/dist-types/types';
export declare const context: Context.Context;
export declare const GitHub: (new (...args: any[]) => {
[x: string]: any;
}) & {
new (...args: any[]): {
[x: string]: any;
};
plugins: any[];
} & typeof Octokit & import("@octokit/core/dist-types/types").Constructor<import("@octokit/plugin-rest-endpoint-methods/dist-types/generated/method-types").RestEndpointMethods & {
paginate: import("@octokit/plugin-paginate-rest").PaginateInterface;
}>;
/**
* Convenience function to correctly format Octokit Options to pass into the constructor.
*
* @param token the repo PAT or GITHUB_TOKEN
* @param options other options to set
*/
export declare function getOctokitOptions(token: string, options?: OctokitOptions): OctokitOptions;


@@ -1,54 +0,0 @@
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });
}) : (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
o["default"] = v;
});
var __importStar = (this && this.__importStar) || function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k in mod) if (Object.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);
__setModuleDefault(result, mod);
return result;
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.getOctokitOptions = exports.GitHub = exports.context = void 0;
const Context = __importStar(require("./context"));
const Utils = __importStar(require("./internal/utils"));
// octokit + plugins
const core_1 = require("@octokit/core");
const plugin_rest_endpoint_methods_1 = require("@octokit/plugin-rest-endpoint-methods");
const plugin_paginate_rest_1 = require("@octokit/plugin-paginate-rest");
exports.context = new Context.Context();
const baseUrl = Utils.getApiBaseUrl();
const defaults = {
baseUrl,
request: {
agent: Utils.getProxyAgent(baseUrl)
}
};
exports.GitHub = core_1.Octokit.plugin(plugin_rest_endpoint_methods_1.restEndpointMethods, plugin_paginate_rest_1.paginateRest).defaults(defaults);
/**
* Convenience function to correctly format Octokit Options to pass into the constructor.
*
* @param token the repo PAT or GITHUB_TOKEN
* @param options other options to set
*/
function getOctokitOptions(token, options) {
const opts = Object.assign({}, options || {}); // Shallow clone - don't mutate the object provided by the caller
// Auth
const auth = Utils.getAuthString(token, opts);
if (auth) {
opts.auth = auth;
}
return opts;
}
exports.getOctokitOptions = getOctokitOptions;
//# sourceMappingURL=utils.js.map


@@ -1 +0,0 @@
{"version":3,"file":"utils.js","sourceRoot":"","sources":["../src/utils.ts"],"names":[],"mappings":";;;;;;;;;;;;;;;;;;;;;;AAAA,mDAAoC;AACpC,wDAAyC;AAEzC,oBAAoB;AACpB,wCAAqC;AAErC,wFAAyE;AACzE,wEAA0D;AAE7C,QAAA,OAAO,GAAG,IAAI,OAAO,CAAC,OAAO,EAAE,CAAA;AAE5C,MAAM,OAAO,GAAG,KAAK,CAAC,aAAa,EAAE,CAAA;AACrC,MAAM,QAAQ,GAAG;IACf,OAAO;IACP,OAAO,EAAE;QACP,KAAK,EAAE,KAAK,CAAC,aAAa,CAAC,OAAO,CAAC;KACpC;CACF,CAAA;AAEY,QAAA,MAAM,GAAG,cAAO,CAAC,MAAM,CAClC,kDAAmB,EACnB,mCAAY,CACb,CAAC,QAAQ,CAAC,QAAQ,CAAC,CAAA;AAEpB;;;;;GAKG;AACH,SAAgB,iBAAiB,CAC/B,KAAa,EACb,OAAwB;IAExB,MAAM,IAAI,GAAG,MAAM,CAAC,MAAM,CAAC,EAAE,EAAE,OAAO,IAAI,EAAE,CAAC,CAAA,CAAC,iEAAiE;IAE/G,OAAO;IACP,MAAM,IAAI,GAAG,KAAK,CAAC,aAAa,CAAC,KAAK,EAAE,IAAI,CAAC,CAAA;IAC7C,IAAI,IAAI,EAAE;QACR,IAAI,CAAC,IAAI,GAAG,IAAI,CAAA;KACjB;IAED,OAAO,IAAI,CAAA;AACb,CAAC;AAbD,8CAaC"}


@@ -1,21 +0,0 @@
Actions Http Client for Node.js
Copyright (c) GitHub, Inc.
All rights reserved.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
associated documentation files (the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -1,26 +0,0 @@
## Releases
## 1.0.10
Contains a bug fix for when a proxy is defined without a user and password. See [PR here](https://github.com/actions/http-client/pull/42)
## 1.0.9
Throw HttpClientError instead of a generic Error from the \<verb>Json() helper methods when the server responds with a non-successful status code.
## 1.0.8
Fixed security issue where a redirect (e.g. 302) to another domain would pass headers. The fix was to strip the authorization header if the hostname was different. More [details in PR #27](https://github.com/actions/http-client/pull/27)
## 1.0.7
Update NPM dependencies and add 429 to the list of HttpCodes
## 1.0.6
Automatically sends Content-Type and Accept application/json headers for \<verb>Json() helper methods if not set in the client or parameters.
## 1.0.5
Adds \<verb>Json() helper methods for json over http scenarios.
## 1.0.4
Started to add \<verb>Json() helper methods. Do not use this release for that. Use >= 1.0.5 since there was an issue with types.
## 1.0.1 to 1.0.3
Adds proxy support.

Binary file not shown.



@@ -1,23 +0,0 @@
import ifm = require('./interfaces');
export declare class BasicCredentialHandler implements ifm.IRequestHandler {
username: string;
password: string;
constructor(username: string, password: string);
prepareRequest(options: any): void;
canHandleAuthentication(response: ifm.IHttpClientResponse): boolean;
handleAuthentication(httpClient: ifm.IHttpClient, requestInfo: ifm.IRequestInfo, objs: any): Promise<ifm.IHttpClientResponse>;
}
export declare class BearerCredentialHandler implements ifm.IRequestHandler {
token: string;
constructor(token: string);
prepareRequest(options: any): void;
canHandleAuthentication(response: ifm.IHttpClientResponse): boolean;
handleAuthentication(httpClient: ifm.IHttpClient, requestInfo: ifm.IRequestInfo, objs: any): Promise<ifm.IHttpClientResponse>;
}
export declare class PersonalAccessTokenCredentialHandler implements ifm.IRequestHandler {
token: string;
constructor(token: string);
prepareRequest(options: any): void;
canHandleAuthentication(response: ifm.IHttpClientResponse): boolean;
handleAuthentication(httpClient: ifm.IHttpClient, requestInfo: ifm.IRequestInfo, objs: any): Promise<ifm.IHttpClientResponse>;
}


@@ -1,58 +0,0 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
class BasicCredentialHandler {
constructor(username, password) {
this.username = username;
this.password = password;
}
prepareRequest(options) {
options.headers['Authorization'] =
'Basic ' +
Buffer.from(this.username + ':' + this.password).toString('base64');
}
// This handler cannot handle 401
canHandleAuthentication(response) {
return false;
}
handleAuthentication(httpClient, requestInfo, objs) {
return null;
}
}
exports.BasicCredentialHandler = BasicCredentialHandler;
class BearerCredentialHandler {
constructor(token) {
this.token = token;
}
// currently implements pre-authorization
// TODO: support preAuth = false where it hooks on 401
prepareRequest(options) {
options.headers['Authorization'] = 'Bearer ' + this.token;
}
// This handler cannot handle 401
canHandleAuthentication(response) {
return false;
}
handleAuthentication(httpClient, requestInfo, objs) {
return null;
}
}
exports.BearerCredentialHandler = BearerCredentialHandler;
class PersonalAccessTokenCredentialHandler {
constructor(token) {
this.token = token;
}
// currently implements pre-authorization
// TODO: support preAuth = false where it hooks on 401
prepareRequest(options) {
options.headers['Authorization'] =
'Basic ' + Buffer.from('PAT:' + this.token).toString('base64');
}
// This handler cannot handle 401
canHandleAuthentication(response) {
return false;
}
handleAuthentication(httpClient, requestInfo, objs) {
return null;
}
}
exports.PersonalAccessTokenCredentialHandler = PersonalAccessTokenCredentialHandler;
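Each of the three deleted handlers only differs in the `Authorization` header it produces: Basic auth base64-encodes `username:password`, Bearer auth sends the token verbatim, and a personal access token is sent as Basic auth with the fixed `PAT` username. The header construction, extracted into self-contained helpers (the function names are ours, for illustration):

```javascript
// Sketches of the Authorization header values produced by the three
// deleted credential handlers.
function basicHeader(username, password) {
  return 'Basic ' + Buffer.from(username + ':' + password).toString('base64');
}

function bearerHeader(token) {
  return 'Bearer ' + token;
}

function patHeader(token) {
  // PATs are sent as Basic auth with the fixed "PAT" username
  return 'Basic ' + Buffer.from('PAT:' + token).toString('base64');
}

console.log(basicHeader('user', 'pass')); // → Basic dXNlcjpwYXNz
```

Note that all three handlers pre-authorize every request (`canHandleAuthentication` returns `false`), so none of them reacts to a 401 challenge.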

Some files were not shown because too many files have changed in this diff.