5 Commits

Author SHA1 Message Date
silverwind
9933ea0d92 feat: add configurable bind_workdir option with workspace cleanup for DinD setups (#810)
## Summary

Adds a `container.bind_workdir` config option that exposes the nektos/act `BindWorkdir` setting. When enabled, workspaces are bind-mounted from the host filesystem instead of Docker volumes, which is required for DinD setups where jobs use `docker compose` with bind mounts (e.g. `.:/app`).

Each job gets an isolated workspace at `/workspace/<task_id>/<owner>/<repo>` to prevent concurrent jobs from the same repo interfering with each other. The task directory is cleaned up after job execution.
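
A minimal sketch (not the runner's actual code) of the path handling this option enables, following the layout described above; the helper names `buildWorkdir` and `cleanupTaskDir` are illustrative:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// buildWorkdir is an illustrative helper: with bind_workdir enabled, the task ID
// is appended to the workdir parent so concurrent jobs from the same repo get
// isolated directories, e.g. /workspace/42/owner/repo.
func buildWorkdir(workdirParent string, bindWorkdir bool, taskID int64, repository string) string {
	if bindWorkdir {
		workdirParent = fmt.Sprintf("%s/%d", workdirParent, taskID)
	}
	return filepath.FromSlash(fmt.Sprintf("/%s/%s", workdirParent, repository))
}

// cleanupTaskDir removes the whole task-specific directory after the job,
// e.g. /workspace/42, mirroring the cleanup described above.
func cleanupTaskDir(workdirParent string, taskID int64) {
	taskDir := filepath.FromSlash(fmt.Sprintf("/%s/%d", workdirParent, taskID))
	if err := os.RemoveAll(taskDir); err != nil {
		log.Printf("failed to clean up workspace %s: %v", taskDir, err)
	}
}

func main() {
	fmt.Println(buildWorkdir("workspace", true, 42, "owner/repo")) // /workspace/42/owner/repo
	cleanupTaskDir("workspace", 42)
}
```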

### Configuration

```yaml
container:
  bind_workdir: true
```

When using this with DinD, also mount the workspace parent into the runner container and add it to `valid_volumes`:
```yaml
container:
  valid_volumes:
    - /workspace/**
```

*This PR was authored by Claude (AI assistant)*

Reviewed-on: https://gitea.com/gitea/act_runner/pulls/810
Reviewed-by: ChristopherHX <38043+christopherhx@noreply.gitea.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
2026-03-03 10:15:06 +00:00
RoboMagus
5dd5436169 Semver tags for Docker images (#720)
The main Gitea docker image is already distributed with proper semver tags, so users can pin to e.g. the minor version and still pull in patch releases. This has been lacking for the act runner images.

This PR expands the docker image tag versioning strategy such that when `v0.2.13` is released the following image tags are produced:

Basic:
 - `0`
 - `0.2`
 - `0.2.13`
 - `latest`

DinD:
 - `0-dind`
 - `0.2-dind`
 - `0.2.13-dind`
 - `latest-dind`

DinD-Rootless:
 - `0-dind-rootless`
 - `0.2-dind-rootless`
 - `0.2.13-dind-rootless`
 - `latest-dind-rootless`

To verify that `docker/metadata-action` produces the expected results in a Gitea workflow environment, I executed a release workflow. Results can be seen in [this run](https://gitea.com/RoboMagus/gitea_act_runner/actions/runs/14). (Note that my fork is named `gitea_act_runner` rather than `act_runner`, which is reflected in the docker tags produced in this test run.)
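
For illustration, a small Go sketch of how the tag set above can be derived from a release ref like `v0.2.13`; it mirrors the `type=semver` patterns and the `onlatest` suffix flavor used with `docker/metadata-action`, but is not the action's implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// semverTags derives the image tags produced for a release ref like "v0.2.13",
// optionally with a variant suffix such as "-dind" or "-dind-rootless".
// The input is assumed to be a full "vMAJOR.MINOR.PATCH" semver tag.
func semverTags(ref, suffix string) []string {
	version := strings.TrimPrefix(ref, "v")  // "0.2.13"
	parts := strings.SplitN(version, ".", 3) // ["0", "2", "13"]
	tags := []string{
		parts[0],                  // 0
		parts[0] + "." + parts[1], // 0.2
		version,                   // 0.2.13
		"latest",                  // suffix applied to latest as well (onlatest=true)
	}
	for i, t := range tags {
		tags[i] = t + suffix
	}
	return tags
}

func main() {
	fmt.Println(semverTags("v0.2.13", ""))               // [0 0.2 0.2.13 latest]
	fmt.Println(semverTags("v0.2.13", "-dind"))          // [0-dind 0.2-dind 0.2.13-dind latest-dind]
	fmt.Println(semverTags("v0.2.13", "-dind-rootless")) // [0-dind-rootless ... latest-dind-rootless]
}
```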

---------

Co-authored-by: RoboMagus <68224306+RoboMagus@users.noreply.github.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@noreply.gitea.com>
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/720
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: RoboMagus <robomagus@noreply.gitea.com>
Co-committed-by: RoboMagus <robomagus@noreply.gitea.com>
2026-02-25 19:09:53 +00:00
silverwind
658101d9cb chore(lint): add golangci-lint v2 and fix all lint issues (#803)
## Summary
- Replace old `.golangci.yml` (v1 format) with v2 format, aligned with gitea's lint config
- Add `lint-go`, `lint-go-fix`, and `lint` Makefile targets using golangci-lint v2.10.1
- Replace `make vet` with `make lint` in CI workflow (lint includes vet)
- Fix all 35 lint issues: modernize (maps.Copy, range over int, any), perfsprint (errors.New), unparam (remove unused parameters), revive (var naming), staticcheck, forbidigo exclusion for cmd/ (see the sketch after this list)
- Make `security-check` non-fatal (apply https://github.com/go-gitea/gitea/pull/36681)
- Remove dead gocritic exclusion rules (commentFormatting, exitAfterDefer)
- Remove dead linter exclusions and disabled checks (singleCaseSwitch, ST1003, QF1001, QF1006, QF1008, testifylint go-require/require-error, test file exclusions for dupl/errcheck/staticcheck/unparam)
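
A hedged before/after sketch of the fix patterns named in the list above (modernize's `maps.Copy` and range-over-int rewrites, perfsprint's `errors.New`); the snippets are illustrative, not the exact hunks in this PR:

```go
package main

import (
	"errors"
	"fmt"
	"maps"
)

func before(src map[string]string, n int) error {
	dst := map[string]string{}
	for k, v := range src { // flagged by modernize: use maps.Copy
		dst[k] = v
	}
	for i := 0; i < n; i++ { // flagged by modernize: range over int
		_ = i
	}
	return fmt.Errorf("something went wrong") // flagged by perfsprint: no format verbs, use errors.New
}

func after(src map[string]string, n int) error {
	dst := map[string]string{}
	maps.Copy(dst, src)
	for i := range n {
		_ = i
	}
	return errors.New("something went wrong")
}

func main() {
	_ = before(map[string]string{"a": "b"}, 3)
	_ = after(map[string]string{"a": "b"}, 3)
}
```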

## Test plan
- [x] `golangci-lint run` passes
- [x] `go build ./...` passes
- [x] `go test ./...` passes

---------

Co-authored-by: ChristopherHX <christopher.homberger@web.de>
Co-authored-by: Christopher Homberger <christopher.homberger@web.de>
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/803
Reviewed-by: ChristopherHX <christopherhx@noreply.gitea.com>
2026-02-22 17:35:08 +00:00
ChristopherHX
f0f9f0c8ab fix: composite action log result reported as step result (#801)
Act logs an array of `stepID` values to signal that an entry is a partial step result within a composite action. This has been the case for years, and act_runner never handled it correctly.

Ref: <43e6958fa3/pkg/runner/logger.go (L142)>
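
The check introduced by this fix looks roughly like the following (a trimmed sketch of the `isJobStepEntry` helper from the reporter diff below): a job-level entry carries at most one `stepID`, while composite action steps carry a `stepID` slice with more than one element and must not be reported as the job step's result.

```go
package main

import (
	"fmt"

	log "github.com/sirupsen/logrus"
)

// isJobStepEntry reports whether a log entry belongs to a job-level step.
// Composite action steps are logged with a stepID slice of length > 1 and
// must not be reported as the step result.
func isJobStepEntry(entry *log.Entry) bool {
	if v, ok := entry.Data["stepID"]; ok {
		if ids, ok := v.([]string); ok && len(ids) > 1 {
			return false
		}
	}
	return true
}

func main() {
	jobStep := &log.Entry{Data: log.Fields{"stepID": []string{"0"}}}
	composite := &log.Entry{Data: log.Fields{"stepID": []string{"0", "0"}}}
	fmt.Println(isJobStepEntry(jobStep))   // true: report as step result
	fmt.Println(isJobStepEntry(composite)) // false: partial composite result, skip
}
```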

Reviewed-on: https://gitea.com/gitea/act_runner/pulls/801
Reviewed-by: silverwind <silverwind@noreply.gitea.com>
2026-02-22 14:56:09 +00:00
silverwind
ae6e5dfcf7 fix: race condition in reporter between RunDaemon and Close (#796)
## Summary

- Fix the data race on `r.closed` between `RunDaemon()` and `Close()` by protecting it with the existing `stateMu`, since `closed` is part of the reporter state: `RunDaemon()` reads it under `stateMu.RLock()` and `Close()` sets it inside the existing `stateMu.Lock()` block
- `ReportState` now takes a `reportResult` parameter; `RunDaemon()` passes `false` so it never reports the job result even if it is already set, and from now on only `Close()` reports the result
- `Close()` waits for `RunDaemon()` to acknowledge shutdown via the closed `daemon` channel before reporting the final logs and the state with its result; unless something goes seriously wrong it does not hit the fallback timeout (see the sketch after this list)
- Add `TestReporter_EphemeralRunnerDeletion` which reproduces the exact scenario from #793: RunDaemon's `ReportState` racing with `Close`, causing the ephemeral runner to be deleted before final logs are sent
- Add `TestReporter_RunDaemonClose_Race` which exercises `RunDaemon()` and `Close()` concurrently to verify no data race on `r.closed` under `go test -race`
- Enable `-race` flag in `make test` so CI catches data races going forward
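
A simplified sketch of that shutdown handshake, with the reporter reduced to its synchronization points; this is illustrative and omits the actual log/state reporting:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type reporter struct {
	stateMu sync.RWMutex
	closed  bool
	daemon  chan struct{} // closed by RunDaemon to acknowledge shutdown
}

// RunDaemon periodically reports logs/state; once it observes closed under
// stateMu it closes the daemon channel and stops rescheduling itself.
func (r *reporter) RunDaemon() {
	r.stateMu.RLock()
	closed := r.closed
	r.stateMu.RUnlock()
	if closed {
		close(r.daemon) // acknowledge close
		return
	}
	// ... ReportLog(false) / ReportState(false) without the job result ...
	time.AfterFunc(time.Second, r.RunDaemon)
}

// Close marks the reporter closed under stateMu, waits for RunDaemon's
// acknowledgement (with a fallback timeout), then sends the final logs
// and the state including the job result.
func (r *reporter) Close() {
	r.stateMu.Lock()
	r.closed = true
	r.stateMu.Unlock()

	select {
	case <-r.daemon:
	case <-time.After(60 * time.Second):
		fmt.Println("no response from RunDaemon, continuing best effort")
	}
	// ... ReportLog(true) and ReportState(true) with the final result ...
}

func main() {
	r := &reporter{daemon: make(chan struct{})}
	go r.RunDaemon()
	r.Close()
}
```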

Based on #794, with fixes for the remaining unprotected `r.closed` reads that the race detector catches.

Fixes #793

---------

Co-authored-by: Christopher Homberger <christopher.homberger@web.de>
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
Co-authored-by: rmawatson <rmawatson@hotmail.com>
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/796
Reviewed-by: ChristopherHX <christopherhx@noreply.gitea.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
2026-02-22 14:52:49 +00:00
16 changed files with 412 additions and 265 deletions

View File

@@ -39,6 +39,15 @@ jobs:
GPG_FINGERPRINT: ${{ steps.import_gpg.outputs.fingerprint }}
release-image:
runs-on: ubuntu-latest
strategy:
matrix:
variant:
- target: basic
tag_suffix: ""
- target: dind
tag_suffix: "-dind"
- target: dind-rootless
tag_suffix: "-dind-rootless"
container:
image: catthehacker/ubuntu:act-latest
env:
@@ -62,50 +71,33 @@ jobs:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Get Meta
id: meta
- name: Repo Meta
id: repo_meta
run: |
echo REPO_NAME=$(echo ${GITHUB_REPOSITORY} | awk -F"/" '{print $2}') >> $GITHUB_OUTPUT
echo REPO_VERSION=${GITHUB_REF_NAME#v} >> $GITHUB_OUTPUT
- name: "Docker meta"
id: docker_meta
uses: https://github.com/docker/metadata-action@v5
with:
images: |
${{ env.DOCKER_ORG }}/${{ steps.repo_meta.outputs.REPO_NAME }}
tags: |
type=semver,pattern={{major}}.{{minor}}.{{patch}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
flavor: |
latest=true
suffix=${{ matrix.variant.tag_suffix }},onlatest=true
- name: Build and push
uses: docker/build-push-action@v6
with:
context: .
file: ./Dockerfile
target: basic
target: ${{ matrix.variant.target }}
platforms: |
linux/amd64
linux/arm64
push: true
tags: |
${{ env.DOCKER_ORG }}/${{ steps.meta.outputs.REPO_NAME }}:${{ steps.meta.outputs.REPO_VERSION }}
${{ env.DOCKER_ORG }}/${{ steps.meta.outputs.REPO_NAME }}:${{ env.DOCKER_LATEST }}
- name: Build and push dind
uses: docker/build-push-action@v6
with:
context: .
file: ./Dockerfile
target: dind
platforms: |
linux/amd64
linux/arm64
push: true
tags: |
${{ env.DOCKER_ORG }}/${{ steps.meta.outputs.REPO_NAME }}:${{ steps.meta.outputs.REPO_VERSION }}-dind
${{ env.DOCKER_ORG }}/${{ steps.meta.outputs.REPO_NAME }}:${{ env.DOCKER_LATEST }}-dind
- name: Build and push dind-rootless
uses: docker/build-push-action@v6
with:
context: .
file: ./Dockerfile
target: dind-rootless
platforms: |
linux/amd64
linux/arm64
push: true
tags: |
${{ env.DOCKER_ORG }}/${{ steps.meta.outputs.REPO_NAME }}:${{ steps.meta.outputs.REPO_VERSION }}-dind-rootless
${{ env.DOCKER_ORG }}/${{ steps.meta.outputs.REPO_NAME }}:${{ env.DOCKER_LATEST }}-dind-rootless
tags: ${{ steps.docker_meta.outputs.tags }}

View File

@@ -12,8 +12,8 @@ jobs:
- uses: actions/setup-go@v6
with:
go-version-file: 'go.mod'
- name: vet checks
run: make vet
- name: lint
run: make lint
- name: build
run: make build
- name: test

View File

@@ -1,165 +1,112 @@
version: "2"
output:
sort-order:
- file
linters:
default: none
enable:
- gosimple
- typecheck
- govet
- errcheck
- staticcheck
- unused
- dupl
#- gocyclo # The cyclomatic complexety of a lot of functions is too high, we should refactor those another time.
- gofmt
- misspell
- gocritic
- bidichk
- ineffassign
- revive
- gofumpt
- bodyclose
- depguard
- dupl
- errcheck
- forbidigo
- gocheckcompilerdirectives
- gocritic
- govet
- ineffassign
- mirror
- modernize
- nakedret
- unconvert
- wastedassign
- nilnil
- nolintlint
- stylecheck
enable-all: false
disable-all: true
fast: false
run:
go: 1.18
timeout: 10m
skip-dirs:
- node_modules
- public
- web_src
linters-settings:
stylecheck:
checks: ["all", "-ST1005", "-ST1003"]
nakedret:
max-func-lines: 0
gocritic:
disabled-checks:
- ifElseChain
- singleCaseSwitch # Every time this occurred in the code, there was no other way.
revive:
ignore-generated-header: false
severity: warning
confidence: 0.8
errorCode: 1
warningCode: 1
- perfsprint
- revive
- staticcheck
- testifylint
- unconvert
- unparam
- unused
- usestdlibvars
- usetesting
- wastedassign
settings:
depguard:
rules:
main:
deny:
- pkg: io/ioutil
desc: use os or io instead
- pkg: golang.org/x/exp
desc: it's experimental and unreliable
- pkg: github.com/pkg/errors
desc: use builtin errors package instead
nolintlint:
allow-unused: false
require-explanation: true
require-specific: true
gocritic:
enabled-checks:
- equalFold
disabled-checks:
- ifElseChain
revive:
severity: error
rules:
- name: blank-imports
- name: constant-logical-expr
- name: context-as-argument
- name: context-keys-type
- name: dot-imports
- name: empty-lines
- name: error-return
- name: error-strings
- name: exported
- name: identical-branches
- name: if-return
- name: increment-decrement
- name: modifies-value-receiver
- name: package-comments
- name: redefines-builtin-id
- name: superfluous-else
- name: time-naming
- name: unexported-return
- name: var-declaration
- name: var-naming
staticcheck:
checks:
- all
- -ST1005
usetesting:
os-temp-dir: true
perfsprint:
concat-loop: false
govet:
enable:
- nilness
- unusedwrite
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
rules:
- name: blank-imports
- name: context-as-argument
- name: context-keys-type
- name: dot-imports
- name: error-return
- name: error-strings
- name: error-naming
- name: exported
- name: if-return
- name: increment-decrement
- name: var-naming
- name: var-declaration
- name: package-comments
- name: range
- name: receiver-naming
- name: time-naming
- name: unexported-return
- name: indent-error-flow
- name: errorf
- name: duplicated-imports
- name: modifies-value-receiver
gofumpt:
extra-rules: true
lang-version: "1.18"
depguard:
# TODO: use depguard to replace import checks in gitea-vet
list-type: denylist
# Check the list against standard lib.
include-go-root: true
packages-with-error-message:
- github.com/unknwon/com: "use gitea's util and replacements"
- linters:
- forbidigo
path: cmd
issues:
exclude-rules:
# Exclude some linters from running on tests files.
- path: _test\.go
linters:
- gocyclo
- errcheck
- dupl
- gosec
- unparam
- staticcheck
- path: models/migrations/v
linters:
- gocyclo
- errcheck
- dupl
- gosec
- linters:
- dupl
text: "webhook"
- linters:
- gocritic
text: "`ID' should not be capitalized"
- path: modules/templates/helper.go
linters:
- gocritic
- linters:
- unused
text: "swagger"
- path: contrib/pr/checkout.go
linters:
- errcheck
- path: models/issue.go
linters:
- errcheck
- path: models/migrations/
linters:
- errcheck
- path: modules/log/
linters:
- errcheck
- path: routers/api/v1/repo/issue_subscription.go
linters:
- dupl
- path: routers/repo/view.go
linters:
- dupl
- path: models/migrations/
linters:
- unused
- linters:
- staticcheck
text: "argument x is overwritten before first use"
- path: modules/httplib/httplib.go
linters:
- staticcheck
# Enabling this would require refactoring the methods and how they are called.
- path: models/issue_comment_list.go
linters:
- dupl
- linters:
- misspell
text: '`Unknwon` is a misspelling of `Unknown`'
- path: models/update.go
linters:
- unused
- path: cmd/dump.go
linters:
- dupl
- text: "commentFormatting: put a space between `//` and comment text"
linters:
- gocritic
- text: "exitAfterDefer:"
linters:
- gocritic
- path: modules/graceful/manager_windows.go
linters:
- staticcheck
text: "svc.IsAnInteractiveSession is deprecated: Use IsWindowsService instead."
- path: models/user/openid.go
linters:
- golint
max-issues-per-linter: 0
max-same-issues: 0
formatters:
enable:
- gofmt
- gofumpt
settings:
gofumpt:
extra-rules: true
exclusions:
generated: lax
run:
timeout: 10m

View File

@@ -20,6 +20,7 @@ DOCKER_TAG ?= nightly
DOCKER_REF := $(DOCKER_IMAGE):$(DOCKER_TAG)
DOCKER_ROOTLESS_REF := $(DOCKER_IMAGE):$(DOCKER_TAG)-dind-rootless
GOLANGCI_LINT_PACKAGE ?= github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.10.1
GOVULNCHECK_PACKAGE ?= golang.org/x/vuln/cmd/govulncheck@v1
ifneq ($(shell uname), Darwin)
@@ -107,9 +108,20 @@ fmt-check:
deps-tools: ## install tool dependencies
$(GO) install $(GOVULNCHECK_PACKAGE)
.PHONY: lint
lint: lint-go vet
.PHONY: lint-go
lint-go: ## lint go files
$(GO) run $(GOLANGCI_LINT_PACKAGE) run
.PHONY: lint-go-fix
lint-go-fix: ## lint go files and fix issues
$(GO) run $(GOLANGCI_LINT_PACKAGE) run --fix
.PHONY: security-check
security-check: deps-tools
GOEXPERIMENT= $(GO) run $(GOVULNCHECK_PACKAGE) -show color ./...
GOEXPERIMENT= $(GO) run $(GOVULNCHECK_PACKAGE) -show color ./... || true
.PHONY: tidy
tidy:
@@ -125,7 +137,7 @@ tidy-check: tidy
fi
test: fmt-check security-check
@$(GO) test -v -cover -coverprofile coverage.txt ./... && echo "\n==>\033[32m Ok\033[m\n" || exit 1
@$(GO) test -race -v -cover -coverprofile coverage.txt ./... && echo "\n==>\033[32m Ok\033[m\n" || exit 1
.PHONY: vet
vet:

View File

@@ -4,7 +4,6 @@
package cmd
import (
"context"
"fmt"
"os"
"os/signal"
@@ -22,7 +21,7 @@ type cacheServerArgs struct {
Port uint16
}
func runCacheServer(ctx context.Context, configFile *string, cacheArgs *cacheServerArgs) func(cmd *cobra.Command, args []string) error {
func runCacheServer(configFile *string, cacheArgs *cacheServerArgs) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
cfg, err := config.LoadDefault(*configFile)
if err != nil {

View File

@@ -72,7 +72,7 @@ func Execute(ctx context.Context) {
Use: "cache-server",
Short: "Start a cache server for the cache action",
Args: cobra.MaximumNArgs(0),
RunE: runCacheServer(ctx, &configFile, &cacheArgs),
RunE: runCacheServer(&configFile, &cacheArgs),
}
cacheCmd.Flags().StringVarP(&cacheArgs.Dir, "dir", "d", "", "Cache directory")
cacheCmd.Flags().StringVarP(&cacheArgs.Host, "host", "s", "", "Host of the cache server")

View File

@@ -262,5 +262,5 @@ func getDockerSocketPath(configDockerHost string) (string, error) {
}
}
return "", fmt.Errorf("daemon Docker Engine socket not found and docker_host config was invalid")
return "", errors.New("daemon Docker Engine socket not found and docker_host config was invalid")
}

View File

@@ -6,7 +6,9 @@ package cmd
import (
"context"
"errors"
"fmt"
"maps"
"os"
"path/filepath"
"strconv"
@@ -77,7 +79,7 @@ func (i *executeArgs) LoadSecrets() map[string]string {
for _, secretPair := range i.secrets {
secretPairParts := strings.SplitN(secretPair, "=", 2)
secretPairParts[0] = strings.ToUpper(secretPairParts[0])
if strings.ToUpper(s[secretPairParts[0]]) == secretPairParts[0] {
if strings.EqualFold(s[secretPairParts[0]], secretPairParts[0]) {
log.Errorf("Secret %s is already defined (secrets are case insensitive)", secretPairParts[0])
}
if len(secretPairParts) == 2 {
@@ -104,9 +106,7 @@ func readEnvs(path string, envs map[string]string) bool {
if err != nil {
log.Fatalf("Error loading from %s: %v", path, err)
}
for k, v := range env {
envs[k] = v
}
maps.Copy(envs, env)
return true
}
return false
@@ -166,7 +166,7 @@ func (i *executeArgs) resolve(path string) string {
return path
}
func printList(plan *model.Plan) error {
func printList(plan *model.Plan) {
type lineInfoDef struct {
jobID string
jobName string
@@ -261,10 +261,9 @@ func printList(plan *model.Plan) error {
if duplicateJobIDs {
fmt.Print("\nDetected multiple jobs with the same job name, use `-W` to specify the path to the specific workflow.\n")
}
return nil
}
func runExecList(ctx context.Context, planner model.WorkflowPlanner, execArgs *executeArgs) error {
func runExecList(planner model.WorkflowPlanner, execArgs *executeArgs) error {
// plan with filtered jobs - to be used for filtering only
var filterPlan *model.Plan
@@ -306,7 +305,7 @@ func runExecList(ctx context.Context, planner model.WorkflowPlanner, execArgs *e
}
}
_ = printList(filterPlan)
printList(filterPlan)
return nil
}
@@ -319,7 +318,7 @@ func runExec(ctx context.Context, execArgs *executeArgs) func(cmd *cobra.Command
}
if execArgs.runList {
return runExecList(ctx, planner, execArgs)
return runExecList(planner, execArgs)
}
// plan with triggered jobs
@@ -378,7 +377,7 @@ func runExec(ctx context.Context, execArgs *executeArgs) func(cmd *cobra.Command
if len(execArgs.artifactServerAddr) == 0 {
ip := common.GetOutboundIP()
if ip == nil {
return fmt.Errorf("unable to determine outbound IP address")
return errors.New("unable to determine outbound IP address")
}
execArgs.artifactServerAddr = ip.String()
}
@@ -422,7 +421,7 @@ func runExec(ctx context.Context, execArgs *executeArgs) func(cmd *cobra.Command
NoSkipCheckout: execArgs.noSkipCheckout,
// PresetGitHubContext: preset,
// EventJSON: string(eventJSON),
ContainerNamePrefix: fmt.Sprintf("GITEA-ACTIONS-TASK-%s", eventName),
ContainerNamePrefix: "GITEA-ACTIONS-TASK-" + eventName,
ContainerMaxLifetime: maxLifetime,
ContainerNetworkMode: container.NetworkMode(execArgs.network),
DefaultActionInstance: execArgs.defaultActionsURL,

View File

@@ -6,6 +6,7 @@ package cmd
import (
"bufio"
"context"
"errors"
"fmt"
"os"
"os/signal"
@@ -107,10 +108,10 @@ type registerInputs struct {
func (r *registerInputs) validate() error {
if r.InstanceAddr == "" {
return fmt.Errorf("instance address is empty")
return errors.New("instance address is empty")
}
if r.Token == "" {
return fmt.Errorf("token is empty")
return errors.New("token is empty")
}
if len(r.Labels) > 0 {
return validateLabels(r.Labels)

View File

@@ -102,7 +102,7 @@ func (p *Poller) Shutdown(ctx context.Context) error {
p.shutdownJobs()
// wait for running jobs to report their status to Gitea
_, _ = <-p.done
<-p.done
return ctx.Err()
}

View File

@@ -7,6 +7,8 @@ import (
"context"
"encoding/json"
"fmt"
"maps"
"os"
"path/filepath"
"strings"
"sync"
@@ -49,9 +51,7 @@ func NewRunner(cfg *config.Config, reg *config.Registration, cli client.Client)
}
}
envs := make(map[string]string, len(cfg.Runner.Envs))
for k, v := range cfg.Runner.Envs {
envs[k] = v
}
maps.Copy(envs, cfg.Runner.Envs)
if cfg.Cache.Enabled == nil || *cfg.Cache.Enabled {
if cfg.Cache.ExternalServer != "" {
envs["ACTIONS_CACHE_URL"] = cfg.Cache.ExternalServer
@@ -116,7 +116,7 @@ func (r *Runner) Run(ctx context.Context, task *runnerv1.Task) error {
// getDefaultActionsURL
// when DEFAULT_ACTIONS_URL == "https://github.com" and GithubMirror is not blank,
// it should be set to GithubMirror first.
func (r *Runner) getDefaultActionsURL(ctx context.Context, task *runnerv1.Task) string {
func (r *Runner) getDefaultActionsURL(task *runnerv1.Task) string {
giteaDefaultActionsURL := task.Context.Fields["gitea_default_actions_url"].GetStringValue()
if giteaDefaultActionsURL == "https://github.com" && r.cfg.Runner.GithubMirror != "" {
return r.cfg.Runner.GithubMirror
@@ -148,7 +148,7 @@ func (r *Runner) run(ctx context.Context, task *runnerv1.Task, reporter *report.
taskContext := task.Context.Fields
log.Infof("task %v repo is %v %v %v", task.Id, taskContext["repository"].GetStringValue(),
r.getDefaultActionsURL(ctx, task),
r.getDefaultActionsURL(task),
r.client.Address())
preset := &model.GithubContext{
@@ -174,8 +174,8 @@ func (r *Runner) run(ctx context.Context, task *runnerv1.Task, reporter *report.
preset.Token = t
}
if actionsIdTokenRequestUrl := taskContext["actions_id_token_request_url"].GetStringValue(); actionsIdTokenRequestUrl != "" {
r.envs["ACTIONS_ID_TOKEN_REQUEST_URL"] = actionsIdTokenRequestUrl
if actionsIDTokenRequestURL := taskContext["actions_id_token_request_url"].GetStringValue(); actionsIDTokenRequestURL != "" {
r.envs["ACTIONS_ID_TOKEN_REQUEST_URL"] = actionsIDTokenRequestURL
r.envs["ACTIONS_ID_TOKEN_REQUEST_TOKEN"] = taskContext["actions_id_token_request_token"].GetStringValue()
task.Secrets["ACTIONS_ID_TOKEN_REQUEST_TOKEN"] = r.envs["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]
}
@@ -197,11 +197,18 @@ func (r *Runner) run(ctx context.Context, task *runnerv1.Task, reporter *report.
maxLifetime = time.Until(deadline)
}
workdirParent := strings.TrimLeft(r.cfg.Container.WorkdirParent, "/")
if r.cfg.Container.BindWorkdir {
// Append the task ID to isolate concurrent jobs from the same repo.
workdirParent = fmt.Sprintf("%s/%d", workdirParent, task.Id)
}
workdir := filepath.FromSlash(fmt.Sprintf("/%s/%s", workdirParent, preset.Repository))
runnerConfig := &runner.Config{
// On Linux, Workdir will be like "/<parent_directory>/<owner>/<repo>"
// On Windows, Workdir will be like "\<parent_directory>\<owner>\<repo>"
Workdir: filepath.FromSlash(fmt.Sprintf("/%s/%s", strings.TrimLeft(r.cfg.Container.WorkdirParent, "/"), preset.Repository)),
BindWorkdir: false,
Workdir: workdir,
BindWorkdir: r.cfg.Container.BindWorkdir,
ActionCacheDir: filepath.FromSlash(r.cfg.Host.WorkdirParent),
ReuseContainers: false,
@@ -222,7 +229,7 @@ func (r *Runner) run(ctx context.Context, task *runnerv1.Task, reporter *report.
ContainerOptions: r.cfg.Container.Options,
ContainerDaemonSocket: r.cfg.Container.DockerHost,
Privileged: r.cfg.Container.Privileged,
DefaultActionInstance: r.getDefaultActionsURL(ctx, task),
DefaultActionInstance: r.getDefaultActionsURL(task),
PlatformPicker: r.labels.PickPlatform,
Vars: task.Vars,
ValidVolumes: r.cfg.Container.ValidVolumes,
@@ -246,6 +253,15 @@ func (r *Runner) run(ctx context.Context, task *runnerv1.Task, reporter *report.
execErr := executor(ctx)
reporter.SetOutputs(job.Outputs)
if r.cfg.Container.BindWorkdir {
// Remove the entire task-specific directory (e.g. /workspace/<task_id>).
taskDir := filepath.FromSlash("/" + workdirParent)
if err := os.RemoveAll(taskDir); err != nil {
log.Warnf("failed to clean up workspace %s: %v", taskDir, err)
}
}
return execErr
}

View File

@@ -307,7 +307,7 @@ func Test_yamlV4NodeRoundTrip(t *testing.T) {
t.Run("unmarshal and re-marshal workflow", func(t *testing.T) {
input := []byte("name: test\non: push\njobs:\n build:\n runs-on: ubuntu-latest\n steps:\n - run: echo hello\n")
var wf map[string]interface{}
var wf map[string]any
err := yaml.Unmarshal(input, &wf)
require.NoError(t, err)
assert.Equal(t, wf["name"], "test")
@@ -315,7 +315,7 @@ func Test_yamlV4NodeRoundTrip(t *testing.T) {
out, err := yaml.Marshal(wf)
require.NoError(t, err)
var wf2 map[string]interface{}
var wf2 map[string]any
err = yaml.Unmarshal(out, &wf2)
require.NoError(t, err)
assert.Equal(t, wf2["name"], "test")

View File

@@ -103,6 +103,12 @@ container:
require_docker: false
# Timeout to wait for the docker daemon to be reachable, if docker is required by require_docker or act_runner
docker_timeout: 0s
# Bind the workspace to the host filesystem instead of using Docker volumes.
# This is required for Docker-in-Docker (DinD) setups when jobs use docker compose
# with bind mounts (e.g., ".:/app"), as volume-based workspaces are not accessible
# from the DinD daemon's filesystem. When enabled, ensure the workspace parent
# directory is also mounted into the runner container and listed in valid_volumes.
bind_workdir: false
host:
# The parent directory of a job's working directory.

View File

@@ -5,6 +5,7 @@ package config
import (
"fmt"
"maps"
"os"
"path/filepath"
"time"
@@ -56,6 +57,7 @@ type Container struct {
ForceRebuild bool `yaml:"force_rebuild"` // Rebuild docker image(s) even if already present
RequireDocker bool `yaml:"require_docker"` // Always require a reachable docker daemon, even if not required by act_runner
DockerTimeout time.Duration `yaml:"docker_timeout"` // Timeout to wait for the docker daemon to be reachable, if docker is required by require_docker or act_runner
BindWorkdir bool `yaml:"bind_workdir"` // BindWorkdir binds the workspace to the host filesystem instead of using Docker volumes. Required for DinD when jobs use docker compose with bind mounts.
}
// Host represents the configuration for the host.
@@ -96,9 +98,7 @@ func LoadDefault(file string) (*Config, error) {
if cfg.Runner.Envs == nil {
cfg.Runner.Envs = map[string]string{}
}
for k, v := range envs {
cfg.Runner.Envs[k] = v
}
maps.Copy(cfg.Runner.Envs, envs)
}
}

View File

@@ -5,6 +5,7 @@ package report
import (
"context"
"errors"
"fmt"
"regexp"
"strings"
@@ -37,6 +38,7 @@ type Reporter struct {
state *runnerv1.TaskState
stateMu sync.RWMutex
outputs sync.Map
daemon chan struct{}
debugOutputEnabled bool
stopCommandEndToken string
@@ -63,6 +65,7 @@ func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.C
state: &runnerv1.TaskState{
Id: task.Id,
},
daemon: make(chan struct{}),
}
if task.Secrets["ACTIONS_STEP_DEBUG"] == "true" {
@@ -75,7 +78,7 @@ func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.C
func (r *Reporter) ResetSteps(l int) {
r.stateMu.Lock()
defer r.stateMu.Unlock()
for i := 0; i < l; i++ {
for i := range l {
r.state.Steps = append(r.state.Steps, &runnerv1.StepState{
Id: int64(i),
})
@@ -93,6 +96,18 @@ func appendIfNotNil[T any](s []*T, v *T) []*T {
return s
}
// isJobStepEntry is used to not report composite step results incorrectly as step result
// returns true if the logentry is on job level
// returns false for composite action step messages
func isJobStepEntry(entry *log.Entry) bool {
if v, ok := entry.Data["stepID"]; ok {
if v, ok := v.([]string); ok && len(v) > 1 {
return false
}
}
return true
}
func (r *Reporter) Fire(entry *log.Entry) error {
r.stateMu.Lock()
defer r.stateMu.Unlock()
@@ -109,6 +124,7 @@ func (r *Reporter) Fire(entry *log.Entry) error {
if stage != "Main" {
if v, ok := entry.Data["jobResult"]; ok {
if jobResult, ok := r.parseResult(v); ok {
// We need to ensure log upload before this upload
r.state.Result = jobResult
r.state.StoppedAt = timestamppb.New(timestamp)
for _, s := range r.state.Steps {
@@ -162,7 +178,7 @@ func (r *Reporter) Fire(entry *log.Entry) error {
} else if !r.duringSteps() {
r.logRows = appendIfNotNil(r.logRows, r.parseLogRow(entry))
}
if v, ok := entry.Data["stepResult"]; ok {
if v, ok := entry.Data["stepResult"]; ok && isJobStepEntry(entry) {
if stepResult, ok := r.parseResult(v); ok {
if step.LogLength == 0 {
step.LogIndex = int64(r.logOffset + len(r.logRows))
@@ -176,27 +192,29 @@ func (r *Reporter) Fire(entry *log.Entry) error {
}
func (r *Reporter) RunDaemon() {
if r.closed {
return
}
if r.ctx.Err() != nil {
r.stateMu.RLock()
closed := r.closed
r.stateMu.RUnlock()
if closed || r.ctx.Err() != nil {
// Acknowledge close
close(r.daemon)
return
}
_ = r.ReportLog(false)
_ = r.ReportState()
_ = r.ReportState(false)
time.AfterFunc(time.Second, r.RunDaemon)
}
func (r *Reporter) Logf(format string, a ...interface{}) {
func (r *Reporter) Logf(format string, a ...any) {
r.stateMu.Lock()
defer r.stateMu.Unlock()
r.logf(format, a...)
}
func (r *Reporter) logf(format string, a ...interface{}) {
func (r *Reporter) logf(format string, a ...any) {
if !r.duringSteps() {
r.logRows = append(r.logRows, &runnerv1.LogRow{
Time: timestamppb.Now(),
@@ -226,9 +244,8 @@ func (r *Reporter) SetOutputs(outputs map[string]string) {
}
func (r *Reporter) Close(lastWords string) error {
r.closed = true
r.stateMu.Lock()
r.closed = true
if r.state.Result == runnerv1.Result_RESULT_UNSPECIFIED {
if lastWords == "" {
lastWords = "Early termination"
@@ -251,13 +268,23 @@ func (r *Reporter) Close(lastWords string) error {
})
}
r.stateMu.Unlock()
// Wait for Acknowledge
select {
case <-r.daemon:
case <-time.After(60 * time.Second):
close(r.daemon)
log.Error("No Response from RunDaemon for 60s, continue best effort")
}
return retry.Do(func() error {
if err := r.ReportLog(true); err != nil {
return err
}
return r.ReportState()
}, retry.Context(r.ctx))
// Report the job outcome even when all log upload retry attempts have been exhausted
return errors.Join(
retry.Do(func() error {
return r.ReportLog(true)
}, retry.Context(r.ctx)),
retry.Do(func() error {
return r.ReportState(true)
}, retry.Context(r.ctx)),
)
}
func (r *Reporter) ReportLog(noMore bool) error {
@@ -280,22 +307,25 @@ func (r *Reporter) ReportLog(noMore bool) error {
ack := int(resp.Msg.AckIndex)
if ack < r.logOffset {
return fmt.Errorf("submitted logs are lost")
return errors.New("submitted logs are lost")
}
r.stateMu.Lock()
r.logRows = r.logRows[ack-r.logOffset:]
submitted := r.logOffset + len(rows)
r.logOffset = ack
r.stateMu.Unlock()
if noMore && ack < r.logOffset+len(rows) {
return fmt.Errorf("not all logs are submitted")
if noMore && ack < submitted {
return errors.New("not all logs are submitted")
}
return nil
}
func (r *Reporter) ReportState() error {
// ReportState only reports the job result if reportResult is true
// RunDaemon never reports results even if result is set
func (r *Reporter) ReportState(reportResult bool) error {
r.clientM.Lock()
defer r.clientM.Unlock()
@@ -303,8 +333,13 @@ func (r *Reporter) ReportState() error {
state := proto.Clone(r.state).(*runnerv1.TaskState)
r.stateMu.RUnlock()
// Only report result from Close to reliable sent logs
if !reportResult {
state.Result = runnerv1.Result_RESULT_UNSPECIFIED
}
outputs := make(map[string]string)
r.outputs.Range(func(k, v interface{}) bool {
r.outputs.Range(func(k, v any) bool {
if val, ok := v.(string); ok {
outputs[k.(string)] = val
}
@@ -328,7 +363,7 @@ func (r *Reporter) ReportState() error {
}
var noSent []string
r.outputs.Range(func(k, v interface{}) bool {
r.outputs.Range(func(k, v any) bool {
if _, ok := v.(string); ok {
noSent = append(noSent, k.(string))
}
@@ -359,7 +394,7 @@ var stringToResult = map[string]runnerv1.Result{
"cancelled": runnerv1.Result_RESULT_CANCELLED,
}
func (r *Reporter) parseResult(result interface{}) (runnerv1.Result, bool) {
func (r *Reporter) parseResult(result any) (runnerv1.Result, bool) {
str := ""
if v, ok := result.(string); ok { // for jobResult
str = v
@@ -373,7 +408,7 @@ func (r *Reporter) parseResult(result interface{}) (runnerv1.Result, bool) {
var cmdRegex = regexp.MustCompile(`^::([^ :]+)( .*)?::(.*)$`)
func (r *Reporter) handleCommand(originalContent, command, parameters, value string) *string {
func (r *Reporter) handleCommand(originalContent, command, value string) *string {
if r.stopCommandEndToken != "" && command != r.stopCommandEndToken {
return &originalContent
}
@@ -419,7 +454,7 @@ func (r *Reporter) parseLogRow(entry *log.Entry) *runnerv1.LogRow {
matches := cmdRegex.FindStringSubmatch(content)
if matches != nil {
if output := r.handleCommand(content, matches[1], matches[2], matches[3]); output != nil {
if output := r.handleCommand(content, matches[1], matches[3]); output != nil {
content = *output
} else {
return nil

View File

@@ -5,8 +5,11 @@ package report
import (
"context"
"errors"
"strings"
"sync"
"testing"
"time"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
connect_go "connectrpc.com/connect"
@@ -15,6 +18,7 @@ import (
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/types/known/structpb"
"google.golang.org/protobuf/types/known/timestamppb"
"gitea.com/gitea/act_runner/internal/pkg/client/mocks"
)
@@ -169,29 +173,165 @@ func TestReporter_Fire(t *testing.T) {
return connect_go.NewResponse(&runnerv1.UpdateTaskResponse{}), nil
})
ctx, cancel := context.WithCancel(context.Background())
taskCtx, err := structpb.NewStruct(map[string]interface{}{})
taskCtx, err := structpb.NewStruct(map[string]any{})
require.NoError(t, err)
reporter := NewReporter(ctx, cancel, client, &runnerv1.Task{
Context: taskCtx,
})
reporter.RunDaemon()
defer func() {
assert.NoError(t, reporter.Close(""))
require.NoError(t, reporter.Close(""))
}()
reporter.ResetSteps(5)
dataStep0 := map[string]interface{}{
dataStep0 := map[string]any{
"stage": "Main",
"stepNumber": 0,
"raw_output": true,
}
assert.NoError(t, reporter.Fire(&log.Entry{Message: "regular log line", Data: dataStep0}))
assert.NoError(t, reporter.Fire(&log.Entry{Message: "::debug::debug log line", Data: dataStep0}))
assert.NoError(t, reporter.Fire(&log.Entry{Message: "regular log line", Data: dataStep0}))
assert.NoError(t, reporter.Fire(&log.Entry{Message: "::debug::debug log line", Data: dataStep0}))
assert.NoError(t, reporter.Fire(&log.Entry{Message: "::debug::debug log line", Data: dataStep0}))
assert.NoError(t, reporter.Fire(&log.Entry{Message: "regular log line", Data: dataStep0}))
require.NoError(t, reporter.Fire(&log.Entry{Message: "regular log line", Data: dataStep0}))
require.NoError(t, reporter.Fire(&log.Entry{Message: "::debug::debug log line", Data: dataStep0}))
require.NoError(t, reporter.Fire(&log.Entry{Message: "regular log line", Data: dataStep0}))
require.NoError(t, reporter.Fire(&log.Entry{Message: "::debug::debug log line", Data: dataStep0}))
require.NoError(t, reporter.Fire(&log.Entry{Message: "::debug::debug log line", Data: dataStep0}))
require.NoError(t, reporter.Fire(&log.Entry{Message: "regular log line", Data: dataStep0}))
require.NoError(t, reporter.Fire(&log.Entry{Message: "composite step result", Data: map[string]any{
"stage": "Main",
"stepID": []string{"0", "0"},
"stepNumber": 0,
"raw_output": true,
"stepResult": "failure",
}}))
assert.Equal(t, runnerv1.Result_RESULT_UNSPECIFIED, reporter.state.Steps[0].Result)
require.NoError(t, reporter.Fire(&log.Entry{Message: "step result", Data: map[string]any{
"stage": "Main",
"stepNumber": 0,
"raw_output": true,
"stepResult": "success",
}}))
assert.Equal(t, runnerv1.Result_RESULT_SUCCESS, reporter.state.Steps[0].Result)
assert.Equal(t, int64(3), reporter.state.Steps[0].LogLength)
assert.Equal(t, int64(5), reporter.state.Steps[0].LogLength)
})
}
// TestReporter_EphemeralRunnerDeletion reproduces the exact scenario from
// https://gitea.com/gitea/act_runner/issues/793:
//
// 1. RunDaemon calls ReportLog(false) — runner is still alive
// 2. Close() updates state to Result=FAILURE (between RunDaemon's ReportLog and ReportState)
// 3. RunDaemon's ReportState() would clone the completed state and send it,
// but the fix makes ReportState return early when closed, preventing this
// 4. Close's ReportLog(true) succeeds because the runner was not deleted
func TestReporter_EphemeralRunnerDeletion(t *testing.T) {
runnerDeleted := false
client := mocks.NewClient(t)
client.On("UpdateLog", mock.Anything, mock.Anything).Return(
func(_ context.Context, req *connect_go.Request[runnerv1.UpdateLogRequest]) (*connect_go.Response[runnerv1.UpdateLogResponse], error) {
if runnerDeleted {
return nil, errors.New("runner has been deleted")
}
return connect_go.NewResponse(&runnerv1.UpdateLogResponse{
AckIndex: req.Msg.Index + int64(len(req.Msg.Rows)),
}), nil
},
)
client.On("UpdateTask", mock.Anything, mock.Anything).Maybe().Return(
func(_ context.Context, req *connect_go.Request[runnerv1.UpdateTaskRequest]) (*connect_go.Response[runnerv1.UpdateTaskResponse], error) {
// Server deletes ephemeral runner when it receives a completed state
if req.Msg.State != nil && req.Msg.State.Result != runnerv1.Result_RESULT_UNSPECIFIED {
runnerDeleted = true
}
return connect_go.NewResponse(&runnerv1.UpdateTaskResponse{}), nil
},
)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
taskCtx, err := structpb.NewStruct(map[string]any{})
require.NoError(t, err)
reporter := NewReporter(ctx, cancel, client, &runnerv1.Task{Context: taskCtx})
reporter.ResetSteps(1)
// Fire a log entry to create pending data
require.NoError(t, reporter.Fire(&log.Entry{
Message: "build output",
Data: log.Fields{"stage": "Main", "stepNumber": 0, "raw_output": true},
}))
// Step 1: RunDaemon calls ReportLog(false) — runner is still alive
require.NoError(t, reporter.ReportLog(false))
// Step 2: Close() updates state — sets Result=FAILURE and marks steps cancelled.
// In the real race, this happens while RunDaemon is between ReportLog and ReportState.
reporter.stateMu.Lock()
reporter.closed = true
for _, v := range reporter.state.Steps {
if v.Result == runnerv1.Result_RESULT_UNSPECIFIED {
v.Result = runnerv1.Result_RESULT_CANCELLED
}
}
reporter.state.Result = runnerv1.Result_RESULT_FAILURE
reporter.logRows = append(reporter.logRows, &runnerv1.LogRow{
Time: timestamppb.Now(),
Content: "Early termination",
})
reporter.state.StoppedAt = timestamppb.Now()
reporter.stateMu.Unlock()
// Step 3: RunDaemon's ReportState() — with the fix, this returns early
// because closed=true, preventing the server from deleting the runner.
require.NoError(t, reporter.ReportState(false))
assert.False(t, runnerDeleted, "runner must not be deleted by RunDaemon's ReportState")
// Step 4: Close's final log upload succeeds because the runner is still alive.
// Flush pending rows first, then send the noMore signal (matching Close's retry behavior).
require.NoError(t, reporter.ReportLog(false))
// Acknowledge Close as done in daemon
close(reporter.daemon)
err = reporter.ReportLog(true)
require.NoError(t, err, "final log upload must not fail: runner should not be deleted before Close finishes sending logs")
err = reporter.ReportState(true)
require.NoError(t, err, "final state update should work: runner should not be deleted before Close finishes sending logs")
}
func TestReporter_RunDaemonClose_Race(t *testing.T) {
client := mocks.NewClient(t)
client.On("UpdateLog", mock.Anything, mock.Anything).Return(
func(_ context.Context, req *connect_go.Request[runnerv1.UpdateLogRequest]) (*connect_go.Response[runnerv1.UpdateLogResponse], error) {
return connect_go.NewResponse(&runnerv1.UpdateLogResponse{
AckIndex: req.Msg.Index + int64(len(req.Msg.Rows)),
}), nil
},
)
client.On("UpdateTask", mock.Anything, mock.Anything).Return(
func(_ context.Context, req *connect_go.Request[runnerv1.UpdateTaskRequest]) (*connect_go.Response[runnerv1.UpdateTaskResponse], error) {
return connect_go.NewResponse(&runnerv1.UpdateTaskResponse{}), nil
},
)
ctx, cancel := context.WithCancel(context.Background())
taskCtx, err := structpb.NewStruct(map[string]any{})
require.NoError(t, err)
reporter := NewReporter(ctx, cancel, client, &runnerv1.Task{
Context: taskCtx,
})
reporter.ResetSteps(1)
// Start the daemon loop in a separate goroutine.
// RunDaemon reads r.closed and reschedules itself via time.AfterFunc.
var wg sync.WaitGroup
wg.Go(func() {
reporter.RunDaemon()
})
// Close concurrently — this races with RunDaemon on r.closed.
require.NoError(t, reporter.Close(""))
// Cancel context so pending AfterFunc callbacks exit quickly.
cancel()
wg.Wait()
time.Sleep(2 * time.Second)
}