feat: add Prometheus metrics endpoint for runner observability (#820)

## What

Add an optional Prometheus `/metrics` HTTP endpoint to `act_runner` so operators can observe runner health, polling behavior, job outcomes, and RPC latency without scraping logs.

New surface:

- `internal/pkg/metrics/metrics.go` — metric definitions, custom `Registry`, static Go/process collectors, label constants, `ResultToStatusLabel` helper.
- `internal/pkg/metrics/server.go` — hardened `http.Server` serving `/metrics` and `/healthz` with Slowloris-safe timeouts (`ReadHeaderTimeout` 5s, `ReadTimeout`/`WriteTimeout` 10s, `IdleTimeout` 60s) and a 5s graceful shutdown.
- `daemon.go` wires it up behind `cfg.Metrics.Enabled` (disabled by default).
- `poller.go` / `reporter.go` / `runner.go` instrument their existing hot paths with counters/histograms/gauges — no behavior change.

Metrics exported (namespace `act_runner_`):

| Subsystem | Metric | Type | Labels |
|---|---|---|---|
| — | `info` | Gauge | `version`, `name` |
| — | `capacity`, `uptime_seconds` | Gauge | — |
| — | `client_errors_total` | Counter | `method` |
| `poll` | `fetch_total` | Counter | `result` |
| `poll` | `fetch_duration_seconds`, `backoff_seconds` | Histogram / Gauge | — |
| `job` | `total` | Counter | `status` |
| `job` | `duration_seconds`, `running`, `capacity_utilization_ratio` | Histogram / GaugeFunc | — |
| `report` | `log_total`, `state_total` | Counter | `result` |
| `report` | `log_duration_seconds`, `state_duration_seconds` | Histogram | — |
| `report` | `log_buffer_rows` | Gauge | — |
| — | `go_*`, `process_*` | standard collectors | — |

All label values are predefined constants — **no high-cardinality labels** (no task IDs, repo URLs, branches, tokens, or secrets) so scraping is safe and bounded.
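
For orientation, here is a minimal sketch of the shape of `metrics.go`, using the identifiers that appear in the `reporter.go` diff at the end of this description. Coverage is partial and the `Help` strings are illustrative, not the actual ones:

```go
package metrics

import (
	"sync"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/collectors"
)

const namespace = "act_runner"

// Predefined label values keep cardinality bounded.
const (
	LabelResultSuccess = "success"
	LabelResultError   = "error"

	LabelMethodUpdateLog  = "UpdateLog"
	LabelMethodUpdateTask = "UpdateTask"
)

// Registry is a dedicated registry, so nothing is registered as a global
// side effect of importing this package.
var Registry = prometheus.NewRegistry()

var (
	ClientErrors = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: namespace,
		Name:      "client_errors_total",
		Help:      "RPC client errors, by method.",
	}, []string{"method"})

	ReportLogTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: namespace,
		Subsystem: "report",
		Name:      "log_total",
		Help:      "Log report attempts, by result.",
	}, []string{"result"})

	ReportLogDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: namespace,
		Subsystem: "report",
		Name:      "log_duration_seconds",
		Help:      "Latency of UpdateLog RPCs.",
	})
)

var initOnce sync.Once

// Init registers all metrics plus the standard Go/process collectors.
// Guarded by sync.Once, matching the daemon wiring described below.
func Init() {
	initOnce.Do(func() {
		Registry.MustRegister(
			collectors.NewGoCollector(),
			collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
			ClientErrors, ReportLogTotal, ReportLogDuration,
			// ...remaining metrics from the table above.
		)
	})
}
```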

## Why

Teams self-hosting Gitea + `act_runner` at scale need to answer basic SRE questions that are currently invisible:

- How often are RPCs failing? Which RPC? (`act_runner_client_errors_total`)
- Are runners saturated? (`act_runner_job_capacity_utilization_ratio`, `act_runner_job_running`)
- How long do jobs take? (`act_runner_job_duration_seconds`)
- Is polling backing off? (`act_runner_poll_backoff_seconds`, `act_runner_poll_fetch_total{result="error"}`)
- Are log/state reports slow? (`act_runner_report_{log,state}_duration_seconds`)
- Is the log buffer draining? (`act_runner_report_log_buffer_rows`)

Today operators have to grep logs. This PR makes all of the above first-class metrics so they can feed dashboards and alerts (`rate(act_runner_client_errors_total[5m]) > 0.1`, capacity saturation alerts, etc.).

The endpoint is **disabled by default** and binds to `127.0.0.1:9101` when enabled, so it's opt-in and safe for existing deployments.

## How

### Config

```yaml
metrics:
  enabled: false           # opt-in
  addr: 127.0.0.1:9101     # change to 0.0.0.0:9101 only behind a reverse proxy
```

`config.example.yaml` documents both fields plus a security note about binding externally without auth.
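
The config shape implied by `cfg.Metrics.Enabled`, as a sketch (struct placement and YAML tags are assumptions):

```go
// In internal/pkg/config (sketch): the new section read by daemon.go.
type Metrics struct {
	Enabled bool   `yaml:"enabled"` // opt-in; the endpoint is off by default
	Addr    string `yaml:"addr"`    // defaults to 127.0.0.1:9101
}
```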

### Wiring

1. `daemon.go` calls `metrics.Init()` (guarded by `sync.Once`), sets `act_runner_info`, `act_runner_capacity`, registers uptime + running-jobs GaugeFuncs, then starts the server goroutine with the daemon context — it shuts down cleanly on `ctx.Done()`.
2. `poller.fetchTask` observes RPC latency / result / error counters (sketched after this list). `DeadlineExceeded` (long-poll idle) is treated as an empty result and **not** observed into the histogram so the 5s timeout doesn't swamp the buckets.
3. `poller.pollOnce` reports `poll_backoff_seconds` using the pre-jitter base interval (the true backoff level), and only when it changes — prevents noisy no-op gauge updates at the `FetchIntervalMax` plateau.
4. `reporter.ReportLog` / `ReportState` record duration histograms and success/error counters; `log_buffer_rows` is updated only when the value changes, guarded by the already-held `clientM`.
5. `runner.Run` observes `job_duration_seconds` and increments `job_total` by outcome via `metrics.ResultToStatusLabel`.
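
Item 2's pattern, as a self-contained sketch. `PollFetchTotal`, `PollFetchDuration`, and the `no_task` label value are assumed names (only `task` and `error` appear in this PR's text); the `DeadlineExceeded` handling is the behavior described above:

```go
package poll

import (
	"context"
	"errors"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// In the real code these live in internal/pkg/metrics; they are declared
// (unregistered) here only to keep the sketch self-contained.
var (
	PollFetchTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "act_runner", Subsystem: "poll", Name: "fetch_total",
	}, []string{"result"})
	PollFetchDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: "act_runner", Subsystem: "poll", Name: "fetch_duration_seconds",
	})
)

// Task stands in for runnerv1.Task; fetch stands in for the FetchTask RPC.
type Task struct{}

func fetchTaskInstrumented(ctx context.Context, fetch func(context.Context) (*Task, error)) (*Task, error) {
	start := time.Now()
	task, err := fetch(ctx)

	// An idle long poll ends in DeadlineExceeded: count it as an empty
	// result and skip the histogram, so the fixed poll timeout doesn't
	// dominate the latency buckets.
	if errors.Is(err, context.DeadlineExceeded) {
		PollFetchTotal.WithLabelValues("no_task").Inc()
		return nil, nil
	}

	PollFetchDuration.Observe(time.Since(start).Seconds())
	if err != nil {
		PollFetchTotal.WithLabelValues("error").Inc()
		return nil, err
	}
	PollFetchTotal.WithLabelValues("task").Inc()
	return task, nil
}
```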

### Safety / security review

- All timeouts set; Slowloris-safe (see the server sketch after this list).
- Custom `prometheus.NewRegistry()` — no global registration side-effects.
- No sensitive data in labels (reviewed every instrumentation site).
- Single new dependency: `github.com/prometheus/client_golang v1.23.2`.
- Endpoint is unauthenticated by design and documented as such; default localhost bind mitigates exposure. Operators exposing externally should front it with a reverse proxy.
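
A sketch of `server.go` under the constraints above. `Serve` and its signature are assumptions; the timeout values and 5s shutdown deadline are the ones listed earlier:

```go
package metrics

import (
	"context"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Serve runs /metrics and /healthz on addr and shuts down gracefully,
// within 5s, when the daemon context is canceled.
func Serve(ctx context.Context, addr string) error {
	mux := http.NewServeMux()
	mux.Handle("/metrics", promhttp.HandlerFor(Registry, promhttp.HandlerOpts{}))
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
		_, _ = w.Write([]byte("ok"))
	})

	srv := &http.Server{
		Addr:              addr,
		Handler:           mux,
		ReadHeaderTimeout: 5 * time.Second, // Slowloris defense
		ReadTimeout:       10 * time.Second,
		WriteTimeout:      10 * time.Second,
		IdleTimeout:       60 * time.Second,
	}

	errCh := make(chan error, 1)
	go func() { errCh <- srv.ListenAndServe() }()

	select {
	case <-ctx.Done():
		shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		return srv.Shutdown(shutdownCtx)
	case err := <-errCh:
		return err
	}
}
```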

## Verification

### Build and tests

```bash
go build ./...
go vet ./...
go test ./...
```

### Manual smoke test

1. Enable metrics in `config.yaml`:
   ```yaml
   metrics:
     enabled: true
     addr: 127.0.0.1:9101
   ```
2. Start the runner against a Gitea instance: `./act_runner daemon`.
3. Scrape the endpoint:
   ```bash
   curl -s http://127.0.0.1:9101/metrics | grep '^act_runner_'
   curl -s http://127.0.0.1:9101/healthz   # → ok
   ```
4. Confirm the static series appear immediately: `act_runner_info`, `act_runner_capacity`, `act_runner_uptime_seconds`, `act_runner_job_running`, `act_runner_job_capacity_utilization_ratio`.
5. Trigger a workflow and confirm counters increment: `act_runner_poll_fetch_total{result="task"}`, `act_runner_job_total{status="success"}`, `act_runner_report_log_total{result="success"}`.
6. Leave the runner idle and confirm `act_runner_poll_backoff_seconds` settles (and does **not** churn on every poll).
7. Ctrl-C and confirm a clean "metrics server shutdown" log line (no port-in-use error on restart within 5s).
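
A healthy scrape in step 3 should include series along these lines (values and label contents are illustrative):

```text
act_runner_info{name="runner-1",version="0.2.x"} 1
act_runner_capacity 2
act_runner_uptime_seconds 42.7
act_runner_job_running 0
act_runner_job_capacity_utilization_ratio 0
act_runner_poll_fetch_total{result="task"} 3
```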

### Prometheus integration

Add to `prometheus.yml`:

```yaml
scrape_configs:
  - job_name: act_runner
    static_configs:
      - targets: ['127.0.0.1:9101']
```

Sample alert to try:

```promql
sum(rate(act_runner_client_errors_total[5m])) by (method) > 0.1
```
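
As a sketch, the same expression drops into an alerting rule; the rule name, `for` duration, and threshold here are illustrative, not part of this PR:

```yaml
groups:
  - name: act_runner
    rules:
      - alert: ActRunnerClientErrorRateHigh
        expr: sum(rate(act_runner_client_errors_total[5m])) by (method) > 0.1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "act_runner RPC errors on {{ $labels.method }}"
```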

## Out of scope (follow-ups)

- TLS and auth on the metrics endpoint (mitigated today by localhost default; add when operators need external scraping).
- Per-task labels (intentionally avoided for cardinality safety).

---

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Reviewed-on: https://gitea.com/gitea/act_runner/pulls/820
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Excerpt of the `reporter.go` changes (diff markers reconstructed):

```diff
@@ -21,6 +21,7 @@ import (
 	"gitea.com/gitea/act_runner/internal/pkg/client"
 	"gitea.com/gitea/act_runner/internal/pkg/config"
+	"gitea.com/gitea/act_runner/internal/pkg/metrics"
 )
@@ -36,6 +37,11 @@ type Reporter struct {
 	logReplacer *strings.Replacer
 	oldnew      []string

+	// lastLogBufferRows is the last value written to the ReportLogBufferRows
+	// gauge; guarded by clientM (the same lock held around each ReportLog call)
+	// so the gauge skips no-op Set calls when the buffer size is unchanged.
+	lastLogBufferRows int
+
 	state        *runnerv1.TaskState
 	stateChanged bool
 	stateMu      sync.RWMutex
@@ -93,6 +99,13 @@ func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.C
 	return rv
 }

+// Result returns the final job result. Safe to call after Close() returns.
+func (r *Reporter) Result() runnerv1.Result {
+	r.stateMu.RLock()
+	defer r.stateMu.RUnlock()
+	return r.state.Result
+}
+
 func (r *Reporter) ResetSteps(l int) {
 	r.stateMu.Lock()
 	defer r.stateMu.Unlock()
@@ -421,15 +434,20 @@ func (r *Reporter) ReportLog(noMore bool) error {
 		return nil
 	}

+	start := time.Now()
 	resp, err := r.client.UpdateLog(r.ctx, connect.NewRequest(&runnerv1.UpdateLogRequest{
 		TaskId: r.state.Id,
 		Index:  int64(r.logOffset),
 		Rows:   rows,
 		NoMore: noMore,
 	}))
+	metrics.ReportLogDuration.Observe(time.Since(start).Seconds())
 	if err != nil {
+		metrics.ReportLogTotal.WithLabelValues(metrics.LabelResultError).Inc()
+		metrics.ClientErrors.WithLabelValues(metrics.LabelMethodUpdateLog).Inc()
 		return err
 	}
+	metrics.ReportLogTotal.WithLabelValues(metrics.LabelResultSuccess).Inc()

 	ack := int(resp.Msg.AckIndex)
 	if ack < r.logOffset {
@@ -440,7 +458,12 @@ func (r *Reporter) ReportLog(noMore bool) error {
 	r.logRows = r.logRows[ack-r.logOffset:]
 	submitted := r.logOffset + len(rows)
 	r.logOffset = ack
+	remaining := len(r.logRows)
 	r.stateMu.Unlock()
+	if remaining != r.lastLogBufferRows {
+		metrics.ReportLogBufferRows.Set(float64(remaining))
+		r.lastLogBufferRows = remaining
+	}

 	if noMore && ack < submitted {
 		return errors.New("not all logs are submitted")
@@ -479,16 +502,21 @@ func (r *Reporter) ReportState(reportResult bool) error {
 		state.Result = runnerv1.Result_RESULT_UNSPECIFIED
 	}

+	start := time.Now()
 	resp, err := r.client.UpdateTask(r.ctx, connect.NewRequest(&runnerv1.UpdateTaskRequest{
 		State:   state,
 		Outputs: outputs,
 	}))
+	metrics.ReportStateDuration.Observe(time.Since(start).Seconds())
 	if err != nil {
+		metrics.ReportStateTotal.WithLabelValues(metrics.LabelResultError).Inc()
+		metrics.ClientErrors.WithLabelValues(metrics.LabelMethodUpdateTask).Inc()
 		r.stateMu.Lock()
 		r.stateChanged = true
 		r.stateMu.Unlock()
 		return err
 	}
+	metrics.ReportStateTotal.WithLabelValues(metrics.LabelResultSuccess).Inc()

 	for _, k := range resp.Msg.SentOutputs {
 		r.outputs.Store(k, struct{}{})
```