perf: reduce runner-to-server connection load with adaptive reporting and polling (#819)

## Summary

Many teams self-host Gitea + Act Runner at scale. The current runner design causes excessive HTTP requests to the Gitea server, leading to high server load. This PR addresses three root causes: aggressive fixed-interval polling, per-task status reporting every 1 second regardless of activity, and unoptimized HTTP client configuration.

## Problem

The original architecture has these issues:

**1. Fixed 1-second reporting interval (RunDaemon)**

- Every running task calls ReportLog + ReportState every 1 second (2 HTTP requests/sec/task)
- These requests are sent even when there are no new log rows or state changes
- With 200 runners × 3 tasks each = **1,200 req/sec just for status reporting**

**2. Fixed 2-second polling interval (no backoff)**

- Idle runners poll FetchTask every 2 seconds forever, even when no jobs are queued
- No exponential backoff or jitter — all runners can synchronize after network recovery (thundering herd)
- 200 idle runners = **100 req/sec doing nothing useful**

**3. HTTP client not tuned**

- Uses http.DefaultClient with MaxIdleConnsPerHost=2, causing frequent TCP/TLS reconnects
- Creates two separate http.Client instances (one for Ping, one for Runner service) instead of sharing

**Total: ~1,300 req/sec for 200 runners with 3 tasks each**

## Solution

### Adaptive Event-Driven Log Reporting

Replace the recursive `time.AfterFunc(1s)` pattern in RunDaemon with a goroutine-based select event loop using three trigger mechanisms:

| Trigger | Default | Purpose |
|---------|---------|---------|
| `log_report_max_latency` | 3s | Guarantee even a single log line is delivered within this time |
| `log_report_interval` | 5s | Periodic sweep — steady-state cadence |
| `log_report_batch_size` | 100 rows | Immediate flush during bursty output (e.g., npm install) |

**Key design**: `log_report_max_latency` (3s) must be less than `log_report_interval` (5s) so the max-latency timer fires before the periodic ticker for single-line scenarios.

State reporting is decoupled to its own `state_report_interval` (default 5s), with immediate flush on step transitions (start/stop) via a stateNotify channel for responsive frontend UX.

Additionally:
- Skip ReportLog when `len(rows) == 0` (no pending log rows)
- Skip ReportState when `stateChanged == false && len(outputs) == 0` (nothing changed)
- Move expensive `proto.Clone` after the early-return check to avoid deep copies on no-op paths
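
The interplay of the three triggers is easiest to see as a condensed, self-contained sketch. The names here (`reportLoop`, `newRows`, `flush`) are illustrative only; the actual implementation is `Reporter.runDaemonLoop` in the diff at the end of this description:

```go
package sketch

import (
	"context"
	"time"
)

// reportLoop illustrates the three log-flush triggers: a periodic ticker,
// a batch-size threshold, and a max-latency timer armed when the first row
// lands in an empty buffer. Names are illustrative, not the Reporter fields.
func reportLoop(ctx context.Context, newRows <-chan int, flush func()) {
	const (
		interval   = 5 * time.Second // log_report_interval
		maxLatency = 3 * time.Second // log_report_max_latency
		batchSize  = 100             // log_report_batch_size
	)

	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	// Start with a stopped timer; it is armed only while rows are buffered.
	latency := time.NewTimer(maxLatency)
	if !latency.Stop() {
		<-latency.C
	}
	defer latency.Stop()
	armed := false

	buffered := 0
	doFlush := func() {
		flush()
		buffered = 0
		if armed && !latency.Stop() {
			<-latency.C // timer already fired; drain it
		}
		armed = false
	}

	for {
		select {
		case n := <-newRows: // new log rows arrived (logNotify)
			buffered += n
			if buffered >= batchSize {
				doFlush() // bursty output: flush immediately
			} else if !armed {
				latency.Reset(maxLatency) // guarantee delivery within 3s
				armed = true
			}
		case <-ticker.C: // steady-state 5s sweep
			doFlush()
		case <-latency.C: // a single buffered line has waited long enough
			armed = false
			doFlush()
		case <-ctx.Done():
			doFlush() // final flush before exit
			return
		}
	}
}
```

State reporting follows the same pattern on its own ticker, with the `stateNotify` channel forcing an immediate flush on step transitions; the sketch above omits that side for brevity.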

### Polling Backoff with Jitter

Replace fixed `rate.Limiter` with adaptive exponential backoff:
- Track `consecutiveEmpty` and `consecutiveErrors` counters
- Interval doubles with each empty/error response: `base × 2^(n-1)`, capped at `fetch_interval_max` (default 60s)
- Add ±20% random jitter to prevent thundering herd
- Fetch first, sleep after: this preserves the old burst=1 behavior of an immediate first fetch on startup and right after task completion (see the sketch below)
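
A minimal sketch of the interval computation described above; the function and variable names are illustrative, not the actual poller code:

```go
package sketch

import (
	"math/rand"
	"time"
)

// nextInterval returns how long to sleep after the n-th consecutive empty
// (or failed) FetchTask response: base × 2^(n-1), capped at max, with ±20%
// jitter. n == 0 means the last fetch returned a task, so no backoff.
func nextInterval(base, max time.Duration, n int) time.Duration {
	if n <= 0 {
		return base
	}
	d := base
	for i := 1; i < n && d < max; i++ {
		d *= 2 // exponential growth without overflow risk
	}
	if d > max {
		d = max
	}
	jitter := 1 + (rand.Float64()*0.4 - 0.2) // 0.8 … 1.2
	return time.Duration(float64(d) * jitter)
}
```

With the default `fetch_interval` of 2s and `fetch_interval_max` of 60s, an idle runner reaches the cap after six consecutive empty responses; 200 idle runners polling every ~60s (±20% jitter) works out to the ~3.4 req/s shown in the table below.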

### HTTP Client Tuning

- Configure custom `http.Transport` with `MaxIdleConnsPerHost=10` (was 2)
- Share a single `http.Client` between PingService and RunnerService
- Add `IdleConnTimeout=90s` for clean connection lifecycle
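
A sketch of the shared, tuned client; the transport values are the ones listed above, while the constructor name and wiring are illustrative:

```go
package sketch

import (
	"net/http"
	"time"
)

// newSharedHTTPClient builds the single client reused by both the Ping and
// Runner service clients, instead of two clients on http.DefaultTransport
// (which keeps only 2 idle connections per host).
func newSharedHTTPClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			MaxIdleConnsPerHost: 10,               // keep more warm connections to the Gitea host
			IdleConnTimeout:     90 * time.Second, // close connections that stay idle for 90s
		},
	}
}
```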

## Load Reduction

The figures below assume 200 runners × 3 tasks each (600 running tasks, 70% of them producing log output). The "Before" column follows from the Problem section: 600 tasks × 2 status requests/sec plus 200 runners polling every 2s ≈ 1,300 req/sec.

| Component | Before | After | Reduction |
|-----------|--------|-------|-----------|
| Polling (idle) | 100 req/s | ~3.4 req/s | ~97% |
| Log reporting | 600 req/s | ~84 req/s | ~86% |
| State reporting | 600 req/s | ~25 req/s | ~96% |
| **Total** | **~1,300 req/s** | **~113 req/s** | **~91%** |

## Frontend UX Impact

Approximate delay between the runner producing output or a state change and reporting it to the server:

| Scenario | Before | After | Notes |
|----------|--------|-------|-------|
| Continuous output (npm install) | ~1s | ~5s | Periodic ticker sweep |
| Single line then silence | ~1s | ≤3s | maxLatencyTimer guarantee |
| Bursty output (100+ lines) | ~1s | <1s | Batch size immediate flush |
| Step start/stop | ~1s | <1s | stateNotify immediate flush |
| Job completion | ~1s | ~1s | Close() retry unchanged |

## New Configuration Options

All have safe defaults — existing config files need no changes:

```yaml
runner:
  fetch_interval_max: 60s        # Max backoff interval when idle
  log_report_interval: 5s        # Periodic log flush interval
  log_report_max_latency: 3s     # Max time a log row waits (must be < log_report_interval)
  log_report_batch_size: 100     # Immediate flush threshold
  state_report_interval: 5s      # State flush interval (step transitions are always immediate)
```

Config validation warns on invalid combinations:
- `fetch_interval_max < fetch_interval` → auto-corrected
- `log_report_max_latency >= log_report_interval` → warning (timer would be redundant)
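
A hypothetical sketch of that validation; the actual struct, field, and function names in `internal/pkg/config` may differ:

```go
package sketch

import (
	"time"

	log "github.com/sirupsen/logrus"
)

// Runner holds only the fields relevant to this sketch.
type Runner struct {
	FetchInterval       time.Duration
	FetchIntervalMax    time.Duration
	LogReportInterval   time.Duration
	LogReportMaxLatency time.Duration
}

func (r *Runner) validate() {
	if r.FetchIntervalMax < r.FetchInterval {
		log.Warnf("fetch_interval_max (%v) is below fetch_interval (%v); raising it", r.FetchIntervalMax, r.FetchInterval)
		r.FetchIntervalMax = r.FetchInterval // auto-correct
	}
	if r.LogReportMaxLatency >= r.LogReportInterval {
		log.Warn("log_report_max_latency should be below log_report_interval; the max-latency timer will rarely fire first")
	}
}
```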

## Test Plan

- [x] `go build ./...` passes
- [x] `go test ./...` passes (all existing + 3 new tests)
- [x] `golangci-lint run` — 0 issues
- [x] TestReporter_MaxLatencyTimer — verifies single log line flushed by maxLatencyTimer before logTicker
- [x] TestReporter_BatchSizeFlush — verifies batch size threshold triggers immediate flush
- [x] TestReporter_StateNotifyFlush — verifies step transition triggers immediate state flush
- [x] TestReporter_EphemeralRunnerDeletion — verifies Close/RunDaemon race safety
- [x] TestReporter_RunDaemonClose_Race — verifies concurrent Close safety

Reviewed-on: https://gitea.com/gitea/act_runner/pulls/819
Reviewed-by: Nicolas <173651+bircni@noreply.gitea.com>
Co-authored-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi Wu <appleboy.tw@gmail.com>

9 changed files with 696 additions and 86 deletions

@@ -20,6 +20,7 @@ import (
 	"google.golang.org/protobuf/types/known/timestamppb"
 
 	"gitea.com/gitea/act_runner/internal/pkg/client"
+	"gitea.com/gitea/act_runner/internal/pkg/config"
 )
 
 type Reporter struct {
@@ -35,16 +36,27 @@ type Reporter struct {
 	logReplacer *strings.Replacer
 	oldnew      []string
 
-	state   *runnerv1.TaskState
-	stateMu sync.RWMutex
-	outputs sync.Map
-	daemon  chan struct{}
+	state        *runnerv1.TaskState
+	stateChanged bool
+	stateMu      sync.RWMutex
+	outputs      sync.Map
+	daemon       chan struct{}
+
+	// Adaptive batching control
+	logReportInterval   time.Duration
+	logReportMaxLatency time.Duration
+	logBatchSize        int
+	stateReportInterval time.Duration
+
+	// Event notification channels (non-blocking, buffered 1)
+	logNotify   chan struct{} // signal: new log rows arrived
+	stateNotify chan struct{} // signal: step transition (start/stop)
 
 	debugOutputEnabled  bool
 	stopCommandEndToken string
 }
 
-func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.Client, task *runnerv1.Task) *Reporter {
+func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.Client, task *runnerv1.Task, cfg *config.Config) *Reporter {
 	var oldnew []string
 	if v := task.Context.Fields["token"].GetStringValue(); v != "" {
 		oldnew = append(oldnew, v, "***")
@@ -57,11 +69,17 @@ func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.Client, task *runnerv1.Task) *Reporter {
 	}
 
 	rv := &Reporter{
-		ctx:         ctx,
-		cancel:      cancel,
-		client:      client,
-		oldnew:      oldnew,
-		logReplacer: strings.NewReplacer(oldnew...),
+		ctx:                 ctx,
+		cancel:              cancel,
+		client:              client,
+		oldnew:              oldnew,
+		logReplacer:         strings.NewReplacer(oldnew...),
+		logReportInterval:   cfg.Runner.LogReportInterval,
+		logReportMaxLatency: cfg.Runner.LogReportMaxLatency,
+		logBatchSize:        cfg.Runner.LogReportBatchSize,
+		stateReportInterval: cfg.Runner.StateReportInterval,
+		logNotify:           make(chan struct{}, 1),
+		stateNotify:         make(chan struct{}, 1),
 		state: &runnerv1.TaskState{
 			Id: task.Id,
 		},
@@ -108,11 +126,42 @@ func isJobStepEntry(entry *log.Entry) bool {
 	return true
 }
 
-func (r *Reporter) Fire(entry *log.Entry) error {
-	r.stateMu.Lock()
-	defer r.stateMu.Unlock()
+// notifyLog sends a non-blocking signal that new log rows are available.
+func (r *Reporter) notifyLog() {
+	select {
+	case r.logNotify <- struct{}{}:
+	default:
+	}
+}
 
-	log.WithFields(entry.Data).Trace(entry.Message)
+// notifyState sends a non-blocking signal that a UX-critical state change occurred (step start/stop, job result).
+func (r *Reporter) notifyState() {
+	select {
+	case r.stateNotify <- struct{}{}:
+	default:
+	}
+}
+
+// unlockAndNotify releases stateMu and sends channel notifications.
+// Must be called with stateMu held.
+func (r *Reporter) unlockAndNotify(urgentState bool) {
+	r.stateMu.Unlock()
+	r.notifyLog()
+	if urgentState {
+		r.notifyState()
+	}
+}
+
+func (r *Reporter) Fire(entry *log.Entry) error {
+	urgentState := false
+
+	r.stateMu.Lock()
+	r.stateChanged = true
+
+	if log.IsLevelEnabled(log.TraceLevel) {
+		log.WithFields(entry.Data).Trace(entry.Message)
+	}
 
 	timestamp := entry.Time
 	if r.state.StartedAt == nil {
@@ -135,11 +184,13 @@ func (r *Reporter) Fire(entry *log.Entry) error {
 					}
 				}
 			}
+			urgentState = true
 		}
 	}
 
 		if !r.duringSteps() {
 			r.logRows = appendIfNotNil(r.logRows, r.parseLogRow(entry))
 		}
+		r.unlockAndNotify(urgentState)
 		return nil
 	}
@@ -153,11 +204,13 @@ func (r *Reporter) Fire(entry *log.Entry) error {
 		if !r.duringSteps() {
 			r.logRows = appendIfNotNil(r.logRows, r.parseLogRow(entry))
 		}
+		r.unlockAndNotify(false)
 		return nil
 	}
 
 	if step.StartedAt == nil {
 		step.StartedAt = timestamppb.New(timestamp)
+		urgentState = true
 	}
 
 	// Force reporting log errors as raw output to prevent silent failures
@@ -185,26 +238,91 @@ func (r *Reporter) Fire(entry *log.Entry) error {
 			}
 			step.Result = stepResult
 			step.StoppedAt = timestamppb.New(timestamp)
+			urgentState = true
 		}
 	}
 
+	r.unlockAndNotify(urgentState)
 	return nil
 }
 
 func (r *Reporter) RunDaemon() {
-	r.stateMu.RLock()
-	closed := r.closed
-	r.stateMu.RUnlock()
-
-	if closed || r.ctx.Err() != nil {
-		// Acknowledge close
-		close(r.daemon)
-		return
+	go r.runDaemonLoop()
+}
+
+func (r *Reporter) stopLatencyTimer(active *bool, timer *time.Timer) {
+	if *active {
+		if !timer.Stop() {
+			select {
+			case <-timer.C:
+			default:
+			}
+		}
+		*active = false
 	}
+}
 
-	_ = r.ReportLog(false)
-	_ = r.ReportState(false)
+func (r *Reporter) runDaemonLoop() {
+	logTicker := time.NewTicker(r.logReportInterval)
+	stateTicker := time.NewTicker(r.stateReportInterval)
 
-	time.AfterFunc(time.Second, r.RunDaemon)
+	// maxLatencyTimer ensures the first buffered log row is sent within logReportMaxLatency.
+	// Start inactive — it is armed when the first log row arrives in an empty buffer.
+	maxLatencyTimer := time.NewTimer(0)
+	if !maxLatencyTimer.Stop() {
+		<-maxLatencyTimer.C
+	}
+	maxLatencyActive := false
+
+	defer logTicker.Stop()
+	defer stateTicker.Stop()
+	defer maxLatencyTimer.Stop()
+
+	for {
+		select {
+		case <-logTicker.C:
+			_ = r.ReportLog(false)
+			r.stopLatencyTimer(&maxLatencyActive, maxLatencyTimer)
+		case <-stateTicker.C:
+			_ = r.ReportState(false)
+		case <-r.logNotify:
+			r.stateMu.RLock()
+			n := len(r.logRows)
+			r.stateMu.RUnlock()
+			if n >= r.logBatchSize {
+				_ = r.ReportLog(false)
+				r.stopLatencyTimer(&maxLatencyActive, maxLatencyTimer)
+			} else if !maxLatencyActive && n > 0 {
+				maxLatencyTimer.Reset(r.logReportMaxLatency)
+				maxLatencyActive = true
+			}
+		case <-r.stateNotify:
+			// Step transition or job result — flush both immediately for frontend UX.
+			_ = r.ReportLog(false)
+			_ = r.ReportState(false)
+			r.stopLatencyTimer(&maxLatencyActive, maxLatencyTimer)
+		case <-maxLatencyTimer.C:
+			maxLatencyActive = false
+			_ = r.ReportLog(false)
+		case <-r.ctx.Done():
+			close(r.daemon)
+			return
+		}
+
+		r.stateMu.RLock()
+		closed := r.closed
+		r.stateMu.RUnlock()
+		if closed {
+			close(r.daemon)
+			return
+		}
+	}
 }
 
 func (r *Reporter) Logf(format string, a ...any) {
@@ -268,6 +386,10 @@ func (r *Reporter) Close(lastWords string) error {
 		})
 	}
 	r.stateMu.Unlock()
+
+	// Wake up the daemon loop so it detects closed promptly.
+	r.notifyLog()
+
 	// Wait for Acknowledge
 	select {
 	case <-r.daemon:
@@ -295,6 +417,10 @@ func (r *Reporter) ReportLog(noMore bool) error {
 	rows := r.logRows
 	r.stateMu.RUnlock()
 
+	if !noMore && len(rows) == 0 {
+		return nil
+	}
+
 	resp, err := r.client.UpdateLog(r.ctx, connect.NewRequest(&runnerv1.UpdateLogRequest{
 		TaskId: r.state.Id,
 		Index:  int64(r.logOffset),
@@ -329,15 +455,7 @@ func (r *Reporter) ReportState(reportResult bool) error {
 	r.clientM.Lock()
 	defer r.clientM.Unlock()
 
-	r.stateMu.RLock()
-	state := proto.Clone(r.state).(*runnerv1.TaskState)
-	r.stateMu.RUnlock()
-
-	// Only report result from Close to reliable sent logs
-	if !reportResult {
-		state.Result = runnerv1.Result_RESULT_UNSPECIFIED
-	}
-
+	// Build the outputs map first (single Range pass instead of two).
 	outputs := make(map[string]string)
 	r.outputs.Range(func(k, v any) bool {
 		if val, ok := v.(string); ok {
@@ -346,11 +464,29 @@ func (r *Reporter) ReportState(reportResult bool) error {
 		return true
 	})
 
+	// Consume stateChanged atomically with the snapshot; restored on error
+	// below so a concurrent Fire() during UpdateTask isn't silently lost.
+	r.stateMu.Lock()
+	if !reportResult && !r.stateChanged && len(outputs) == 0 {
+		r.stateMu.Unlock()
+		return nil
+	}
+	state := proto.Clone(r.state).(*runnerv1.TaskState)
+	r.stateChanged = false
+	r.stateMu.Unlock()
+
+	if !reportResult {
+		state.Result = runnerv1.Result_RESULT_UNSPECIFIED
+	}
+
 	resp, err := r.client.UpdateTask(r.ctx, connect.NewRequest(&runnerv1.UpdateTaskRequest{
 		State:   state,
 		Outputs: outputs,
 	}))
 	if err != nil {
+		r.stateMu.Lock()
+		r.stateChanged = true
+		r.stateMu.Unlock()
 		return err
 	}