perf: use single poller with semaphore-based capacity control (#822)

## Background

#819 replaced the shared `rate.Limiter` with per-worker exponential backoff counters to add jitter and adaptive polling. Before #819, the poller used:

```go
limiter := rate.NewLimiter(rate.Every(p.cfg.Runner.FetchInterval), 1)
```

This limiter was **shared across all N polling goroutines with burst=1**, effectively serializing their `FetchTask` calls — so even with `capacity=60`, the runner issued roughly one `FetchTask` per `FetchInterval` total.
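A standalone sketch (not the runner's code) makes the effect concrete: however many goroutines share a burst-1 limiter, `Wait` admits roughly one call per interval in total.

```go
package main

import (
    "context"
    "fmt"
    "sync"
    "time"

    "golang.org/x/time/rate"
)

func main() {
    const workers = 5
    interval := 200 * time.Millisecond

    // Shared limiter, burst=1: at most one token is ever banked, so all
    // five goroutines queue on the same refill schedule.
    limiter := rate.NewLimiter(rate.Every(interval), 1)

    start := time.Now()
    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            _ = limiter.Wait(context.Background()) // blocks until a token frees up
            // Stand-in for FetchTask: timestamps arrive ~one interval apart
            // in total, not per worker.
            fmt.Printf("worker %d fetched at %v\n", id, time.Since(start).Round(10*time.Millisecond))
        }(i)
    }
    wg.Wait()
}
```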

#819 replaced this with per-worker `consecutiveEmpty` / `consecutiveErrors` counters. Each goroutine now backs off **independently**, which inadvertently removed the cross-worker serialization: with `capacity=N`, the runner has N goroutines each polling on its own schedule — a regression from the pre-#819 baseline for any runner with `capacity > 1`.

(Thanks to @ChristopherHX for catching this in review.)

## Problem

With the post-#819 code:

- `capacity=N` maintains **N persistent polling goroutines**, each calling `FetchTask` independently
- At idle, N goroutines each wake up and send a `FetchTask` RPC per `FetchInterval`
- At full load, N goroutines **continue polling** even though no slot is available to run a new task — every one of those RPCs is wasted
- The `Shutdown()` timeout branch has a pre-existing bug: the "non-blocking check" is actually a blocking receive, so `shutdownJobs()` is never reached on timeout

## Real-World Impact: 3 Runners × capacity=60

Current production environment: 3 runners each with `capacity=60`.

| Metric | Post-#819 (current) | This PR | Reduction |
|--------|---------------------|---------|-----------|
| Polling goroutines (total) | 3 × 60 = **180** | 3 × 1 = **3** | **98.3%** (177 fewer) |
| FetchTask RPCs per poll cycle (idle) | **180** | **3** | **98.3%** |
| FetchTask RPCs per poll cycle (full load) | **180** (all wasted) | **0** (blocked on semaphore) | **100%** |
| Concurrent connections to Gitea | **180** | **3** | **98.3%** |
| Backoff state objects | 180 (per-worker) | 3 (one per runner) | Simplified |

### Idle scenario

All 180 goroutines wake up every `FetchInterval`, each sending a `FetchTask` RPC that returns empty. Server handles 180 RPCs per cycle for zero useful work. After this PR: **3 RPCs per cycle** — one per runner.

> Note: pre-#819 idle behavior was already ~3 RPCs/cycle due to the shared `rate.Limiter`. This PR restores that property while also addressing the full-load case below.

### Full-load scenario (all 180 slots occupied)

All 180 goroutines **continue polling** even though no slot is available. Every RPC is wasted. After this PR: all 3 pollers are **blocked on the semaphore** — **zero RPCs** until a task completes.

> This is a scenario neither the pre-#819 shared limiter nor the post-#819 per-worker backoff handles — both still issue `FetchTask` RPCs when no slot is free. The semaphore is the only approach of the three that ties polling to available capacity.

## Why Not Just Revert to `rate.Limiter`?

Reverting would restore the serialized behavior but is not the right long-term fix:

- **`rate.Limiter` has no concept of available capacity.** At full load it still hands out tokens and issues `FetchTask` RPCs that can't be acted on. The semaphore blocks polling entirely in that case — zero wasted RPCs.
- **It composes poorly with adaptive backoff from #819.** A shared limiter and per-worker backoff pull in different directions.
- **N goroutines serializing on a shared limiter means N-1 of them exist only to wait in line.** A single poller expresses the same behavior more directly.

The semaphore approach ties polling to capacity explicitly: `acquire slot → fetch → dispatch → release`. That invariant becomes structural rather than emergent from a rate limiter.
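To make the first point concrete, here is a self-contained sketch (illustrative, not the runner's code) of both gates at full load: the limiter still hands out a token, while the semaphore acquire blocks until a slot frees up.

```go
package main

import (
    "context"
    "fmt"
    "time"

    "golang.org/x/time/rate"
)

func main() {
    const capacity = 2

    // Simulate full load: every execution slot is occupied.
    sem := make(chan struct{}, capacity)
    for i := 0; i < capacity; i++ {
        sem <- struct{}{}
    }

    // The limiter knows nothing about occupancy, so it grants a token
    // anyway; a FetchTask RPC issued here would be wasted.
    limiter := rate.NewLimiter(rate.Every(100*time.Millisecond), 1)
    _ = limiter.Wait(context.Background())
    fmt.Println("limiter: token granted at full load (wasted RPC)")

    // The semaphore acquire blocks while full; no RPC is possible until a
    // running task releases its slot.
    select {
    case sem <- struct{}{}:
        fmt.Println("semaphore: slot acquired") // unreachable while full
    case <-time.After(300 * time.Millisecond):
        fmt.Println("semaphore: still blocked at full load (zero RPCs)")
    }
}
```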

## Solution

Replace N polling goroutines with a **single polling loop** that uses a buffered channel as a semaphore to control concurrent task execution:

```go
// New Poll() in poller.go (simplified; omits WaitGroup and backoff-reset bookkeeping)
sem := make(chan struct{}, p.cfg.Runner.Capacity)
for {
    select {
    case sem <- struct{}{}:       // Acquire slot (blocks at capacity)
    case <-p.pollingCtx.Done():
        return
    }
    task, ok := p.fetchTask(...)  // Single FetchTask RPC
    if !ok {
        <-sem                     // Release slot on empty response
        // backoff...
        continue
    }
    go func(t *runnerv1.Task) {   // Dispatch task
        defer func() { <-sem }()  // Release slot when done
        p.runTaskWithRecover(p.jobsCtx, t)
    }(task)
}
```

The exponential backoff and jitter from #819 are preserved — just driven by a single `workerState` instead of N per-worker states.
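For intuition, a hedged sketch of that backoff shape (the real `calculateInterval` body is not shown in this diff; the doubling policy, the 10% jitter spread, and the example intervals are all assumptions — only the `FetchInterval` / `FetchIntervalMax` names and the additive jitter are taken from the code):

```go
package main

import (
    "fmt"
    "math/rand"
    "time"
)

// calculateInterval's body is not shown in this PR; this sketch assumes
// exponential doubling from FetchInterval, capped at FetchIntervalMax.
func calculateInterval(fetchInterval, fetchIntervalMax time.Duration, consecutiveEmpty int64) time.Duration {
    d := fetchInterval
    for i := int64(0); i < consecutiveEmpty && d < fetchIntervalMax; i++ {
        d *= 2
    }
    if d > fetchIntervalMax {
        d = fetchIntervalMax
    }
    return d
}

// addJitter matches the shape visible in the diff (return d + jitter);
// the up-to-10% spread here is an assumption.
func addJitter(d time.Duration) time.Duration {
    return d + time.Duration(rand.Int63n(int64(d/10)+1))
}

func main() {
    for empty := int64(0); empty <= 5; empty++ {
        base := calculateInterval(2*time.Second, 30*time.Second, empty)
        fmt.Printf("consecutiveEmpty=%d -> base=%v, with jitter=%v\n", empty, base, addJitter(base))
    }
}
```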

## Shutdown Bug Fix

Fixed a pre-existing bug in `Shutdown()` where the timeout branch could never force-cancel running jobs:

```go
// Before (BROKEN): blocking receive, shutdownJobs() never reached
_, ok := <-p.done   // blocks until p.done is closed
if !ok { return nil }
p.shutdownJobs()    // dead code when jobs are still running

// After (FIXED): proper non-blocking check
select {
case <-p.done:
    return nil
default:
}
p.shutdownJobs()    // now correctly reached on timeout
```

## Code Changes

| Area | Detail |
|------|--------|
| `Poller.runner` | `*run.Runner` → `TaskRunner` interface (enables mock-based testing) |
| `Poll()` | N goroutines → single loop with buffered-channel semaphore |
| `PollOnce()` | Inlined from removed `pollOnce()` |
| `waitBackoff()` | New helper, eliminates duplicated backoff logic |
| `resetBackoff()` | New method on `workerState`; clears both counters and the stale `lastBackoff` value so the gauge is re-reported after a reset |
| `Shutdown()` | Fixed blocking receive → proper non-blocking select |
| Removed | `poll()`, `pollOnce()` private methods (-2 methods, -42 lines) |

## Test Coverage

Added `TestPoller_ConcurrencyLimitedByCapacity` which verifies:

- With `capacity=3`, at most 3 tasks execute concurrently (`maxConcurrent <= 3`)
- Tasks actually overlap in execution (`maxConcurrent >= 2`)
- `FetchTask` is never called concurrently — confirms single poller (`maxFetchConcur == 1`)
- All 6 tasks complete successfully (`totalCompleted == 6`)
- Mock runner respects context cancellation, enabling shutdown path verification

```
=== RUN   TestPoller_ConcurrencyLimitedByCapacity
--- PASS: TestPoller_ConcurrencyLimitedByCapacity (0.10s)
PASS
ok  	gitea.com/gitea/act_runner/internal/app/poll	0.59s
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Reviewed-on: https://gitea.com/gitea/act_runner/pulls/822
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
Co-authored-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi Wu <appleboy.tw@gmail.com>
`internal/app/poll/poller.go`:

```diff
@@ -12,7 +12,6 @@ import (
 	"sync/atomic"
 	"time"
 
-	"gitea.com/gitea/act_runner/internal/app/run"
 	"gitea.com/gitea/act_runner/internal/pkg/client"
 	"gitea.com/gitea/act_runner/internal/pkg/config"
 	"gitea.com/gitea/act_runner/internal/pkg/metrics"
@@ -22,9 +21,15 @@ import (
 	log "github.com/sirupsen/logrus"
 )
 
+// TaskRunner abstracts task execution so the poller can be tested
+// without a real runner.
+type TaskRunner interface {
+	Run(ctx context.Context, task *runnerv1.Task) error
+}
+
 type Poller struct {
 	client client.Client
-	runner *run.Runner
+	runner TaskRunner
 	cfg    *config.Config
 
 	tasksVersion atomic.Int64 // tasksVersion used to store the version of the last task fetched from the Gitea.
@@ -37,20 +42,19 @@ type Poller struct {
 	done chan struct{}
 }
 
-// workerState holds per-goroutine polling state. Backoff counters are
-// per-worker so that with Capacity > 1, N workers each seeing one empty
-// response don't combine into a "consecutive N empty" reading on a shared
-// counter and trigger an unnecessarily long backoff.
+// workerState holds the single poller's backoff state. Consecutive empty or
+// error responses drive exponential backoff; a successful task fetch resets
+// both counters so the next poll fires immediately.
 type workerState struct {
 	consecutiveEmpty  int64
 	consecutiveErrors int64
-	// lastBackoff is the last interval reported to the PollBackoffSeconds gauge
-	// from this worker; used to suppress redundant no-op Set calls when the
-	// backoff plateaus (e.g. at FetchIntervalMax).
+	// lastBackoff is the last interval reported to the PollBackoffSeconds gauge;
+	// used to suppress redundant no-op Set calls when the backoff plateaus
+	// (e.g. at FetchIntervalMax).
 	lastBackoff time.Duration
 }
 
-func New(cfg *config.Config, client client.Client, runner *run.Runner) *Poller {
+func New(cfg *config.Config, client client.Client, runner TaskRunner) *Poller {
 	pollingCtx, shutdownPolling := context.WithCancel(context.Background())
 	jobsCtx, shutdownJobs := context.WithCancel(context.Background())
@@ -73,22 +77,57 @@ func New(cfg *config.Config, client client.Client, runner *run.Runner) *Poller {
 }
 
 func (p *Poller) Poll() {
+	sem := make(chan struct{}, p.cfg.Runner.Capacity)
 	wg := &sync.WaitGroup{}
-	for i := 0; i < p.cfg.Runner.Capacity; i++ {
-		wg.Add(1)
-		go p.poll(wg)
-	}
-	wg.Wait()
+	s := &workerState{}
 
-	// signal that we shutdown
-	close(p.done)
+	defer func() {
+		wg.Wait()
+		close(p.done)
+	}()
+
+	for {
+		select {
+		case sem <- struct{}{}:
+		case <-p.pollingCtx.Done():
+			return
+		}
+
+		task, ok := p.fetchTask(p.pollingCtx, s)
+		if !ok {
+			<-sem
+			if !p.waitBackoff(s) {
+				return
+			}
+			continue
+		}
+
+		s.resetBackoff()
+
+		wg.Add(1)
+		go func(t *runnerv1.Task) {
+			defer wg.Done()
+			defer func() { <-sem }()
+			p.runTaskWithRecover(p.jobsCtx, t)
+		}(task)
+	}
 }
 
 func (p *Poller) PollOnce() {
-	p.pollOnce(&workerState{})
-
 	// signal that we're done
-	close(p.done)
+	defer close(p.done)
+
+	s := &workerState{}
+	for {
+		task, ok := p.fetchTask(p.pollingCtx, s)
+		if !ok {
+			if !p.waitBackoff(s) {
+				return
+			}
+			continue
+		}
+		s.resetBackoff()
+		p.runTaskWithRecover(p.jobsCtx, task)
+		return
+	}
 }
 
 func (p *Poller) Shutdown(ctx context.Context) error {
@@ -101,13 +140,13 @@ func (p *Poller) Shutdown(ctx context.Context) error {
 	// our timeout for shutting down ran out
 	case <-ctx.Done():
-		// when both the timeout fires and the graceful shutdown
-		// completed succsfully, this branch of the select may
-		// fire. Do a non-blocking check here against the graceful
-		// shutdown status to avoid sending an error if we don't need to.
-		_, ok := <-p.done
-		if !ok {
+		// Both the timeout and the graceful shutdown may fire
+		// simultaneously. Do a non-blocking check to avoid forcing
+		// a shutdown when graceful already completed.
+		select {
+		case <-p.done:
 			return nil
+		default:
 		}
 
 		// force a shutdown of all running jobs
@@ -120,18 +159,27 @@ func (p *Poller) Shutdown(ctx context.Context) error {
 	}
 }
 
-func (p *Poller) poll(wg *sync.WaitGroup) {
-	defer wg.Done()
-	s := &workerState{}
-	for {
-		p.pollOnce(s)
+func (s *workerState) resetBackoff() {
+	s.consecutiveEmpty = 0
+	s.consecutiveErrors = 0
+	s.lastBackoff = 0
+}
 
-		select {
-		case <-p.pollingCtx.Done():
-			return
-		default:
-			continue
-		}
-	}
-}
+// waitBackoff sleeps for the current backoff interval (with jitter).
+// Returns false if the polling context was cancelled during the wait.
+func (p *Poller) waitBackoff(s *workerState) bool {
+	base := p.calculateInterval(s)
+	if base != s.lastBackoff {
+		metrics.PollBackoffSeconds.Set(base.Seconds())
+		s.lastBackoff = base
+	}
+	timer := time.NewTimer(addJitter(base))
+	select {
+	case <-timer.C:
+		return true
+	case <-p.pollingCtx.Done():
+		timer.Stop()
+		return false
+	}
+}
@@ -167,34 +215,6 @@ func addJitter(d time.Duration) time.Duration {
 	return d + time.Duration(jitter)
 }
 
-func (p *Poller) pollOnce(s *workerState) {
-	for {
-		task, ok := p.fetchTask(p.pollingCtx, s)
-		if !ok {
-			base := p.calculateInterval(s)
-			if base != s.lastBackoff {
-				metrics.PollBackoffSeconds.Set(base.Seconds())
-				s.lastBackoff = base
-			}
-			timer := time.NewTimer(addJitter(base))
-			select {
-			case <-timer.C:
-			case <-p.pollingCtx.Done():
-				timer.Stop()
-				return
-			}
-			continue
-		}
-
-		// Got a task — reset backoff counters for fast subsequent polling.
-		s.consecutiveEmpty = 0
-		s.consecutiveErrors = 0
-
-		p.runTaskWithRecover(p.jobsCtx, task)
-		return
-	}
-}
-
 func (p *Poller) runTaskWithRecover(ctx context.Context, task *runnerv1.Task) {
 	defer func() {
 		if r := recover(); r != nil {
```