Worker Pools in Go: The Minimal Pattern I Use for High-Efficiency Concurrency


Ever felt that surge of panic when you have 10,000 tasks to run?
Maybe you’re processing images, hitting thousands of API endpoints, or handling data streams from an IoT device (a regular task in my AI/IoT projects).
Your first instinct in Go is: “Easy! I’ll just spin up a goroutine for each!”

// The "Oh No" Pattern
for _, task := range tasks {
    go processTask(task)
}

…And just like that, you’ve accidentally DDoSed your own database, hit every rate limit imaginable, or maxed out your CPU. I’ve been there. The system grinds to a halt.

What we need is not an uncontrolled flood; we need a disciplined factory. We need a Worker Pool.

And in Go, you can build a robust, minimal pool using just three core components: a channel for the jobs, a sync.WaitGroup for control, and a for…range loop to run the workers.

The “Factory” Components

Instead of hiring a new “worker” (goroutine) for every single task, we’ll hire a fixed crew of 5-10 workers and give them a conveyor belt (the channel) of tasks.

  • The “To-Do List” (Jobs Channel): This is our conveyor belt. Tasks get put on here. jobs := make(chan int, 1000)
  • The “Workers” (The Goroutines): This is our crew. We hire a fixed number, say, 5. Each worker is a goroutine that just waits for tasks to appear on the conveyor belt. for job := range jobs { … }
  • The “Shift Manager” (The WaitGroup): How do we know when all the tasks are done and we can “turn off the lights”? sync.WaitGroup. This is our shift manager. It knows how many workers we have, and it waits until every single one has clocked out. var wg sync.WaitGroup

The Minimal Code (Ready-to-Paste)

Here is the full, ready-to-run example. We’ll set up 5 workers to process 1,000 “heavy” jobs.

package main

import (
    "fmt"
    "sync"
    "time"
)

// 'worker' is our goroutine.
// It receives jobs from the jobs channel and signals
// the WaitGroup when it's finished.
func worker(id int, wg *sync.WaitGroup, jobs <-chan int) {
    // When the worker exits, it tells the WaitGroup it's done.
    defer wg.Done()

    // The worker waits for jobs to come in over the jobs channel.
    // As soon as the channel is closed and drained, this loop terminates.
    for job := range jobs {
        fmt.Printf("Worker %d started job %d\n", id, job)
        // Simulate a heavy task
        time.Sleep(100 * time.Millisecond)
        fmt.Printf("--- Worker %d finished job %d\n", id, job)
    }
}
func main() {
    const numJobs = 1000
    const numWorkers = 5

    // Create the jobs channel.
    jobs := make(chan int, numJobs)

    // Create the WaitGroup.
    var wg sync.WaitGroup

    fmt.Printf("Hiring %d workers...\n", numWorkers)

    // 1. HIRE THE WORKERS
    // Add a counter to the WaitGroup for each worker.
    wg.Add(numWorkers)

    for w := 1; w <= numWorkers; w++ {
        // Spin up the workers. They will immediately block,
        // waiting for the `jobs` channel to have something.
        go worker(w, &wg, jobs)
    }

    fmt.Printf("Sending %d jobs to the assembly line...\n", numJobs)

    // 2. SEND THE JOBS
    // Load all 1,000 jobs into the channel.
    for j := 1; j <= numJobs; j++ {
        jobs <- j
    }

    // 3. CLOSE THE CHANNEL
    // This is the critical step! We tell the workers: "No more jobs are coming."
    // Without this, the workers would wait forever (`for range jobs`)
    // and the program would deadlock.
    close(jobs)

    fmt.Println("All jobs sent. Waiting for workers to finish...")

    // 4. WAIT FOR COMPLETION
    // `wg.Wait()` blocks the main goroutine until the WaitGroup counter
    // (which is decremented by `wg.Done()`) hits 0.
    wg.Wait()

    fmt.Println("All workers finished. Factory is closed.")
}

The Magic: Automatic Backpressure

Here’s the most elegant part: Backpressure.

What if the workers are slow, but the boss (main) is fast?

  • If we used an unbuffered channel (make(chan int)), main would block every time it tried to add a job (jobs <- j) until a worker was free to take it.
  • Because we used a buffered channel (make(chan int, numJobs)), main can quickly “dump” all 1,000 jobs into the buffer and move on.

Either way, concurrency stays bounded: only 5 workers are ever active, no matter how many jobs pile up. Note that because our buffer holds all 1,000 jobs, main itself never blocks; if you also want to cap memory, shrink the buffer (say, make(chan int, 2*numWorkers)), and main will block whenever the workers fall behind. That blocking is the backpressure doing its job. We are in control of our concurrency.

Found this minimal pattern useful? I regularly share best practices, technical deep dives, and novel concurrency methods based on my work in Go and AI/IoT systems.

Follow me for more insights into high-efficiency Go development, distributed systems architecture, and advancing the engineering leadership field.


