Worker Pool with Channels
In this tutorial you’ll build a small but realistic concurrency pattern: a worker pool. A producer pushes jobs into a channel, a fixed number of workers process them in parallel, and the main task collects the results.
You’ll use:
- `Channel<T>` — typed inter-task communication from `std/channel`
- `spawn` — to launch concurrent workers
- A nullable receive (`T?`) to detect channel closure cleanly
By the end you’ll have a single file you can run with both the bytecode VM and the AOT compiler.
What you’ll build
A program that squares 20 numbers across 4 workers:
- The main task creates two typed channels: one for jobs (`Channel<int>`) and one for results (`Channel<int>`).
- It enqueues 20 jobs, then closes the jobs channel.
- It spawns 4 workers, each draining the jobs channel until it’s closed.
- Each worker sends its result to the results channel.
- The main task collects all 20 results and prints the total.
Prerequisites
- Chuks installed — see Installation.
- A working knowledge of classes and concurrency basics.
- Familiarity with the `Channel` API.
Step 1 — Project setup
Create a new file:

```sh
mkdir worker-pool && cd worker-pool
touch main.chuks
```

Step 2 — The job function
Workers will call this for each job. Keep it deliberately simple so you can swap in real work later (an HTTP call, a database write, image resizing, etc.):

```chuks
function processJob(jobId: int): int {
  var result: int = jobId * jobId;
  return result;
}
```

Step 3 — The worker
A worker is a regular function that loops on `jobs.receive()` until it returns `null`. Once the jobs channel is closed and drained, `receive()` returns `null` and the worker exits naturally:

```chuks
function worker(workerId: int, jobs: Channel<int>, results: Channel<int>): void {
  for (
    var jobId: int? = jobs.receive();
    jobId != null;
    jobId = jobs.receive()
  ) {
    var jid: int = jobId;
    var result: int = processJob(jid);
    println("Worker " + string(workerId) + " processed job " + string(jid) + " -> " + string(result));
    results.send(result);
  }
}
```

Two things worth noticing:
- Typed channels. `jobs: Channel<int>` and `results: Channel<int>` are enforced at compile time; `jobs.send("oops")` would be a type error.
- Nullable receive. `jobs.receive()` returns `int?`. The loop terminates the moment the channel is closed and empty — no special signaling, no sentinel values.
Step 4 — The main task
Now wire it together: build the channels, enqueue jobs, spawn workers, and drain results.
```chuks
function main() {
  const NUM_WORKERS: int = 4;
  const NUM_JOBS: int = 20;

  const jobs = new Channel<int>(NUM_JOBS);
  const results = new Channel<int>(NUM_JOBS);
  if (jobs == null || results == null) {
    println("failed to create channels");
    return;
  }

  // Producer: push 20 jobs, then close so workers know we're done.
  for (var j = 1; j <= NUM_JOBS; j++) {
    jobs.send(j);
  }
  jobs.close();

  // Fan out: spawn N workers reading from the same jobs channel.
  for (var w = 1; w <= NUM_WORKERS; w++) {
    spawn worker(w, jobs, results);
  }

  // Collect results from the shared results channel.
  var total: int = 0;
  for (var r = 0; r < NUM_JOBS; r++) {
    var res: int? = results.receive();
    if (res == null) { break; }
    total = total + res;
  }

  results.close();
  println("All jobs done. Total: " + string(total));
}
```

Step 5 — Putting it all together
The full program:
```chuks
import { Channel } from "std/channel";

function processJob(jobId: int): int {
  var result: int = jobId * jobId;
  return result;
}

function worker(workerId: int, jobs: Channel<int>, results: Channel<int>): void {
  for (
    var jobId: int? = jobs.receive();
    jobId != null;
    jobId = jobs.receive()
  ) {
    var jid: int = jobId;
    var result: int = processJob(jid);
    println("Worker " + string(workerId) + " processed job " + string(jid) + " -> " + string(result));
    results.send(result);
  }
}

function main() {
  const NUM_WORKERS: int = 4;
  const NUM_JOBS: int = 20;

  const jobs = new Channel<int>(NUM_JOBS);
  const results = new Channel<int>(NUM_JOBS);
  if (jobs == null || results == null) {
    println("failed to create channels");
    return;
  }

  for (var j = 1; j <= NUM_JOBS; j++) {
    jobs.send(j);
  }
  jobs.close();

  for (var w = 1; w <= NUM_WORKERS; w++) {
    spawn worker(w, jobs, results);
  }

  var total: int = 0;
  for (var r = 0; r < NUM_JOBS; r++) {
    var res: int? = results.receive();
    if (res == null) { break; }
    total = total + res;
  }

  results.close();
  println("All jobs done. Total: " + string(total));
}
```

Step 6 — Run it
```sh
chuks main.chuks
```

Sample output (interleaving will vary — that’s the whole point):

```
Worker 1 processed job 1 -> 1
Worker 2 processed job 2 -> 4
Worker 3 processed job 3 -> 9
Worker 4 processed job 4 -> 16
Worker 1 processed job 5 -> 25
...
Worker 4 processed job 20 -> 400
All jobs done. Total: 2870
```

The total 2870 is 1² + 2² + … + 20² — deterministic regardless of which worker handles which job. As a quick check, the closed form n(n+1)(2n+1)/6 gives 20 · 21 · 41 / 6 = 2870.
You can also compile to a native binary:
```sh
chuks build main.chuks -o worker-pool
./worker-pool
```

How it works
```
┌─────────┐    jobs (Channel<int>)     ┌──────────┐
│         │ ─────────────────────────▶ │ worker 1 │
│  main   │                            │ worker 2 │
│         │                            │ worker 3 │
│         │ ◀───────────────────────── │ worker 4 │
└─────────┘   results (Channel<int>)   └──────────┘
```

- Fan-out. Multiple workers read from the same `jobs` channel. The runtime guarantees each value is delivered to exactly one receiver, so there’s no race over which worker takes which job.
- Closure as completion signal. When the producer calls `jobs.close()`, every worker’s `for` loop will eventually exit. No counters, no done-channels, no cancellation tokens — just `null` from `receive()`.
- Buffered channels. Both channels are buffered to `NUM_JOBS` so the producer can enqueue everything up front without blocking, and so workers can produce results while `main` is still draining.
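That last point is worth seeing in code. If the jobs buffer were smaller than the number of jobs, the inline producer loop in `main` would block once the buffer filled, before any workers were spawned. The fix is to move production into its own spawned task. This is a sketch only — the `produce` helper is hypothetical, but it reuses exactly the `send`/`close` calls from Step 4:

```chuks
// Hypothetical variant: a small buffer forces the producer off the main task.
function produce(jobs: Channel<int>, count: int): void {
  for (var j = 1; j <= count; j++) {
    jobs.send(j);   // blocks whenever the small buffer is full
  }
  jobs.close();     // still the completion signal for the workers
}

// In main, instead of the inline producer loop:
//   const jobs = new Channel<int>(4);
//   spawn produce(jobs, NUM_JOBS);
```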
Why this pattern matters
This same shape — typed channels, fan-out, close-as-signal — scales to a lot of real workloads:
- HTTP scrapers / API clients — fan out URLs to N HTTP workers.
- Image / video processing — bounded parallelism over a folder of files.
- Database ETL — read rows from a source, transform in workers, write to a sink.
- Background queues — pop jobs from Redis, process across a pool, ack on success.
In every case, `Channel<T>` keeps the boundary between stages type-safe at compile time, while `spawn` and `close()` keep the orchestration small and explicit.
Going further
Some natural extensions:
- Errors per job. Make results a `Channel<JobResult>` where `JobResult` is a `dataType` with `id`, `value`, and `error` fields.
- Backpressure. Use a smaller buffer (e.g. `new Channel<int>(4)`) so producers slow down when workers fall behind.
- Cancellation. Pass a second `done: Channel<bool>` and have workers check `done.tryReceive()` between jobs.
- Result ordering. Wrap results with their `jobId` so you can reorder them after the fact.
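The first and last extensions can be sketched together. This is a hedged sketch only — the exact `dataType` declaration and construction syntax is assumed, not confirmed by this tutorial; the field names come from the bullets above:

```chuks
// Hypothetical result type carrying the job id and a per-job error.
dataType JobResult {
  id: int;
  value: int;
  error: string?;   // null on success
}

// Worker variant sending JobResult instead of a bare int.
function worker(workerId: int, jobs: Channel<int>, results: Channel<JobResult>): void {
  for (
    var jobId: int? = jobs.receive();
    jobId != null;
    jobId = jobs.receive()
  ) {
    var jid: int = jobId;
    results.send(JobResult { id: jid, value: processJob(jid), error: null });
  }
}
```

With the `id` field in hand, the collector can sort or slot results by job id after draining the channel, and a non-null `error` marks a failed job without tearing down the pool.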
For more on the API surface, see the full `Channel` API reference.