Worker Pool with Channels

In this tutorial you’ll build a small but realistic concurrency pattern: a worker pool. A producer pushes jobs into a channel, a fixed number of workers process them in parallel, and the main task collects the results.

You’ll use:

  • Channel<T> — typed inter-task communication from std/channel
  • spawn — to launch concurrent workers
  • A nullable receive (T?) to detect channel closure cleanly

By the end you’ll have a single file you can run with both the bytecode VM and the AOT compiler.

You'll build a program that squares 20 numbers across 4 workers:

  • The main task creates two typed channels: one for jobs (Channel<int>) and one for results (Channel<int>).
  • It enqueues 20 jobs, then closes the jobs channel.
  • It spawns 4 workers, each draining the jobs channel until it’s closed.
  • Each worker sends its result to the results channel.
  • The main task collects all 20 results and prints the total.

Create a new file:

mkdir worker-pool && cd worker-pool
touch main.chuks

Workers will call this for each job. Keep it deliberately simple so you can swap in real work later (an HTTP call, a database write, image resizing, etc.):

function processJob(jobId: int): int {
    var result: int = jobId * jobId;
    return result;
}

A worker is a regular function that loops on jobs.receive(). Once the jobs channel is closed and drained, receive() returns null and the worker exits naturally:

function worker(workerId: int, jobs: Channel<int>, results: Channel<int>): void {
    for (
        var jobId: int? = jobs.receive();
        jobId != null;
        jobId = jobs.receive()
    ) {
        var jid: int = jobId;
        var result: int = processJob(jid);
        println("Worker " + string(workerId) + " processed job " + string(jid) + " -> " + string(result));
        results.send(result);
    }
}

Two things worth noticing:

  1. Typed channels. jobs: Channel<int> and results: Channel<int> are enforced at compile time. jobs.send("oops") would be a type error (see the sketch after this list).
  2. Nullable receive. jobs.receive() returns int?. The loop terminates the moment the channel is closed and empty — no special signaling, no sentinel values.
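
Both points in miniature (a sketch; the exact compiler diagnostics will differ):

jobs.send(42);                    // ok: int matches Channel<int>
jobs.send("oops");                // rejected at compile time: string is not int

var next: int? = jobs.receive();  // typed as int?, never a bare int
if (next != null) {
    var n: int = next;            // usable as int only after the null check
}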

Now wire it together: build the channels, enqueue jobs, spawn workers, and drain results.

function main() {
    const NUM_WORKERS: int = 4;
    const NUM_JOBS: int = 20;

    const jobs = new Channel<int>(NUM_JOBS);
    const results = new Channel<int>(NUM_JOBS);
    if (jobs == null || results == null) {
        println("failed to create channels");
        return;
    }

    // Producer: push 20 jobs, then close so workers know we're done.
    for (var j = 1; j <= NUM_JOBS; j++) {
        jobs.send(j);
    }
    jobs.close();

    // Fan out: spawn N workers reading from the same jobs channel.
    for (var w = 1; w <= NUM_WORKERS; w++) {
        spawn worker(w, jobs, results);
    }

    // Collect results from the shared results channel.
    var total: int = 0;
    for (var r = 0; r < NUM_JOBS; r++) {
        var res: int? = results.receive();
        if (res == null) {
            break;
        }
        total = total + res;
    }
    results.close();

    println("All jobs done. Total: " + string(total));
}

The full program:

import { Channel } from "std/channel";

function processJob(jobId: int): int {
    var result: int = jobId * jobId;
    return result;
}

function worker(workerId: int, jobs: Channel<int>, results: Channel<int>): void {
    for (
        var jobId: int? = jobs.receive();
        jobId != null;
        jobId = jobs.receive()
    ) {
        var jid: int = jobId;
        var result: int = processJob(jid);
        println("Worker " + string(workerId) + " processed job " + string(jid) + " -> " + string(result));
        results.send(result);
    }
}

function main() {
    const NUM_WORKERS: int = 4;
    const NUM_JOBS: int = 20;

    const jobs = new Channel<int>(NUM_JOBS);
    const results = new Channel<int>(NUM_JOBS);
    if (jobs == null || results == null) {
        println("failed to create channels");
        return;
    }

    // Producer: push 20 jobs, then close so workers know we're done.
    for (var j = 1; j <= NUM_JOBS; j++) {
        jobs.send(j);
    }
    jobs.close();

    // Fan out: spawn N workers reading from the same jobs channel.
    for (var w = 1; w <= NUM_WORKERS; w++) {
        spawn worker(w, jobs, results);
    }

    // Collect results from the shared results channel.
    var total: int = 0;
    for (var r = 0; r < NUM_JOBS; r++) {
        var res: int? = results.receive();
        if (res == null) {
            break;
        }
        total = total + res;
    }
    results.close();

    println("All jobs done. Total: " + string(total));
}

Run it with the bytecode VM:

chuks main.chuks

Sample output (interleaving will vary — that’s the whole point):

Worker 1 processed job 1 -> 1
Worker 2 processed job 2 -> 4
Worker 3 processed job 3 -> 9
Worker 4 processed job 4 -> 16
Worker 1 processed job 5 -> 25
...
Worker 4 processed job 20 -> 400
All jobs done. Total: 2870

The total 2870 is 1² + 2² + … + 20² — deterministic regardless of which worker handles which job.
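(Quick check: the closed form for the sum of the first n squares is n(n+1)(2n+1)/6, which gives 20 · 21 · 41 / 6 = 2870.)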

You can also compile to a native binary:

chuks build main.chuks -o worker-pool
./worker-pool

Here's the shape of the program:

┌─────────┐      jobs (Channel<int>)      ┌──────────┐
│         │ ────────────────────────────▶ │ worker 1 │
│  main   │                               │ worker 2 │ ─┐
│         │                               │ worker 3 │  │ results (Channel<int>)
│         │ ◀──────────────────────────── │ worker 4 │  │
└─────────┘                               └──────────┘ ─┘

  • Fan-out. Multiple workers read from the same jobs channel. The runtime guarantees each value is delivered to exactly one receiver, so there’s no race over which worker takes which job.
  • Closure as completion signal. When the producer calls jobs.close(), every worker’s for loop will eventually exit. No counters, no done-channels, no cancellation tokens — just null from receive().
  • Buffered channels. Both channels are buffered to NUM_JOBS so the producer can enqueue everything up front without blocking, and so workers can produce results while main is still draining (a smaller-buffer variation is sketched after this list).
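
To see why the buffer size matters, here's a hypothetical variation with a smaller buffer. send() blocks once the buffer is full, so in this version the workers must be spawned before the producer loop, or main would deadlock waiting for a receiver (a sketch reusing the names from main above; channel-creation null checks omitted):

const jobs = new Channel<int>(4);   // room for only 4 queued jobs
const results = new Channel<int>(NUM_JOBS);

// Consumers first: with a small buffer, jobs.send(j) blocks whenever
// 4 jobs are already queued and no worker is reading.
for (var w = 1; w <= NUM_WORKERS; w++) {
    spawn worker(w, jobs, results);
}

for (var j = 1; j <= NUM_JOBS; j++) {
    jobs.send(j);                   // may block until a worker frees a slot
}
jobs.close();

This is the backpressure idea from the extensions below: the producer's pace is capped by how fast the pool drains the queue.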

This same shape — typed channels, fan-out, close-as-signal — scales to a lot of real workloads:

  • HTTP scrapers / API clients — fan out URLs to N HTTP workers.
  • Image / video processing — bounded parallelism over a folder of files.
  • Database ETL — read rows from a source, transform in workers, write to a sink.
  • Background queues — pop jobs from Redis, process across a pool, ack on success.

In every case Channel<T> keeps the boundary between stages type-safe at compile time while spawn and close() keep the orchestration small and explicit.
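
For instance, the scraper case is the same worker with the payload swapped. In this sketch, fetchStatus is a hypothetical stand-in for whatever HTTP client you end up using; everything else is the pattern you just built:

// Hypothetical helper, not part of std:
// function fetchStatus(url: string): int { ... performs a GET, returns the status code ... }

function scrapeWorker(workerId: int, urls: Channel<string>, statuses: Channel<int>): void {
    for (
        var url: string? = urls.receive();
        url != null;
        url = urls.receive()
    ) {
        var u: string = url;
        statuses.send(fetchStatus(u));   // hypothetical HTTP call
    }
}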

Some natural extensions:

  • Errors per job. Make results a Channel<JobResult> where JobResult is a dataType with id, value, and error fields.
  • Backpressure. Use a smaller buffer (e.g. new Channel<int>(4)) so producers slow down when workers fall behind.
  • Cancellation. Pass a second done: Channel<bool> and have workers check done.tryReceive() between jobs (sketched below).
  • Result ordering. Wrap results with their jobId so you can reorder them after the fact.
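
As one concrete example, the cancellation variant might look like this (a sketch; it assumes tryReceive() returns null immediately when nothing is buffered, so check the Channel reference for the exact signature):

function cancellableWorker(workerId: int, jobs: Channel<int>, results: Channel<int>, done: Channel<bool>): void {
    for (
        var jobId: int? = jobs.receive();
        jobId != null;
        jobId = jobs.receive()
    ) {
        // Poll the done channel between jobs without blocking.
        if (done.tryReceive() != null) {
            return; // cancellation requested: stop taking new jobs
        }
        var jid: int = jobId;
        results.send(processJob(jid));
    }
}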

For more on the API surface, see the full Channel reference.