
Concurrency

Chuks has first-class concurrency built into the language. There are two complementary primitives:

| Primitive | Purpose | Runs concurrently? |
|---|---|---|
| async / await | Non-blocking I/O-bound work | Yes (goroutine) |
| spawn | CPU-bound parallel work | Yes (goroutine) |

Both async and spawn create real OS-level goroutines under the hood. The difference is when they’re used and what they express:

  • async/await — Mark a function as asynchronous. Calling it always runs it in a concurrent goroutine and returns a Task<T>. Use await to get the result. Designed for I/O-bound operations like HTTP requests, database queries, and file reads.
  • spawn — Run any function (async or not) in a parallel goroutine. Returns a Task<T>. Designed for CPU-bound operations like number crunching, data processing, and parallel algorithms.

An async function always runs in a separate goroutine when called and returns a Task<T>:

async function fetchUser(id: int): Task<string> {
  // This runs concurrently in its own goroutine
  return "User " + string(id)
}

Use await to pause the current execution until the async function completes:

// Calling an async function returns a Task<T>
// await blocks until the result is ready
var user: string = await fetchUser(42)
println(user) // "User 42"

When you await each call before starting the next, execution is sequential:

async function fetchName(): Task<string> {
  return "Alice"
}

async function fetchAge(): Task<int> {
  return 30
}
// Sequential — fetchAge waits for fetchName to finish
var name: string = await fetchName()
var age: int = await fetchAge()
println(name + " is " + string(age))

To run tasks concurrently, start them all first, then await their results:

import { http } from "std/http"
async function fetchData(url: string): Task<string> {
  var resp = await http.get(url)
  return resp.body
}
// Start both requests concurrently
var t1: Task<string> = fetchData("https://api.example.com/users")
var t2: Task<string> = fetchData("https://api.example.com/posts")
// Both are running — now await the results
var users: string = await t1
var posts: string = await t2

Async functions can call other async functions:

async function getUser(id: int): Task<string> {
  return "User-" + string(id)
}

async function getFullProfile(id: int): Task<string> {
  var name: string = await getUser(id)
  return name + " (full profile)"
}
var profile: string = await getFullProfile(1)
println(profile) // "User-1 (full profile)"

Errors in async functions propagate to the caller through try/catch:

async function riskyOperation(): Task<string> {
  throw "something went wrong"
  return "ok" // unreachable: the throw propagates to the caller
}

try {
  var result: string = await riskyOperation()
} catch (e) {
  println("Caught: " + string(e))
}

spawn takes any function — async or regular — and runs it in a parallel goroutine:

function computeSum(n: int): int {
  var sum: int = 0
  for (var i: int = 0; i < n; i = i + 1) {
    sum = sum + i
  }
  return sum
}
// Run in parallel — even though computeSum is NOT async
var task: Task<int> = spawn computeSum(1000000)
var result: int = await task
println(result)

Spawn multiple workers and collect results:

function countPrimes(start: int, end: int): int {
  var count: int = 0
  var i: int = start
  while (i < end) {
    if (isPrime(i)) {
      count = count + 1
    }
    i = i + 1
  }
  return count
}
// Fan-out: 4 parallel workers
var t1: Task<int> = spawn countPrimes(0, 250000)
var t2: Task<int> = spawn countPrimes(250000, 500000)
var t3: Task<int> = spawn countPrimes(500000, 750000)
var t4: Task<int> = spawn countPrimes(750000, 1000000)
// Collect results
var total: int = await t1 + await t2 + await t3 + await t4
println("Total primes: " + string(total))

If you don’t need the result, just spawn without awaiting:

function logEvent(msg: string): void {
  println("LOG: " + msg)
}
spawn logEvent("user signed in")
println("continues immediately")

| Scenario | Use | Why |
|---|---|---|
| HTTP request, DB query, file read | async/await | The function is I/O-bound and naturally asynchronous |
| Prime counting, data crunching | spawn | The function is CPU-bound and needs a parallel thread |
| Regular function, run in parallel | spawn | Only spawn can make a non-async function concurrent |
| Async function, run in parallel | either | Both work — spawn on an async function is redundant |

Key insight: For async functions, await asyncFunc() and await spawn asyncFunc() are functionally identical — both create a goroutine. The unique value of spawn is that it works on regular (non-async) functions too, enabling parallel computation without requiring the function to be marked async.
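That equivalence can be sketched with a hypothetical async function ping (any async function behaves the same way):

async function ping(): Task<string> {
  return "pong"
}
// Both lines create a goroutine and yield the same result;
// spawn adds nothing for a function that is already async
var a: string = await ping()
var b: string = await spawn ping()
println(a + " " + b) // "pong pong"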

async/await → I/O-bound (waiting for external resources)
spawn → CPU-bound (parallel computation)

Both are concurrent. The distinction is about intent: async signals “this function does I/O and should be non-blocking”, while spawn signals “run this computation in parallel for performance”.

Class methods can be marked async just like top-level functions. This is useful for services, repositories, and any class that performs I/O or background work.

class Calculator {
  public async add(a: int, b: int): Task<int> {
    return a + b
  }
  public async multiply(a: int, b: int): Task<int> {
    return a * b
  }
}

Call async methods with await directly, or spawn them for parallel execution:

var calc = new Calculator()
// Sequential
var sum: int = await calc.add(3, 4)
println(sum) // 7
// Parallel
var t1: Task<int> = spawn calc.add(10, 20)
var t2: Task<int> = spawn calc.multiply(5, 6)
var r1: int = await t1
var r2: int = await t2
println(r1) // 30
println(r2) // 30

Async methods work with all access modifiers (public, protected, private), as well as with static and override:

abstract class DataService {
  abstract public async fetch(id: int): Task<any>
}

class UserService extends DataService {
  override public async fetch(id: int): Task<any> {
    // fetch user from database...
    return { "id": id, "name": "Alice" }
  }
}

When spawning background tasks, you often need them to communicate with the main thread or with each other. Chuks provides channels — typed, synchronized pipes for sending values between concurrent tasks.

Think of a channel as a mailbox: one task drops a message in, another task picks it up. The channel guarantees that messages are delivered safely, without race conditions.

import { channel } from "std/channel"
// Create a buffered channel that can hold up to 5 messages
var ch: Channel = channel.new(5)

The argument to channel.new is the buffer size — how many messages the channel can hold before a sender must wait for a receiver. A buffer of 0 (the default) means every send blocks until another task calls receive, and vice versa.

| Buffer Size | Behavior |
|---|---|
| 0 | Unbuffered — sender blocks until receiver is ready, and receiver blocks until sender sends. This gives tight synchronization. |
| > 0 | Buffered — sender can push up to N messages without blocking. Once the buffer is full, the next send blocks until a receive frees a slot. |

import { channel } from "std/channel"
var ch: Channel = channel.new(1)
// Send a value into the channel
channel.send(ch, "hello from channel")
// Receive the value on the other end
var msg: any = channel.receive(ch)
println(msg) // "hello from channel"
// Always close channels when done
channel.close(ch)

channel.send(ch, value) blocks if the buffer is full. channel.receive(ch) blocks if the buffer is empty.
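Because an unbuffered channel has no slack, a receiver must be running concurrently before a send can complete. A minimal sketch, assuming the channel API above (reply is a hypothetical helper):

import { channel } from "std/channel"
var ch: Channel = channel.new(0) // unbuffered

function reply(ch: Channel): void {
  // Blocks until main sends a value
  var msg: any = channel.receive(ch)
  println("got: " + string(msg))
}

// Start the receiver first; sending on an unbuffered channel
// with no receiver would block forever
spawn reply(ch)
channel.send(ch, "ping") // completes once reply has received
channel.close(ch)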

Sometimes you don’t want to block. channel.tryReceive and channel.trySend return immediately and report whether the operation succeeded.

import { channel } from "std/channel"
var ch: Channel = channel.new(1)
// tryReceive on an empty channel — does not block
var result: any = channel.tryReceive(ch)
println(result.ok) // false — nothing was available
// trySend on a non-full channel — succeeds immediately
var sent: any = channel.trySend(ch, 42)
println(sent) // true
// trySend on a full channel (buffer=1, already holding 42)
var sent2: any = channel.trySend(ch, 99)
println(sent2) // false — buffer is full, would block
// tryReceive now gets the value
var result2: any = channel.tryReceive(ch)
println(result2.value) // 42
println(result2.ok) // true
channel.close(ch)

| Function | Returns | When to use |
|---|---|---|
| channel.tryReceive(ch) | { value: any, ok: bool } | Check for a message without blocking (e.g., polling in a loop) |
| channel.trySend(ch, value) | bool | Send if possible, skip if the buffer is full (e.g., dropping non-critical events) |
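The polling use case can be sketched as a loop over channel.tryReceive (produceLater is a hypothetical helper; real code would do useful work between checks instead of spinning):

import { channel } from "std/channel"
var ch: Channel = channel.new(1)

function produceLater(ch: Channel): void {
  channel.send(ch, "ready")
}

spawn produceLater(ch)
// Poll without blocking until a value arrives
var res: any = channel.tryReceive(ch)
while (res.ok == false) {
  res = channel.tryReceive(ch)
}
println(string(res.value)) // "ready"
channel.close(ch)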

Producer/consumer is the most common channel pattern: one task produces data, another consumes it, and the channel acts as a queue between them.

import { channel } from "std/channel"
var dataCh: Channel = channel.new(5)

function produce(ch: Channel, count: int): void {
  for (var i: int = 0; i < count; i++) {
    channel.send(ch, i * 10)
  }
}

// Producer fills the channel
produce(dataCh, 5)
// Consumer reads all values
var total: int = 0
for (var i: int = 0; i < 5; i++) {
  var val: any = channel.receive(dataCh)
  total = total + int(val)
}
println("total: " + string(total)) // "total: 100"
channel.close(dataCh)

With spawn, the producer and consumer can run in parallel:

import { channel } from "std/channel"
var ch: Channel = channel.new(10)

function producer(ch: Channel): void {
  for (var i: int = 0; i < 10; i++) {
    channel.send(ch, i)
  }
}

// Run producer in background
spawn producer(ch)
// Consume values as they arrive
for (var i: int = 0; i < 10; i++) {
  var val: any = channel.receive(ch)
  println("received: " + string(val))
}
channel.close(ch)

Use Case: Spawn + Channel (Background Work)


When you need the result of a background computation but want it delivered through a channel instead of a Task:

import { channel } from "std/channel"
var resultCh: Channel = channel.new(1)

function heavyComputation(n: int): int {
  var sum: int = 0
  for (var i: int = 0; i < n; i++) {
    sum = sum + i
  }
  return sum
}

function computeAndSend(ch: Channel, n: int): void {
  var result: int = heavyComputation(n)
  channel.send(ch, result)
}

// Run in background, get result through channel
spawn computeAndSend(resultCh, 100)
var result: any = channel.receive(resultCh)
println("result: " + string(result)) // "result: 4950"
channel.close(resultCh)

Use a channel as a simple “done” signal — the value doesn’t matter, just the act of sending it.

import { channel } from "std/channel"
var doneCh: Channel = channel.new(1)

function backgroundWork(done: Channel): void {
  println("background: started")
  // ... do work ...
  println("background: finished")
  channel.send(done, true)
}

spawn backgroundWork(doneCh)
// Block until background work signals completion
var signal: any = channel.receive(doneCh)
println("main: background done=" + string(signal))
channel.close(doneCh)

Output:

background: started
background: finished
main: background done=true

Buffered channels act as bounded, thread-safe queues. Send multiple values, read them back in FIFO order:

import { channel } from "std/channel"
var queue: Channel = channel.new(3)
channel.send(queue, "first")
channel.send(queue, "second")
channel.send(queue, "third")
println(channel.receive(queue)) // "first"
println(channel.receive(queue)) // "second"
println(channel.receive(queue)) // "third"
channel.close(queue)

| Function | Description |
|---|---|
| channel.new(size?) | Create a channel. size sets the buffer (default 0). |
| channel.send(ch, value) | Send a value. Blocks if the buffer is full. |
| channel.receive(ch) | Receive a value. Blocks if the buffer is empty. |
| channel.close(ch) | Close the channel. No more sends allowed. |
| channel.tryReceive(ch) | Non-blocking receive. Returns { value, ok }. |
| channel.trySend(ch, value) | Non-blocking send. Returns true if sent, false otherwise. |

When to reach for a Task versus a channel:

| Scenario | Use | Why |
|---|---|---|
| Get the return value of a background function | await spawn fn() | Simpler — just await the Task |
| Stream multiple values from a background task | Channel | Tasks return one value; channels carry many |
| Coordinate multiple tasks (producer/consumer) | Channel | Channels decouple producers from consumers |
| Signal completion (“done”) | Channel | A lightweight alternative to awaiting a Task |
| Polling without blocking | channel.tryReceive | Non-blocking check for available data |

When you call an async function or use spawn, it returns a Task<T> object. The Task API lets you inspect and control tasks.

| Property | Type | Description |
|---|---|---|
| state | string | Current state: "pending", "done", "cancelled", or "failed" |
| completed | bool | Whether the task has finished (successfully or not) |
| value | T | The resolved value (only available after completion) |
| context | Context | The task’s execution context |

| Method | Return Type | Description |
|---|---|---|
| cancel() | void | Cancel the task and all its child tasks |
| timeout(ms) | void | Set a timeout in milliseconds |
| isCancelled() | bool | Check if the task has been cancelled |
| isCompleted() | bool | Check if the task has completed |
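As a sketch of inspecting a finished task through these members (quickJob is a hypothetical async function; state strings as listed above):

async function quickJob(): Task<int> {
  return 7
}

var t: Task<int> = spawn quickJob()
var v: int = await t
println(string(v)) // 7
println(t.state) // "done" after a successful await
println(string(t.isCompleted())) // true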

The static method Task.current() returns the currently executing task from within a spawned function. Returns null when called outside a spawned context.

async function worker(): Task<string> {
  var t: any = Task.current()
  if (t != null) {
    return "running inside a task"
  }
  return "no task context"
}

var result: string = await spawn worker()
println(result) // "running inside a task"

Cancel a task and its children with cancel():

async function longWork(): Task<int> {
  var sum: int = 0
  for (var i: int = 0; i < 1000000; i = i + 1) {
    sum = sum + i
  }
  return sum
}

var task: Task<int> = spawn longWork()
task.cancel()
println(task.state) // "cancelled"

Set a timeout to cancel a task automatically after a deadline:

async function riskyOp(): Task<string> {
  // Long running operation...
  return "done"
}

var task: Task<string> = spawn riskyOp()
task.timeout(5000) // Cancel automatically after 5 seconds
var result: string = await task

Every spawned task runs within a Context. Contexts form a tree: when a parent task is cancelled, all child tasks are automatically cancelled too. This is the foundation of structured concurrency in Chuks.

Root Context
├── Task A (Context A)
│ ├── Task A1 (Context A1)
│ └── Task A2 (Context A2)
└── Task B (Context B)

Cancelling Task A automatically cancels Task A1 and Task A2, but Task B is unaffected.

| Method | Return Type | Description |
|---|---|---|
| withValue(k, v) | void | Store a key-value pair in the context |
| value(key) | any | Retrieve a value by key (walks up the parent chain) |
| cancel() | void | Cancel this context and all children |
| isCancelled() | bool | Check if the context has been cancelled |
| deadline() | string | Get the deadline as a string (empty if none set) |
| setTimeout(ms) | void | Auto-cancel the context after ms milliseconds |
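A sketch of deadline handling through a task's context, assuming the methods above (boundedWork is a hypothetical function that checks for cancellation between slices of work):

async function boundedWork(): Task<string> {
  var t: any = Task.current()
  if (t != null) {
    var ctx: any = t.context
    ctx.setTimeout(100) // auto-cancel this context after 100 ms
    while (ctx.isCancelled() == false) {
      // ... do one slice of work ...
    }
    return "stopped at deadline"
  }
  return "no context"
}

var msg: string = await spawn boundedWork()
println(msg)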

Use Task.current() to get the current task, then access its .context property:

async function handler(): Task<string> {
  var t: any = Task.current()
  if (t != null) {
    var ctx: any = t.context
    ctx.withValue("requestId", "abc-123")
    // Child tasks inherit parent context values
    var child: Task<string> = spawn childHandler()
    return await child
  }
  return "no context"
}

async function childHandler(): Task<string> {
  var t: any = Task.current()
  if (t != null) {
    var ctx: any = t.context
    var reqId: any = ctx.value("requestId")
    // reqId is "abc-123" — inherited from parent
    return "processed"
  }
  return "no context"
}

Because contexts form a tree, cancelling a parent task cancels its children automatically:

async function parent(): Task<string> {
  var child1: Task<int> = spawn work(1)
  var child2: Task<int> = spawn work(2)
  // If parent is cancelled, child1 and child2 are
  // automatically cancelled too
  var r1: int = await child1
  var r2: int = await child2
  return "done"
}

async function work(id: int): Task<int> {
  return id * 10
}

| Pattern | Use When |
|---|---|
| await fn() | You need the result before continuing |
| spawn fn() | You want to run CPU-bound work in parallel |
| task.cancel() | You want to stop a task and its children |
| task.timeout(ms) | You want automatic cancellation after a deadline |
| Task.current() | You need to access the current task’s context |
| ctx.withValue(k, v) | You want to pass data down the task tree |
| ctx.value(k) | You want to read data from parent contexts |

For parallel computing benchmarks comparing Chuks against Go, Java, Bun, Node.js, and Python, see the Parallel Computing guide.