Functions & Triggers
Deploy and invoke server-side functions via QuickJS, Wasmtime, Node, and Deno engines, with event-driven triggers for CDC, auth, storage, cron, and custom events.
Overview
AltBase provides two tightly integrated systems for serverless compute and event-driven automation:
- Functions — deploy and invoke server-side JavaScript/TypeScript functions with multiple execution engines: QuickJS (with `atlas.*` host API access for database, HTTP, KV, and storage operations), Wasmtime (strong Wasm isolation with fuel metering for computation-only workloads), plus Node and Deno runtimes.
- Event Triggers — bind events (database changes, auth events, storage events, cron schedules, custom events) to functions or workflows. Triggers evaluate conditions via JSON-based rules and dispatch to configured targets automatically.
Functions live in the `atlas-functions` crate. Triggers live in the `atlas-triggers` crate. Together they provide a complete serverless compute layer.
Key Concepts
QuickJS Engine
QuickJS provides a lightweight JavaScript runtime with full access to `atlas.*` host APIs:
- `atlas.db.query(sql, params)` — runs SQL against the tenant schema
- `atlas.fetch(url, options)` — makes HTTP requests (with per-tier fetch call limits)
- `atlas.kv.get(key)` / `atlas.kv.set(key, value)` — Redis-backed key-value operations
QuickJS is ideal for functions that need to read/write data, call external APIs, or use the KV store. Startup time is sub-10ms.
Wasmtime Engine
Wasmtime provides strong sandboxed isolation for computation-only workloads. JavaScript source is compiled to WebAssembly via Javy (a JS-to-Wasm compiler). The Wasm module runs with:
- Fuel metering — deterministic resource tracking (no wall-clock surprises)
- Memory caps — hard limits on heap allocation
- No host API access — pure computation, no side effects
Wasmtime is ideal for data transformation, hashing, validation, and any function that does not need database or network access.
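A computation-only function for the Wasm runtime touches no database, network, or KV state, so it compiles cleanly through Javy. The sketch below is illustrative, not taken from AltBase source; the input fields (`email`, `amount`) are hypothetical:

```javascript
// A validation function suited to the Wasm runtime: pure computation,
// no atlas.* host API access required. Field names are assumptions
// for illustration; a deployed version would use `export default`.
function validate(input) {
  const errors = [];
  // Minimal email shape check: one "@", a dot in the domain part.
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(input.email ?? "")) {
    errors.push("invalid email");
  }
  if (typeof input.amount !== "number" || input.amount <= 0) {
    errors.push("amount must be a positive number");
  }
  return { valid: errors.length === 0, errors };
}
```

Because the function is deterministic and side-effect-free, fuel metering gives a repeatable cost for every invocation.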
Function Deployment
Functions are deployed via source code upload. The deploy endpoint accepts the function name, URL-safe slug, runtime (`quickjs` or `wasm`), language (`javascript` or `typescript`), and source code.
- QuickJS runtime: TypeScript is transpiled to JS (if needed) and stored as `.js` on disk.
- Wasm runtime: source is wrapped with a runtime shim, compiled via Javy to `.wasm`, and stored on disk.
A registry entry is created in the control plane database linking the slug to the artifact.
Function Invocation
Functions are invoked by slug via `POST /functions/v1/invoke/{slug}`. The request body is passed as the function's input argument. The response includes the function return value and execution metrics (`wall_ms`, `memory_bytes`, `fuel_consumed`).
Resource Limits by Tier
| Limit | Free | Pro | Enterprise |
|---|---|---|---|
| Fuel (Wasm) | 100M | 1B | 10B |
| Memory | 16 MB | 64 MB | 256 MB |
| Wall clock | 5s | 15s | 60s |
| Fetch calls (QuickJS) | 5 | 25 | 100 |
Unified Trigger System
The trigger system binds events to functions or workflows. When an event occurs, matching triggers evaluate conditions and dispatch to the configured target. All event types flow through a single NATS consumer (`atlasdb.events.>`), providing a unified evaluation path.
Event Types
| Event Type | Source | Example |
|---|---|---|
| `cdc.INSERT.{table}` | Database change | `cdc.INSERT.orders` |
| `cdc.UPDATE.{table}` | Database change | `cdc.UPDATE.orders` |
| `cdc.DELETE.{table}` | Database change | `cdc.DELETE.orders` |
| `auth.signup` | Auth engine | New user registration |
| `auth.login` | Auth engine | User login |
| `storage.upload` | Storage engine | File uploaded |
| `cron` | Scheduler | Cron schedule matched |
| Custom | User-defined | Any custom subject |
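CDC event types embed the operation and table name directly in the subject. A sketch of how a change record could map onto the subjects above (the `change` record shape here is an assumption, not the AltBase CDC format):

```javascript
// Build the event type and full NATS subject for a CDC change.
// Event type shape per the table above: cdc.{OP}.{table}; events are
// published under the atlasdb.events.> hierarchy the dispatcher consumes.
function cdcEventType(change) {
  return `cdc.${change.op.toUpperCase()}.${change.table}`;
}

function natsSubject(eventType) {
  return `atlasdb.events.${eventType}`;
}
```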
Trigger Conditions
Conditions are stored as JSON arrays of objects. Each condition specifies a field, operator, and value to match against the event payload:

```json
{"field": "amount", "op": "gt", "value": 100}
```

Supported operators: `eq`, `neq`, `gt`, `gte`, `lt`, `lte`, `contains`, `in`.
Multiple conditions are evaluated with AND logic. All conditions must pass for the trigger to fire.
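The evaluation rule can be sketched in a few lines. Operator semantics below are inferred from the operator names, not taken from the AltBase source, so treat this as illustrative:

```javascript
// Trigger condition evaluation sketch: each condition is
// {field, op, value}; all conditions must pass (AND logic).
const OPS = {
  eq:  (a, b) => a === b,
  neq: (a, b) => a !== b,
  gt:  (a, b) => a > b,
  gte: (a, b) => a >= b,
  lt:  (a, b) => a < b,
  lte: (a, b) => a <= b,
  // `contains` assumed to cover both string and array payload fields.
  contains: (a, b) =>
    typeof a === "string" ? a.includes(b) : Array.isArray(a) && a.includes(b),
  in: (a, b) => Array.isArray(b) && b.includes(a),
};

function conditionsPass(conditions, payload) {
  // An empty (or missing) condition list always passes,
  // matching the "no conditions defined" dispatch rule.
  return (conditions ?? []).every((c) => {
    const check = OPS[c.op];
    return check ? check(payload[c.field], c.value) : false;
  });
}
```

An unknown operator fails closed here rather than firing the trigger; whether AltBase rejects such conditions at creation time instead is not stated in this document.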
Target Types
| Target | Behavior |
|---|---|
| `function` | Load from disk, execute via QuickJS or Wasmtime with the event as input |
| `workflow` | Start a workflow run with the event as `trigger_data` |
In-Memory Trigger Cache
Trigger evaluation happens on every event, so database lookups per event would be too slow. The cache loads all triggers into memory on startup and invalidates when triggers are created, updated, or deleted via the API.
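A minimal sketch of such a cache, keyed by `(project_id, event_type)`. This mirrors the lookup described above but is not the AltBase implementation:

```javascript
// In-memory trigger cache: loaded once at startup, invalidated when
// triggers change via the API, so the hot dispatch path never
// touches the database.
class TriggerCache {
  constructor() {
    this.byKey = new Map(); // "projectId:eventType" -> trigger[]
  }
  key(projectId, eventType) {
    return `${projectId}:${eventType}`;
  }
  load(triggers) {
    // Full reload; disabled triggers are skipped so they never match.
    this.byKey.clear();
    for (const t of triggers) {
      if (!t.enabled) continue;
      const k = this.key(t.project_id, t.event_type);
      const list = this.byKey.get(k) ?? [];
      list.push(t);
      this.byKey.set(k, list);
    }
  }
  match(projectId, eventType) {
    return this.byKey.get(this.key(projectId, eventType)) ?? [];
  }
}
```

Invalidation here is modeled as a full `load()`; a real implementation could patch individual entries instead.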
API Reference
Functions Endpoints
| Method | Path | Description | Auth |
|---|---|---|---|
| POST | `/v1/projects/{project_id}/functions` | Register a function | Service Key |
| GET | `/v1/projects/{project_id}/functions` | List functions | Service Key |
| DELETE | `/v1/projects/{project_id}/functions/{function_id}` | Delete a function | Service Key |
| POST | `/v1/projects/{project_id}/functions/deploy` | Deploy function from source | Service Key |
| POST | `/functions/v1/invoke/{slug}` | Invoke a function by slug | API Key |
| POST | `/functions/v1/invoke/{slug}/{*rest}` | Invoke with path suffix | API Key |
Triggers Endpoints
| Method | Path | Description | Auth |
|---|---|---|---|
| POST | `/v1/projects/{project_id}/triggers` | Create a trigger | Service Key |
| GET | `/v1/projects/{project_id}/triggers` | List all triggers | Service Key |
| GET | `/v1/projects/{project_id}/triggers/{trigger_id}` | Get trigger details | Service Key |
| PATCH | `/v1/projects/{project_id}/triggers/{trigger_id}` | Update a trigger | Service Key |
| DELETE | `/v1/projects/{project_id}/triggers/{trigger_id}` | Delete a trigger | Service Key |
EdgeFunction (Registry)
| Field | Type | Description |
|---|---|---|
| `id` | UUID | Primary key |
| `project_id` | UUID | FK to `projects` |
| `name` | String | Human-readable name |
| `slug` | String | URL-safe identifier for invocation |
| `runtime` | String | `wasm`, `quickjs`, `deno`, or `node` |
| `entrypoint` | String | File name (e.g., `hello.wasm`, `hello.js`) |
EventTrigger
| Field | Type | Description |
|---|---|---|
| `id` | UUID | Primary key |
| `project_id` | UUID | FK to `projects` |
| `event_type` | String | Event to match (e.g., `cdc.INSERT.orders`, `auth.signup`, `cron`) |
| `config` | JSONB | Event-specific config (e.g., cron schedule, table filter) |
| `target_type` | String | `function` or `workflow` |
| `function_id` | UUID | FK to `edge_functions` (if `target_type = function`) |
| `workflow_id` | UUID | FK to `workflow_definitions` (if `target_type = workflow`) |
| `conditions` | JSONB | Array of condition objects |
| `enabled` | Boolean | Whether trigger is active |
Code Examples
Deploy a QuickJS Function (with Host API Access)
```bash
curl -X POST http://localhost:3000/v1/projects/$PROJECT_ID/functions/deploy \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Get User Orders",
    "slug": "get-user-orders",
    "runtime": "quickjs",
    "language": "javascript",
    "source": "export default async function(input) { const rows = await atlas.db.query(\"SELECT * FROM orders WHERE user_id = $1\", [input.user_id]); return { orders: rows }; }"
  }'
```
Deploy a Wasm Function (Computation Only)
```bash
curl -X POST http://localhost:3000/v1/projects/$PROJECT_ID/functions/deploy \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Hash Calculator",
    "slug": "hash-calc",
    "runtime": "wasm",
    "language": "javascript",
    "source": "export default function(input) { return { hash: input.data.split(\"\").reduce((a,b) => ((a << 5) - a) + b.charCodeAt(0), 0) }; }"
  }'
```
Invoke a Function
```bash
curl -X POST http://localhost:3000/functions/v1/invoke/get-user-orders \
  -H "Authorization: Bearer $ANON_KEY" \
  -H "Content-Type: application/json" \
  -d '{"user_id": "abc-123"}'

# Response:
# {
#   "result": { "orders": [...] },
#   "metrics": { "wall_ms": 8, "memory_bytes": 2048000 }
# }
```
Create a CDC Trigger
```bash
# Run a function on every new order over $100
curl -X POST http://localhost:3000/v1/projects/$PROJECT_ID/triggers \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "event_type": "cdc.INSERT.orders",
    "target_type": "function",
    "function_id": "'$FUNCTION_ID'",
    "conditions": [{"field": "amount", "op": "gt", "value": 100}],
    "enabled": true
  }'
```
Create a Cron Trigger
```bash
# Run a workflow every hour
curl -X POST http://localhost:3000/v1/projects/$PROJECT_ID/triggers \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "event_type": "cron",
    "config": {"schedule": "0 * * * *"},
    "target_type": "workflow",
    "workflow_id": "'$WORKFLOW_ID'",
    "enabled": true
  }'
```
List Triggers
```bash
curl http://localhost:3000/v1/projects/$PROJECT_ID/triggers \
  -H "Authorization: Bearer $SERVICE_KEY"
```
Configuration
| Variable | Default | Description |
|---|---|---|
| `ATLAS_FUNCTIONS_DIR` | `./functions-data` | Artifact storage path on disk |
| `ATLAS_FUNCTIONS_MAX_EXECUTION_MS` | `10000` | Default execution timeout in milliseconds |
| `ATLAS_FUNCTIONS_MAX_MEMORY_MB` | `128` | Default memory limit in MB |
| `ATLAS_NATS_URL` | `nats://localhost:4222` | NATS server URL (for trigger dispatch) |
How It Works
Function Deploy Flow
- Client sends source code, slug, runtime, and language to the deploy endpoint.
- If `runtime: wasm`: source is wrapped with a runtime shim, compiled via Javy to a `.wasm` binary, and stored on disk.
- If `runtime: quickjs`: TypeScript is transpiled to JavaScript (if needed) and stored as `.js` on disk.
- A registry entry is created in the control plane database linking the slug to the project and artifact.
Function Invocation Flow (QuickJS)
- Client calls `POST /functions/v1/invoke/{slug}` with a JSON body.
- The function is looked up by slug and `project_id` in the registry.
- JavaScript source is loaded from disk.
- The QuickJS engine executes with `atlas.*` host APIs injected: `atlas.db.query(sql, params)` runs SQL against the tenant schema; `atlas.fetch(url, options)` makes HTTP requests (subject to fetch call limits); `atlas.kv.get(key)` and `atlas.kv.set(key, value)` operate on the Redis KV store.
- The result is returned with execution metrics (`wall_ms`, `memory_bytes`).
Function Invocation Flow (Wasmtime)
- The `.wasm` module bytes are loaded from disk.
- Wasmtime instantiates the module with the tier's fuel limit and memory cap.
- Input JSON is passed as the function argument.
- The module executes with fuel metering (deterministic resource tracking).
- The result is returned with metrics (`fuel_consumed`, `memory_bytes`, `wall_ms`).
Trigger Event Dispatch Flow
- An event occurs (CDC change, auth signup, storage upload, cron tick, or custom event).
- The event is published to NATS on `atlasdb.events.{event_type}`.
- The dispatcher consumer receives the event.
- The dispatcher looks up matching triggers from the in-memory cache by `project_id` and `event_type`.
- For each matching trigger, conditions are evaluated against the event payload.
- If conditions pass (or no conditions are defined), the target is dispatched:
  - Function target: loaded from disk, executed via QuickJS or Wasmtime with the event as input.
  - Workflow target: a workflow run is started with the event as `trigger_data`.
- The NATS message is acknowledged.
Cron Scheduler
The `spawn_cron_scheduler` background task runs every minute. For each cron trigger whose schedule matches the current time, it publishes a cron event to the NATS events stream. The event flows through the same dispatcher path as all other trigger sources.
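The per-minute check can be sketched as a standard 5-field cron match (minute, hour, day-of-month, month, day-of-week). This simplified version supports only `*`, `*/n`, and plain numbers; real cron syntax (ranges, lists) and AltBase's actual parser are not shown here:

```javascript
// Does one cron field spec match a value? Supports "*", "*/n", and
// plain numbers; ranges and lists are omitted for brevity.
function fieldMatches(spec, value) {
  if (spec === "*") return true;
  if (spec.startsWith("*/")) return value % Number(spec.slice(2)) === 0;
  return Number(spec) === value;
}

// Check a 5-field schedule against a Date (UTC), as a minute-tick
// scheduler would do for each enabled cron trigger.
function cronMatches(schedule, date) {
  const [min, hour, dom, month, dow] = schedule.split(" ");
  return (
    fieldMatches(min, date.getUTCMinutes()) &&
    fieldMatches(hour, date.getUTCHours()) &&
    fieldMatches(dom, date.getUTCDate()) &&
    fieldMatches(month, date.getUTCMonth() + 1) &&
    fieldMatches(dow, date.getUTCDay())
  );
}
```

For example, the `"0 * * * *"` schedule from the cron trigger example matches exactly the top of each hour.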
Troubleshooting
| Problem | Cause | Fix |
|---|---|---|
| Wasm module not found (404) | Function not deployed or wrong slug | Deploy with the correct slug |
| JS source not found (404) | QuickJS function not deployed | Deploy with `runtime: quickjs` |
| Compilation failed (400) | Invalid JavaScript or TypeScript | Fix source code syntax errors |
| Function timeout | Execution exceeds wall clock limit | Optimize function or upgrade tier |
| Slug must be alphanumeric (400) | Invalid slug characters | Use only letters, numbers, hyphens, and underscores |
| `atlas.db.query` fails | Wrong SQL or schema issue | Verify SQL syntax; functions run in the tenant schema |
| Trigger not firing | `enabled: false` on trigger | Update trigger with `enabled: true` |
| Trigger fires but function fails | Function not deployed or runtime error | Check function deployment and logs |
| Cron not running | Cron scheduler not spawned | Verify server startup includes cron spawn |
| Conditions not matching | Incorrect field names or operators | Check event payload structure and condition JSON |
| Cache stale after trigger update | Cache invalidation failed | Restart server or re-save trigger via API |