Control Plane
Platform users, organizations, projects, API keys, gateway middleware, request logging, and rate limiting in AltBase.
Overview
The control plane is the foundation layer of AltBase. It manages platform-level resources — users, organizations, projects, environments, and API keys — and provides the gateway middleware chain that every protected request passes through before reaching a feature handler.
All control plane metadata lives in a dedicated PostgreSQL database, separate from tenant application data. This two-pool architecture prevents slow tenant queries from starving platform operations. AltBase compiles all 23 crates into a single binary (modular monolith), eliminating inter-service network latency and simplifying deployment to a single container.
Key Concepts
Platform Resource Hierarchy
AltBase organizes access around a strict hierarchy:
- Platform Users — individuals who manage projects through the dashboard or CLI
- Organizations — groups of platform users that own projects and share billing
- Projects — isolated application environments, each with its own Postgres schema (`proj_{id}`), auth settings, and storage buckets
- Environments — per-project deployment targets (development, staging, production), each with its own schema name and optional dedicated database URL
- API Keys — credentials scoped to a project and environment, used to authenticate every API request
API Key Types
Every request to a protected endpoint must include an API key. AltBase supports three key roles:
| Role | Permissions | Use Case |
|---|---|---|
| Anon | Read-only on tables with permissive RLS policies | Public-facing client apps (browsers, mobile) |
| Service | Full CRUD, raw SQL execution, admin endpoints | Server-side backends, admin tools, CI/CD |
| Custom(permissions) | Per-table select/insert/update/delete | Fine-grained access for third-party integrations |
Custom role permissions are stored as a JSON object:
```json
{
  "tables": {
    "products": ["select", "insert"],
    "orders": ["select"],
    "users": ["select", "update"]
  }
}
```
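The permission check against this object can be sketched in pure Rust, modeling the JSON as a map. The `TablePermissions` alias and both function names below are illustrative, not AltBase's actual types:

```rust
use std::collections::HashMap;

/// Hypothetical in-memory model of a Custom role's permissions:
/// table name -> allowed operations (mirrors the "tables" object above).
type TablePermissions = HashMap<String, Vec<String>>;

/// Build the permissions from the JSON example above.
fn example_perms() -> TablePermissions {
    let mut p = TablePermissions::new();
    p.insert("products".into(), vec!["select".into(), "insert".into()]);
    p.insert("orders".into(), vec!["select".into()]);
    p.insert("users".into(), vec!["select".into(), "update".into()]);
    p
}

/// True only if the operation is explicitly granted for the table.
fn is_allowed(perms: &TablePermissions, table: &str, op: &str) -> bool {
    perms.get(table).map_or(false, |ops| ops.iter().any(|o| o == op))
}

fn main() {
    let perms = example_perms();
    assert!(is_allowed(&perms, "products", "insert"));
    assert!(!is_allowed(&perms, "orders", "delete"));   // op not granted
    assert!(!is_allowed(&perms, "invoices", "select")); // table absent
}
```

Note the deny-by-default shape: a table missing from the object grants nothing.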
Tier System
Each project is assigned a tier that controls rate limits, query timeouts, WebSocket connection limits, and embedding depth:
| Tier | Reads/min | Writes/min | WebSocket Conns | Query Timeout | Embed Depth | Max Bulk Insert |
|---|---|---|---|---|---|---|
| Free | 60 | 30 | 200 | 5s | 2 | 1,000 rows |
| Pro | 600 | 300 | 5,000 | 15s | 3 | 10,000 rows |
| Enterprise | 6,000 | 3,000 | 50,000 | 30s | 5 | 10,000 rows |
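A tier lookup driven by this table might look like the following sketch. The struct and field names are illustrative; the numeric limits are taken directly from the table above:

```rust
/// Per-tier limits, copied from the tier table (names are illustrative).
#[derive(Debug, PartialEq)]
struct TierLimits {
    reads_per_min: u32,
    writes_per_min: u32,
    ws_conns: u32,
    query_timeout_secs: u32,
    embed_depth: u32,
    max_bulk_insert_rows: u32,
}

/// Map a tier name to its limits; unknown tiers yield None.
fn limits_for(tier: &str) -> Option<TierLimits> {
    match tier {
        "free" => Some(TierLimits {
            reads_per_min: 60, writes_per_min: 30, ws_conns: 200,
            query_timeout_secs: 5, embed_depth: 2, max_bulk_insert_rows: 1_000,
        }),
        "pro" => Some(TierLimits {
            reads_per_min: 600, writes_per_min: 300, ws_conns: 5_000,
            query_timeout_secs: 15, embed_depth: 3, max_bulk_insert_rows: 10_000,
        }),
        "enterprise" => Some(TierLimits {
            reads_per_min: 6_000, writes_per_min: 3_000, ws_conns: 50_000,
            query_timeout_secs: 30, embed_depth: 5, max_bulk_insert_rows: 10_000,
        }),
        _ => None,
    }
}

fn main() {
    assert_eq!(limits_for("pro").unwrap().reads_per_min, 600);
    assert!(limits_for("hobby").is_none());
}
```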
Gateway Middleware Chain
Every protected request passes through a three-stage middleware pipeline before reaching its feature handler:
```
Client Request
      |
      v
1. API Key Middleware (api_key_middleware)
   - Extract key from Authorization header or ?apikey= query param
   - SHA-256 hash for secure lookup
   - Resolve project, environment, tier, role
   - Inject TenantContext into request extensions
      |
      v
2. Rate Limit Middleware (rate_limit_middleware)
   - Read tier from TenantContext
   - Redis INCR + EXPIRE (60-second fixed window)
   - Return 429 with Retry-After if over limit
      |
      v
3. Feature Handler
   - Read TenantContext from Extension<TenantContext>
   - Execute business logic within tenant schema
```
API Reference
Health Endpoints
| Method | Path | Description | Auth |
|---|---|---|---|
| GET | /health/live | Liveness probe — always returns {"status": "ok"} | None |
| GET | /health/ready | Readiness probe — verifies DB connectivity | None |
Platform Management
| Method | Path | Description | Auth |
|---|---|---|---|
| POST | /v1/organizations | Create an organization | Platform User |
| GET | /v1/organizations | List organizations | Platform User |
| POST | /v1/projects | Create a project | Platform User |
| GET | /v1/projects | List projects in organization | Platform User |
| GET | /v1/projects/{id} | Get project details | Platform User |
| POST | /v1/projects/{id}/api-keys | Create an API key | Platform User |
| GET | /v1/projects/{id}/api-keys | List API keys | Platform User |
| DELETE | /v1/projects/{id}/api-keys/{key_id} | Revoke an API key | Platform User |
Code Examples
Authenticating with an API Key
Every request to a protected endpoint must include the API key:
```bash
# Using Authorization header (recommended)
curl -X GET http://localhost:3000/rest/v1/todos \
  -H "Authorization: Bearer YOUR_ANON_KEY"

# Using query parameter (alternative)
curl -X GET "http://localhost:3000/rest/v1/todos?apikey=YOUR_ANON_KEY"
```
Using the SDK
```typescript
import { createClient } from '@altbasedb/sdk'

const client = createClient(
  'http://localhost:3000',
  'YOUR_ANON_KEY'
)

// All subsequent requests include the API key automatically
const { data, error } = await client.from('todos').select('*')
```
Creating a Project
```bash
curl -X POST http://localhost:3000/v1/projects \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-app",
    "organization_id": "org_abc123",
    "tier": "pro"
  }'
```
Health Checks
```bash
# Liveness probe
curl http://localhost:3000/health/live
# {"status":"ok"}

# Readiness probe (verifies database connectivity)
curl http://localhost:3000/health/ready
# {"status":"ready"}
```
Checking Permissions in a Handler
```rust
use atlas_gateway::tenant_context::TenantContext;

async fn my_handler(
    Extension(ctx): Extension<TenantContext>,
) -> Result<Json<Value>> {
    match &ctx.api_key_role {
        // Service keys have full access
        ApiKeyRole::Service => {}
        // Custom keys must grant the operation on this table
        ApiKeyRole::Custom(perms) => {
            let allowed = perms.get("tables")
                .and_then(|t| t.get("products"))
                .and_then(|ops| ops.as_array())
                .map(|arr| arr.iter().any(|v| v.as_str() == Some("select")))
                .unwrap_or(false);
            if !allowed {
                return Err(AtlasError::Forbidden("Insufficient permissions".into()));
            }
        }
        // Anon keys may not use this handler
        ApiKeyRole::Anon => {
            return Err(AtlasError::Forbidden("Service or custom key required".into()));
        }
    }

    // ctx.project_id  — which project
    // ctx.schema_name — which Postgres schema (e.g., "proj_abc123")
    // ctx.tier        — what resource limits apply
    Ok(Json(json!({"ok": true})))
}
```
Configuration
Required Environment Variables
| Variable | Description | Example |
|---|---|---|
| ATLAS_CONTROL_PLANE_DATABASE_URL | Postgres URL for control plane metadata | postgresql://atlas:pass@localhost:5433/atlas_control_plane |
| ATLAS_TENANT_DATABASE_URL | Postgres URL for tenant application data | postgresql://atlas:pass@localhost:5433/atlas_tenants |
| ATLAS_REDIS_URL | Redis connection URL for caching, sessions, rate limiting | redis://localhost:6379 |
| ATLAS_MASTER_KEY | 32-byte hex key for AES-256-GCM encryption | Generate with openssl rand -hex 32 |
Optional Environment Variables
| Variable | Default | Description |
|---|---|---|
| ATLAS_HOST | 0.0.0.0 | Server bind address |
| ATLAS_PORT | 3000 | Server port |
| ATLAS_DB_POOL_SIZE | 20 | Max connections per database pool |
| ATLAS_NATS_URL | nats://localhost:4222 | NATS JetStream server URL |
| ATLAS_RATE_LIMIT_DISABLED | false | Disable rate limiting (development only) |
| ATLAS_STORAGE_PROVIDER | filesystem | Storage backend: filesystem, s3, azure |
| ATLAS_STORAGE_ROOT | — | Local filesystem storage path |
| ATLAS_FUNCTIONS_DIR | ./functions-data | Function artifact storage |
| ATLAS_FUNCTIONS_MAX_EXECUTION_MS | 10000 | Function timeout in milliseconds |
| ATLAS_FUNCTIONS_MAX_MEMORY_MB | 128 | Function memory limit |
| ATLAS_REALTIME_MAX_CONNS_FREE | 200 | WebSocket connection limit (free tier) |
| ATLAS_REALTIME_MAX_CONNS_PRO | 5000 | WebSocket connection limit (pro tier) |
| ATLAS_REALTIME_MAX_CONNS_ENTERPRISE | 50000 | WebSocket connection limit (enterprise) |
| ATLAS_REALTIME_HEARTBEAT_INTERVAL | 30 | WebSocket heartbeat interval in seconds |
| ATLAS_REALTIME_HEARTBEAT_TIMEOUT | 10 | WebSocket heartbeat timeout in seconds |
Email and SMS Variables
| Variable | Description |
|---|---|
| ATLAS_PLATFORM_SMTP_HOST | SMTP server hostname |
| ATLAS_PLATFORM_SMTP_PORT | SMTP port |
| ATLAS_PLATFORM_SMTP_USER | SMTP username |
| ATLAS_PLATFORM_SMTP_PASS | SMTP password |
| ATLAS_PLATFORM_SMTP_FROM | Sender email address |
| ATLAS_PLATFORM_SMS_ENDPOINT | SMS provider API endpoint |
| ATLAS_PLATFORM_SMS_API_KEY | SMS provider API key |
Telemetry
Structured JSON logging via tracing is configured using the RUST_LOG environment variable:
```bash
RUST_LOG=info                   # Default level
RUST_LOG=atlas_auth=debug       # Debug a specific crate
RUST_LOG=atlas_api_engine=trace # Trace query execution
```
Log output is structured JSON:
```json
{
  "timestamp": "2026-03-23T10:30:00Z",
  "level": "INFO",
  "target": "atlas_server",
  "message": "Listening on 0.0.0.0:3000"
}
```
How It Works
API Key Authentication Flow
- Extract the key from the `Authorization: Bearer {key}` header or `?apikey=` query parameter
- Validate the format (minimum 8 characters)
- Extract the prefix (first 8 characters) for fast database lookup via the `key_prefix` column
- Compute the SHA-256 hash of the full key
- Query the `api_keys` table matching both `key_prefix` and `key_hash` — keys are never stored in plaintext
- Load environment settings: `schema_name`, `database_url`
- Load auth settings: `rsa_public_key`, `kid`, `jwt_access_ttl`, `jwt_refresh_ttl`
- Load the project tier from the `projects` table
- Build `TenantContext` and inject via Axum `Extension`
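The validation and prefix-extraction steps can be sketched in plain Rust. The SHA-256 hashing and database lookup need a crypto crate and a live pool, so they are elided here; the `key_prefix` function name and the example key format are assumptions:

```rust
/// Validate an API key's format and derive its 8-character lookup prefix,
/// mirroring the early steps of the flow above. Hashing the full key with
/// SHA-256 and querying the api_keys table are omitted from this sketch.
fn key_prefix(key: &str) -> Result<&str, &'static str> {
    if key.len() < 8 {
        return Err("API key must be at least 8 characters");
    }
    // get(..8) returns None if byte 8 is not a character boundary,
    // which guards against non-ASCII input without panicking.
    key.get(..8).ok_or("key prefix must be ASCII")
}

fn main() {
    // "ak_live_..." is a made-up key format for illustration only.
    assert_eq!(key_prefix("ak_live_abc123xyz"), Ok("ak_live_"));
    assert!(key_prefix("short").is_err());
}
```

The prefix narrows the indexed lookup before the hash comparison, so the full key never needs to be stored.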
The TenantContext struct carried through every request:
```rust
pub struct TenantContext {
    pub project_id: Uuid,
    pub environment_id: Uuid,
    pub schema_name: String,          // e.g., "proj_abc123"
    pub database_url: Option<String>, // Dedicated DB URL (paid tier)
    pub tier: Tier,                   // Free | Pro | Enterprise
    pub api_key_role: ApiKeyRole,     // Anon | Service | Custom(permissions)
    pub rsa_public_key: Option<String>,
    pub kid: Option<String>,
    pub enable_cookie_auth: bool,
    pub jwt_access_ttl: i64,
    pub jwt_refresh_ttl: i64,
}
```
Rate Limiting
AltBase uses a fixed-window counter algorithm backed by Redis:
- Key format: `rl:{project_id}:{read|write}:{window_start}`
- Window size: 60 seconds
- Implementation: Redis `INCR` + `EXPIRE` — distributed across all server instances
Response headers on every request:
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
When the limit is exceeded, the server returns HTTP 429 with a Retry-After: 60 header. To disable rate limiting for development, set ATLAS_RATE_LIMIT_DISABLED=true.
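The fixed-window counter can be illustrated with an in-memory stand-in for the Redis counters. This is a sketch: in production the INCR/EXPIRE runs in Redis, so the count is shared across all server instances, and expiry cleans up old windows automatically.

```rust
use std::collections::HashMap;

/// In-memory stand-in for the Redis counters: key -> request count.
struct FixedWindow {
    limit: u32,
    window_secs: u64,
    counters: HashMap<String, u32>,
}

impl FixedWindow {
    fn new(limit: u32, window_secs: u64) -> Self {
        Self { limit, window_secs, counters: HashMap::new() }
    }

    /// Returns Ok(remaining) or Err(()) when over the limit (the HTTP 429 case).
    fn check(&mut self, project_id: &str, kind: &str, now_secs: u64) -> Result<u32, ()> {
        // Truncate "now" to the start of the current 60-second window.
        let window_start = now_secs - (now_secs % self.window_secs);
        // Same shape as the documented Redis key: rl:{project_id}:{read|write}:{window_start}
        let key = format!("rl:{project_id}:{kind}:{window_start}");
        let count = self.counters.entry(key).or_insert(0);
        *count += 1; // Redis equivalent: INCR (with EXPIRE set on first increment)
        if *count > self.limit { Err(()) } else { Ok(self.limit - *count) }
    }
}

fn main() {
    let mut rl = FixedWindow::new(3, 60);
    assert_eq!(rl.check("proj_abc", "read", 100), Ok(2));
    assert_eq!(rl.check("proj_abc", "read", 110), Ok(1)); // same window [60, 120)
    assert_eq!(rl.check("proj_abc", "read", 115), Ok(0));
    assert_eq!(rl.check("proj_abc", "read", 119), Err(())); // over limit: 429
    assert_eq!(rl.check("proj_abc", "read", 125), Ok(2));   // new window at t=120
}
```

The returned remaining count maps directly onto the X-RateLimit-Remaining header, and the next window boundary onto X-RateLimit-Reset.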
Request Logging
All API requests are logged to the request_logs table in the control plane database. Each entry records the project ID, endpoint, method, status code, latency, and timestamp for observability and usage tracking.
Database Migrations
Migrations run automatically on startup via sqlx::migrate! from the ./migrations/ directory. Key migrations:
| Migration | Resources Created |
|---|---|
| 20260315000000_initial_schema.sql | organizations, platform_users, projects, environments, api_keys, profiles, tokens |
| 20260316000000_storage_objects_policies.sql | storage_buckets, storage_objects, storage_policies |
| 20260317000000_cron_jobs.sql | cron_jobs, cron_executions |
| 20260318000000_platform_integrations.sql | connector_templates, connections |
| 20260319000000_request_logs.sql | request_logs |
| 20260320000000_billing_columns.sql | Billing tracking columns on projects |
| 20260322000000_event_triggers.sql | event_triggers, trigger conditions |
| 20260323000000_embedding_configs.sql | embedding_configs (per-project AI/vector) |
| 20260325000000_knowledge_collections.sql | knowledge_collections, knowledge_documents, knowledge_chunks |
| 20260326000000_org_sso.sql | org_sso_connections, org_sso_domain_rules |
| 20260326000001_project_sso.sql | project_sso_settings, project_sso_activations, project_sso_domain_rules |
Tenant-schema tables (users, sessions, refresh_tokens, oauth_providers, rls_policies, sso_connections, user_identities, customer_orgs) are created per-project via provisioning.rs, not migrations.
Server Startup Sequence
- Load `AppConfig` from environment (prefixed `ATLAS_*`)
- Initialize structured JSON logging
- Create control plane + tenant database pools
- Run migrations
- Connect to Redis + NATS JetStream
- Create all subsystem states (auth, storage, realtime, etc.)
- Spawn background workers (CDC, cleanup, cron, etc.)
- Build the Axum router (public + protected routes)
- Serve dashboard static files
- Bind to `{host}:{port}` and serve
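The first step, loading ATLAS_-prefixed settings with their documented defaults, can be sketched with std::env. The `env_or` helper is hypothetical, not AltBase's actual config loader:

```rust
use std::env;

/// Read an environment variable, falling back to a documented default.
/// (Hypothetical helper; the real AppConfig loader is not shown in this doc.)
fn env_or(name: &str, default: &str) -> String {
    env::var(name).unwrap_or_else(|_| default.to_string())
}

fn main() {
    // With no ATLAS_* variables set, the documented defaults apply.
    let host = env_or("ATLAS_HOST", "0.0.0.0");
    let port: u16 = env_or("ATLAS_PORT", "3000")
        .parse()
        .expect("ATLAS_PORT must be a number");
    println!("binding to {host}:{port}");
}
```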
Error Handling
All crates use the shared AtlasError type which maps automatically to HTTP responses:
| Variant | HTTP Status | When |
|---|---|---|
| NotFound(msg) | 404 | Resource does not exist |
| BadRequest(msg) | 400 | Invalid input, missing fields |
| Unauthorized(msg) | 401 | No or invalid authentication |
| Forbidden(msg) | 403 | Valid auth, insufficient permissions |
| Conflict(msg) | 409 | Unique constraint violation, concurrent modification |
| PreconditionFailed(msg) | 412 | TUS version mismatch |
| PayloadTooLarge(msg) | 413 | File or upload exceeds size limit |
| RateLimited | 429 | Too many requests (includes Retry-After header) |
| ServiceUnavailable(msg) | 503 | Dependency down |
| Internal(msg) | 500 | Unexpected error |
| Database(sqlx::Error) | Varies | PostgreSQL error code 23505 maps to 409, 22xxx/23xxx to 400, else 500 |
| Redis(redis::Error) | 500 | Cache error |
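The SQLSTATE-to-status mapping in the Database row can be sketched as a plain function. This is a hypothetical helper written from the rule stated above, not AltBase's actual code:

```rust
/// Map a PostgreSQL SQLSTATE code to an HTTP status, per the rule above:
/// 23505 (unique_violation) -> 409, other 22xxx/23xxx -> 400, else 500.
fn pg_code_to_http(sqlstate: &str) -> u16 {
    match sqlstate {
        "23505" => 409, // unique_violation -> Conflict
        // Data exceptions (22xxx) and integrity violations (23xxx) are client errors.
        s if s.starts_with("22") || s.starts_with("23") => 400,
        _ => 500, // everything else is treated as an internal error
    }
}

fn main() {
    assert_eq!(pg_code_to_http("23505"), 409); // duplicate key
    assert_eq!(pg_code_to_http("23503"), 400); // foreign_key_violation
    assert_eq!(pg_code_to_http("22001"), 400); // string data too long
    assert_eq!(pg_code_to_http("42P01"), 500); // undefined_table
}
```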
Infrastructure Dependencies
| Service | Image | Port | Purpose |
|---|---|---|---|
| PostgreSQL | postgres:16-alpine | 5433 | Control plane + tenant databases (WAL level = logical for CDC) |
| Redis | redis:7-alpine | 6379 | Cache, sessions, rate limiting, TUS upload state |
| NATS | nats:2-alpine | 4222, 8222 | JetStream messaging for CDC, jobs, events, workflows |
| Azurite | mcr.microsoft.com/azure-storage/azurite | 10000 | Azure Blob Storage emulator for local dev |