# Database API Engine

PostgREST-compatible REST CRUD, query operators, embedding/joins, pagination, raw SQL endpoint, schema introspection, two-tier caching, query compiler, and index advisor.

## Overview
The Database API Engine (the `atlas-api-engine` crate) automatically generates a full REST API from your PostgreSQL schema. It provides PostgREST-compatible CRUD operations, resource embedding (joins), filtering with 10+ operators, pagination, an exact-count option, a raw SQL endpoint for service keys, live schema introspection, a two-tier cache (moka in-process + Redis L2), a query compiler that pre-generates parameterized SQL templates, and an index advisor that tracks query patterns and recommends missing indexes.

Every request runs within a transaction scoped to the tenant's schema (`proj_{id}`), with `SET LOCAL search_path`, a tier-based `SET LOCAL statement_timeout`, and JWT claims injected via `SET LOCAL` for PostgreSQL RLS evaluation.
## Key Concepts

### PostgREST-Compatible REST API

The engine maps HTTP methods to SQL operations on any table in the tenant schema:

| HTTP Method | SQL Operation | Description |
|---|---|---|
| GET | SELECT | Read rows with filtering, ordering, pagination, and embedding |
| HEAD | SELECT | Same as GET, but returns only headers (count, schema version) |
| POST | INSERT | Create one or more rows |
| PATCH | UPDATE | Update rows matching filters |
| DELETE | DELETE | Delete rows matching filters |
### Filter Operators

Filters are passed as query parameters in the format `column=operator.value`:

| Operator | SQL Equivalent | Example |
|---|---|---|
| `eq` | `=` | `?status=eq.active` |
| `neq` | `!=` | `?role=neq.admin` |
| `gt` | `>` | `?age=gt.18` |
| `gte` | `>=` | `?price=gte.10.00` |
| `lt` | `<` | `?quantity=lt.5` |
| `lte` | `<=` | `?score=lte.100` |
| `like` | `LIKE` | `?name=like.*smith*` |
| `ilike` | `ILIKE` | `?email=ilike.*@example.com` |
| `in` | `IN` | `?status=in.(active,pending,review)` |
| `is` | `IS` | `?deleted_at=is.null` or `?active=is.true` |

All operators support negation with the `not.` prefix: `?role=not.eq.admin`.
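The operator table above can be sketched as a small translation function. This is a minimal TypeScript model, not the crate's actual parser: `filterToSql` and its return shape are illustrative names, and the `in` list operator is omitted for brevity.

```typescript
// Hypothetical sketch: translate a PostgREST-style filter param into a
// parameterized SQL fragment. Operator names mirror the table above.
const OPS: Record<string, string> = {
  eq: "=", neq: "!=", gt: ">", gte: ">=", lt: "<", lte: "<=",
  like: "LIKE", ilike: "ILIKE",
};

function filterToSql(
  column: string,
  raw: string,          // e.g. "eq.active" or "not.eq.admin"
  paramIdx: number,     // positional parameter number for the bind value
): { sql: string; param: string } {
  let negated = false;
  let value = raw;
  if (value.startsWith("not.")) {
    negated = true;
    value = value.slice(4);
  }
  const dot = value.indexOf(".");
  const op = value.slice(0, dot);
  let operand = value.slice(dot + 1);

  // IS NULL / IS TRUE / IS FALSE take no bind parameter
  if (op === "is") {
    return {
      sql: `"${column}" IS ${negated ? "NOT " : ""}${operand.toUpperCase()}`,
      param: "",
    };
  }
  // like/ilike use * as the wildcard, mapped to SQL %
  if (op === "like" || op === "ilike") operand = operand.replace(/\*/g, "%");

  const core = `"${column}" ${OPS[op]} $${paramIdx}`;
  return { sql: negated ? `NOT (${core})` : core, param: operand };
}

console.log(filterToSql("status", "eq.active", 1).sql);   // "status" = $1
console.log(filterToSql("role", "not.eq.admin", 1).sql);  // NOT ("role" = $1)
console.log(filterToSql("deleted_at", "is.null", 1).sql); // "deleted_at" IS NULL
```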
### Logic Groups

Combine filters with `or` and `and` logic groups:

```
?or=(status.eq.active,role.eq.admin)
?and=(age.gte.18,verified.is.true)
```
### Resource Embedding (Joins)

Fetch related tables in a single request using the `select` parameter. The engine detects foreign key relationships automatically via schema introspection:

```
?select=id,name,orders(id,amount,product(name))
```

This generates a `LEFT JOIN` query following foreign key constraints. Embedding depth is limited by tier: 2 (Free), 3 (Pro), 5 (Enterprise).
### Pagination and Counting

- **Offset-based:** `?limit=20&offset=40`
- **Exact count:** Add a `Prefer: count=exact` header to get the total row count in the `Content-Range` response header (format: `0-19/142`)
- **Minimal return:** Add a `Prefer: return=minimal` header to skip returning the response body on mutations
### Schema Introspection

The engine introspects the PostgreSQL catalog (`information_schema.tables`, `information_schema.columns`, `pg_index`, and foreign key constraints) to discover all tables, columns, types, primary keys, foreign keys, and indexes. This metadata drives the query compiler, embedding resolution, and the `/schema` endpoint.
### Two-Tier Cache

Compiled schemas are cached in two tiers to avoid re-introspecting the database on every request:
| Tier | Backend | TTL | Purpose |
|---|---|---|---|
| L1 | moka (in-process) | 5 minutes | Sub-microsecond lookup, no network |
| L2 | Redis | 1 hour | Shared across server instances |
Cache lookup order: an L1 hit returns immediately; an L1 miss checks L2 (Redis); an L2 hit populates L1 and returns; a full miss triggers introspection, compilation, and storage in both tiers. The `get_or_build` method uses moka's `try_get_with` for single-flight coalescing: if multiple concurrent requests miss the cache for the same project, only one runs the introspection query and the others await its result.
If Redis is unavailable, the engine degrades gracefully by falling through to introspection without error.
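The lookup order and single-flight behavior can be modeled in a few lines. This is a minimal TypeScript sketch, not the crate's code: plain `Map`s stand in for moka and Redis, TTL eviction is omitted, and `getOrBuild`/`introspect` are illustrative names.

```typescript
// Minimal model of the two-tier lookup with single-flight coalescing.
type Schema = { version: string };

const l1 = new Map<string, Schema>();                // in-process tier (moka stand-in)
const l2 = new Map<string, Schema>();                // shared tier (Redis stand-in)
const inFlight = new Map<string, Promise<Schema>>(); // at most one build per key

async function getOrBuild(
  projectId: string,
  introspect: () => Promise<Schema>,
): Promise<Schema> {
  const hot = l1.get(projectId);
  if (hot) return hot;                               // L1 hit: return immediately

  const warm = l2.get(projectId);
  if (warm) {
    l1.set(projectId, warm);                         // L2 hit: populate L1
    return warm;
  }

  const pending = inFlight.get(projectId);
  if (pending) return pending;                       // coalesce concurrent misses

  const build = introspect().then((schema) => {      // full miss: introspect + compile
    l1.set(projectId, schema);                       // store in both tiers
    l2.set(projectId, schema);
    inFlight.delete(projectId);
    return schema;
  });
  inFlight.set(projectId, build);
  return build;
}
```

Because the in-flight promise is registered before any `await`, a second concurrent caller for the same project sees it and awaits the same build instead of running a duplicate introspection.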
### Query Compiler

On cache miss, the engine introspects the schema and compiles a set of parameterized SQL templates for every table:

| Pattern | SQL Template |
|---|---|
| `select_all` | `SELECT cols FROM "schema"."table" LIMIT $1 OFFSET $2` |
| `select_pk` | `SELECT cols FROM "schema"."table" WHERE "pk"=$1` |
| `insert` | `INSERT INTO "schema"."table" (cols) VALUES ($1,...) RETURNING *` |
| `update_pk` | `UPDATE "schema"."table" SET "col"=$1 WHERE "pk"=$N RETURNING *` |
| `delete_pk` | `DELETE FROM "schema"."table" WHERE "pk"=$1 RETURNING *` |
| `count` | `SELECT count(*) FROM "schema"."table"` |
Insert and update plans are compiled on demand with column-set hashing (the first 8 hex characters of the SHA-256 of the sorted column names), producing cache keys like `users:insert:a1b2c3d4`.
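The key derivation can be sketched as follows. `planCacheKey` is an illustrative name, and the exact canonical form of the sorted column list (e.g. the join separator) is an assumption; only the "sort, SHA-256, first 8 hex chars" scheme comes from the text.

```typescript
import { createHash } from "node:crypto";

// Derive a compiled-plan cache key from a column set: sort the column names,
// hash the canonical list with SHA-256, keep the first 8 hex characters.
// The "table:op:hash" shape follows the example in the text.
function planCacheKey(
  table: string,
  op: "insert" | "update",
  columns: string[],
): string {
  const canonical = [...columns].sort().join(","); // assumed separator
  const hash = createHash("sha256").update(canonical).digest("hex").slice(0, 8);
  return `${table}:${op}:${hash}`;
}
```

Sorting first makes the key order-independent, so `{title, status}` and `{status, title}` payloads reuse the same compiled plan.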
### Index Advisor

The engine records query patterns to Redis in a fire-and-forget manner. For every GET/HEAD request, columns used in WHERE and ORDER BY clauses are tracked:

- **Redis key format:** `idx_advisor:{project_id}:where:{table}:{column}` and `idx_advisor:{project_id}:order:{table}:{column}`
- **TTL:** 7 days
- **Threshold:** columns appearing in 10+ queries without a covering index generate a recommendation
- **High impact:** 100+ queries without an index
The advisor also scans for pgvector vector columns without HNSW indexes and recommends them unconditionally.
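The thresholds above amount to a small decision function. This is a sketch under the stated thresholds; `impactFor` and its return labels are hypothetical names, not the crate's actual enum.

```typescript
// Map a column's recorded query count to an advisor impact level, per the
// thresholds above: 10+ uncovered queries yield a recommendation, 100+ mark
// it High. Covered columns never generate a recommendation.
function impactFor(
  queryCount: number,
  hasCoveringIndex: boolean,
): "None" | "Medium" | "High" {
  if (hasCoveringIndex || queryCount < 10) return "None";
  return queryCount >= 100 ? "High" : "Medium";
}
```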
## API Reference

### REST CRUD Endpoints

| Method | Path | Description | Auth |
|---|---|---|---|
| GET | `/rest/v1/{table}` | Select rows with filters, ordering, pagination, embedding | API Key |
| HEAD | `/rest/v1/{table}` | Headers only (count, schema version) | API Key |
| POST | `/rest/v1/{table}` | Insert one row (JSON object) or bulk insert (JSON array) | API Key |
| PATCH | `/rest/v1/{table}` | Update rows matching filters | API Key |
| DELETE | `/rest/v1/{table}` | Delete rows matching filters | API Key |
### SQL Endpoint

| Method | Path | Description | Auth |
|---|---|---|---|
| POST | `/rest/v1/sql` | Execute raw SQL (SELECT, INSERT, UPDATE, DELETE, WITH, EXPLAIN only) | Service Key |
### Schema Endpoint

| Method | Path | Description | Auth |
|---|---|---|---|
| GET | `/rest/v1/schema` | Full schema introspection (tables, columns, types, PKs, FKs, indexes) | API Key |
### Vector Search

| Method | Path | Description | Auth |
|---|---|---|---|
| POST | `/rest/v1/vector/search` | Approximate nearest neighbor search on vector columns | API Key |
## Code Examples

### Basic CRUD

```bash
# Select all rows
curl http://localhost:3000/rest/v1/todos \
  -H "Authorization: Bearer $ANON_KEY"

# Select with filters
curl "http://localhost:3000/rest/v1/todos?status=eq.active&order=created_at.desc&limit=10" \
  -H "Authorization: Bearer $ANON_KEY"

# Select specific columns with embedding
curl "http://localhost:3000/rest/v1/orders?select=id,total,customer(name,email)" \
  -H "Authorization: Bearer $ANON_KEY"

# Insert a row
curl -X POST http://localhost:3000/rest/v1/todos \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"title": "Buy groceries", "status": "pending"}'

# Bulk insert
curl -X POST http://localhost:3000/rest/v1/todos \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '[
    {"title": "Buy groceries", "status": "pending"},
    {"title": "Walk the dog", "status": "pending"}
  ]'

# Update with filter (filter required — prevents full-table mutation)
curl -X PATCH "http://localhost:3000/rest/v1/todos?id=eq.123" \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"status": "completed"}'

# Delete with filter (filter required)
curl -X DELETE "http://localhost:3000/rest/v1/todos?status=eq.archived" \
  -H "Authorization: Bearer $SERVICE_KEY"
```
### Exact Count and Minimal Return

```bash
# Get the exact count in the Content-Range header
curl http://localhost:3000/rest/v1/todos \
  -H "Authorization: Bearer $ANON_KEY" \
  -H "Prefer: count=exact"
# Response header: Content-Range: 0-9/142

# Insert without returning the created row
curl -X POST http://localhost:3000/rest/v1/todos \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -H "Prefer: return=minimal" \
  -d '{"title": "Quick task"}'
# Returns 201 with no body
```
### Advanced Filters

```bash
# IN operator
curl "http://localhost:3000/rest/v1/users?role=in.(admin,editor,moderator)" \
  -H "Authorization: Bearer $ANON_KEY"

# IS NULL
curl "http://localhost:3000/rest/v1/todos?deleted_at=is.null" \
  -H "Authorization: Bearer $ANON_KEY"

# Negation
curl "http://localhost:3000/rest/v1/users?role=not.eq.admin" \
  -H "Authorization: Bearer $ANON_KEY"

# OR logic group
curl "http://localhost:3000/rest/v1/products?or=(price.lt.10,category.eq.sale)" \
  -H "Authorization: Bearer $ANON_KEY"

# ILIKE pattern match
curl "http://localhost:3000/rest/v1/users?email=ilike.*@example.com" \
  -H "Authorization: Bearer $ANON_KEY"

# Ordering with nulls position
curl "http://localhost:3000/rest/v1/todos?order=created_at.desc.nullslast,title.asc" \
  -H "Authorization: Bearer $ANON_KEY"
```
### Raw SQL Endpoint

```bash
curl -X POST http://localhost:3000/rest/v1/sql \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "SELECT id, name, COUNT(*) OVER() FROM users WHERE active = true LIMIT 50"}'
```

Response:

```json
{
  "columns": ["id", "name", "count"],
  "rows": [
    {"id": "uuid-1", "name": "Alice", "count": 142},
    {"id": "uuid-2", "name": "Bob", "count": 142}
  ],
  "affected_rows": null
}
```

The SQL endpoint restricts statements to SELECT, INSERT, UPDATE, DELETE, WITH, and EXPLAIN. DDL statements and `SET search_path` are rejected. The query runs within the tenant schema under the same tier-based `statement_timeout`.
### SDK Usage

```typescript
import { createClient } from '@altbasedb/sdk'

const client = createClient('http://localhost:3000', 'YOUR_ANON_KEY')

// Select with filters and embedding
const { data, error } = await client
  .from('orders')
  .select('id, total, customer(name, email)')
  .eq('status', 'active')
  .order('created_at', { ascending: false })
  .limit(20)

// Insert
const { data: newTodo } = await client
  .from('todos')
  .insert({ title: 'Buy groceries', status: 'pending' })

// Update
const { data: updated } = await client
  .from('todos')
  .update({ status: 'completed' })
  .eq('id', '123')

// Delete
await client
  .from('todos')
  .delete()
  .eq('status', 'archived')
```
## Configuration

The API engine reads its configuration from the `TenantContext` injected by the gateway middleware. The following settings affect query execution:

| Setting | Source | Description |
|---|---|---|
| `schema_name` | API key resolution | Postgres schema to query (e.g., `proj_abc123`) |
| `tier` | Project settings | Controls `statement_timeout` and `max_embed_depth` |
| `api_key_role` | API key resolution | Controls permission checks (Anon, Service, Custom) |
| `rsa_public_key` | Project auth settings | Used for JWT verification and RLS claims injection |
### Cache Configuration

| Parameter | Value | Description |
|---|---|---|
| L1 max capacity | Configurable | Maximum number of compiled schemas in the moka cache |
| L1 TTL | 300 seconds (5 min) | Time-to-live for in-process cache entries |
| L2 TTL | 3600 seconds (1 hour) | Time-to-live for Redis cache entries |
| L2 key format | `compiled_schema:{project_id}` | Redis key for the serialized compiled schema |
### SQL Endpoint Restrictions

| Restriction | Detail |
|---|---|
| Auth | Service key only |
| Allowed keywords | SELECT, INSERT, UPDATE, DELETE, WITH, EXPLAIN |
| Blocked | `SET search_path`, DDL (CREATE, DROP, ALTER, GRANT, REVOKE) |
| Timeout | Tier-based statement timeout (5s / 15s / 30s) |
## How It Works

### Request Execution Pipeline

Every REST request follows this pipeline inside `executor::execute_request`:

1. **Schema resolution** — Get the compiled schema from the L1/L2 cache, or introspect and compile on a miss (single-flight coalesced via moka's `try_get_with`)
2. **Query parsing** — Parse query parameters (`select`, `order`, `limit`, `offset`, filters, and the `Prefer` header) into a `QueryRequest` AST
3. **Permission check** — Anon keys are restricted to GET/HEAD. Custom keys are validated against a per-table permission map. Service keys have full access
4. **Mutation safety** — PATCH and DELETE require at least one filter to prevent accidental full-table mutations
5. **Schema name validation** — Only alphanumeric and underscore characters are allowed
6. **Table lookup** — Validate that the requested table exists in the compiled schema
7. **Transaction start** — Begin a transaction with `SET LOCAL search_path` and `SET LOCAL statement_timeout`
8. **JWT claims injection** — Extract the JWT from the Authorization header or `sb-access-token` cookie, verify the RS256 signature, and inject claims via `SET LOCAL request.jwt.claim.sub`, `request.jwt.claim.role`, and `request.jwt.claim.email` for PostgreSQL RLS evaluation. Anonymous requests set `request.jwt.claim.role = 'anon'`
9. **SQL generation** — Simple queries use pre-compiled templates. Complex queries (embeds, multiple joins) use the complex builder with lateral joins
10. **Execution** — Run the parameterized SQL and convert `PgRow` results to JSON (supporting bool, i32, i64, f64, UUID, DateTime, pgvector Vector, JSONB, and String types)
11. **Optional count** — If `Prefer: count=exact` is set, run a separate `SELECT count(*)` query without LIMIT/OFFSET
12. **Commit** — Commit the transaction
13. **Index advisor** — Fire-and-forget pattern recording to Redis for WHERE and ORDER BY columns
14. **Response** — Return a JSON array with an `x-schema-version` header and an optional `Content-Range` header
### Schema Introspection Queries
The engine runs five SQL queries against the PostgreSQL catalog:
| Query | Source | Data |
|---|---|---|
| Tables | information_schema.tables | All base tables in the schema |
| Columns | information_schema.columns | Column names, types, nullability, defaults, identity, array status |
| Primary keys | information_schema.table_constraints + key_column_usage | PK columns per table |
| Foreign keys | table_constraints + key_column_usage + constraint_column_usage | FK relationships for embedding |
| Indexes | pg_index + pg_class + pg_attribute + pg_am | Index names, columns, uniqueness, method (btree/hnsw), partial predicates |
Each foreign key produces two relationship entries: `BelongsTo` on the source table and `HasMany` on the target table. The schema hash is a SHA-256 digest of the canonical JSON representation of the table map (using a `BTreeMap` for deterministic key ordering).
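A sketch of how such a deterministic schema hash can be computed: the Rust side gets stable key order for free from `BTreeMap`, while this TypeScript model sorts object keys explicitly. `canonicalJson` and `schemaHash` are illustrative names, and the exact canonical form is an assumption.

```typescript
import { createHash } from "node:crypto";

// Serialize with recursively sorted object keys so that insertion order never
// changes the output, then SHA-256 the canonical JSON.
function canonicalJson(value: unknown): string {
  if (Array.isArray(value)) return `[${value.map(canonicalJson).join(",")}]`;
  if (value !== null && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    const entries = Object.keys(obj)
      .sort()
      .map((k) => `${JSON.stringify(k)}:${canonicalJson(obj[k])}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value); // string, number, boolean, null
}

function schemaHash(tables: Record<string, unknown>): string {
  return createHash("sha256").update(canonicalJson(tables)).digest("hex");
}
```

Two schema maps with the same tables and columns hash identically regardless of the order the introspection queries returned them in, which is what makes the hash usable as a schema version.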
### Query Compilation

The compiler pre-generates parameterized SQL templates for each table:

- Column lists are fully quoted (`"id"`, `"name"`)
- Schema and table names are quoted (`"proj_abc"."users"`)
- Primary-key WHERE clauses use positional parameters (`"id"=$1`)
- Insert plans are compiled on demand per unique column set, with the cache key incorporating the first 8 hex characters of the SHA-256 hash of the sorted column names
- Update plans combine SET clauses (`"name"=$1, "email"=$2`) with PK WHERE clauses (`"id"=$3`)
### Index Advisor Recommendations

When you call the advisor endpoint, it:

1. Scans Redis keys matching `idx_advisor:{project_id}:where:*` using cursor-based `SCAN`
2. For each column with 10+ query hits, checks `pg_index` for an existing covering index
3. Generates `CREATE INDEX CONCURRENTLY` SQL for uncovered columns (btree by default)
4. Scans all tables for pgvector `vector` columns without indexes and recommends HNSW with `vector_cosine_ops`
5. Returns recommendations with estimated impact (Medium for 10-99 queries, High for 100+)
Example recommendation:

```json
{
  "table": "orders",
  "columns": ["customer_id"],
  "index_type": "btree",
  "reason": "Column \"customer_id\" on table \"orders\" appears in WHERE clauses 247 times but has no covering index.",
  "estimated_impact": "High",
  "create_sql": "CREATE INDEX CONCURRENTLY \"idx_orders_customer_id\" ON \"proj_abc\".\"orders\" (\"customer_id\")"
}
```
### Type Conversion

The executor converts PostgreSQL row values to JSON using a type-priority chain:

1. `bool` — JSON boolean
2. `i32` — JSON number
3. `i64` — JSON number
4. `f64` — JSON number
5. `Uuid` — JSON string (formatted)
6. `DateTime<Utc>` — JSON string (RFC 3339)
7. `pgvector::Vector` — JSON array of floats
8. `serde_json::Value` — pass-through (JSONB/JSON columns)
9. `String` — JSON string
10. Fallback — JSON null