GraphQL
Auto-generated GraphQL API from PostgreSQL introspection, with queries, mutations, subscriptions, DataLoader batching, RLS enforcement, and built-in GraphiQL IDE.
The atlas-graphql crate auto-generates a complete GraphQL API from your PostgreSQL tables. Every project gets queries, mutations, and real-time subscriptions without writing resolver boilerplate. The schema is built dynamically from database introspection, cached in Redis, and kept in sync via manual refresh. Foreign key relationships are resolved automatically with DataLoader batching, and all operations enforce row-level security.
Overview
When a GraphQL request arrives for a project, AltBase introspects the tenant database schema, generates GraphQL types for each table, and builds query, mutation, and subscription root fields. The schema is cached per-project in a tiered cache (in-memory LRU + Redis) and rebuilt only when you call the refresh endpoint after DDL changes.
Subscriptions are powered by the CDC (Change Data Capture) pipeline via NATS. When a row changes, the CDC event is pushed through NATS to any active WebSocket subscriptions that match the table and filter criteria.
Applications built on AltBase can extend the auto-generated schema with custom resolvers via a Rust trait (for compiled apps) or via deployed JavaScript/Wasm functions (for runtime apps).
Key Concepts
Dynamic schema generation uses async_graphql::dynamic to build schemas at runtime from introspection data, not compile-time derive macros. Each table becomes a GraphQL object type, with fields mapped from PostgreSQL column types.
Auto-nested relationships are discovered from foreign keys. A person.company_id FK automatically generates a company: Company field on the Person type and a people: [Person!]! reverse field on Company. Nested queries use DataLoader batching to prevent N+1 queries.
Schema caching stores the compiled schema in Redis keyed by graphql:schema:{project_id}:{schema_hash}. An in-memory LRU cache (128 entries) sits in front of Redis for hot tenants. A DashMap<ProjectId, OnceCell> prevents thundering herd on first request.
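The tiered lookup described above can be sketched as follows. This is a minimal illustration, not AltBase's implementation: the Redis layer is simulated with a `Map`, and the `inFlight` promise map plays the role of the `DashMap<ProjectId, OnceCell>` guard so that concurrent first requests trigger only one introspection per tenant.

```typescript
// Sketch of the tiered schema-cache lookup (hypothetical names).
// Order: in-memory LRU -> Redis -> one introspection per tenant.

type Schema = { projectId: string; builtAt: number };

const lru = new Map<string, Schema>();               // insertion-ordered Map as a simple LRU
const LRU_MAX = 128;
const redis = new Map<string, Schema>();             // stand-in for the Redis layer
const inFlight = new Map<string, Promise<Schema>>(); // per-project OnceCell analogue

async function introspectAndBuild(projectId: string): Promise<Schema> {
  // Real code would introspect the tenant database and build the dynamic schema.
  return { projectId, builtAt: Date.now() };
}

function putLru(projectId: string, schema: Schema): void {
  if (lru.size >= LRU_MAX) {
    lru.delete(lru.keys().next().value!);            // evict the oldest entry
  }
  lru.set(projectId, schema);
}

async function getSchema(projectId: string): Promise<Schema> {
  const hit = lru.get(projectId);
  if (hit) {
    lru.delete(projectId);                           // refresh LRU position
    lru.set(projectId, hit);
    return hit;
  }
  const cached = redis.get(projectId);
  if (cached) {
    putLru(projectId, cached);
    return cached;
  }
  // Thundering-herd guard: concurrent callers share one in-flight build.
  let pending = inFlight.get(projectId);
  if (!pending) {
    pending = introspectAndBuild(projectId).then((schema) => {
      redis.set(projectId, schema);
      putLru(projectId, schema);
      inFlight.delete(projectId);
      return schema;
    });
    inFlight.set(projectId, pending);
  }
  return pending;
}
```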
Connection types wrap list queries with pagination metadata. Every list query returns a {Type}Connection with nodes, totalCount, and hasNextPage. The server-side limit cap is 100 rows.
Filter inputs are generated per-table based on column types. Filters support eq, neq, in, notIn, gt, gte, lt, lte, like, ilike, isNull, and compound operators _and, _or, _not.
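A request body combining these operators might look like the sketch below. The `TodoWhereInput` type name and `priority` column are illustrative assumptions; the operator shapes (`eq`, `ilike`, `gte`, `_or`) follow the table of generated filters, with top-level fields implicitly ANDed.

```typescript
// Hypothetical request body exercising the generated filter operators.
// Top-level filter fields are ANDed; _or / _and / _not combine sub-filters.
const body = {
  query: `query Overdue($where: TodoWhereInput) {
    todos(where: $where) { nodes { id title } totalCount }
  }`,
  variables: {
    where: {
      completed: { eq: false },            // BooleanFilter
      _or: [
        { title: { ilike: "%urgent%" } },  // StringFilter, case-insensitive
        { priority: { gte: 3 } },          // IntFilter comparison (assumed column)
      ],
    },
  },
};
```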
RLS enforcement runs every GraphQL request inside a single PostgreSQL transaction. Before any SQL executes, SET LOCAL injects the user's role and JWT claims so that row-level security policies apply to all queries, mutations, and DataLoader fetches within that request.
Soft deletes are handled automatically. Tables with a deleted_at column filter out deleted rows by default. Pass includeDeleted: true to list queries to see them.
Query limits prevent abuse. Max query depth is 10 levels, max complexity is 500 (each field = 1, nested object = 10, list = 20).
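The stated cost model (scalar field = 1, nested object = 10, list = 20; depth ≤ 10, complexity ≤ 500) can be sketched as a toy calculator. The `Field` tree shape and function names here are hypothetical, not AltBase's internals.

```typescript
// Toy cost model for the query limits above (illustrative only).
type Field = { kind: "scalar" | "object" | "list"; children?: Field[] };

function cost(f: Field): number {
  const own = f.kind === "scalar" ? 1 : f.kind === "object" ? 10 : 20;
  return own + (f.children ?? []).reduce((sum, c) => sum + cost(c), 0);
}

function depth(f: Field): number {
  return 1 + Math.max(0, ...(f.children ?? []).map(depth));
}

function allowed(roots: Field[]): boolean {
  const total = roots.reduce((s, f) => s + cost(f), 0);
  const deepest = Math.max(0, ...roots.map(depth));
  return total <= 500 && deepest <= 10;
}
```

For example, a list field selecting three scalar columns costs 20 + 3 × 1 = 23, well under the 500 cap.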
API Reference
Endpoints
| Method | Path | Description | Auth |
|---|---|---|---|
| GET | /graphql | GraphiQL IDE (browser) | API Key |
| POST | /graphql | Execute GraphQL query or mutation | API Key |
| GET | /graphql/ws | WebSocket for subscriptions (graphql-transport-ws protocol) | API Key |
| POST | /graphql/schema/refresh | Evict schema cache and rebuild from database | Service Key |
| POST | /graphql/extensions | Register a custom resolver (function-based) | Service Key |
| GET | /graphql/extensions | List registered custom resolvers | Service Key |
| DELETE | /graphql/extensions/:id | Remove a custom resolver | Service Key |
PostgreSQL to GraphQL Type Mapping
| PostgreSQL Type | GraphQL Scalar |
|---|---|
| uuid | UUID (custom scalar) |
| text, varchar | String |
| integer, int4 | Int |
| bigint, int8 | Int (or BigInt custom scalar for values > 2^53) |
| boolean | Boolean |
| numeric, decimal | Float |
| real, float4, float8 | Float |
| timestamptz, timestamp | DateTime (ISO 8601 custom scalar) |
| date | Date (custom scalar) |
| jsonb, json | JSON (custom scalar) |
| text[] | [String!] |
| uuid[] | [UUID!] |
| integer[] | [Int!] |
Filter Input Types
| Filter Type | Operators |
|---|---|
| StringFilter | eq, neq, in, notIn, like, ilike, isNull |
| IntFilter | eq, neq, in, gt, gte, lt, lte, isNull |
| FloatFilter | eq, neq, gt, gte, lt, lte, isNull |
| UUIDFilter | eq, neq, in, isNull |
| DateTimeFilter | eq, gt, gte, lt, lte, isNull |
| BooleanFilter | eq |
| JsonFilter | eq, contains (@>), containedBy (<@), hasKey (?) |
RLS Role Behavior
| API Key Role | PostgreSQL Role | Effect |
|---|---|---|
| Anon | anon | RLS policies for anonymous access apply |
| Service | service_role | Bypasses RLS (full access) |
| Custom (JWT) | authenticated | RLS policies for authenticated users apply; auth_uid() returns JWT subject |
Code Examples
Query with filters and ordering
curl -X POST http://localhost:3000/graphql \
-H "Authorization: Bearer $ANON_KEY" \
-H "Content-Type: application/json" \
-d '{
"query": "query { todos(where: { completed: { eq: false } }, orderBy: [{ field: CREATED_AT, direction: DESC }], limit: 20) { nodes { id title completed created_at } totalCount hasNextPage } }"
}'
Insert mutation
curl -X POST http://localhost:3000/graphql \
-H "Authorization: Bearer $SERVICE_KEY" \
-H "Content-Type: application/json" \
-d '{
"query": "mutation { insertTodo(data: { title: \"Buy milk\", completed: false }) { id title created_at } }"
}'
Nested relationship query
query {
companies(limit: 10) {
nodes {
id
name
people {
id
name
email
}
opportunities {
id
title
value
}
}
}
}
WebSocket subscription
const ws = new WebSocket(
'ws://localhost:3000/graphql/ws',
'graphql-transport-ws'
);
ws.onopen = () => {
  ws.send(JSON.stringify({
    type: 'connection_init',
    payload: { Authorization: `Bearer ${ANON_KEY}` }
  }));
};
ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  // The graphql-transport-ws protocol requires waiting for connection_ack
  // before sending subscribe.
  if (msg.type === 'connection_ack') {
    ws.send(JSON.stringify({
      id: '1',
      type: 'subscribe',
      payload: {
        query: `subscription {
          todos(events: [INSERT, UPDATE]) {
            event
            new { id title completed }
          }
        }`
      }
    }));
  } else if (msg.type === 'next') {
    console.log(msg.payload);
  }
};
Refresh schema after DDL changes
curl -X POST http://localhost:3000/graphql/schema/refresh \
-H "Authorization: Bearer $SERVICE_KEY"
Register a custom resolver via functions
curl -X POST http://localhost:3000/graphql/extensions \
-H "Authorization: Bearer $SERVICE_KEY" \
-H "Content-Type: application/json" \
-d '{
"fieldName": "calculateDiscount",
"fieldType": "mutation",
"functionId": "uuid-of-deployed-function",
"inputSchema": {
"type": "object",
"properties": {
"orderId": { "type": "string", "format": "uuid" },
"code": { "type": "string" }
},
"required": ["orderId", "code"]
},
"outputSchema": {
"type": "object",
"properties": {
"discount": { "type": "number" },
"finalAmount": { "type": "integer" }
}
}
}'
Configuration
The GraphQL engine uses the shared Redis connection for schema caching and the shared NATS connection for subscriptions. No additional environment variables are required.
Schema Cache Settings
| Setting | Value |
|---|---|
| Redis key format | graphql:schema:{project_id}:{schema_hash} |
| TTL | None (invalidated explicitly via refresh endpoint) |
| In-memory LRU size | 128 entries |
| Thundering herd prevention | DashMap<ProjectId, OnceCell> ensures one introspection per tenant |
Auto-Generated Schema Conventions
| Convention | Behavior |
|---|---|
| Table naming | call_log becomes type CallLog (split on _, capitalize each segment) |
| Soft deletes | Tables with deleted_at auto-filter unless includeDeleted: true |
| Workspace scoping | All queries add workspace_id = project_id from TenantContext |
| Read-only fields | created_at, updated_at excluded from insert/update inputs |
| Auto-filled columns | id (gen_random_uuid), workspace_id (from context), timestamps |
| Reverse relationship limit | Max 100 rows per parent; use explicit list queries for more |
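The table-naming convention above (split on `_`, capitalize each segment) can be sketched as a small helper. The function name is illustrative, not AltBase's actual API.

```typescript
// Sketch of the table-to-type naming rule: call_log -> CallLog.
function toGraphQLTypeName(table: string): string {
  return table
    .split("_")
    .filter((segment) => segment.length > 0)  // tolerate stray underscores
    .map((segment) => segment[0].toUpperCase() + segment.slice(1))
    .join("");
}
```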
Auto-Generated Mutation Behavior
| Operation | SQL Pattern |
|---|---|
| insertTodo | INSERT INTO ... RETURNING * with workspace_id from context |
| updateTodo | UPDATE ... SET name = COALESCE($1, name) WHERE id = $2 AND workspace_id = $3 RETURNING * |
| deleteTodo | Soft delete (SET deleted_at = now()) if deleted_at column exists, otherwise DELETE FROM |
| Bulk update | updateTodos(where, data) returns { affectedRows: Int! } |
| Bulk delete | deleteTodos(where) returns { affectedRows: Int! } |
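The COALESCE-based update pattern from the table above could be generated roughly like this. The generator function is a hypothetical sketch: each provided column becomes `SET col = COALESCE($n, col)`, so omitted input fields leave existing values untouched, and the statement is always scoped by id and workspace_id.

```typescript
// Hypothetical generator for the updateTodo SQL pattern shown above.
function buildUpdate(table: string, columns: string[]): string {
  const sets = columns
    .map((col, i) => `${col} = COALESCE($${i + 1}, ${col})`)
    .join(", ");
  const idParam = columns.length + 1;
  const wsParam = columns.length + 2;
  return `UPDATE ${table} SET ${sets} WHERE id = $${idParam} AND workspace_id = $${wsParam} RETURNING *`;
}
```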
How It Works
Query Execution Flow
- Client sends a GraphQL query to POST /graphql.
- The api_key_middleware authenticates the request and injects TenantContext.
- User ID is extracted from the JWT in the Authorization header.
- The schema cache is checked for this project. On cache miss, the schema is built by introspecting the tenant database and generating GraphQL types dynamically.
- A PostgreSQL transaction is started. RLS context is injected via SET LOCAL (role, JWT claims, JWT subject).
- async-graphql resolves the query. Nested relationship fields use DataLoader batching: FK values are collected into a set, a single SELECT ... WHERE id = ANY($1) fetches all related rows, and results are mapped back to parent rows.
- The transaction is committed and results are returned as JSON.
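The SET LOCAL step can be sketched as below. The setting names (`request.jwt.claims`, `request.jwt.sub`) are illustrative assumptions in the PostgREST style, not necessarily the exact keys AltBase injects; the point is that SET LOCAL scopes the role and claims to the request's transaction, so they apply to every query, mutation, and DataLoader fetch inside it.

```typescript
// Hypothetical SET LOCAL preamble for the per-request transaction.
// Setting names are illustrative; single quotes are doubled for SQL escaping.
function rlsPreamble(role: string, claimsJson: string, sub: string): string[] {
  const esc = (s: string) => s.replace(/'/g, "''");
  return [
    `SET LOCAL ROLE ${role}`,
    `SET LOCAL request.jwt.claims = '${esc(claimsJson)}'`,
    `SET LOCAL request.jwt.sub = '${esc(sub)}'`,
  ];
}
```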
Subscription Flow
- Client connects to /graphql/ws using the graphql-transport-ws protocol.
- Client sends connection_init with auth credentials in the payload.
- Client sends a subscribe message with a subscription query.
- The server subscribes to the NATS subject atlas.cdc.{project_id}.{schema_name}.{table_name}.>.
- On each CDC event, the where filter is applied in-memory on the new record (for INSERT/UPDATE) or the old record (for DELETE).
- If the event matches, a ChangePayload with event, old, new, columns, and commitTimestamp is pushed to the WebSocket.
- The subscription remains active until the client disconnects or sends a complete message.
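The in-memory filter step above can be sketched as follows. For brevity this supports only the eq operator; the record chosen for matching follows the rule stated above (new for INSERT/UPDATE, old for DELETE). Type and function names are illustrative.

```typescript
// Sketch of the in-memory `where` match applied to each CDC event (eq only).
type CdcEvent = {
  event: "INSERT" | "UPDATE" | "DELETE";
  new?: Record<string, unknown>;
  old?: Record<string, unknown>;
};

function matches(
  where: Record<string, { eq: unknown }>,
  ev: CdcEvent,
): boolean {
  // DELETE events carry only the old record; INSERT/UPDATE filter on new.
  const row = ev.event === "DELETE" ? ev.old : ev.new;
  if (!row) return false;
  return Object.entries(where).every(([col, filter]) => row[col] === filter.eq);
}
```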
DataLoader Batching (N+1 Prevention)
For a query like companies { people { name } }:
- The top-level query returns N company rows.
- For the people field, the DataLoader collects all company IDs from the parent result.
- A single batch query runs: SELECT * FROM person WHERE company_id = ANY($1) AND deleted_at IS NULL.
- Results are grouped by company_id and attached to each Company node.
Each request creates fresh DataLoader instances keyed by (schema_name, table_name, fk_column) so batching is scoped to the request transaction and inherits its RLS context.
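The batch-and-group step can be sketched as below. This is a simplified stand-in: the database is an in-memory array, `fetchPeople` plays the role of the single `ANY($1)` query, and names are hypothetical.

```typescript
// Minimal sketch of the DataLoader batch step: collect parent IDs,
// fetch once, group rows back onto each parent.
type Person = { id: string; company_id: string };

// Stand-in for: SELECT * FROM person WHERE company_id = ANY($1) AND deleted_at IS NULL
async function fetchPeople(companyIds: string[], db: Person[]): Promise<Person[]> {
  const ids = new Set(companyIds);
  return db.filter((p) => ids.has(p.company_id));
}

async function loadPeopleBatched(
  companyIds: string[],
  db: Person[],
): Promise<Map<string, Person[]>> {
  // One query for all (deduplicated) parent IDs.
  const rows = await fetchPeople([...new Set(companyIds)], db);
  const grouped = new Map<string, Person[]>();
  for (const id of companyIds) grouped.set(id, []); // parents with no children get []
  for (const row of rows) grouped.get(row.company_id)?.push(row);
  return grouped;
}
```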
Vector Search & RAG
Embedding generation, vector storage backends, hybrid search with RRF fusion, auto-embedding via CDC, knowledge collections, and LLM-powered Q&A with source citations.
Analytics & Billing
Plan tiers with resource limits, Stripe subscription management, usage tracking with 30-day rolling windows, materialized views for analytics, and webhook-driven tier changes.