Storage
Buckets, object CRUD, multipart uploads, TUS v1.0.0 protocol, storage providers (filesystem, S3, Azure Blob), signed URLs, image transforms, storage policies, and background cleanup.
Overview
The AltBase storage system (atlas-storage crate) provides S3-compatible file management with buckets, objects, access policies, image transformations, signed URLs, and TUS resumable uploads. It supports three backend providers — local filesystem, S3-compatible (AWS S3, MinIO, Cloudflare R2, DigitalOcean Spaces), and Azure Blob Storage — switchable via a single environment variable.
Storage is fully integrated with AltBase auth: bucket-level policies use the same SQL expression model as RLS, referencing auth_uid(), auth_role(), and auth_email() for access control decisions.
Key Concepts
Buckets
Buckets are the top-level containers for objects. Each bucket belongs to a project and has configurable settings:
| Setting | Description |
|---|---|
| `name` | Unique name within the project (max 100 characters) |
| `visibility` | `public` or `private` — public buckets allow unauthenticated SELECT |
| `max_file_size` | Maximum upload size in bytes (per file) |
| `allowed_mimes` | Whitelist of allowed MIME types (e.g., `["image/png", "image/jpeg"]`) |
Storage isolation: each project's files are stored in a provider container named `{project_id}-{bucket_name}`.
Objects
Objects represent files stored within buckets. Each object has:
| Field | Description |
|---|---|
| `id` | UUID primary key |
| `name` | Full path within bucket (e.g., `avatars/user1/photo.jpg`) |
| `size` | File size in bytes |
| `content_type` | MIME type |
| `checksum` | SHA-256 hex digest |
| `owner_id` | Tenant user ID (nullable for service key uploads) |
| `user_metadata` | Arbitrary JSON key-value pairs |
Object listing is served from the storage_objects PostgreSQL table, not from the storage provider. The database is the source of truth for file metadata.
Upload Strategies
| Strategy | Use Case | Max Size |
|---|---|---|
| Simple upload | Small files (default threshold 6 MB) | Bucket max_file_size |
| Resumable upload | Large files with chunked retry | System-wide max (default 5 GB) |
| TUS v1.0.0 | Standard resumable upload protocol with chunk-level retry | System-wide max |
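The choice between strategies can be sketched as a simple client-side rule. This is an illustrative sketch only: the constant names and the exact interaction between the bucket limit and the system-wide limit are assumptions, not SDK code.

```typescript
// Hypothetical strategy selector based on the limits in the table above.
const SIMPLE_UPLOAD_THRESHOLD = 6 * 1024 * 1024;       // stated default: 6 MB
const SYSTEM_MAX_UPLOAD_SIZE = 5 * 1024 * 1024 * 1024; // stated default: 5 GB

type Strategy = "simple" | "resumable";

function chooseStrategy(fileSize: number, bucketMaxFileSize: number): Strategy {
  if (fileSize > SYSTEM_MAX_UPLOAD_SIZE) {
    throw new Error("exceeds system-wide max upload size");
  }
  if (fileSize <= SIMPLE_UPLOAD_THRESHOLD) {
    // Simple uploads are additionally bounded by the bucket's max_file_size.
    if (fileSize > bucketMaxFileSize) {
      throw new Error("exceeds bucket max_file_size");
    }
    return "simple";
  }
  return "resumable";
}
```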
Storage Providers
| Provider | Backend | Config | Best For |
|---|---|---|---|
| `filesystem` | Local disk | `ATLAS_STORAGE_ROOT` | Local dev, testing, CI |
| `s3` | AWS S3, MinIO, R2, DO Spaces | `ATLAS_S3_*` vars | Production (cloud) |
| `azure` | Azure Blob Storage / Azurite | `ATLAS_AZURE_CONNECTION_STRING` | Azure environments |
All three providers implement the same StorageProvider trait. Switching is a config change, not a code change.
Signed URLs
Signed URLs provide time-limited access to private objects without requiring API key authentication. The URL includes an HMAC-SHA256 signature computed from the signing key (derived from ATLAS_MASTER_KEY via HKDF with context "atlas-storage-signed-urls"), bucket, path, project ID, and expiry timestamp.
Token format: `{base64url(payload)}.{base64url(signature)}`
Payload: `{"b":"bucket","p":"path","pid":"project_id","exp":1710600000}`
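Token construction can be sketched as follows. This is a minimal illustration, not the crate's actual code: the helper names, the empty HKDF salt, the 32-byte key length, and signing the base64url-encoded payload (rather than the raw fields) are assumptions.

```typescript
import { createHmac, hkdfSync } from "node:crypto";

// Derive a signing key from the master key via HKDF, using the
// "atlas-storage-signed-urls" context string mentioned above.
function deriveSigningKey(masterKey: string): Buffer {
  return Buffer.from(
    hkdfSync("sha256", masterKey, "", "atlas-storage-signed-urls", 32)
  );
}

interface SignedPayload {
  b: string;   // bucket
  p: string;   // path
  pid: string; // project ID
  exp: number; // unix expiry timestamp
}

// Token format: {base64url(payload)}.{base64url(signature)}
function signToken(key: Buffer, payload: SignedPayload): string {
  const body = Buffer.from(JSON.stringify(payload)).toString("base64url");
  const sig = createHmac("sha256", key).update(body).digest("base64url");
  return `${body}.${sig}`;
}

const key = deriveSigningKey("example-master-key"); // fabricated demo key
const token = signToken(key, {
  b: "documents", p: "invoice.pdf", pid: "proj-1", exp: 1710600000,
});
```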
Image Transforms
On-the-fly image transformations applied during download via query parameters. Transformed images are cached in Redis with a 1-hour TTL. Cache is invalidated when the source file is overwritten or deleted.
| Parameter | Type | Description |
|---|---|---|
| `width` | u32 | Target width in pixels |
| `height` | u32 | Target height in pixels |
| `format` | string | Output format: `webp`, `png`, `jpeg` |
| `quality` | u8 | Compression quality 1-100 (default 80, `jpeg`/`webp` only) |
Behavior: if only width or height is given, aspect ratio is preserved. If both are given, the image is resized to fit within bounds without distortion. Transforms only apply to image/* MIME types. Non-image files with transform params are returned unchanged. Maximum source image size for transforms is configurable (default 10 MB).
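The fit-within-bounds rule above can be sketched as a small dimension calculation; the function name and rounding behavior are assumptions for illustration.

```typescript
// Compute output dimensions: preserve aspect ratio when one bound is given,
// fit inside both bounds (no distortion) when both are given.
function targetDimensions(
  srcW: number, srcH: number, width?: number, height?: number
): { w: number; h: number } {
  if (width !== undefined && height === undefined) {
    return { w: width, h: Math.round((srcH * width) / srcW) };
  }
  if (height !== undefined && width === undefined) {
    return { w: Math.round((srcW * height) / srcH), h: height };
  }
  if (width !== undefined && height !== undefined) {
    // Scale by the smaller ratio so the result fits inside the box.
    const scale = Math.min(width / srcW, height / srcH);
    return { w: Math.round(srcW * scale), h: Math.round(srcH * scale) };
  }
  return { w: srcW, h: srcH }; // no transform params: unchanged
}
```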
Storage Policies
Storage policies control access at the bucket level using SQL expressions evaluated against auth context:
| Operation | Controls |
|---|---|
| `SELECT` | Download, list, signed URL generation |
| `INSERT` | Upload (simple + resumable) |
| `UPDATE` | Overwrite existing file |
| `DELETE` | Delete file |
Policy evaluation uses OR semantics (same as PostgreSQL RLS): if any policy returns true, the operation is allowed. If no policies exist for an operation, access is denied (default-deny). Public buckets skip policy checks for SELECT operations.
Available functions in policy expressions:
- `auth_uid()` — authenticated user's ID
- `auth_role()` — user's role
- `auth_email()` — user's email
- Any column on `storage_objects` (e.g., `owner_id`, `content_type`, `name`)
API Reference
Bucket Endpoints
| Method | Path | Description | Auth |
|---|---|---|---|
| POST | `/storage/buckets` | Create a bucket | API Key |
| GET | `/storage/buckets` | List all buckets | API Key |
| GET | `/storage/buckets/{name}` | Get bucket details | API Key |
| PUT | `/storage/buckets/{name}` | Update bucket settings | API Key |
| DELETE | `/storage/buckets/{name}` | Delete a bucket and all of its objects | API Key |
Object Endpoints
| Method | Path | Description | Auth |
|---|---|---|---|
| POST | `/storage/object/{bucket}/{*path}` | Upload a file (simple upload) | API Key |
| GET | `/storage/object/{bucket}/{*path}` | Download a file (with optional transforms) | API Key |
| DELETE | `/storage/object/{bucket}/{*path}` | Delete a file | API Key |
| GET | `/storage/objects/{bucket}` | List objects in bucket | API Key |
Resumable Upload Endpoints
| Method | Path | Description | Auth |
|---|---|---|---|
| POST | `/storage/upload/resumable` | Create a resumable upload session | API Key |
| PATCH | `/storage/upload/{upload_id}` | Upload a chunk | API Key |
| POST | `/storage/upload/{upload_id}/complete` | Complete the upload | API Key |
| HEAD | `/storage/upload/{upload_id}` | Check upload progress (`bytes_uploaded`, `total_size`) | API Key |
TUS v1.0.0 Protocol Endpoints
| Method | Path | Description | Auth |
|---|---|---|---|
| OPTIONS | `/storage/v1/upload/tus` | TUS server capabilities | API Key |
| POST | `/storage/v1/upload/tus` | Create TUS upload | API Key |
| PATCH | `/storage/v1/upload/tus/{upload_id}` | Upload TUS chunk | API Key |
| HEAD | `/storage/v1/upload/tus/{upload_id}` | Get TUS upload offset | API Key |
| DELETE | `/storage/v1/upload/tus/{upload_id}` | Cancel TUS upload | API Key |
Signed URL Endpoints
| Method | Path | Description | Auth |
|---|---|---|---|
| POST | `/storage/signed-url` | Generate a signed URL | API Key |
| GET | `/storage/signed/{token}` | Serve file via signed URL | None |
Policy Endpoints
| Method | Path | Description | Auth |
|---|---|---|---|
| POST | `/storage/policies` | Create a storage policy | Service Key |
| GET | `/storage/policies/bucket/{bucket}` | List policies for bucket | Service Key |
| DELETE | `/storage/policies/{id}` | Delete a policy | Service Key |
Code Examples
Bucket Management
```bash
# Create a public bucket with MIME whitelist
curl -X POST http://localhost:3000/storage/buckets \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "avatars",
    "public": true,
    "allowed_mime_types": ["image/png", "image/jpeg", "image/webp"],
    "file_size_limit": 5242880
  }'

# Create a private bucket for documents
curl -X POST http://localhost:3000/storage/buckets \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "documents", "public": false}'

# List all buckets
curl http://localhost:3000/storage/buckets \
  -H "Authorization: Bearer $SERVICE_KEY"

# Delete a bucket (removes all objects)
curl -X DELETE http://localhost:3000/storage/buckets/avatars \
  -H "Authorization: Bearer $SERVICE_KEY"
```
Object Upload and Download
```bash
# Simple upload
curl -X POST http://localhost:3000/storage/object/avatars/user1.png \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: image/png" \
  --data-binary @avatar.png

# Download a file
curl http://localhost:3000/storage/object/avatars/user1.png \
  -H "Authorization: Bearer $ANON_KEY" -o avatar.png

# Download with image transform
curl "http://localhost:3000/storage/object/avatars/user1.png?width=200&height=200&format=webp&quality=80" \
  -H "Authorization: Bearer $ANON_KEY" -o thumbnail.webp

# List objects with prefix and pagination
curl "http://localhost:3000/storage/objects/avatars?prefix=user1/&limit=100&offset=0" \
  -H "Authorization: Bearer $ANON_KEY"

# Delete a file
curl -X DELETE http://localhost:3000/storage/object/avatars/user1.png \
  -H "Authorization: Bearer $SERVICE_KEY"
```
Signed URLs
```bash
# Generate a signed URL (1 hour expiry)
curl -X POST http://localhost:3000/storage/signed-url \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"bucket": "documents", "path": "invoice.pdf", "expires_in": 3600}'
# Response:
# {"signed_url": "/storage/signed/eyJiIjoiZG9jdW1lbnRzIi..."}

# Access file via signed URL (no auth required)
curl http://localhost:3000/storage/signed/eyJiIjoiZG9jdW1lbnRzIi... -o invoice.pdf
```
TUS Resumable Upload
```bash
# Step 1: Create TUS upload
curl -X POST http://localhost:3000/storage/v1/upload/tus \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Tus-Resumable: 1.0.0" \
  -H "Upload-Length: 104857600" \
  -H "Upload-Metadata: filename dmlkZW8ubXA0,bucket dmlkZW9z"
# Returns: Location: /storage/v1/upload/tus/{upload_id}

# Step 2: Upload chunks
curl -X PATCH http://localhost:3000/storage/v1/upload/tus/{upload_id} \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Tus-Resumable: 1.0.0" \
  -H "Upload-Offset: 0" \
  -H "Content-Type: application/offset+octet-stream" \
  --data-binary @chunk1.bin

# Step 3: Check progress (after disconnect)
curl -I http://localhost:3000/storage/v1/upload/tus/{upload_id} \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Tus-Resumable: 1.0.0"
# Returns: Upload-Offset: 5242880

# Step 4: Resume from offset
curl -X PATCH http://localhost:3000/storage/v1/upload/tus/{upload_id} \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Tus-Resumable: 1.0.0" \
  -H "Upload-Offset: 5242880" \
  -H "Content-Type: application/offset+octet-stream" \
  --data-binary @chunk2.bin
```
Storage Policies
```bash
# Allow any authenticated user to upload to avatars
curl -X POST http://localhost:3000/storage/policies \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"bucket":"avatars","name":"authenticated_upload","operation":"INSERT","definition":"auth_uid() IS NOT NULL"}'

# Only file owner can delete
curl -X POST http://localhost:3000/storage/policies \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"bucket":"avatars","name":"owner_delete","operation":"DELETE","definition":"auth_uid()::uuid = owner_id"}'

# Only owner can read their files in private bucket
curl -X POST http://localhost:3000/storage/policies \
  -H "Authorization: Bearer $SERVICE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"bucket":"documents","name":"owner_read","operation":"SELECT","definition":"auth_uid()::uuid = owner_id"}'
```
SDK Usage
```typescript
import { createClient } from '@altbasedb/sdk'

const client = createClient('http://localhost:3000', 'YOUR_ANON_KEY')

// Upload a file
const { data, error } = await client.storage
  .from('avatars')
  .upload('user1/photo.png', file, {
    contentType: 'image/png',
    upsert: true
  })

// Download a file
const { data: blob } = await client.storage
  .from('avatars')
  .download('user1/photo.png')

// Get a public URL (public bucket)
const { data: urlData } = client.storage
  .from('avatars')
  .getPublicUrl('user1/photo.png', {
    transform: { width: 200, height: 200, format: 'webp' }
  })

// Create a signed URL (private bucket)
const { data: signedUrl } = await client.storage
  .from('documents')
  .createSignedUrl('invoice.pdf', 3600)

// List files
const { data: files } = await client.storage
  .from('avatars')
  .list('user1/', { limit: 100, offset: 0 })

// Delete a file
await client.storage.from('avatars').remove(['user1/photo.png'])
```
Configuration
Environment Variables
| Variable | Default | Description |
|---|---|---|
| `ATLAS_STORAGE_PROVIDER` | `filesystem` | Backend provider: `filesystem`, `s3`, `azure` |
| `ATLAS_STORAGE_ROOT` | — | Local filesystem root directory |
| `ATLAS_MASTER_KEY` | (required) | Used to derive HMAC signing key for signed URLs via HKDF |
S3-Compatible Provider
| Variable | Description |
|---|---|
| `ATLAS_S3_ENDPOINT` | S3 endpoint URL |
| `ATLAS_S3_REGION` | AWS region |
| `ATLAS_S3_ACCESS_KEY` | Access key |
| `ATLAS_S3_SECRET_KEY` | Secret key |
| `ATLAS_S3_BUCKET` | Default bucket name |
Compatible with AWS S3, MinIO, Cloudflare R2, and DigitalOcean Spaces.
Azure Blob Storage Provider
| Variable | Description |
|---|---|
| `ATLAS_AZURE_CONNECTION_STRING` | Azure Storage connection string (or `UseDevelopmentStorage=true` for Azurite) |
System Limits
| Parameter | Default | Description |
|---|---|---|
| `storage_max_upload_size` | 5 GB | System-wide maximum upload size |
| `storage_max_transform_size` | 10 MB | Maximum source image size for transforms |
| Transform cache TTL | 1 hour | Redis TTL for cached transformed images |
| Transform cache key | `transform:{bucket}:{path}:w{width}:h{height}:f{format}:q{quality}` | Redis key format |
| Upload session TTL | 24 hours | Redis TTL for resumable upload state |
How It Works
Simple Upload Flow
- `POST /storage/object/{bucket}/{*path}` with file data
- Validate bucket exists and is accessible
- Check if file already exists at path: if yes, evaluate UPDATE policy; if new, evaluate INSERT policy
- Check file size against `bucket.max_file_size`
- Check MIME type against `bucket.allowed_mimes` (if configured). MIME type is validated against file magic bytes (first 512 bytes) to prevent spoofing
- Compute SHA-256 checksum
- `provider.put_object(container, path, data, content_type)`
- UPSERT into `storage_objects` (`ON CONFLICT (bucket_id, name) DO UPDATE`) — last-writer-wins semantics
- Return `{ id, name, size, content_type, checksum }`
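The magic-byte check in the flow above can be sketched for a few common image types. The byte signatures below are the standard ones for PNG, JPEG, and WebP; the helper names and the fallback behavior for unrecognized content are assumptions (the real check may cover more formats and behave differently on unknown content).

```typescript
// Identify a file's MIME type from its leading bytes.
function sniffMime(head: Uint8Array): string | null {
  const startsWith = (sig: number[], off = 0) =>
    sig.every((b, i) => head[off + i] === b);
  // PNG: 89 50 4E 47 0D 0A 1A 0A
  if (startsWith([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a])) return "image/png";
  // JPEG: FF D8 FF
  if (startsWith([0xff, 0xd8, 0xff])) return "image/jpeg";
  // WebP: "RIFF" ???? "WEBP"
  if (startsWith([0x52, 0x49, 0x46, 0x46]) && startsWith([0x57, 0x45, 0x42, 0x50], 8))
    return "image/webp";
  return null;
}

// Reject uploads whose declared Content-Type contradicts the file content.
function mimeMatchesContent(declared: string, head: Uint8Array): boolean {
  const sniffed = sniffMime(head);
  // Unrecognized content: fall back to trusting the declared type (assumption).
  return sniffed === null || sniffed === declared;
}
```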
TUS Upload Flow
- Client sends `POST /storage/v1/upload/tus` with `Upload-Length` and `Upload-Metadata` headers (`Tus-Resumable: 1.0.0` required)
- Server creates upload state in Redis (upload ID, offset, metadata) with 24-hour TTL
- Server calls `provider.initiate_multipart()` to get a provider-specific upload ID
- Client sends `PATCH` requests with file chunks, each with `Upload-Offset` and `Content-Type: application/offset+octet-stream`
- Server validates offset continuity (`start == bytes_uploaded`) and calls `provider.upload_part()` for each chunk
- Redis state is updated atomically (Lua script): `bytes_uploaded += chunk.len()`, `append(part_number, etag)` to the parts list
- Rolling SHA-256 hash is updated incrementally in Redis (no re-download needed)
- When all bytes are received (`Upload-Offset == Upload-Length`), the upload auto-completes
- Server calls `provider.complete_multipart()` to finalize at the provider level
- A `storage_objects` row is created and the Redis session is cleaned up
- A storage event is published to NATS for the trigger system
- If the client disconnects, `HEAD` returns `bytes_uploaded` from Redis so the client can resume
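The offset-continuity rule above can be sketched with an in-memory session standing in for the Redis state. Names and the error shape are illustrative; in the real system the update is an atomic Lua script against Redis.

```typescript
interface TusSession {
  uploadLength: number; // declared Upload-Length
  bytesUploaded: number; // bytes stored so far
}

// Accept a PATCH only when its Upload-Offset matches the stored offset;
// returns the new offset (sent back to the client as Upload-Offset).
function applyChunk(session: TusSession, uploadOffset: number, chunkLen: number): number {
  if (uploadOffset !== session.bytesUploaded) {
    throw new Error(`409 Conflict: expected offset ${session.bytesUploaded}`);
  }
  session.bytesUploaded += chunkLen;
  return session.bytesUploaded;
}

const session: TusSession = { uploadLength: 10, bytesUploaded: 0 };
applyChunk(session, 0, 5); // first chunk
applyChunk(session, 5, 5); // resumed chunk at the stored offset
// Auto-complete condition: all declared bytes received.
const complete = session.bytesUploaded === session.uploadLength;
```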
Chunk Assembly per Provider
| Provider | Chunk Storage | Assembly |
|---|---|---|
| Filesystem | Write temp file _chunks/{upload_id}/{part} | Concatenate temp files to final path, delete temps |
| Azure | Put Block with block ID | Put Block List |
| S3 | UploadPart | CompleteMultipartUpload |
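The filesystem row above (concatenate temp part files, then delete them) can be sketched like this. Paths, the numeric part naming, and the synchronous style are assumptions for illustration, not the provider's actual code.

```typescript
import * as fs from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Concatenate part files (named by part number) into the final object,
// then remove the temp chunk directory.
function assembleChunks(chunkDir: string, finalPath: string): void {
  const parts = fs.readdirSync(chunkDir).sort((a, b) => Number(a) - Number(b));
  const fd = fs.openSync(finalPath, "w");
  try {
    for (const part of parts) {
      fs.writeSync(fd, fs.readFileSync(join(chunkDir, part)));
    }
  } finally {
    fs.closeSync(fd);
  }
  fs.rmSync(chunkDir, { recursive: true }); // delete temp chunks
}

// Demo with two small parts.
const dir = fs.mkdtempSync(join(tmpdir(), "chunks-"));
fs.writeFileSync(join(dir, "1"), "hello ");
fs.writeFileSync(join(dir, "2"), "world");
const finalPath = join(tmpdir(), `assembled-${process.pid}.bin`);
assembleChunks(dir, finalPath);
const assembled = fs.readFileSync(finalPath).toString();
```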
Signed URL Flow
Generation:
- `POST /storage/signed-url` with `{ bucket, path, expires_in }`
- Evaluate SELECT policy at generation time
- Build payload: `{ b: bucket, p: path, pid: project_id, exp: now + expires_in }`
- Sign with HMAC-SHA256 using `StorageState.signing_key` (derived from `ATLAS_MASTER_KEY` via HKDF)
- Return `{ signed_url: "/storage/signed/{base64url_token}" }`
Verification:
- `GET /storage/signed/{token}` — no auth required
- Decode the base64url token, split into payload and signature
- Recompute HMAC-SHA256, constant-time compare
- Check `exp > now`
- `provider.get_object(container, path)` and stream the file
Policy Evaluation
- Load all policies for the bucket + operation (cached in Redis, 60-second TTL)
- If the bucket is public AND the operation is `SELECT` — allow without policy check
- If no policies exist for the operation — deny (default-deny)
- For each policy, evaluate the `definition` as a SQL expression with claims injected via `SET LOCAL`. For SELECT/UPDATE/DELETE, the expression runs against the existing `storage_objects` row. For INSERT, a synthetic row CTE is used where `owner_id = auth_uid()::uuid` so ownership-based policies work against the prospective file metadata.
- If any policy returns `true` — allow (OR semantics)
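The evaluation order above can be sketched with the SQL evaluation abstracted behind a callback. All names here are illustrative; the callback stands in for running the policy definition against the database.

```typescript
type Operation = "SELECT" | "INSERT" | "UPDATE" | "DELETE";

interface Policy {
  name: string;
  operation: Operation;
  definition: string; // SQL expression, e.g. "auth_uid()::uuid = owner_id"
}

function isAllowed(
  bucketPublic: boolean,
  operation: Operation,
  policies: Policy[],
  evaluate: (definition: string) => boolean, // runs the SQL expression
): boolean {
  if (bucketPublic && operation === "SELECT") return true; // public shortcut
  const relevant = policies.filter((p) => p.operation === operation);
  if (relevant.length === 0) return false;                 // default-deny
  return relevant.some((p) => evaluate(p.definition));     // OR semantics
}
```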
Background Cleanup Worker
A background task runs every hour (Tokio interval timer) to clean up:
- Orphaned objects — database record exists but file is missing from provider
- Expired TUS uploads — stale for more than 24 hours. Uses a Redis sorted set `upload_expiry` with `(expiry_timestamp, upload_id)`. The worker calls `ZRANGEBYSCORE` for expired entries, calls `provider.abort_multipart()` to clean up provider-side data, and removes the entries from the set
- Temporary files — from failed uploads
Bucket Deletion Sequence
1. Delete all `storage_objects` rows (CASCADE from `storage_buckets`)
2. Delete all `storage_policies` rows (CASCADE from `storage_buckets`)
3. Delete the `storage_buckets` row
4. Call `provider.delete_container()` to remove provider-side data
5. Invalidate all transform caches for the bucket via Redis SCAN + DEL
If step 4 fails (provider error), the database deletion still succeeds. Orphaned provider data is logged for manual cleanup.
Cache Invalidation
When a file is overwritten (UPSERT) or deleted, all transform cache keys matching `transform:{bucket}:{path}:*` are invalidated using Redis SCAN + DEL, preventing stale transformed images from being served.
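A sketch of the cache key and invalidation pattern, using the key layout from the System Limits table; the function names are assumptions.

```typescript
interface TransformOpts {
  width?: number;
  height?: number;
  format?: string;
  quality?: number;
}

// Key format: transform:{bucket}:{path}:w{width}:h{height}:f{format}:q{quality}
function transformCacheKey(bucket: string, path: string, t: TransformOpts): string {
  return `transform:${bucket}:${path}:w${t.width}:h${t.height}:f${t.format}:q${t.quality}`;
}

// Pattern fed to Redis SCAN (then DEL) when the source object is
// overwritten or deleted; matches every cached variant of the object.
function invalidationPattern(bucket: string, path: string): string {
  return `transform:${bucket}:${path}:*`;
}
```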
Authentication & SSO
Auth methods (email/password, OAuth, magic link, OTP, TOTP/MFA), JWT RS256, refresh token families, enterprise SSO (OIDC/SAML), 47 provider presets, customer organizations, RLS, and setup wizard.
Realtime
WebSocket-based CDC streaming, ephemeral broadcast messaging, presence tracking, and cron job scheduling for AtlasDB tenants.