An opinionated production reference for Supabase — client setup, auth flows, RLS design, schema patterns, indexing, Edge Functions, Realtime, Storage, and migration strategy. Based on real projects. Not a tutorial.
This is the reference I built from shipping Supabase across five production systems — a 600K LOC AI recruitment platform, a multi-tenant trade fair SaaS, a real-time spiritual marketplace, a salon management system, and an autonomous vending machine backend.
Every section answers one question: what actually matters in production?
Covers @supabase/ssr with the new asymmetric JWT key model (default since May 2025) and the new API key format (sb_publishable_xxx / sb_secret_xxx). Legacy anon and service_role keys still work during the migration period.
```bash
# .env.local
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co

# New key format (preferred — May 2025+)
NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY=sb_publishable_xxx

# Legacy keys (still work during migration)
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbG...
```
Never put service_role / sb_secret_xxx in NEXT_PUBLIC_ — it will be shipped to the browser.
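One cheap safeguard is a startup check that refuses to boot if a secret-shaped key sits in a browser-exposed variable. A minimal sketch — the helper name and the `sb_secret_` / `SERVICE_ROLE` heuristics are assumptions based on the key formats above, not a Supabase API:

```typescript
// Fail fast if a secret-shaped key is about to be shipped to the browser.
// Call early in server startup or next.config (sketch — adapt names to your env).
function assertNoLeakedSecrets(env: Record<string, string | undefined>): void {
  for (const [name, value] of Object.entries(env)) {
    if (!name.startsWith('NEXT_PUBLIC_') || !value) continue
    // New secret keys carry the sb_secret_ prefix; legacy service_role JWTs
    // don't, so also block any exposed var whose name mentions the service role.
    if (value.startsWith('sb_secret_') || name.includes('SERVICE_ROLE')) {
      throw new Error(`Refusing to expose ${name} to the browser`)
    }
  }
}
```

Run it against `process.env` once at boot; a leaked key then fails the deploy instead of reaching production.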
This is the most important API decision you'll make. The three methods have fundamentally different behavior:
| Method | Network call | Trust level | Use case |
| --- | --- | --- | --- |
| `getSession()` | No | Untrusted — cookies can be spoofed | Never use server-side |
| `getClaims()` | Rarely (cached JWKs) | JWT-verified, not DB-verified | Most server reads |
| `getUser()` | Always | DB-verified, authoritative | Security-critical checks |
```typescript
// ❌ Never use getSession() server-side
const { data: { session } } = await supabase.auth.getSession()
// session.user is from cookies — could be spoofed

// ✅ Use getClaims() for most server-side auth checks
// With asymmetric keys: verifies JWT locally via JWKs (cached, fast)
// With symmetric keys: still hits Auth server (same as getUser)
const { data, error } = await supabase.auth.getClaims()
if (error || !data) redirect('/login')
const userId = data.claims.sub

// ✅ Use getUser() when you need guaranteed freshness
// Always hits Auth server — confirms session not revoked
const { data: { user }, error } = await supabase.auth.getUser()
// Use for: logout detection, banned users, password changes
```
The security gap in getClaims(): It validates the JWT signature and expiry, but doesn't check if the user was banned, deleted, or signed out server-side after the token was issued. If your app needs instant revocation, use getUser() or keep JWT expiry short (default is 1 hour).
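To make that gap concrete, it helps to look at what a JWT actually carries. A minimal payload decoder — debugging aid only, since it performs no signature verification and must never feed an authorization decision:

```typescript
// Decode a JWT's payload WITHOUT verifying it — inspection/debugging only.
// getClaims() verifies the signature and exp; this just shows what those
// claims look like (sub, exp, app_metadata, ...).
function decodeJwtPayload(token: string): Record<string, unknown> {
  const parts = token.split('.')
  if (parts.length !== 3) throw new Error('not a JWT')
  // JWTs use base64url; convert to standard base64 before decoding.
  const base64 = parts[1].replace(/-/g, '+').replace(/_/g, '/')
  return JSON.parse(Buffer.from(base64, 'base64').toString('utf8'))
}
```

Inspecting `exp` this way shows why a short expiry bounds the revocation window: a banned user's token simply stops validating once `exp` passes.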
```
user_metadata (raw_user_meta_data)
├── User-editable via supabase.auth.updateUser()
├── Appears in JWT as user_metadata
└── NEVER use for authorization decisions

app_metadata (raw_app_meta_data)
├── Only writable by service_role / Admin API
├── Appears in JWT as app_metadata
└── SAFE for authorization decisions
```
```typescript
// ❌ Never base auth decisions on user_metadata
const role = claims.user_metadata?.role // user can set this to 'admin'!

// ✅ Use app_metadata for roles
const role = claims.app_metadata?.role // only writable by your backend

// Setting app_metadata requires the Admin API (service_role)
const { error } = await supabaseAdmin.auth.admin.updateUserById(userId, {
  app_metadata: { role: 'admin' },
})
```
```sql
-- Enable RLS
alter table public.orders enable row level security;

-- Force RLS even for table owners (critical for safety)
alter table public.orders force row level security;
```
Tables in the public schema are accessible through PostgREST (the auto-generated REST API). Without RLS, anyone with your publishable key can read all rows.
```sql
-- Users can only see their own rows
create policy "users_own_rows" on public.orders
  for all to authenticated
  using (user_id = auth.uid());

-- Separate insert/select/update policies for granular control
create policy "users_can_select" on public.orders
  for select to authenticated
  using (user_id = auth.uid());

create policy "users_can_insert" on public.orders
  for insert to authenticated
  with check (user_id = auth.uid()); -- with check for writes

create policy "users_can_update_own" on public.orders
  for update to authenticated
  using (user_id = auth.uid())       -- row must belong to user
  with check (user_id = auth.uid()); -- updated row must still belong to user

-- Public read, authenticated write
create policy "public_read" on public.posts
  for select using (true);

create policy "auth_write" on public.posts
  for insert to authenticated
  with check (author_id = auth.uid());
```
```sql
-- Each user belongs to a tenant
create table public.tenant_members (
  tenant_id uuid not null references public.tenants(id),
  user_id uuid not null references auth.users(id),
  role text not null default 'member',
  primary key (tenant_id, user_id)
);

-- Helper function — runs as SECURITY DEFINER, bypasses RLS
create or replace function public.get_user_tenant_ids()
returns setof uuid
language sql
security definer
set search_path = ''
stable
as $$
  select tenant_id from public.tenant_members
  where user_id = (select auth.uid())
$$;

-- Policy: users see rows for any tenant they belong to
create policy "tenant_data_access" on public.orders
  for all to authenticated
  using (tenant_id in (select public.get_user_tenant_ids()));
```
Store roles in app_metadata for JWT-based policies:
```sql
-- Set role via Admin API (server-side only)
-- supabase.auth.admin.updateUserById(id, { app_metadata: { role: 'admin' } })

-- JWT-based role policy
create policy "admin_full_access" on public.orders
  for all to authenticated
  using ((auth.jwt() -> 'app_metadata' ->> 'role') = 'admin');

-- Combined: admins see all, users see own
create policy "orders_access" on public.orders
  for select to authenticated
  using (
    user_id = auth.uid()
    or (auth.jwt() -> 'app_metadata' ->> 'role') = 'admin'
  );
```
The most common RLS performance mistake is calling functions without caching:
```sql
-- ❌ auth.uid() called for EVERY row
create policy "slow_policy" on public.orders
  using (user_id = auth.uid());

-- ✅ Wrap in SELECT — evaluated once, result cached
create policy "fast_policy" on public.orders
  using (user_id = (select auth.uid()));

-- ❌ Subquery executes per row
create policy "slow_team_policy" on public.orders
  using (
    team_id in (
      select team_id from public.team_members
      where user_id = auth.uid()
    )
  );

-- ✅ Use security definer function — executes once
create policy "fast_team_policy" on public.orders
  using (team_id in (select public.get_user_team_ids()));
```
Always index columns used in RLS policies:
```sql
create index orders_user_id_idx on public.orders (user_id);
create index orders_tenant_id_idx on public.orders (tenant_id);
create index team_members_user_id_idx on public.team_members (user_id);
```
```sql
-- ❌ This view exposes ALL rows — ignores RLS
create view order_summaries as
  select id, user_id, total from public.orders;

-- ✅ Postgres 15+: security_invoker makes view respect RLS
create view order_summaries
  with (security_invoker = true) as
  select id, user_id, total from public.orders;

-- For older Postgres: revoke direct access
revoke all on order_summaries from anon, authenticated;
```
This is a silent failure — no error, rows just don't update:
```sql
-- ❌ Only UPDATE policy — updates silently return 0 rows
create policy "update_own" on public.orders
  for update using (user_id = auth.uid());

-- ✅ Must have SELECT policy too for UPDATE to work
create policy "select_own" on public.orders
  for select using (user_id = auth.uid());

create policy "update_own" on public.orders
  for update
  using (user_id = auth.uid())
  with check (user_id = auth.uid());
```
```typescript
// For admin operations that need to bypass RLS
import { createClient } from '@supabase/supabase-js'

const supabaseAdmin = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!, // Never in NEXT_PUBLIC_
  { auth: { autoRefreshToken: false, persistSession: false } }
)

// This bypasses ALL RLS policies — use with care
const { data } = await supabaseAdmin
  .from('orders')
  .select('*')
  .eq('user_id', targetUserId)
```
```sql
-- ✅ Correct types
create table public.users (
  id bigint generated always as identity primary key,
  email text not null unique,                    -- text, not varchar(255)
  created_at timestamptz not null default now(), -- always timestamptz
  is_active boolean not null default true,       -- not varchar
  price numeric(10, 2),                          -- not float (precision matters)
  metadata jsonb                                 -- not json (indexable, faster)
);

-- Why text over varchar(n)?
-- text and varchar have identical performance in Postgres
-- varchar(n) adds a constraint but doesn't save storage
-- Use text unless you genuinely need the length constraint enforced at DB level
```
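The "not float" advice is about exactness: binary floating point cannot represent most decimal fractions, which is fatal for money. The same effect is visible in any IEEE-754 runtime, including the JavaScript that talks to this schema:

```typescript
// Why numeric(10, 2) instead of float for money: binary floats drift.
const subtotal = 0.1 + 0.2
console.log(subtotal)         // 0.30000000000000004 — not 0.3
console.log(subtotal === 0.3) // false

// Common app-side workaround when you must do arithmetic: integer cents.
const cents = 10 + 20
console.log(cents === 30)     // true
```

Postgres `numeric` does decimal arithmetic exactly, so keep money math in the database (or in integer cents) rather than in floats.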
Postgres does NOT automatically index foreign key columns:
```sql
create table public.orders (
  id bigint generated always as identity primary key,
  user_id uuid not null references auth.users(id) on delete cascade,
  tenant_id uuid not null references public.tenants(id),
  status text not null default 'pending',
  created_at timestamptz not null default now()
);

-- Always index FK columns manually
create index orders_user_id_idx on public.orders (user_id);
create index orders_tenant_id_idx on public.orders (tenant_id);

-- Find all missing FK indexes
select
  conrelid::regclass as table_name,
  a.attname as fk_column
from pg_constraint c
join pg_attribute a
  on a.attrelid = c.conrelid and a.attnum = any(c.conkey)
where c.contype = 'f'
  and not exists (
    select 1 from pg_index i
    where i.indrelid = c.conrelid
      and a.attnum = any(i.indkey)
  );
```
```sql
-- ✅ lowercase snake_case — works everywhere without quoting
create table public.user_profiles (
  user_id uuid,
  first_name text,
  last_name text
);

-- ❌ camelCase or PascalCase — requires quoting forever
create table public."UserProfiles" ( -- always needs quotes
  "userId" uuid,
  "firstName" text
);

-- Tip: if stuck with mixed-case from an ORM, create a view as compatibility layer
create view user_profiles as
  select "userId" as user_id, "firstName" as first_name
  from "UserProfiles";
```
PostgreSQL doesn't support `ADD CONSTRAINT IF NOT EXISTS`:
```sql
-- ❌ Syntax error
alter table public.profiles
  add constraint if not exists profiles_email_unique unique (email);

-- ✅ DO block for idempotent constraint creation
do $$
begin
  if not exists (
    select 1 from pg_constraint
    where conname = 'profiles_email_unique'
      and conrelid = 'public.profiles'::regclass
  ) then
    alter table public.profiles
      add constraint profiles_email_unique unique (email);
  end if;
end $$;
```
```sql
-- For tables > 100M rows or time-series data
create table public.events (
  id bigint generated always as identity,
  created_at timestamptz not null,
  data jsonb
) partition by range (created_at);

create table public.events_2025_01 partition of public.events
  for values from ('2025-01-01') to ('2025-02-01');
create table public.events_2025_02 partition of public.events
  for values from ('2025-02-01') to ('2025-03-01');

-- Drop old data instantly (vs DELETE taking hours)
drop table public.events_2024_01;
```
```sql
create table public.profiles (
  id uuid primary key references auth.users(id) on delete cascade,
  full_name text,
  avatar_url text,
  updated_at timestamptz default now()
);

alter table public.profiles enable row level security;

create policy "profiles_select_own" on public.profiles
  for select to authenticated
  using (id = (select auth.uid()));

create policy "profiles_update_own" on public.profiles
  for update to authenticated
  using (id = (select auth.uid()))
  with check (id = (select auth.uid()));

-- Auto-create profile on signup
create or replace function public.handle_new_user()
returns trigger
language plpgsql
security definer
set search_path = ''
as $$
begin
  insert into public.profiles (id, full_name)
  values (new.id, new.raw_user_meta_data ->> 'full_name');
  return new;
end;
$$;

create trigger on_auth_user_created
  after insert on auth.users
  for each row execute procedure public.handle_new_user();
```
```sql
-- B-tree (default): =, <, >, BETWEEN, IN, LIKE 'prefix%', IS NULL
create index orders_created_at_idx on public.orders (created_at);

-- GIN: arrays, JSONB containment (@>), full-text search
create index products_attrs_gin_idx on public.products using gin (attributes);
create index posts_content_fts_idx on public.posts
  using gin (to_tsvector('english', content));

-- GiST: geometric data, range types, nearest-neighbor (KNN)
create index locations_point_idx on public.places using gist (location);

-- BRIN: huge time-series tables (10-100x smaller than B-tree)
-- Only useful when data is physically ordered by the column
create index events_time_brin_idx on public.events using brin (created_at);

-- Hash: equality-only (= operator), slightly faster than B-tree for =
create index sessions_token_hash_idx on public.sessions using hash (token);
```
Column order matters. Equality columns first, range columns last:
```sql
-- Query: WHERE status = 'pending' AND created_at > '2025-01-01'
-- ✅ Correct order: equality before range
create index orders_status_created_idx on public.orders (status, created_at);

-- This index CAN answer:
--   WHERE status = 'pending'
--   WHERE status = 'pending' AND created_at > '2025-01-01'
-- This index CANNOT answer efficiently:
--   WHERE created_at > '2025-01-01'  (no leftmost prefix)
```
Avoid heap fetches by including frequently selected columns:
```sql
-- Normal: index scan finds row ID, then heap fetch for customer_id, total
create index orders_status_idx on public.orders (status);
select status, customer_id, total from public.orders where status = 'shipped';

-- Covering: all columns served from index, no heap access
create index orders_status_covering_idx on public.orders (status)
  include (customer_id, total);
```
```sql
-- Only index active users (90% of queries filter deleted_at is null)
create index users_active_email_idx on public.users (email)
  where deleted_at is null;

-- Only pending orders (completed orders are rarely queried)
create index orders_pending_idx on public.orders (created_at)
  where status = 'pending';

-- Only non-null SKUs
create index products_sku_idx on public.products (sku)
  where sku is not null;
```
Edge Functions run on Deno at the edge. Use them for:
✅ Webhooks (Stripe, GitHub, SendGrid) — need HMAC signature verification
✅ Third-party API calls where you need to hide API keys
✅ Custom auth flows (magic links, custom OAuth)
✅ Image/file processing
✅ Sending emails via Resend/Postmark
✅ Heavy computation you don't want in your Next.js server
❌ Simple CRUD — use PostgREST (auto-generated API) directly
❌ Business logic that could be a Server Action — use Next.js
❌ Long-running jobs (> 150s timeout) — use pg_cron or external queues
```bash
# Serve all functions locally
supabase functions serve

# Serve a specific function with env
supabase functions serve send-email --env-file .env.local

# Deploy
supabase functions deploy send-email
```
✅ Live collaborative features (presence, cursors)
✅ Chat / messaging
✅ Dashboard metrics that update without refresh
✅ Notification delivery
✅ Order status tracking
❌ Data that changes less often than every 30 seconds — polling is fine
❌ Every table — enable only where needed
❌ High-volume tables — use Broadcast instead of Postgres Changes
Storage uses the same RLS system, but on storage.objects:
```sql
-- Users can upload to their own folder
create policy "users_upload_own" on storage.objects
  for insert to authenticated
  with check (
    bucket_id = 'avatars'
    and (storage.foldername(name))[1] = auth.uid()::text
  );

-- Users can read their own files
create policy "users_read_own" on storage.objects
  for select to authenticated
  using (
    bucket_id = 'documents'
    and (storage.foldername(name))[1] = auth.uid()::text
  );

-- Public read for public bucket
create policy "public_read" on storage.objects
  for select using (bucket_id = 'avatars');

-- IMPORTANT: Upsert (file replacement) requires INSERT + SELECT + UPDATE
create policy "users_upsert_own" on storage.objects
  for update to authenticated
  using ((storage.foldername(name))[1] = auth.uid()::text)
  with check ((storage.foldername(name))[1] = auth.uid()::text);
```
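The folder-per-user layout those policies assume can also be enforced client-side before the upload ever hits Storage. A hedged sketch — the helper itself is an assumption, not a Supabase API; RLS remains the real enforcement:

```typescript
// Build a storage object path scoped to one user, matching the
// (storage.foldername(name))[1] = auth.uid() policies above.
// Rejecting traversal here makes RLS the second line of defense, not the first.
function userObjectPath(userId: string, filename: string): string {
  // auth.uid() is a UUID — anything else is suspicious input.
  if (!/^[0-9a-f-]{36}$/i.test(userId)) throw new Error('invalid user id')
  if (filename.includes('/') || filename.includes('\\') || filename.includes('..')) {
    throw new Error('invalid filename')
  }
  return `${userId}/${filename}`
}
```

Usage would look like `supabase.storage.from('avatars').upload(userObjectPath(userId, file.name), file)`.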
```bash
# Start local Supabase
supabase start

# Create a new migration file
supabase migration new add_orders_table

# Iterate on schema — use execute_sql or psql, NOT apply_migration
# apply_migration writes history on every call, making diff impossible

# When schema is ready, generate migration from diff
supabase db pull --local --yes

# Apply migration
supabase db push --local

# Check migration list
supabase migration list
```
```sql
-- supabase/migrations/20250420120000_add_orders_table.sql
-- Always wrap destructive operations in transactions
begin;

create table public.orders (
  id bigint generated always as identity primary key,
  user_id uuid not null references auth.users(id) on delete cascade,
  status text not null default 'pending',
  total numeric(10, 2) not null,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

create index orders_user_id_idx on public.orders (user_id);
create index orders_status_idx on public.orders (status);
create index orders_created_at_idx on public.orders (created_at desc);

alter table public.orders enable row level security;

create policy "orders_user_policy" on public.orders
  for all to authenticated
  using (user_id = (select auth.uid()))
  with check (user_id = (select auth.uid()));

commit;
```
```sql
-- ✅ Safe: adding nullable column (no table rewrite)
alter table public.users add column bio text;

-- ✅ Safe: adding column with default (Postgres 11+)
alter table public.users add column is_premium boolean not null default false;

-- ⚠️ Careful: adding NOT NULL to existing column
-- First add nullable, backfill, then add constraint
alter table public.users add column phone text;
update public.users set phone = '' where phone is null;
alter table public.users alter column phone set not null;

-- ⚠️ Careful: adding index on large table
-- Use CONCURRENTLY to avoid table lock
create index concurrently orders_tenant_id_idx on public.orders (tenant_id);

-- ❌ Dangerous: dropping column without checking all usages
-- Check: grep -r 'old_column_name' your-codebase first
alter table public.users drop column old_column;
```
```sql
-- Always use buffers — shows cache hit ratio
explain (analyze, buffers, format text)
select * from public.orders
where user_id = 'abc' and status = 'pending';

-- Key things to look for:
--   "Seq Scan" on large table        → missing index
--   "Rows Removed by Filter: 999000" → poor selectivity
--   "Buffers: read >> hit"           → data not in cache
--   "external merge" in Sort         → increase work_mem
--   "Nested Loop" with loops=10000   → consider different join strategy
```
```sql
-- Enable (usually already enabled in Supabase)
create extension if not exists pg_stat_statements;

-- Top queries by total time
select
  calls,
  round(total_exec_time::numeric, 2) as total_ms,
  round(mean_exec_time::numeric, 2) as mean_ms,
  round(stddev_exec_time::numeric, 2) as stddev_ms,
  query
from pg_stat_statements
order by total_exec_time desc
limit 20;

-- Queries averaging > 100ms
select query, mean_exec_time, calls
from pg_stat_statements
where mean_exec_time > 100
order by mean_exec_time desc;

-- Reset after optimization to get clean baseline
select pg_stat_statements_reset();
```
```sql
-- Check when tables were last analyzed
select
  relname,
  last_vacuum,
  last_autovacuum,
  last_analyze,
  last_autoanalyze,
  n_live_tup,
  n_dead_tup
from pg_stat_user_tables
order by n_dead_tup desc;

-- Manually analyze after large data loads
analyze public.orders;

-- Tune autovacuum for high-churn tables
alter table public.orders set (
  autovacuum_vacuum_scale_factor = 0.05,  -- vacuum at 5% dead tuples (default 20%)
  autovacuum_analyze_scale_factor = 0.02  -- analyze at 2% changes (default 10%)
);
```
Supabase uses PgBouncer in transaction mode by default. Use the pooler connection string (port 6543) in serverless environments, direct connection (port 5432) for migrations:
```bash
# Pooler — use in production app (serverless, edge)
DATABASE_URL=postgresql://postgres.ref:[password]@aws-0-eu-central-1.pooler.supabase.com:6543/postgres

# Direct — use for migrations and CLI operations
DATABASE_URL=postgresql://postgres:[password]@db.ref.supabase.co:5432/postgres
```
```sql
-- ❌ Long transaction with external calls — holds locks
begin;
select * from orders where id = 1 for update;
-- ... HTTP call to payment API (2-5 seconds) ...
update orders set status = 'paid' where id = 1;
commit;

-- ✅ Do external work outside the transaction
-- Step 1: validate, call API outside transaction
--   result = await paymentAPI.charge(...)
-- Step 2: atomic update
begin;
update orders
set status = 'paid', payment_id = $1
where id = $2 and status = 'pending'
returning *;
commit;
```
```sql
-- Multiple workers, none blocking each other
update public.jobs
set status = 'processing', worker_id = $1, started_at = now()
where id = (
  select id from public.jobs
  where status = 'pending'
  order by created_at
  limit 1
  for update skip locked
)
returning *;
```
☐ NEXT_PUBLIC_ vars contain only publishable key — never service_role/secret
☐ Service role key only in server-side env vars (no NEXT_PUBLIC_ prefix)
☐ Rotate keys if exposed (Dashboard → API Settings)
☐ Never use getSession() server-side — use getClaims() or getUser()
☐ Never use user_metadata for authorization — use app_metadata
☐ app_metadata only set via Admin API (service_role), never from client
☐ JWT expiry set appropriately for your security requirements
☐ Deleting a user does NOT revoke existing tokens — sign out first or wait for expiry
☐ RLS enabled on every table in public schema
☐ force row level security set on sensitive tables
☐ RLS functions wrapped in SELECT for performance: (select auth.uid())
☐ Index on every column used in RLS policies
☐ Views use security_invoker = true (Postgres 15+) or access revoked
☐ security definer functions in non-exposed schema (not public)
☐ UPDATE policies have matching SELECT policy
☐ Upsert requires INSERT + SELECT + UPDATE policies
☐ Private buckets use signed URLs — not public URLs
☐ File paths include user_id to prevent path traversal
☐ No superuser credentials in application
☐ Application role has minimal required permissions
☐ Migrations reviewed for safety (CONCURRENTLY for indexes on large tables)
☐ pg_stat_statements enabled for query monitoring
Safe (no lock):
ALTER TABLE ... ADD COLUMN (nullable or with default)
CREATE INDEX CONCURRENTLY
CREATE POLICY
Careful (brief lock):
ALTER TABLE ... ADD COLUMN NOT NULL (backfill first)
ALTER TABLE ... ALTER COLUMN TYPE
Dangerous (full table lock):
CREATE INDEX (without CONCURRENTLY)
ALTER TABLE ... DROP COLUMN
VACUUM FULL
Last updated: April 2026. Covers @supabase/ssr latest, asymmetric JWT keys (default May 2025+), new API key format (sb_publishable_xxx / sb_secret_xxx).