
Supabase Production Guide

An opinionated production reference for Supabase — client setup, auth flows, RLS design, schema patterns, indexing, Edge Functions, Realtime, Storage, and migration strategy. Based on real projects. Not a tutorial.

Last updated: 2026-04-20

Tech Stack

Supabase · PostgreSQL · Next.js · TypeScript · tRPC

© 2026 Yiğit Yalım. All rights reserved.

This is the reference I built from shipping Supabase across five production systems — a 600K LOC AI recruitment platform, a multi-tenant trade fair SaaS, a real-time spiritual marketplace, a salon management system, and an autonomous vending machine backend.

Every section answers one question: what actually matters in production?

Covers @supabase/ssr with the new asymmetric JWT key model (default since May 2025) and the new API key format (sb_publishable_xxx / sb_secret_xxx). Legacy anon and service_role keys still work during the migration period.


Table of Contents

  • 1. Client Setup
    • Two Clients, Two Contexts
    • Installation
    • Environment Variables
    • Browser Client
    • Server Client
    • Proxy Client (was Middleware)
    • Type Generation
  • 2. Authentication
    • getClaims vs getUser vs getSession
    • Caching Auth Per Request
    • Auth Flows
    • user_metadata vs app_metadata
    • Sign Out
  • 3. Row Level Security
    • Enable RLS on Every Table in Public Schema
    • Basic Policy Patterns
    • Multi-tenant RLS
    • Role-Based Access Control (RBAC)
    • RLS Performance — The Critical Pattern
    • Views Bypass RLS
    • UPDATE Requires SELECT Policy
    • Service Role Bypasses RLS
  • 4. Schema Design
    • Primary Keys
    • Data Types
    • Foreign Keys + Indexes
    • Naming Conventions
    • Constraints Safely
    • Partitioning Large Tables
    • profiles Table Pattern
  • 5. Indexing
    • Index Types
    • Composite Indexes
    • Covering Indexes
    • Partial Indexes
  • 6. Edge Functions
    • When to Use Edge Functions
    • Basic Structure
    • Webhook Handler Pattern
    • Calling Edge Functions from Next.js
    • Local Development
  • 7. Realtime
    • When to Use Realtime
    • Postgres Changes
    • Broadcast
    • Presence
    • Cleanup
  • 8. Storage
    • Bucket Setup
    • Storage RLS Policies
    • Uploading Files
    • File Path Conventions
  • 9. Migrations
    • Local Development Workflow
    • Migration File Structure
    • Safe Schema Changes
    • Run Advisors Before Merging
  • 10. Performance & Monitoring
    • EXPLAIN ANALYZE
    • pg_stat_statements
    • VACUUM and Statistics
    • Connection Pooling
    • Short Transactions
    • Queue Processing with SKIP LOCKED
  • 11. Security Checklist
    • API Keys
    • Auth
    • RLS
    • Storage
    • Database
  • Appendix: Quick Reference
    • Auth Method Decision
    • RLS Pattern by Access Model
    • Migration Safety

1. Client Setup

Two Clients, Two Contexts

With @supabase/ssr, you always have two clients:

createBrowserClient  → Client Components, browser
createServerClient   → Server Components, Server Actions, Route Handlers, Proxy

Never use one where the other belongs. The server client reads and writes cookies; the browser client uses localStorage.

Installation

bun add @supabase/supabase-js @supabase/ssr

Environment Variables

# .env.local
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
 
# New key format (preferred — May 2025+)
NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY=sb_publishable_xxx
 
# Legacy keys (still work during migration)
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbG...

Never put service_role / sb_secret_xxx in NEXT_PUBLIC_ — it will be shipped to the browser.

Browser Client

// lib/supabase/client.ts
import 'client-only'
import { createBrowserClient } from '@supabase/ssr'
import type { Database } from '@/types/supabase'
 
export function createClient() {
  return createBrowserClient<Database>(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY!
  )
}

Server Client

// lib/supabase/server.ts
import 'server-only'
import { createServerClient } from '@supabase/ssr'
import { cookies } from 'next/headers'
import type { Database } from '@/types/supabase'
 
export async function createClient() {
  const cookieStore = await cookies()
 
  return createServerClient<Database>(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY!,
    {
      cookies: {
        getAll: () => cookieStore.getAll(),
        setAll: (cookiesToSet) => {
          try {
            cookiesToSet.forEach(({ name, value, options }) =>
              cookieStore.set(name, value, options)
            )
          } catch {
            // Server Components cannot set cookies — ignore
            // Proxy handles the actual cookie refresh
          }
        },
      },
    }
  )
}

Proxy Client (was Middleware)

The Proxy client is special — it must both read cookies from the request and write cookies to the response. This is how token refresh propagates.

// proxy.ts
import { createServerClient } from '@supabase/ssr'
import { NextResponse, type NextRequest } from 'next/server'
import type { Database } from '@/types/supabase'
 
export async function updateSession(request: NextRequest) {
  let supabaseResponse = NextResponse.next({ request })
 
  const supabase = createServerClient<Database>(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY!,
    {
      cookies: {
        getAll: () => request.cookies.getAll(),
        setAll: (cookiesToSet) => {
          // Write to both request and response
          cookiesToSet.forEach(({ name, value }) =>
            request.cookies.set(name, value)
          )
          supabaseResponse = NextResponse.next({ request })
          cookiesToSet.forEach(({ name, value, options }) =>
            supabaseResponse.cookies.set(name, value, options)
          )
        },
      },
    }
  )
 
  // Refresh expired token — do NOT remove this
  // This is what keeps the session alive across navigations
  await supabase.auth.getClaims()
 
  return supabaseResponse
}
 
export async function proxy(request: NextRequest) {
  return await updateSession(request)
}
 
export const config = {
  matcher: ['/((?!_next/static|_next/image|favicon.ico|.*\\.(?:svg|png|jpg|jpeg|gif|webp)$).*)'],
}

Type Generation

supabase gen types typescript --project-id your-project-id > types/supabase.ts

Add to package.json:

{
  "scripts": {
    "types": "supabase gen types typescript --project-id $SUPABASE_PROJECT_ID > types/supabase.ts"
  }
}
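The generated file is consumed at compile time. A minimal sketch of the shape it produces and how to pull row types out of it — the `orders` table and its columns here are hypothetical stand-ins for whatever your schema contains:

```typescript
// Hypothetical slice of a generated types/supabase.ts — the real file
// lists every table, view, and function in your schema.
type Database = {
  public: {
    Tables: {
      orders: {
        Row: { id: number; user_id: string; status: string }
        Insert: { user_id: string; status?: string }
        Update: { status?: string }
      }
    }
  }
}

// Extract a table's Row type once instead of repeating the lookup everywhere.
type TableRow<T extends keyof Database['public']['Tables']> =
  Database['public']['Tables'][T]['Row']

// Typed rows flow through the app without hand-written interfaces.
const order: TableRow<'orders'> = { id: 1, user_id: 'abc', status: 'pending' }
```

Passing `Database` to `createServerClient<Database>` / `createBrowserClient<Database>` (as in the clients above) gives query results this typing automatically.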

2. Authentication

getClaims vs getUser vs getSession

This is the most important API decision you'll make. The three methods have fundamentally different behavior:

Method       | Network call         | Trust level                        | Use case
getSession() | No                   | Untrusted — cookies can be spoofed | Never use server-side
getClaims()  | Rarely (cached JWKs) | JWT-verified, not DB-verified      | Most server reads
getUser()    | Always               | DB-verified, authoritative         | Security-critical checks
// ❌ Never use getSession() server-side
const { data: { session } } = await supabase.auth.getSession()
// session.user is from cookies — could be spoofed
 
// ✅ Use getClaims() for most server-side auth checks
// With asymmetric keys: verifies JWT locally via JWKs (cached, fast)
// With symmetric keys: still hits Auth server (same as getUser)
const { data, error } = await supabase.auth.getClaims()
if (error || !data) redirect('/login')
const userId = data.claims.sub
 
// ✅ Use getUser() when you need guaranteed freshness
// Always hits Auth server — confirms session not revoked
const { data: { user }, error } = await supabase.auth.getUser()
// Use for: logout detection, banned users, password changes

The security gap in getClaims(): It validates the JWT signature and expiry, but doesn't check if the user was banned, deleted, or signed out server-side after the token was issued. If your app needs instant revocation, use getUser() or keep JWT expiry short (default is 1 hour).
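One way to localize that tradeoff is a small guard that pays the `getUser()` round-trip only on security-critical routes. The interface below is a hand-rolled stub of the shape both auth methods share — not the real supabase-js types — so the sketch stays self-contained:

```typescript
// Minimal stand-in for the slice of supabase.auth this guard touches.
// The real client satisfies this shape structurally.
interface AuthLike {
  getUser(): Promise<{ data: { user: { id: string } | null }; error: unknown }>
}

// For routes where revocation must be instant (billing, account deletion),
// force the Auth-server round-trip; throw if the session is gone or revoked.
async function requireFreshUser(auth: AuthLike): Promise<string> {
  const { data: { user }, error } = await auth.getUser()
  if (error || !user) throw new Error('unauthorized')
  return user.id
}

// Stub standing in for a live session, for illustration only.
const fakeAuth: AuthLike = {
  getUser: async () => ({ data: { user: { id: 'user-123' } }, error: null }),
}
```

Everything else on the page keeps using the cheaper `getClaims()` path via `requireAuth` below.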

Caching Auth Per Request

Multiple Server Components on the same page shouldn't each call getClaims() independently:

// lib/auth.ts
import 'server-only'
import { cache } from 'react'
import { createClient } from '@/lib/supabase/server'
import { redirect } from 'next/navigation'
 
// Memoized per request — all components share the result
export const getCachedUser = cache(async () => {
  const supabase = await createClient()
  const { data, error } = await supabase.auth.getClaims()
  if (error || !data) return null
  return data.claims
})
 
export const requireAuth = cache(async () => {
  const claims = await getCachedUser()
  if (!claims) redirect('/login')
  return claims
})
// app/dashboard/page.tsx
import { requireAuth } from '@/lib/auth'
 
export default async function DashboardPage() {
  const claims = await requireAuth() // redirects if not authenticated
  // claims.sub = user ID
  // claims.app_metadata = roles, permissions
  return <Dashboard userId={claims.sub} />
}

Auth Flows

Email + Password:

// Sign up
const { data, error } = await supabase.auth.signUp({
  email: 'user@example.com',
  password: 'secure-password',
  options: {
    emailRedirectTo: `${process.env.NEXT_PUBLIC_APP_URL}/auth/callback`,
    data: { full_name: 'User Name' }, // goes to raw_user_meta_data
  },
})
 
// Sign in
const { data, error } = await supabase.auth.signInWithPassword({
  email: 'user@example.com',
  password: 'secure-password',
})

OAuth:

const { data, error } = await supabase.auth.signInWithOAuth({
  provider: 'google',
  options: {
    redirectTo: `${process.env.NEXT_PUBLIC_APP_URL}/auth/callback`,
    scopes: 'email profile',
  },
})

Auth Callback Route Handler:

// app/auth/callback/route.ts
import { createClient } from '@/lib/supabase/server'
import { NextResponse } from 'next/server'
 
export async function GET(request: Request) {
  const { searchParams, origin } = new URL(request.url)
  const code = searchParams.get('code')
  const next = searchParams.get('next') ?? '/dashboard'
 
  if (code) {
    const supabase = await createClient()
    const { error } = await supabase.auth.exchangeCodeForSession(code)
    if (!error) {
      return NextResponse.redirect(`${origin}${next}`)
    }
  }
 
  return NextResponse.redirect(`${origin}/auth/error`)
}

user_metadata vs app_metadata

This is the #1 security trap in Supabase:

user_metadata (raw_user_meta_data)
  ├── User-editable via supabase.auth.updateUser()
  ├── Appears in JWT as user_metadata
  └── NEVER use for authorization decisions

app_metadata (raw_app_meta_data)
  ├── Only writable by service_role / Admin API
  ├── Appears in JWT as app_metadata
  └── SAFE for authorization decisions
// ❌ Never base auth decisions on user_metadata
const role = claims.user_metadata?.role // user can set this to 'admin'!
 
// ✅ Use app_metadata for roles
const role = claims.app_metadata?.role // only writable by your backend
 
// Setting app_metadata requires the Admin API (service_role)
const { error } = await supabaseAdmin.auth.admin.updateUserById(userId, {
  app_metadata: { role: 'admin' },
})

Sign Out

// Server Action
'use server'
import { createClient } from '@/lib/supabase/server'
import { redirect } from 'next/navigation'
 
export async function signOut() {
  const supabase = await createClient()
  await supabase.auth.signOut()
  redirect('/login')
}

3. Row Level Security

Enable RLS on Every Table in Public Schema

-- Enable RLS
alter table public.orders enable row level security;
 
-- Force RLS even for table owners (critical for safety)
alter table public.orders force row level security;

Tables in the public schema are accessible through PostgREST (the auto-generated REST API). Without RLS, anyone with your publishable key can read all rows.
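A quick audit query — a sketch against the Postgres system catalogs — lists public tables that still have RLS disabled; run it before every release:

```sql
-- Public tables with RLS disabled (candidates for data exposure)
select c.relname as table_name
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
where n.nspname = 'public'
  and c.relkind in ('r', 'p')  -- ordinary + partitioned tables
  and not c.relrowsecurity;    -- RLS not enabled
```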

Basic Policy Patterns

-- Users can only see their own rows
create policy "users_own_rows" on public.orders
  for all
  to authenticated
  using (user_id = auth.uid());
 
-- Separate insert/select/update policies for granular control
create policy "users_can_select" on public.orders
  for select
  to authenticated
  using (user_id = auth.uid());
 
create policy "users_can_insert" on public.orders
  for insert
  to authenticated
  with check (user_id = auth.uid()); -- with check for writes
 
create policy "users_can_update_own" on public.orders
  for update
  to authenticated
  using (user_id = auth.uid())      -- row must belong to user
  with check (user_id = auth.uid()); -- updated row must still belong to user
 
-- Public read, authenticated write
create policy "public_read" on public.posts
  for select using (true);
 
create policy "auth_write" on public.posts
  for insert to authenticated
  with check (author_id = auth.uid());

Multi-tenant RLS

-- Each user belongs to a tenant
create table public.tenant_members (
  tenant_id uuid not null references public.tenants(id),
  user_id   uuid not null references auth.users(id),
  role      text not null default 'member',
  primary key (tenant_id, user_id)
);
 
-- Helper function — runs as SECURITY DEFINER, bypasses RLS
create or replace function public.get_user_tenant_ids()
returns setof uuid
language sql
security definer
set search_path = ''
stable
as $$
  select tenant_id
  from public.tenant_members
  where user_id = (select auth.uid())
$$;
 
-- Policy: users see rows for any tenant they belong to
create policy "tenant_data_access" on public.orders
  for all
  to authenticated
  using (tenant_id in (select public.get_user_tenant_ids()));

Role-Based Access Control (RBAC)

Store roles in app_metadata for JWT-based policies:

-- Set role via Admin API (server-side only)
-- supabase.auth.admin.updateUserById(id, { app_metadata: { role: 'admin' } })
 
-- JWT-based role policy
create policy "admin_full_access" on public.orders
  for all
  to authenticated
  using (
    (auth.jwt() -> 'app_metadata' ->> 'role') = 'admin'
  );
 
-- Combined: admins see all, users see own
create policy "orders_access" on public.orders
  for select
  to authenticated
  using (
    user_id = auth.uid()
    or (auth.jwt() -> 'app_metadata' ->> 'role') = 'admin'
  );

RLS Performance — The Critical Pattern

The most common RLS performance mistake is calling functions without caching:

-- ❌ auth.uid() called for EVERY row
create policy "slow_policy" on public.orders
  using (user_id = auth.uid());
 
-- ✅ Wrap in SELECT — evaluated once, result cached
create policy "fast_policy" on public.orders
  using (user_id = (select auth.uid()));
 
-- ❌ Subquery executes per row
create policy "slow_team_policy" on public.orders
  using (
    team_id in (
      select team_id from public.team_members where user_id = auth.uid()
    )
  );
 
-- ✅ Use security definer function — executes once
create policy "fast_team_policy" on public.orders
  using (team_id in (select public.get_user_team_ids()));

Always index columns used in RLS policies:

create index orders_user_id_idx on public.orders (user_id);
create index orders_tenant_id_idx on public.orders (tenant_id);
create index team_members_user_id_idx on public.team_members (user_id);

Views Bypass RLS

-- ❌ This view exposes ALL rows — ignores RLS
create view order_summaries as
  select id, user_id, total from public.orders;
 
-- ✅ Postgres 15+: security_invoker makes view respect RLS
create view order_summaries
  with (security_invoker = true) as
  select id, user_id, total from public.orders;
 
-- For older Postgres: revoke direct access
revoke all on order_summaries from anon, authenticated;

UPDATE Requires SELECT Policy

This is a silent failure — no error, rows just don't update:

-- ❌ Only UPDATE policy — updates silently return 0 rows
create policy "update_own" on public.orders
  for update
  using (user_id = auth.uid());
 
-- ✅ Must have SELECT policy too for UPDATE to work
create policy "select_own" on public.orders
  for select
  using (user_id = auth.uid());
 
create policy "update_own" on public.orders
  for update
  using (user_id = auth.uid())
  with check (user_id = auth.uid());

Service Role Bypasses RLS

// For admin operations that need to bypass RLS
import { createClient } from '@supabase/supabase-js'
 
const supabaseAdmin = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!, // Never in NEXT_PUBLIC_
  { auth: { autoRefreshToken: false, persistSession: false } }
)
 
// This bypasses ALL RLS policies — use with care
const { data } = await supabaseAdmin
  .from('orders')
  .select('*')
  .eq('user_id', targetUserId)

4. Schema Design

Primary Keys

-- ✅ Sequential: bigint identity (single DB, best performance)
create table public.orders (
  id bigint generated always as identity primary key
);
 
-- ✅ Distributed/exposed IDs: UUIDv7 (time-ordered, no fragmentation)
-- Requires pg_uuidv7 extension
create extension if not exists pg_uuidv7;
create table public.events (
  id uuid default uuid_generate_v7() primary key
);
 
-- ❌ Avoid: random UUID v4 as PK on large tables (index fragmentation)
create table public.bad (
  id uuid default gen_random_uuid() primary key -- causes write amplification
);

Data Types

-- ✅ Correct types
create table public.users (
  id          bigint generated always as identity primary key,
  email       text not null unique,          -- text, not varchar(255)
  created_at  timestamptz not null default now(), -- always timestamptz
  is_active   boolean not null default true, -- not varchar
  price       numeric(10, 2),               -- not float (precision matters)
  metadata    jsonb                          -- not json (indexable, faster)
);
 
-- Why text over varchar(n)?
-- text and varchar have identical performance in Postgres
-- varchar(n) adds a constraint but doesn't save storage
-- Use text unless you genuinely need the length constraint enforced at DB level

Foreign Keys + Indexes

Postgres does NOT automatically index foreign key columns:

create table public.orders (
  id          bigint generated always as identity primary key,
  user_id     uuid not null references auth.users(id) on delete cascade,
  tenant_id   uuid not null references public.tenants(id),
  status      text not null default 'pending',
  created_at  timestamptz not null default now()
);
 
-- Always index FK columns manually
create index orders_user_id_idx    on public.orders (user_id);
create index orders_tenant_id_idx  on public.orders (tenant_id);
 
-- Find all missing FK indexes
select
  conrelid::regclass as table_name,
  a.attname as fk_column
from pg_constraint c
join pg_attribute a on a.attrelid = c.conrelid and a.attnum = any(c.conkey)
where c.contype = 'f'
  and not exists (
    select 1 from pg_index i
    where i.indrelid = c.conrelid and a.attnum = any(i.indkey)
  );

Naming Conventions

-- ✅ lowercase snake_case — works everywhere without quoting
create table public.user_profiles (
  user_id    uuid,
  first_name text,
  last_name  text
);
 
-- ❌ camelCase or PascalCase — requires quoting forever
create table public."UserProfiles" (  -- always needs quotes
  "userId"    uuid,
  "firstName" text
);
 
-- Tip: if stuck with mixed-case from an ORM, create a view as compatibility layer
create view user_profiles as
  select "userId" as user_id, "firstName" as first_name from "UserProfiles";

Constraints Safely

PostgreSQL doesn't support ADD CONSTRAINT IF NOT EXISTS:

-- ❌ Syntax error
alter table public.profiles
  add constraint if not exists profiles_email_unique unique (email);
 
-- ✅ DO block for idempotent constraint creation
do $$
begin
  if not exists (
    select 1 from pg_constraint
    where conname = 'profiles_email_unique'
    and conrelid = 'public.profiles'::regclass
  ) then
    alter table public.profiles
      add constraint profiles_email_unique unique (email);
  end if;
end $$;

Partitioning Large Tables

-- For tables > 100M rows or time-series data
create table public.events (
  id         bigint generated always as identity,
  created_at timestamptz not null,
  data       jsonb
) partition by range (created_at);
 
create table public.events_2025_01 partition of public.events
  for values from ('2025-01-01') to ('2025-02-01');
 
create table public.events_2025_02 partition of public.events
  for values from ('2025-02-01') to ('2025-03-01');
 
-- Drop old data instantly (vs DELETE taking hours)
drop table public.events_2024_01;

profiles Table Pattern

The standard pattern for extending auth.users:

create table public.profiles (
  id         uuid primary key references auth.users(id) on delete cascade,
  full_name  text,
  avatar_url text,
  updated_at timestamptz default now()
);
 
alter table public.profiles enable row level security;
 
create policy "profiles_select_own" on public.profiles
  for select to authenticated
  using (id = (select auth.uid()));
 
create policy "profiles_update_own" on public.profiles
  for update to authenticated
  using (id = (select auth.uid()))
  with check (id = (select auth.uid()));
 
-- Auto-create profile on signup
create or replace function public.handle_new_user()
returns trigger
language plpgsql
security definer
set search_path = ''
as $$
begin
  insert into public.profiles (id, full_name)
  values (new.id, new.raw_user_meta_data ->> 'full_name');
  return new;
end;
$$;
 
create trigger on_auth_user_created
  after insert on auth.users
  for each row execute procedure public.handle_new_user();

5. Indexing

Index Types

-- B-tree (default): =, <, >, BETWEEN, IN, LIKE 'prefix%', IS NULL
create index orders_created_at_idx on public.orders (created_at);
 
-- GIN: arrays, JSONB containment (@>), full-text search
create index products_attrs_gin_idx on public.products using gin (attributes);
create index posts_content_fts_idx on public.posts using gin (to_tsvector('english', content));
 
-- GiST: geometric data, range types, nearest-neighbor (KNN)
create index locations_point_idx on public.places using gist (location);
 
-- BRIN: huge time-series tables (10-100x smaller than B-tree)
-- Only useful when data is physically ordered by the column
create index events_time_brin_idx on public.events using brin (created_at);
 
-- Hash: equality-only (= operator), slightly faster than B-tree for =
create index sessions_token_hash_idx on public.sessions using hash (token);

Composite Indexes

Column order matters. Equality columns first, range columns last:

-- Query: WHERE status = 'pending' AND created_at > '2025-01-01'
-- ✅ Correct order: equality before range
create index orders_status_created_idx on public.orders (status, created_at);
 
-- This index CAN answer:
-- WHERE status = 'pending'
-- WHERE status = 'pending' AND created_at > '2025-01-01'
 
-- This index CANNOT answer efficiently:
-- WHERE created_at > '2025-01-01' (no leftmost prefix)

Covering Indexes

Avoid heap fetches by including frequently selected columns:

// Normal: index scan finds row ID, then heap fetch for customer_id, total
create index orders_status_idx on public.orders (status);
select status, customer_id, total from public.orders where status = 'shipped';
 
-- Covering: all columns served from index, no heap access
create index orders_status_covering_idx
  on public.orders (status)
  include (customer_id, total);

Partial Indexes

Index only the rows you actually query:

-- Only index active users (90% of queries filter deleted_at is null)
create index users_active_email_idx on public.users (email)
  where deleted_at is null;
 
-- Only pending orders (completed orders are rarely queried)
create index orders_pending_idx on public.orders (created_at)
  where status = 'pending';
 
-- Only non-null SKUs
create index products_sku_idx on public.products (sku)
  where sku is not null;

6. Edge Functions

When to Use Edge Functions

Edge Functions run on Deno at the edge. Use them for:

✅ Webhooks (Stripe, GitHub, SendGrid) — need HMAC signature verification
✅ Third-party API calls where you need to hide API keys
✅ Custom auth flows (magic links, custom OAuth)
✅ Image/file processing
✅ Sending emails via Resend/Postmark
✅ Heavy computation you don't want in your Next.js server

❌ Simple CRUD — use PostgREST (auto-generated API) directly
❌ Business logic that could be a Server Action — use Next.js
❌ Long-running jobs (> 150s timeout) — use pg_cron or external queues

Basic Structure

// supabase/functions/send-email/index.ts
import { createClient } from 'npm:@supabase/supabase-js@2'
 
Deno.serve(async (req: Request) => {
  // Auth check — always verify the JWT
  const authHeader = req.headers.get('Authorization')
  if (!authHeader) {
    return new Response('Unauthorized', { status: 401 })
  }
 
  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_ANON_KEY')!, // use anon key + user JWT
    {
      global: { headers: { Authorization: authHeader } },
    }
  )
 
  // Verify the user
  const { data: { user }, error } = await supabase.auth.getUser()
  if (error || !user) {
    return new Response('Unauthorized', { status: 401 })
  }
 
  const body = await req.json()
 
  // Your logic here — sendEmail() is a stand-in for your own mail helper
  const result = await sendEmail(body.to, body.subject, body.html)
 
  return new Response(JSON.stringify({ success: true }), {
    headers: { 'Content-Type': 'application/json' },
  })
})

Webhook Handler Pattern

// supabase/functions/stripe-webhook/index.ts
import Stripe from 'npm:stripe@14'
import { createClient } from 'npm:@supabase/supabase-js@2'
 
const stripe = new Stripe(Deno.env.get('STRIPE_SECRET_KEY')!)
 
Deno.serve(async (req: Request) => {
  const signature = req.headers.get('stripe-signature')
  const body = await req.text()
 
  let event: Stripe.Event
  try {
    event = stripe.webhooks.constructEvent(
      body,
      signature!,
      Deno.env.get('STRIPE_WEBHOOK_SECRET')!
    )
  } catch {
    return new Response('Invalid signature', { status: 400 })
  }
 
  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')! // service role for admin ops
  )
 
  switch (event.type) {
    case 'checkout.session.completed': {
      const session = event.data.object as Stripe.Checkout.Session
      await supabase
        .from('subscriptions')
        .upsert({
          user_id: session.metadata?.user_id,
          stripe_session_id: session.id,
          status: 'active',
        })
      break
    }
    case 'customer.subscription.deleted': {
      // handle cancellation
      break
    }
  }
 
  return new Response(JSON.stringify({ received: true }), {
    headers: { 'Content-Type': 'application/json' },
  })
})

Calling Edge Functions from Next.js

// From a Server Action or Server Component
const { data, error } = await supabase.functions.invoke('send-email', {
  body: { to: 'user@example.com', subject: 'Hello', html: '<p>Hi</p>' },
})
 
// From a Client Component
const supabase = createClient()
const { data, error } = await supabase.functions.invoke('my-function', {
  body: { param: 'value' },
})

Local Development

# Serve all functions locally
supabase functions serve
 
# Serve a specific function with env
supabase functions serve send-email --env-file .env.local
 
# Deploy
supabase functions deploy send-email

7. Realtime

When to Use Realtime

✅ Live collaborative features (presence, cursors)
✅ Chat / messaging
✅ Dashboard metrics that update without refresh
✅ Notification delivery
✅ Order status tracking

❌ Data that changes less often than every 30 seconds — polling is fine
❌ Every table — enable only where needed
❌ High-volume tables — use Broadcast instead of Postgres Changes

Postgres Changes

Listen to INSERT, UPDATE, DELETE on a table:

'use client'
import { createClient } from '@/lib/supabase/client'
import { useEffect, useState } from 'react'
 
export function OrderTracker({ orderId }: { orderId: string }) {
  const [status, setStatus] = useState<string>('pending')
  const supabase = createClient()
 
  useEffect(() => {
    const channel = supabase
      .channel(`order-${orderId}`)
      .on(
        'postgres_changes',
        {
          event: 'UPDATE',
          schema: 'public',
          table: 'orders',
          filter: `id=eq.${orderId}`, // always filter — don't listen to all rows
        },
        (payload) => {
          setStatus(payload.new.status)
        }
      )
      .subscribe()
 
    return () => {
      supabase.removeChannel(channel)
    }
  }, [orderId, supabase])
 
  return <div>Status: {status}</div>
}

RLS applies to Realtime. A change is only broadcast to a subscriber if that subscriber's RLS policies would allow them to SELECT that row.

Enable Realtime on a table:

-- In Supabase dashboard: Database → Replication → Tables
-- Or via SQL:
alter publication supabase_realtime add table public.orders;

Broadcast

For high-throughput events that don't need database persistence:

// Sender
const channel = supabase.channel('room-1')
await channel.subscribe()
channel.send({
  type: 'broadcast',
  event: 'cursor-move',
  payload: { x: 100, y: 200, userId: 'abc' },
})
 
// Receiver
supabase
  .channel('room-1')
  .on('broadcast', { event: 'cursor-move' }, (payload) => {
    updateCursor(payload.payload)
  })
  .subscribe()

Presence

Track who's online in a channel:

const channel = supabase.channel('room-1', {
  config: { presence: { key: userId } },
})
 
// Track current user
await channel.track({ userId, name: 'Yiğit', cursor: { x: 0, y: 0 } })
 
// Listen to presence changes
channel
  .on('presence', { event: 'sync' }, () => {
    const state = channel.presenceState()
    // { userId: [{ userId, name, cursor }] }
  })
  .on('presence', { event: 'join' }, ({ key, newPresences }) => {
    console.log('User joined:', key)
  })
  .on('presence', { event: 'leave' }, ({ key, leftPresences }) => {
    console.log('User left:', key)
  })
  .subscribe()

Cleanup

Always remove channels when components unmount:

useEffect(() => {
  const channel = supabase.channel('my-channel')
    .on(...)
    .subscribe()
 
  return () => {
    supabase.removeChannel(channel)
  }
}, [])

8. Storage

Bucket Setup

-- Create bucket via SQL
insert into storage.buckets (id, name, public)
values ('avatars', 'avatars', true); -- public = publicly accessible URLs
 
insert into storage.buckets (id, name, public)
values ('documents', 'documents', false); -- private = signed URLs required

Or via dashboard: Storage → Create Bucket.

Storage RLS Policies

Storage uses the same RLS system, but on storage.objects:

-- Users can upload to their own folder
create policy "users_upload_own" on storage.objects
  for insert to authenticated
  with check (
    bucket_id = 'avatars'
    and (storage.foldername(name))[1] = auth.uid()::text
  );
 
-- Users can read their own files
create policy "users_read_own" on storage.objects
  for select to authenticated
  using (
    bucket_id = 'documents'
    and (storage.foldername(name))[1] = auth.uid()::text
  );
 
-- Public read for public bucket
create policy "public_read" on storage.objects
  for select using (bucket_id = 'avatars');
 
-- IMPORTANT: Upsert (file replacement) requires INSERT + SELECT + UPDATE
create policy "users_upsert_own" on storage.objects
  for update to authenticated
  using ((storage.foldername(name))[1] = auth.uid()::text)
  with check ((storage.foldername(name))[1] = auth.uid()::text);

Uploading Files

// Upload from Server Action
const { data, error } = await supabase.storage
  .from('avatars')
  .upload(`${userId}/avatar.jpg`, file, {
    cacheControl: '3600',
    upsert: true,
  })
 
// Get public URL (for public buckets)
const { data: { publicUrl } } = supabase.storage
  .from('avatars')
  .getPublicUrl(`${userId}/avatar.jpg`)
 
// Get signed URL (for private buckets)
const { data: { signedUrl }, error } = await supabase.storage
  .from('documents')
  .createSignedUrl(`${userId}/contract.pdf`, 3600) // expires in 1 hour

File Path Conventions

avatars/{user_id}/avatar.jpg      — user-scoped
documents/{user_id}/{filename}    — user-scoped
tenant/{tenant_id}/files/{name}   — tenant-scoped
public/logos/{name}               — shared public assets
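These conventions are easy to enforce in one place. A small helper — the names are my own, not a Supabase API — that builds user-scoped paths and rejects anything that would break out of the user's folder (which is what the storage RLS policies key on):

```typescript
// Build the in-bucket path for a user-scoped file, matching the
// {user_id}/{filename} convention. Throws rather than silently
// producing a path that storage RLS would reject.
function userScopedPath(userId: string, filename: string): string {
  // No separators or traversal: the first folder must stay the user id.
  if (!/^[\w.-]+$/.test(filename) || filename.includes('..')) {
    throw new Error(`invalid filename: ${filename}`)
  }
  if (!/^[\w-]+$/.test(userId)) {
    throw new Error(`invalid user id: ${userId}`)
  }
  return `${userId}/${filename}`
}
```

Usage with the upload call above: `supabase.storage.from('avatars').upload(userScopedPath(userId, 'avatar.jpg'), file)`.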

9. Migrations

Local Development Workflow

# Start local Supabase
supabase start
 
# Create a new migration file (when writing SQL by hand)
supabase migration new add_orders_table
 
# Iterate on schema — use execute_sql or psql, NOT apply_migration
# apply_migration writes history on every call, making diff impossible
 
# When the schema is ready, generate a migration from the local diff
supabase db diff -f add_orders_table
 
# Apply migration
supabase db push --local
 
# Check migration list
supabase migration list

Migration File Structure

-- supabase/migrations/20250420120000_add_orders_table.sql
 
-- Always wrap destructive operations in transactions
begin;
 
create table public.orders (
  id          bigint generated always as identity primary key,
  user_id     uuid not null references auth.users(id) on delete cascade,
  status      text not null default 'pending',
  total       numeric(10, 2) not null,
  created_at  timestamptz not null default now(),
  updated_at  timestamptz not null default now()
);
 
create index orders_user_id_idx on public.orders (user_id);
create index orders_status_idx on public.orders (status);
create index orders_created_at_idx on public.orders (created_at desc);
 
alter table public.orders enable row level security;
 
create policy "orders_user_policy" on public.orders
  for all to authenticated
  using (user_id = (select auth.uid()))
  with check (user_id = (select auth.uid()));
 
commit;

Safe Schema Changes

-- ✅ Safe: adding nullable column (no table rewrite)
alter table public.users add column bio text;
 
-- ✅ Safe: adding column with default (Postgres 11+)
alter table public.users add column is_premium boolean not null default false;
 
-- ⚠️ Careful: adding NOT NULL to existing column
-- First add nullable, backfill, then add constraint
alter table public.users add column phone text;
update public.users set phone = '' where phone is null;
alter table public.users alter column phone set not null;
 
-- ⚠️ Careful: adding index on large table
-- Use CONCURRENTLY to avoid a table lock. Note: it cannot run inside a
-- transaction block, so keep it in its own migration without begin/commit
create index concurrently orders_tenant_id_idx on public.orders (tenant_id);
 
-- ❌ Dangerous: dropping column without checking all usages
-- Check: grep -r 'old_column_name' your-codebase first
alter table public.users drop column old_column;

Run Advisors Before Merging

# Check security and performance advisors: Dashboard → Advisors,
# or via the Supabase MCP server:
# mcp: get_advisors
 
# The CLI linter catches a subset of issues locally:
supabase db lint

10. Performance & Monitoring

EXPLAIN ANALYZE

-- Always use buffers — shows cache hit ratio
explain (analyze, buffers, format text)
select * from public.orders
where user_id = 'abc' and status = 'pending';
 
-- Key things to look for:
-- "Seq Scan" on large table → missing index
-- "Rows Removed by Filter: 999000" → poor selectivity
-- "Buffers: read >> hit" → data not in cache
-- "external merge" in Sort → increase work_mem
-- "Nested Loop with loops=10000" → consider different join strategy

pg_stat_statements

-- Enable (usually already enabled in Supabase)
create extension if not exists pg_stat_statements;
 
-- Top queries by total time
select
  calls,
  round(total_exec_time::numeric, 2) as total_ms,
  round(mean_exec_time::numeric, 2) as mean_ms,
  round(stddev_exec_time::numeric, 2) as stddev_ms,
  query
from pg_stat_statements
order by total_exec_time desc
limit 20;
 
-- Queries averaging > 100ms
select query, mean_exec_time, calls
from pg_stat_statements
where mean_exec_time > 100
order by mean_exec_time desc;
 
-- Reset after optimization to get clean baseline
select pg_stat_statements_reset();

VACUUM and Statistics

-- Check when tables were last analyzed
select
  relname,
  last_vacuum,
  last_autovacuum,
  last_analyze,
  last_autoanalyze,
  n_live_tup,
  n_dead_tup
from pg_stat_user_tables
order by n_dead_tup desc;
 
-- Manually analyze after large data loads
analyze public.orders;
 
-- Tune autovacuum for high-churn tables
alter table public.orders set (
  autovacuum_vacuum_scale_factor = 0.05,  -- vacuum at 5% dead tuples (default 20%)
  autovacuum_analyze_scale_factor = 0.02  -- analyze at 2% changes (default 10%)
);

Connection Pooling

Supabase's pooler (Supavisor, which replaced PgBouncer) runs in transaction mode by default. Use the pooler connection string (port 6543) in serverless environments and the direct connection (port 5432) for migrations:

# Pooler — use in production app (serverless, edge)
DATABASE_URL=postgresql://postgres.[project-ref]:[password]@aws-0-eu-central-1.pooler.supabase.com:6543/postgres
 
# Direct — use for migrations and CLI operations
DATABASE_URL=postgresql://postgres:[password]@db.[project-ref].supabase.co:5432/postgres

Short Transactions

-- ❌ Long transaction with external calls — holds locks
begin;
select * from orders where id = 1 for update;
-- ... HTTP call to payment API (2-5 seconds) ...
update orders set status = 'paid' where id = 1;
commit;
 
-- ✅ Do external work outside transaction
-- Step 1: validate, call API outside transaction
-- result = await paymentAPI.charge(...)
 
-- Step 2: atomic update
begin;
update orders
  set status = 'paid', payment_id = $1
  where id = $2 and status = 'pending'
  returning *;
commit;
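The two-step pattern above, sketched in TypeScript. `chargePayment` and `markPaid` are hypothetical stand-ins for the payment API call and the conditional UPDATE — the point is only the ordering: slow external work first with no locks held, then one short atomic statement:

```typescript
// Sketch: do the slow external call outside any transaction, then run a
// single short UPDATE guarded by the status check shown in the SQL above.

type ChargeResult = { paymentId: string };

async function settleOrder(
  orderId: number,
  chargePayment: () => Promise<ChargeResult>,
  markPaid: (orderId: number, paymentId: string) => Promise<boolean>,
): Promise<boolean> {
  // Step 1: external work — no database locks held during the 2-5s call
  const { paymentId } = await chargePayment();

  // Step 2: short atomic update
  // (update orders set status = 'paid' ... where status = 'pending')
  return markPaid(orderId, paymentId);
}
```

The `status = 'pending'` guard in the UPDATE makes step 2 idempotent if the worker retries after a crash between the two steps.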

Queue Processing with SKIP LOCKED

-- Multiple workers, no blocking each other
update public.jobs
set
  status = 'processing',
  worker_id = $1,
  started_at = now()
where id = (
  select id from public.jobs
  where status = 'pending'
  order by created_at
  limit 1
  for update skip locked
)
returning *;
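An in-memory sketch of the claim semantics that SKIP LOCKED provides — each claim atomically takes the oldest pending job, so concurrent workers never grab the same row and an empty queue returns immediately instead of blocking. The real implementation is the SQL above (e.g. wrapped in a database function the worker calls):

```typescript
// Simulation of SKIP LOCKED claim semantics, not a Supabase API.

type Job = { id: number; status: 'pending' | 'processing'; workerId?: string };

function claimNextJob(jobs: Job[], workerId: string): Job | null {
  // Oldest pending job (array assumed ordered by created_at)
  const job = jobs.find((j) => j.status === 'pending');
  if (!job) return null; // nothing claimable — worker moves on, never blocks
  job.status = 'processing';
  job.workerId = workerId;
  return job;
}
```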

11. Security Checklist

API Keys

☐ NEXT_PUBLIC_ vars contain only publishable key — never service_role/secret
☐ Service role key only in server-side env vars (no NEXT_PUBLIC_ prefix)
☐ Rotate keys if exposed (Dashboard → API Settings)

Auth

☐ Never use getSession() server-side — use getClaims() or getUser()
☐ Never use user_metadata for authorization — use app_metadata
☐ app_metadata only set via Admin API (service_role), never from client
☐ JWT expiry set appropriately for your security requirements
☐ Deleting a user does NOT revoke existing tokens — sign out first or wait for expiry

RLS

☐ RLS enabled on every table in public schema
☐ force row level security set on sensitive tables
☐ RLS functions wrapped in SELECT for performance: (select auth.uid())
☐ Index on every column used in RLS policies
☐ Views use security_invoker = true (Postgres 15+) or access revoked
☐ security definer functions in non-exposed schema (not public)
☐ UPDATE policies have matching SELECT policy

Storage

☐ Upsert requires INSERT + SELECT + UPDATE policies
☐ Private buckets use signed URLs — not public URLs
☐ File paths include user_id to prevent path traversal

Database

☐ No superuser credentials in application
☐ Application role has minimal required permissions
☐ Migrations reviewed for safety (CONCURRENTLY for indexes on large tables)
☐ pg_stat_statements enabled for query monitoring

Appendix: Quick Reference

Auth Method Decision

Server-side auth check
├── Need instant revocation detection?
│   ├── Yes → getUser() (always hits Auth server)
│   └── No  → getClaims() (fast, JWT-verified)
│             └── Multiple components same request?
│                 ├── Yes → React cache() wrapper
│                 └── No  → getClaims() directly
└── Client-side
    └── onAuthStateChange() / getClaims()
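The "React cache() wrapper" branch, sketched with a generic once-per-call-site memoizer standing in for React's `cache()` — swap in `cache` from 'react' in a real Next.js app, where it scopes the memo to one server request:

```typescript
// Sketch: share one auth verification across several callers.
// A plain memoizer stands in for React's per-request cache().

function oncePerRequest<T>(fn: () => Promise<T>): () => Promise<T> {
  let result: Promise<T> | null = null; // one instance per request scope
  return () => (result ??= fn());
}

// The expensive check runs once; later callers reuse the same promise.
let verifications = 0;
const getAuth = oncePerRequest(async () => {
  verifications++;
  return { sub: 'user-123' }; // stand-in for supabase.auth.getClaims()
});
```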

RLS Pattern by Access Model

Public read, no write restriction
  → create policy "..." using (true)

User owns row
  → using (user_id = (select auth.uid()))

Tenant isolation
  → using (tenant_id in (select get_user_tenant_ids()))

Role-based (from app_metadata)
  → using ((auth.jwt() -> 'app_metadata' ->> 'role') = 'admin')

Combined user + admin
  → using (user_id = (select auth.uid()) OR (auth.jwt() -> 'app_metadata' ->> 'role') = 'admin')

Migration Safety

Safe (no lock):
  ALTER TABLE ... ADD COLUMN (nullable or with default)
  CREATE INDEX CONCURRENTLY
  CREATE POLICY

Careful (brief lock):
  ALTER TABLE ... ADD COLUMN NOT NULL (backfill first)
  ALTER TABLE ... ALTER COLUMN TYPE

Dangerous (full table lock):
  CREATE INDEX (without CONCURRENTLY)
  ALTER TABLE ... DROP COLUMN
  VACUUM FULL

Last updated: April 2026. Covers @supabase/ssr latest, asymmetric JWT keys (default May 2025+), new API key format (sb_publishable_xxx / sb_secret_xxx).