When you tap "Import Trip" in Kahintochalo, a lot happens in under a second: an AI response gets parsed, validated, and persisted as trips, activities, packing lists, budget categories, crew members, and activity feed entries — all in one atomic operation.
Getting that right took a few interesting engineering decisions worth sharing.
The Problem with Sequential Inserts
The naïve approach is to fire off a series of inserts from the client:
INSERT INTO trips ...
INSERT INTO activities ...
INSERT INTO packing_categories ...
-- and so on
This breaks in two ways. First, if any insert fails midway, you're left with a partially created trip: a ghost that confuses the user and clutters the database. Second, doing this from the client means every insert is a separate network round-trip, which adds latency and forces you to plumb auth tokens and trip IDs through every call.
Security Definer RPCs to the Rescue
We moved the entire import into a single Postgres function with SECURITY DEFINER:
create function public.import_trip_v1(
p_trip jsonb,
p_activities jsonb,
p_packing jsonb,
p_budget_categories jsonb,
p_actor_name text,
p_actor_color text default '#F5A623'
)
returns uuid
language plpgsql
security definer
set search_path = public
The SECURITY DEFINER clause means the function runs with the privileges of the role that owns it (in Supabase, typically the postgres role), not those of the caller. This lets us bypass RLS for the duration of the import while still validating auth.uid() at the top of the function: best of both worlds.
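To make that guard concrete, here's a sketch of how the top of the function body might look. The trips columns (owner_id, name) are illustrative, not our exact schema:

```sql
as $$
declare
  v_uid uuid;
  v_trip_id uuid;
begin
  -- Refuse to do anything with elevated privileges for anonymous callers.
  v_uid := auth.uid();
  if v_uid is null then
    raise exception 'import_trip_v1: not authenticated';
  end if;

  insert into trips (owner_id, name)
  values (v_uid, coalesce(nullif(trim(p_trip->>'name'), ''), 'My Trip'))
  returning id into v_trip_id;

  -- ...activities, packing, budget, crew, and feed inserts go here...

  return v_trip_id;
end;
$$;
```

Because the check runs before any write, an unauthenticated caller never touches the tables, even though RLS is out of the picture inside the function.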
Every insert is wrapped in the same transaction, so if any step fails (say, a NOT NULL violation in activities), the entire trip rolls back. The client either gets a UUID back or an error — never a partial state.
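From the database's point of view, the whole import is one statement. A minimal invocation might look like this (the payload shapes are illustrative; from supabase-js it's a single supabase.rpc call with the same parameters):

```sql
select public.import_trip_v1(
  p_trip              => '{"name": "Goa Getaway", "start_date": "2025-01-10"}'::jsonb,
  p_activities        => '[{"title": "Beach day", "day": 1}]'::jsonb,
  p_packing           => '[{"name": "Essentials", "items": [{"name": "Sunscreen"}]}]'::jsonb,
  p_budget_categories => '[{"name": "Food"}]'::jsonb,
  p_actor_name        => 'Asha'
);
```

Either that select returns the new trip's UUID, or it raises and nothing was written.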
Handling Nested Data
The trickiest part was packing categories, which have child items. You can't insert both in a single INSERT ... SELECT because the category id doesn't exist until after the insert. We used a PL/pgSQL loop:
for v_section in select * from jsonb_array_elements(p_packing)
loop
insert into packing_categories (trip_id, name, sort_order)
values (v_trip_id, ...) returning id into v_cat_id;
insert into packing_items (trip_id, category_id, name, sort_order)
select v_trip_id, v_cat_id, item->>'name', ...
from jsonb_array_elements(v_section->'items') as item
where trim(item->>'name') <> '';
end loop;
The RETURNING id INTO v_cat_id captures the generated primary key so we can immediately reference it in the next insert. Clean, readable, and fully atomic.
AI Output → Validated JSONB
The AI generates a structured payload. Before we ship it to Supabase, we validate it client-side with a Zod schema, then send it directly as jsonb parameters. Inside the RPC we use Postgres coalesce patterns to handle missing or null fields gracefully:
coalesce(nullif(trim(p_trip->>'name'), ''), 'My Trip')
This means even a half-baked AI response results in a sensible trip, not a crash.
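The same defensive pattern extends to numeric fields. Here's an illustrative activities insert (column names are assumptions, and the int casts rely on the Zod schema having already rejected non-numeric values):

```sql
insert into activities (trip_id, title, day_number, sort_order)
select
  v_trip_id,
  coalesce(nullif(trim(a->>'title'), ''), 'Untitled activity'),
  greatest(coalesce((a->>'day')::int, 1), 1),   -- clamp missing or zero days to 1
  coalesce((a->>'sort_order')::int, ord::int)   -- fall back to array position
from jsonb_array_elements(p_activities) with ordinality as t(a, ord);
```

The with ordinality clause gives each array element its 1-based position, which makes a natural default sort order when the AI omits one.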
What's Next
The same pattern is powering our upcoming collaborative import feature, where multiple users will be able to import and merge trip segments in real time. The atomic RPC approach makes conflict resolution much easier — each import is either fully applied or fully rejected.
If you're building on Supabase and find yourself doing multi-table writes from the client, give security-definer RPCs a look. They clean up a lot of complexity.