Meet 'Ask Trevor': The AI Assistant That Speeds Up Everything You Do in Journey Analytics
By Trevor Paulsen
Watch the full demo of Ask Trevor setting up a journey analytics platform from scratch.
This is part of a series where we're building a DIY journey analytics platform from scratch. If you're just joining, check out the earlier posts to catch up!
Over the last several posts, we've walked through every layer of building a journey analytics platform from scratch: syncing warehouse data, unifying it into queryable tables, stitching identities across devices, building a query-time processing engine, defining metrics and dimensions, creating segments with sequence awareness, and adding calculated metrics. If you've been following along, you know there's a lot of machinery under the hood, and all of that machinery needs to be configured before anyone can start analyzing data.
This is a problem pretty much every analytics platform shares - there's a significant setup burden between "I have data in my warehouse" and "I can actually ask questions about my customers." For a journey analytics tool specifically, that gap is even wider because of concepts like identity stitching, sessionization, and multi-dataset unification that don't exist in traditional BI tools.
In the very first post of this series, I mentioned that one of the unsolved problems in the journey analytics space is that AI capabilities are typically bolted on as afterthoughts - used for checking boxes in RFPs rather than being foundational to the experience. I wanted to take a different approach. What if the AI assistant wasn't just a feature, but the primary way users interact with the platform?
That's the vision behind "Ask Trevor" (could I have come up with a cornier name?).
The Setup Problem
Let's take stock of what a user needs to do before they can run their first report. Working through the pipeline we've built over this series:
- Connect a data source - Provide warehouse credentials, test the connection, discover available tables
- Create datasets - For each table, pick the right dataset type (event, lookup, identity map), map the timestamp field, the user ID field, the row ID field
- Configure identity stitching - If you want cross-device tracking, set up an identity map dataset with cascading resolution rules in the right priority order
- Build a data group - Combine your datasets into a unified queryable table, configure identity replay frequency, and trigger a backfill
- Create metrics - Define your KPIs as SQL expressions with the right aggregation functions, handle deduplication, validate the SQL against real data
- Create dimensions - Set up your categorical breakdowns (date granularities, device types, marketing channels, product categories) with the right SQL patterns
- Create segments - Define your key user cohorts and behavioral filters using the segment syntax, with the right scoping
- Create calculated metrics - Build your ratios and rates (conversion rate, average order value, revenue per user) that combine standard metrics
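To make the nuance in those steps concrete, here's a rough sketch of what a per-dataset configuration and a sanity check on it might look like. The field names and shapes are illustrative, not the platform's actual API:

```typescript
// Hypothetical shape for the dataset-creation step (step 2 above).
type DatasetType = "event" | "lookup" | "identity_map";

interface DatasetConfig {
  table: string;
  type: DatasetType;
  timestampField?: string; // required for event datasets
  userIdField?: string;    // required for event and identity_map datasets
  rowIdField?: string;     // used for deduplication on event datasets
}

// Catch the misconfigurations that would otherwise silently break
// sessionization or identity stitching downstream.
function validateDataset(cfg: DatasetConfig): string[] {
  const errors: string[] = [];
  if (cfg.type === "event") {
    if (!cfg.timestampField) errors.push(`${cfg.table}: event dataset needs a timestamp field`);
    if (!cfg.userIdField) errors.push(`${cfg.table}: event dataset needs a user ID field`);
  }
  if (cfg.type === "identity_map" && !cfg.userIdField) {
    errors.push(`${cfg.table}: identity map needs a user ID field`);
  }
  return errors;
}
```

This is exactly the kind of validation a new user would otherwise learn about through trial and error.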
That's a lot of steps, and each one has enough nuance that getting it wrong means your data won't look right downstream. For example, mapping the wrong field as your timestamp will break sessionization and persistence, and getting the identity stitching priority order backwards will skew your person counts.
For someone who's deeply familiar with the platform - namely the person who built it 😅 - this setup takes maybe 30-40 minutes. For a new user, even a technical one, it could easily take hours of trial and error. That's not a great first experience for a product that's supposed to make analytics easier.
Enter "Ask Trevor"
Ask Trevor is an AI assistant built specifically for Trevorwithdata, and its primary job is to make that entire setup process conversational. Instead of navigating through configuration screens and picking the right options, you just describe what you want and it handles the rest.
The important thing is that Ask Trevor isn't just automating button clicks. It's making decisions that require understanding the data and business context. When it sees a table with fields like user_id, device_id, and email, it recognizes that might be an identity mapping table and asks about cross-device tracking. When the user says their most important KPI is "revenue per visitor," it knows that's a calculated metric that needs two standard metrics (total revenue and unique visitors) created first.
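For illustration only, a hard-coded version of that identity-table recognition might look like the snippet below. In Ask Trevor the judgment comes from the model reasoning over the schema, not a rule like this, and the signal column names here are just examples:

```typescript
// Column names that commonly indicate an identity mapping table
// (illustrative list, not an exhaustive one).
const IDENTITY_SIGNALS = ["user_id", "device_id", "email", "cookie_id", "crm_id"];

function looksLikeIdentityMap(columns: string[]): boolean {
  const lower = columns.map((c) => c.toLowerCase());
  const hits = IDENTITY_SIGNALS.filter((s) => lower.includes(s));
  // Two or more distinct identifier columns in one table suggests a mapping
  // table worth asking the user about for cross-device stitching.
  return hits.length >= 2;
}
```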
How It Works Under the Hood
The architecture is surprisingly straightforward. Ask Trevor runs as a Node.js service, and the core of it is Anthropic's Claude Agent SDK - the same framework that powers Claude Code. The Agent SDK gives us multi-turn tool orchestration out of the box, which means the assistant can chain together complex sequences of API calls without us having to write explicit workflow logic.
Here's the high-level flow:
```
User message → Claude Agent SDK
        ↓
Claude Opus (reasoning)
        ↓
MCP Tool calls → Trevorlytics API
        ↓
Results streamed back via SSE
```
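The last step in that flow uses the standard Server-Sent Events wire format: each event is an `event:` line plus a `data:` line, terminated by a blank line. A minimal frame helper looks like this (the event name in the example is illustrative, not one Ask Trevor necessarily emits):

```typescript
// Serialize one Server-Sent Events frame. The client's EventSource (or
// equivalent) splits the stream on blank lines and dispatches by event name.
function sseFrame(event: string, payload: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(payload)}\n\n`;
}
```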
The unique piece is the MCP (Model Context Protocol) server powering Ask Trevor. My design goal from the start was that every single API the UI uses should also be accessible to the AI - so anything a user can do through the interface, Ask Trevor can do too. There are 13 custom MCP tools (so far) that make this possible, and each one maps to a set of operations in our API:
| Tool | What It Does |
|---|---|
| `manageSources` | Connect warehouses, test connections, discover tables |
| `manageDatasets` | Create/sync datasets, configure identity stitching |
| `manageDataGroups` | Build unified tables, manage schemas, trigger backfills |
| `manageMetrics` | Create metrics, inspect schemas, validate SQL, preview results |
| `manageDimensions` | Create dimensions, sample data, validate expressions |
| `manageSegments` | Create segments with sequence/scope awareness, validate |
| `executeReports` | Run table reports, SQL queries, preview generated SQL |
| `managePreferences` | Save and recall user/company context and memories |
| `renderBlock` | Display interactive tables, funnels, and SQL results in chat |
Each tool is a single entry point with multiple operations - so `manageMetrics` handles listing, creating, updating, deleting, schema inspection, data sampling, SQL validation, and calculated metric previews, all through one interface. This keeps the tool count manageable while still giving the AI access to everything it needs.
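As a sketch of that one-tool-many-operations pattern, here's a stripped-down dispatcher. An in-memory map stands in for the real API, and the operation names are illustrative:

```typescript
// One entry point, dispatched on an "operation" field - the same shape each
// MCP tool exposes to the model.
type MetricOp = "list" | "create" | "delete";

interface MetricToolInput {
  operation: MetricOp;
  name?: string;
  expression?: string;
}

// Stand-in for the platform's metric store.
const metrics = new Map<string, string>();

function manageMetrics(input: MetricToolInput): string[] {
  switch (input.operation) {
    case "list":
      return [...metrics.keys()];
    case "create":
      if (!input.name || !input.expression) throw new Error("create needs name and expression");
      metrics.set(input.name, input.expression);
      return [...metrics.keys()];
    case "delete":
      if (input.name) metrics.delete(input.name);
      return [...metrics.keys()];
    default:
      throw new Error(`unknown operation: ${input.operation}`);
  }
}
```

Collapsing related operations behind one entry point matters because every extra tool definition costs context-window space and gives the model one more choice point to get wrong.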
The Agent SDK handles all the orchestration. When a user asks to "set up my analytics," the AI reasons through which tools to call and in what order, passes the results from one step to the next, and handles errors along the way. We didn't have to write a state machine or define explicit workflow steps - the AI figures out the right sequence based on its system prompt and the current context.
The System Prompt: Where the Domain Expertise Lives
One thing I've learned building this is that the system prompt is where the magic happens. I can inject everything I know about journey analytics into the system prompt, and Claude is incredibly capable of coupling that context with its reasoning and tool use.
Our system prompt is around 1,800 lines and covers:
- Platform context - The data pipeline flow, what each object type means, how the 6-layer query architecture works
- Workflow patterns - Step-by-step guidance for setting up sources, creating metrics, building segments
- Decision rules - When to use a standard metric vs. calculated metric, how to handle identity stitching priority, what questions to ask before creating datasets
- Tool usage - When to inspect schemas, when to validate SQL, when to run preview reports
- Communication style - Keep responses concise, translate technical errors to business terms, always confirm before destructive actions
For example, the system prompt includes specific instructions like this for metric creation:
```
Decision rule - when to use which:
- If the metric is a direct aggregation of ONE raw field → Standard metric
- If the metric divides, multiplies, or combines two separately-meaningful aggregations → Calculated metric
- If the user asks for "X per Y", "average X per Y", "X rate", or any ratio → Calculated metric
```
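Written out as code purely for illustration, the ratio-detection part of that rule might look like this - though in practice the rule lives as prose in the system prompt and Claude applies it with judgment rather than string matching:

```typescript
// Naive version of the "X per Y" / "X rate" heuristic from the decision rule.
function suggestMetricKind(description: string): "standard" | "calculated" {
  const d = description.toLowerCase();
  // Ratio-style phrasing signals a calculated metric built from two
  // separately-meaningful standard metrics.
  if (/\bper\b/.test(d) || /\brate\b/.test(d) || /\bratio\b/.test(d)) {
    return "calculated";
  }
  return "standard";
}
```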
There are similar rules for segment scoping - when a user says "show me people who viewed a product and then purchased," the AI needs to default to person scope, not session scope, because session scope would miss the common case where someone browses on their phone and buys on their laptop later. This is the kind of domain knowledge that's hard for the AI to figure out on its own, and the system prompt encodes the correct approach so it gets things right as often as possible.
Knowledge Files: Even More Context
The system prompt can only hold so much. For deeper reference material - the kind of thing you'd normally dig through documentation for - Ask Trevor has access to knowledge files that it reads on demand when working on specific tasks. These cover things like how to create metrics, how to create dimensions, how to author segments, and a catalog of standard metrics and dimensions that work out of the box for any company. Between the system prompt and these knowledge files, the AI has all the context it needs to make good decisions without guessing.
Context That Persists: Preferences and Memory
One of the features I'm most excited about is that Ask Trevor can learn and remember context about a user and their company. Before jumping into the technical setup, the assistant asks questions like:
- What industry are you in? What does your company do?
- What are your most important KPIs and how do you define them?
- Are there any naming conventions or terminology we should know about?
- What's your role? What do you primarily use analytics for?
These answers get saved as structured preferences at two levels - company-wide and user-level. They're automatically injected into every future conversation's system prompt, so the AI always has that business context available without needing to ask again.
On top of structured preferences, users can also tell the assistant to "remember" arbitrary facts - things like "our fiscal year starts in April" or "we call our mobile app users 'app people' internally." These get stored as memories and also persist across conversations.
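A minimal sketch of how that injection could work: the two levels (company and user) plus memories follow the description above, but the shapes and labels are assumptions, not the actual implementation:

```typescript
// Saved context gets rendered into a block that is prepended to each
// conversation's system prompt.
interface StoredContext {
  companyPrefs: Record<string, string>;
  userPrefs: Record<string, string>;
  memories: string[];
}

function buildContextBlock(ctx: StoredContext): string {
  const lines: string[] = ["## Saved context"];
  for (const [k, v] of Object.entries(ctx.companyPrefs)) lines.push(`- [company] ${k}: ${v}`);
  for (const [k, v] of Object.entries(ctx.userPrefs)) lines.push(`- [user] ${k}: ${v}`);
  for (const m of ctx.memories) lines.push(`- [memory] ${m}`);
  return lines.join("\n");
}
```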
This means the second time a user comes back to Ask Trevor, the assistant already knows what their business does, what KPIs matter most, and any quirks about their data. It can make better suggestions and skip the discovery questions. Over time, it gets meaningfully more useful.
Why This Matters for the Bigger Picture
I mentioned in the first post that one of the problems with existing journey analytics tools is that they're notoriously difficult to set up. The traditional approach requires specialized implementation teams who spend weeks configuring data collection, mapping fields, and defining the semantic layer. And while AI may not yet be able to entirely replace human judgment, we can at least empower people to move much more quickly than they could have otherwise.
That's what Ask Trevor is designed to do. If a user can describe what their data looks like and what they want to measure, the AI can handle the technical translation. You don't need to know the deep technical differences between a standard metric and a calculated metric, or understand how identity stitching cascading rules work, or remember the exact SQL syntax for a session-scoped segment. You just need to know your business.
This is what I meant when I said AI should be foundational to the experience, not a bolt-on. The assistant isn't just a convenience feature that saves a few clicks - it fundamentally lowers the technical barrier to getting value from journey analytics.
What's Next
We've covered how Ask Trevor can accelerate setup, but we haven't talked about analysis yet - that's a whole separate (and honestly even more fun) topic. Ask Trevor can also run reports, build interactive tables and funnels, and help users explore their data conversationally. We'll dig into that in a future post.
For now, the important thing is that we have an AI assistant that can take a user from "here are my warehouse credentials" to "I'm ready to analyze my customer journeys" in a single conversation. That feels like a meaningful step toward making journey analytics accessible to everyone.
If you want to try it out or just follow along with the build, come find me on LinkedIn. And if you're interested in getting hands-on with Ask Trevor yourself, sign up for early access - we'd love to have you. 🙌
Trevor Paulsen is a data product professional at UKG and a former product leader of Adobe's Customer Journey Analytics. All views expressed are his own.