
Notebooks, Blocks, and the Table Builder: Making Data Easier for Humans and AI to Explore

By Trevor Paulsen

This is part of a series where we're building a DIY journey analytics platform from scratch. If you're just joining, check out the earlier posts to catch up!

In the last post, we walked through how Ask Trevor handles the entire setup process for a journey analytics platform - connecting warehouses, configuring datasets, defining metrics, all of it. By the end of that conversation, a user has a fully wired-up environment and is ready to actually start asking questions about their customers.

Which brings us to the obvious next question: where does that asking actually happen?

This is the part of the product I've been most excited to build, because I think the analysis surface is where journey analytics tools either feel powerful or feel like a chore. We need somewhere users can explore freely, save their work, share it with teammates, and (importantly) collaborate with the AI assistant without it feeling like a disconnected experience.

The answer I landed on has two pieces: notebooks and blocks. This post introduces both, and then goes deep on the first block we shipped: the table builder.

A quick walkthrough of the notebook, table builder, and how the same block type renders inside an Ask Trevor chat.

Notebooks: The Canvas

I'll admit I'm biased here. I'm a data scientist by training, and if you know me, you know how much I love R and Python notebooks. There's something really satisfying about being able to see every step of an analysis laid out in order, rerun it from scratch, and have the whole thing read like a little science paper - methodology, results, and conclusions all in one place.

There's a second reason I'm excited about notebooks: they give AI a traceable path to follow. When the AI assistant can see the full train of analysis a user has built up - not just the final chart, but every block that led there - it has the context it needs to build on that work intelligently instead of starting from zero each time. A notebook is essentially a shared memory between the human and the AI.

The notebook concept itself isn't new. Jupyter has been the default analysis canvas for data scientists for over a decade, and lots of modern tools use this pattern in one form or another. The core idea is simple: a notebook is an ordered list of cells, each cell does one thing, and the document tells a story from top to bottom.

So I decided to borrow that mental model directly. A Trevorwithdata notebook is just an ordered collection of blocks, with some metadata (name, description, tags, sharing settings) wrapped around it. Notebooks are scoped to a data group (the unified table we built earlier in the series), so every block inside a notebook shares the same underlying dataset.
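
To make that concrete, here's a rough sketch of the shape in TypeScript. The field names are my own shorthand for illustration, not the actual schema:

```ts
// A rough sketch of the notebook shape - field names are illustrative, not the real schema.
interface NotebookDocument {
  id: string;
  name: string;
  description?: string;
  tags: string[];
  sharing: "private" | "shared";   // simplified stand-in for the sharing settings
  dataGroupId: string;             // every block in the notebook queries this data group
  blocks: Array<{ id: string; type: "text" | "sql" | "table" | "funnel" }>; // ordered
}
```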

Blocks: The Primitive

Here's where things get interesting. Inside a notebook, each cell is a "block," and a block isn't just a chunk of markdown or a SQL snippet. A block is a self-contained, configurable unit of analysis with a defined schema, its own backing API, and a React component that knows how to render it.

As of this writing, there are four block types:

  • text - Markdown or plain text for narrative, headings, and commentary
  • sql - A raw SQL query against the data group, with optional chart visualization
  • table - A freeform table with dimensions, metrics, segments, filters, and drill-down (more on this below)
  • funnel - A multi-step conversion analysis with scope and conversion windows (we'll cover this in more detail in a future post)

Every block, regardless of type, shares a common base: an ID, a type, an optional description, audit timestamps (created and last modified, so we can show things like "edited 2 hours ago"), and an analytical date range - the time window the block queries over, which can be either absolute dates or a rolling preset like last_30_days. On top of that base, each block type extends with its own type-specific configuration.
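
In code, the base plus one extension might look roughly like this. The type names and exact shapes are my own approximation, not the production schema:

```ts
// A sketch of the shared block base plus one type-specific extension.
// Type names and exact shapes are illustrative, not the production schema.
type DateRange =
  | { kind: "absolute"; start: string; end: string }   // ISO dates
  | { kind: "rolling"; preset: "last_7_days" | "last_30_days" | "last_90_days" };

interface BlockBase {
  id: string;
  type: "text" | "sql" | "table" | "funnel";
  description?: string;
  createdAt: string;       // audit timestamps, e.g. for "edited 2 hours ago"
  modifiedAt: string;
  dateRange: DateRange;    // the analytical time window the block queries over
}

// Each block type extends the base with its own configuration.
interface TableBlock extends BlockBase {
  type: "table";
  config: {
    dimensions: string[];
    metrics: string[];
    globalSegments?: string[];
    sort?: { column: string; direction: "asc" | "desc" };
  };
}
```

The discriminated type field is what lets a single renderer pick the right component later on, no matter which surface the block shows up in.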

This shared structure is what makes blocks composable. You can drop them into any order in a notebook, you can save them, you can share them, and (the part I'm most excited about) you can render them in places that aren't notebooks at all.

Two Surfaces, One Block

The principle I want to call out here is the one that ties this whole architecture together: a block doesn't care where it lives.

The same React component that renders a table block inside a notebook also renders it inside an Ask Trevor chat message. There is no separate "chat version" of the table builder. There is no parallel rendering pipeline. When the AI decides to show you a table, it builds the same configuration object that a user would build by clicking around in the notebook UI, and the same component renders it.

This is what the renderBlock MCP tool I mentioned in the last post actually does. When Ask Trevor wants to display a result, it doesn't return formatted text or a screenshot or a link. It calls renderBlock with a block type and a config, and the orchestrator emits a server-sent event to the UI saying "drop this block into the chat." The UI receives the event and mounts the same React component it would mount inside a notebook.
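
A rough sketch of what that event and the client-side handler might look like - the event and field names here are assumptions, not the real payload:

```ts
// A hypothetical shape for the server-sent event behind renderBlock.
// Event and field names are assumptions for illustration.
interface RenderBlockEvent {
  event: "render_block";
  blockType: "text" | "sql" | "table" | "funnel";
  config: Record<string, unknown>;   // the same config a user would build in the notebook UI
}

// Both surfaces converge on the same mount path: the chat listener just hands
// the payload to whatever function mounts blocks inside a notebook.
function onChatStreamMessage(
  raw: MessageEvent<string>,
  mountBlock: (type: RenderBlockEvent["blockType"], config: unknown) => void
) {
  const evt = JSON.parse(raw.data) as RenderBlockEvent;
  if (evt.event === "render_block") {
    mountBlock(evt.blockType, evt.config);   // same React component the notebook mounts
  }
}
```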

Taking this approach has big advantages. When we add a new feature to the table builder - say, a new column filter or a new sorting option - it shows up everywhere immediately. The AI doesn't need a separate code path. The notebook doesn't need a separate code path. We just ship the feature once.

It also means the conversational and exploratory experiences feel like the same product instead of two products bolted together. A user can ask Ask Trevor for "revenue by marketing channel for last month," see the result rendered as a real interactive table inside the chat, and then (with one click) move that exact block into a notebook to keep iterating on it. No re-querying, no re-configuring, no copy-paste.

The Table Builder, In Detail

Now let's talk about the first block we built and the one that does the most heavy lifting: the table builder.

The table builder is meant to be the workhorse for ad-hoc exploration - the thing you reach for when you have a question and you don't yet know what shape the answer will take. If you've spent time in any of the big analytics tools, the idea of a pivot-style table where you mix and match dimensions and metrics will feel familiar.

Here's what it exposes to users:

  • Dimensions - The categorical fields you want to break things down by. Marketing channel, page name, device type, day of week, whatever. You can add multiple dimensions and reorder them.
  • Metrics - The numbers you care about. Visits, revenue, conversion rate, time spent, anything from your metrics catalog. Multiple metrics can sit side by side.
  • Column ordering - Dimensions and metrics can be interleaved in any order, which sounds small but matters a lot for how you read the table. We use drag-and-drop (powered by @dnd-kit) so users can reorder columns without thinking about it.
  • Global segments - Segment filters applied to the entire table, with AND/OR logic between them.
  • Column segments - Per-metric segment comparisons, so you can put "Revenue (Mobile)" right next to "Revenue (Desktop)".
  • Aggregate filters - Post-aggregation filters (think SQL HAVING), so you can show only rows where, say, revenue is above some threshold.
  • Sorting - Sort by any column, ascending or descending.
  • Date range - Either absolute dates or a rolling preset. (We'll come back to this in a second - it has a fun twist.)
  • Null handling - Explicit controls for whether to include or exclude rows with null dimension values.
  • Visualization overlay - An optional bar/line/pie chart that sits above the table and visualizes the same data.
  • Row breakdowns - This is the drill-down feature. Users can drag a new dimension onto any row, and see that row broken down further. Tables support nesting up to 5 levels deep, and (importantly) the drill-down state is saved with the block, so reopening the notebook restores exactly what the user was looking at.

Behind the scenes, the table builder config gets translated into a request against our query engine - the 6-layer query-time processing pipeline we built earlier in the series. The block doesn't know anything about that pipeline. It just builds a structured config object and hands it off.

That separation matters because the block is purely a configuration surface. The query engine and the table builder UI can evolve independently, and - maybe more importantly - the AI never has to touch SQL. Instead of generating a big query with CTEs, joins, and window functions every time it wants to show a table, Ask Trevor just emits a small JSON config (a few dozen tokens) and hands it to the same pipeline the UI uses. That's a huge win for both speed and token cost.
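
For a sense of scale, here's the kind of config that stands in for all of that SQL - a made-up example matching the "revenue by marketing channel for last month" request from earlier, with illustrative field names:

```ts
// A made-up table block config - field names are illustrative, not the real schema.
// This handful of lines stands in for the CTEs, joins, and window functions the
// query engine assembles on the other side.
const exampleTableConfig = {
  dimensions: ["marketing_channel"],
  metrics: ["revenue"],
  globalSegments: [],                                        // no table-wide filters
  dateRange: { kind: "rolling", preset: "last_30_days" },
  sort: { column: "revenue", direction: "desc" },
  includeNullDimensions: false,
};
```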

How It Actually Fits Together

It might be helpful to walk through what happens end-to-end when a block gets rendered, since this is the part that took the most thought to get right.

Each block type has its own dedicated API endpoint. Table blocks hit executeTableReport. SQL blocks hit executeSQLReport. Funnel blocks hit their own funnel endpoint. This matters because each block type has different input validation, different config shapes, and different ways of describing what it wants. A table config isn't a funnel config, and trying to cram both into one generic "run block" endpoint would make every call site a pile of conditionals.
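
In sketch form, the separation looks something like this. The config and result shapes are invented for illustration; only the endpoint names come from the actual API:

```ts
// Invented config and result shapes to illustrate the per-block-type endpoints;
// only the endpoint names (executeTableReport, executeSQLReport) are real.
interface TableReportConfig { dimensions: string[]; metrics: string[]; /* segments, sort, ... */ }
interface SqlReportConfig   { query: string; chart?: "bar" | "line" | "pie" }
interface TabularResult     { columns: string[]; rows: Array<Array<string | number | null>> }

// Each endpoint validates its own config shape and hands a structured request
// to the shared query engine - no generic "run block" catch-all full of conditionals.
declare function executeTableReport(config: TableReportConfig): Promise<TabularResult>;
declare function executeSQLReport(config: SqlReportConfig): Promise<TabularResult>;
```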

Here's the flow when, say, a user opens a notebook with a table block in it:

  1. The notebook document is fetched from Firebase Realtime DB. That gives us the list of blocks and their configs, but not their data - the data isn't stored with the block.
  2. Each block's React component mounts with its config. The table block component sees its config and calls executeTableReport with it.
  3. executeTableReport validates the config, looks up the referenced dimensions, metrics, and segments from the metrics catalog, and hands a structured request to the query engine.
  4. The query engine runs the same 6-layer pipeline it runs for everything else - assembling CTEs, applying segments, deduplicating metrics, all of it - and returns a tabular result.
  5. The block component receives the result and renders it.

When Ask Trevor wants to show a table inside a chat message, the flow is almost identical. The AI calls its renderBlock tool with a block type and a config. The orchestrator emits a server-sent event that tells the UI to mount a table block with that config. The table block mounts, calls executeTableReport with the config, and goes through steps 3-5 exactly the same way. The chat surface and the notebook surface converge at the API layer.
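
Step 2 is the part both surfaces share on the client, and it looks roughly like this. The component and type names below are illustrative assumptions; only executeTableReport is the real endpoint name:

```tsx
// A minimal sketch of how a table block component might fetch and render its data.
// Component and type names are illustrative; only executeTableReport is real.
import React, { useEffect, useState } from "react";

type TableConfig = { dimensions: string[]; metrics: string[] };
type TabularResult = { columns: string[]; rows: Array<Array<string | number>> };

// Stand-in for the real API client around the executeTableReport endpoint.
declare function executeTableReport(config: TableConfig): Promise<TabularResult>;

export function TableBlockView({ config }: { config: TableConfig }) {
  const [result, setResult] = useState<TabularResult | null>(null);

  useEffect(() => {
    // The notebook document only stores the config; data is fetched on mount.
    executeTableReport(config).then(setResult);
  }, [config]);

  if (!result) return <p>Loading…</p>;
  return (
    <table>
      <thead>
        <tr>{result.columns.map((c) => <th key={c}>{c}</th>)}</tr>
      </thead>
      <tbody>
        {result.rows.map((row, i) => (
          <tr key={i}>{row.map((cell, j) => <td key={j}>{String(cell)}</td>)}</tr>
        ))}
      </tbody>
    </table>
  );
}
```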

The punchline is that the query engine doesn't know or care whether the request came from a notebook, a chat message, a dashboard, or a future surface we haven't built yet. It just sees a config and runs it. Every block type gets to have its own shape and its own endpoint, but they all funnel into the same pipeline underneath. New surfaces are basically free, and improvements to the pipeline show up everywhere at once.

Ask Trevor Inside the Notebook

One more piece worth calling out: Ask Trevor is aware of where you are. If you ask for analysis from the main Ask Trevor UX, it'll pull the tables right into the conversation so you can keep chatting about them. If you ask for the same analysis while you're inside a notebook, it'll edit the notebook for you directly, dropping blocks in, tweaking configs, and updating date ranges without you having to leave the page.

For folks curious about how this works under the hood: every chat request from the UI carries a little uiContext payload that tells the orchestrator which page the user is on. If that payload includes a notebookContext (notebook ID, current blocks, and which block the cursor is on), the orchestrator injects that state into the system prompt and swaps the AI's toolset. Instead of renderBlock (which emits a server-sent event to drop a block into the chat), the AI gets editNotebook, which writes the block straight to the notebook document. Same block types, same configs, same query engine underneath. The only thing that changes is the destination.
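
Sketched out, my mental model of that payload and the tool swap looks like this - the field names are guesses; only uiContext, notebookContext, renderBlock, and editNotebook are the real names:

```ts
// A guess at the uiContext payload and the tool swap. Field names are illustrative;
// only uiContext, notebookContext, renderBlock, and editNotebook come from the post.
interface UiContext {
  page: string;                                   // which surface the user is on
  notebookContext?: {
    notebookId: string;
    blocks: Array<{ id: string; type: string }>;  // current blocks, injected into the system prompt
    focusedBlockId?: string;                      // which block the cursor is on
  };
}

// The orchestrator picks the AI's block-output tool based on where the user is:
// inside a notebook, write blocks straight into the document; elsewhere, stream
// them into the chat as server-sent events.
function pickBlockTool(ctx: UiContext): "editNotebook" | "renderBlock" {
  return ctx.notebookContext ? "editNotebook" : "renderBlock";
}
```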

Because it's using the same block configs in both places, this was a pretty small addition on top of the architecture we already had. The blocks don't care where they live, and now the AI doesn't either.

What's Next

We've now got a notebook surface, a block primitive, and the first real block type shipped. From here, the series gets a lot more fun, because each future post can take one block type and go deep on it.

On deck:

  • The funnel block - Multi-step conversion analysis with person/session scoping and conversion windows. We'll talk about why funnels in a journey analytics tool need to handle cross-device and out-of-order events differently than a traditional web analytics funnel.
  • The path / flow block - The one that's defined in our schema but doesn't have a UI yet. This is where I think we have a real chance to do something different from existing tools, and I want to take my time with it.
  • A few more block types I haven't talked about publicly yet, including some that lean heavily into what the AI can contribute to an analysis.

And in parallel, we'll keep evolving Ask Trevor's role inside notebooks. Right now the AI can drop blocks into a chat. The next step is having the AI suggest blocks inside a notebook itself - watching what a user is exploring and proposing the next analysis they might want to run.

If you want to follow along (or jump in and try it yourself), come find me on LinkedIn, or sign up for early access to get hands-on with the platform. I'd love to hear what kinds of analyses you'd want to build first. 🙌


Trevor Paulsen is a data product professional at UKG and a former product leader of Adobe's Customer Journey Analytics. All views expressed are his own.
