
Building a data-aware chat on top of your ClickHouse database with Next.js and MooseStack

TL;DR

  • Setting up high-performance, data-grounded chat in your application can be simple.
  • That baseline delivers real value.
  • Then extend it: richer context, more sources, visualizations.

This guide will:

  1. Help you decide when and how to prototype in-app chat, and how to get it used
  2. Help you improve your chat with rich context, and connect additional data sources to it
  3. Help you embed chat into your existing Next.js application

Navigation tip: If you're evaluating whether this is worth doing, read the Overview section below. If you're building, skip to the Builder Guide section. If you want to jump straight to the implementation, see the Tutorial section.

Overview

“Chat in your app” is a conversational interface embedded directly into your product that can answer questions by querying your actual data. Users ask in natural language, and the system responds with results grounded in live queries and the context of your schema, metrics, and permissions.

This works because modern LLMs can call tools: they can inspect table metadata and run structured queries through a controlled interface (in this guide, an MCP server over ClickHouse), then explain the output in plain language.
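That loop can be sketched in a few lines. Everything below is illustrative pseudotooling (tool names borrowed from later in this guide, plumbing invented for the sketch), not the MooseStack API:

```typescript
// Sketch of the tool-call loop: the model picks a tool, the app executes it
// through a controlled interface, and the result goes back to the model,
// which explains it in plain language.
type ToolCall = { tool: string; args: Record<string, string> };

// Tools the model is allowed to call: a controlled surface, not raw DB access.
const tools: Record<string, (args: Record<string, string>) => string> = {
  get_data_catalog: () => "tables: events(user_id, ts, name)",
  query_clickhouse: (args) => `rows for: ${args.sql}`,
};

// Stand-in for the LLM deciding which tool fits the user's question.
function planToolCall(question: string): ToolCall {
  return question.includes("table")
    ? { tool: "get_data_catalog", args: {} }
    : { tool: "query_clickhouse", args: { sql: "SELECT count() FROM events" } };
}

function answer(question: string): string {
  const call = planToolCall(question);
  const result = tools[call.tool](call.args);
  // In the real system the model turns `result` into a narrative answer.
  return `Grounded in ${call.tool}: ${result}`;
}
```

The key property is that every answer is traceable to a named tool call, which is what makes the result inspectable later.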

What we don’t mean: a generic chatbot that summarizes docs, a RAG-style assistant fed with your knowledge base, or a support bot optimized for ticket deflection. We mean chat that can actually access and query your data: it can inspect schemas, run SQL, compute aggregates, and explain real results inside the workflows your app already supports. That data access is the focus of this guide.

Why chat is worth building, and why data is key

A data-connected chat interface makes analytics more available and more usable: users ask questions directly, and the system answers by querying your data rather than forcing dashboard navigation.

  • Democratize data access: Non-technical users can query complex datasets without SQL or dashboard training.
  • Reduce friction: Eliminate the “five-click problem”: users ask instead of navigating menus, filters, and dashboards.
  • Context-aware exploration: Conversation history enables follow-ups and iterative refinement without starting over.
  • Faster decisions: Answers arrive in seconds instead of waiting on analysts or learning a new interface.
  • Lower support load: Self-serve reduces repeated questions and tickets to data and analytics teams.

Common use cases

Data-connected chat works best when users are trying to answer concrete questions from live, structured data, especially when the questions change from moment to moment and you cannot realistically pre-build every dashboard and drill-down. In practice, the highest-leverage use cases cluster into three buckets.

Internal analytics and business intelligence. Chat is a fast layer over your existing metrics and dashboards. Executives can ask follow-ups like “what drove the change?” or “is this concentrated in one region?” without waiting on an analyst. Business users can generate ad-hoc cuts of the data (by segment, time window, plan tier) without learning SQL or navigating a maze of filters. Analysts benefit too: chat is an efficient starting point for exploration when they are first familiarizing themselves with a dataset, testing hypotheses, or quickly validating whether a deeper analysis is worth doing.

Customer-facing analytics products. If your product already has analytics value, chat can make that value feel immediate. SaaS customers can ask about usage, adoption, performance, and trends in their own workspace without needing a bespoke dashboard for every role. E-commerce operators can explore sales patterns, inventory movement, and customer behavior with natural follow-ups like “only repeat customers” or “compare to last month.” In financial and fintech contexts, users can explore transaction history, spending categories, anomalies, and portfolio performance, with the system doing the heavy lifting of turning intent into queries and summaries. The common thread: customers get a flexible interface that adapts to their questions, while you avoid building an ever-expanding backlog of one-off analytics views.

Operational workflows. Many teams have data trapped in operational systems, where answers exist but are slow to retrieve. Chat can serve as a query interface for support and ops workflows: “show me this customer’s recent orders,” “what changed after the last deployment,” or “which shipments are delayed and why.” DevOps and platform teams can use chat for chatops-style exploration of metrics and performance data. Supply chain and inventory teams can query stock levels, supplier status, and fulfillment timelines without jumping between tools. In these contexts, the goal is not “support chat,” but rapid, auditable data lookup and aggregation: the system should be able to show what it queried and why, so operators can trust and validate the result.

Across all of these, the pattern is the same: chat is most valuable when it is grounded in the systems of record, fast enough to support iterative exploration, and integrated into the workflows where decisions actually get made.

Implementation strategy: fast baseline, then iterate

Getting to “working” is relatively quick because the core plumbing is now mostly standardized: an LLM that can call tools, an MCP server that exposes your database safely, and an OLAP backend that can answer analytical questions fast. MooseStack packages those pieces so you can stand up a baseline end-to-end without spending weeks on glue code, and the Next.js UI in this guide is just a reference client (the backend works with any frontend).

The rest of the effort is still worth it because the quality of a data chat experience is determined by product-specific details: how your data is modeled, how business terms map to tables and metrics, what permissions and tenancy rules apply, and how results should be presented inside your existing workflows. Once the baseline is live, you can iterate efficiently on accuracy, latency, context, and UX to capture the remaining value without rebuilding the foundation.

When this approach is worth building

Chat is worth investing in once your product has meaningful data gravity: users regularly ask questions that require slicing, aggregation, or comparison, and the existing UI forces them through dashboards, filters, or support channels to get answers. If you already maintain analytics views or a metrics layer, but users still need help finding or interpreting results, chat can reduce that friction immediately.

It is especially compelling when your team is fielding repeat questions, building one-off dashboards for narrow use cases, or struggling to productize a long tail of analytical queries. In those cases, chat becomes a flexible interface that scales with your schema, rather than another surface area you have to constantly redesign.

0 - Builder Guide

Meta

  • Who this is for: Engineers building a data-intensive chat experience inside an app (product engineers, data engineers, technical PMs).
  • What you need (accounts):
    • Boreal account: https://www.boreal.cloud/sign-in
    • Vercel account: https://vercel.com/
    • GitHub account: https://github.com
    • Claude LLM API key: https://console.anthropic.com/
  • What you need (for local dev):
    • Node.js (v20+): Install from nodejs.org.
    • pnpm (v8+): Install from pnpm.io.
    • Docker Desktop (v20+): Install from docker.com/get-started. Docker must be running.
    • Moose CLI (latest): Install via the docs at https://docs.fiveonefour.com/moosestack or run bash -i <(curl -fsSL https://fiveonefour.com/install.sh) moose.
  • Recommended tooling: Your IDE / terminal. A copilot, like Claude Code, Codex, or Cursor.
  • How long it takes: Demo (minutes), baseline (hours), production-quality (days), enterprise hardening can take longer, mostly driven by governance and data sources.
  • What “done” means: Users can ask questions and get correct, fast, permission-safe answers grounded in your data, with an inspectable trail of tool calls and queries.

2-minute context for builders

This guide is about chat that queries real production data in your OLAP system, not a generic assistant or RAG over documentation. The core loop is: user asks a question, the model calls tools, those tools run structured queries against an OLAP store (ClickHouse), and the model explains the results in product language.

If you use MooseStack (MCP server + ClickHouse integration) you can get a baseline working quickly. The work that remains, and where quality comes from, is your product-specific layer: modeling tables so the model can query them reliably, providing the right schema and metric context, enforcing tenancy and permissions, and integrating chat into your existing UI patterns so it feels like part of your product.

This is most valuable for teams shipping data-heavy products where users increasingly expect a chat interface, but only trust it when answers are grounded in real queries against production data. For engineers, the payoff is leverage: users can explore and answer routine “what’s going on?” questions without tapping you (or your data team) on the shoulder, and when something looks off you can trace the underlying tool calls and SQL instead of debugging answers generated by a black box.

What this guide is

This is an execution guide for shipping a data-connected chat feature with MooseStack and a Next.js reference client. It helps you make the key upfront decisions, get to a working baseline quickly (via the tutorial), then iterate toward production quality by improving context, adding sources, and embedding chat into your app experience (not leaving it as a demo UI).

What this guide is not

  • A generic chatbot, a support bot, or a RAG recipe; nor a guide to fine-tuning your chat or creating agents for common user questions.
  • A one-size-fits-all architecture. This is a set of patterns you adapt to your constraints.
  • A promise that “LLM answers are correct by default.” You will design for verification, observability, and guardrails.

Implementation decisions

The decisions below define the shape of your chat system: what data the agent can access, how fast queries will run, what context it has to generate correct results, which tools it can call, which model provider you depend on, and how auth and access controls work as you move from internal to customer-facing use. This guide defaults to a simple, opinionated template application (internal-first deployment, ClickHouse with rich metadata, SQL-backed tools via MCP, Claude as the model) and shows where to extend or harden it as requirements grow.

Data scope, access and performance

What data is chat allowed to access?

  • Scope both for sensitivity and to reduce the surface area the chat agent needs to traverse.

What new sources need to be ingested into ClickHouse to support those questions?

  • Batch sources: S3/Parquet, warehouse exports, periodic snapshots. Ingest with direct insert.
  • Operational sources: event streams, OLTP replicas, CRM/billing syncs. Ingest with workflows, streams or APIs.

For sources already in ClickHouse / MooseStack, is their metadata sufficiently rich to be usable by the LLM (e.g. table- and column-level comments)?

What data must be optimized to meet latency targets?

  • Materialized views for hot rollups (time buckets, top-N, common group-bys).
  • Data modelling that supports performance choices, e.g. denormalizing where it matters.
  • Guardrails: timeouts, row limits, query complexity caps.
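As a sketch, those guardrails can be applied per query with standard ClickHouse settings (the table, columns, and limits below are illustrative):

```sql
-- Illustrative per-query guardrails using standard ClickHouse settings
SELECT user_id, count() AS events
FROM events
GROUP BY user_id
ORDER BY events DESC
LIMIT 100
SETTINGS
    max_execution_time = 10,   -- abort queries running longer than 10 seconds
    max_result_rows = 10000,   -- cap the rows returned to the model
    readonly = 1               -- disallow writes and DDL from the chat path
```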

Context and metadata

How can you enrich the data to reduce errors? How does this stay correct as schemas evolve?

  • ClickHouse metadata first: implement table/column comments with business meaning, units, join keys, caveats through MooseStack.
  • Context as code, in SDLC through MooseStack.
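One low-effort way to carry that context, assuming plain ClickHouse tables, is the standard ClickHouse comment DDL (table and column names below are illustrative). The same comments can live in your MooseStack models so they stay versioned with the schema:

```sql
-- Illustrative: business meaning attached directly to ClickHouse metadata,
-- where schema-inspection tools can surface it to the LLM
ALTER TABLE events MODIFY COMMENT 'One row per product event; source: app telemetry';
ALTER TABLE events COMMENT COLUMN user_id 'Tenant-scoped user id; join key to users.id';
ALTER TABLE events COMMENT COLUMN revenue_usd 'Gross revenue in USD; excludes refunds';
```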

Tools beyond “data and metadata access”

What capabilities does the model need besides reading metadata and querying data?

  • Metric definition lookup (what is “active user,” “revenue,” “churn”) (e.g. a metrics lookup tool)
  • Freshness and lineage (when updated, what produced it, upstream deps) (e.g. a Moose InfraMap tool)
  • Search over internal glossary/definitions (e.g. RAG).

Model provider

Question: Which provider is the default for v1?

  • Anthropic Claude (recommended default for simplicity and reliability).
  • OpenRouter (builder choice for routing, flexibility, cost control).

Deployment scope, auth, and access controls

Is v1 internal, customer-facing, or both?

  • Internal first (recommended): move fast, keep audit trail on by default.
  • Customer-facing next: treat governance as a first-class requirement.

How does identity flow to the data access layer?

  • Service token from your app backend (simple, common).
  • User-scoped token/JWT passthrough (more direct, more complex).
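For the service-token option, the app backend attaches a single backend-held token to every tools request. A minimal sketch (function and header names are assumptions, not the template's API):

```typescript
// Illustrative: build the headers the app backend sends to the
// MCP/tools endpoint. One service token, held server-side only.
function authHeaders(serviceToken: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${serviceToken}`,
  };
}
```

The trade-off against JWT passthrough: simpler to operate, but all queries run with one identity, so per-user scoping must happen elsewhere (e.g. in the tool layer or via views).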

Where do you enforce access controls?

  • Tool-layer allowlists/denylists (fastest to implement).
  • Views / scoped schemas (clean separation, good for tenants).
  • RLS hardening path (enterprise): plan it early, implement when needed.
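For the view-based option, a tenant-scoped view is a small amount of SQL (table, columns, and tenant value below are illustrative):

```sql
-- Illustrative: the chat tools query the view, never the base table
CREATE VIEW tenant_acme_events AS
SELECT event_time, name, value
FROM events
WHERE tenant_id = 'acme';
```

Grant the chat's database user access to the view only, and tenancy is enforced below the tool layer.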

1 - Tutorial: from parquet in S3 to chat application

In this tutorial, you’ll bootstrap the MooseStack MCP template, load real Parquet data from S3 into ClickHouse, validate queries via the MCP-enabled chat UI, then deploy the backend to Boreal and the Next.js app to Vercel.

If you already have a Next.js application that you want to add this chat to, see 2 - Adding chat to your existing Next.js application.

Scope

This tutorial covers the end-to-end workflow for bootstrapping the MooseStack MCP template with real Parquet data sourced from S3. It has sample sources for you to use in creating the demo application.

  • Walks through installing the Moose CLI, running the MooseStack + Next.js dev services, staging sample data under packages/moosestack-service/context/, generating the TypeScript ingest pipelines, bulk-loading local ClickHouse, and validating via the MCP-enabled chat UI.
  • Covers local testing of chat, and developing the Next.js template alongside local MooseStack.
  • Covers deploying MooseStack project to Boreal, and web-app to Vercel, as well as authentication concerns for the same.

Not in scope

  • Continuous data ingestion. This covers the initial bulk load only. If you are considering continuous ingestion, use MooseStack’s Workflows.

Setup

Install Moose CLI

bash -i <(curl -fsSL https://fiveonefour.com/install.sh) moose

Initialize your project locally, and install dependencies

moose init <project-name> typescript-mcp 	# initialize your project
cd <project-name>
 
pnpm install # install dependencies

Create and configure .env files

Create .env files

cp packages/moosestack-service/.env.{example,local}
cp packages/web-app/.env.{example,local}

Create API Key authentication tokens:

cd packages/moosestack-service
moose generate hash-token # use output for the API Key & Token below

Set environment variables for API:

  1. Set your API Key in packages/moosestack-service/.env.local to the ENV API KEY generated by moose generate hash-token
  2. Set your API Token in packages/web-app/.env.local to the Bearer Token generated by moose generate hash-token

If you want to use the chat in the application (which will have MCP-based access to the local ClickHouse managed by MooseDev), make sure to set your Anthropic key as an environment variable:

echo "ANTHROPIC_API_KEY=your_api_key_here" >> packages/web-app/.env.local

If you don’t have an Anthropic API key, you can get one here: Claude Console

Local Development

Your demo app is now set up, although it is not yet populated with data. Get it running locally.

Make sure Docker Desktop is running:

docker version

Run the whole stack

From the root of your project, run:

pnpm dev						# run the MooseStack and web app services

Alternatively, run MooseStack and your frontend separately

From the root of your project, run:

pnpm dev:moose    					# Start MooseStack service only
pnpm dev:web      					# Start web app only

You can also run moose dev from your moose project directory /packages/moosestack-service.

Your web application is available at http://localhost:3000. Don’t be alarmed if it looks empty!

This is a blank canvas for you to build whatever user-facing analytics you want. The chat is already configured: click the chat icon on the bottom right to open it. Our engineer GRoy put some work into that chat panel, and you’ll find that it is resizable, has differentiated scrolling, and more. Feel free to use and modify it how you like.

Note: whilst the chat will be functional, since you’ve not added any data to your MooseStack project, the tools available in the chat won’t be able to retrieve any data. In the next section, we’ll explore and model the data.

Start up MooseDev MCP and your copilot

Once your MooseDev Server is up and running, your development MCP server is ready to be connected to your copilot. For documentation for each copilot, see docs. By way of example, here’s how you configure Claude Code:

claude mcp add --transport http moose-dev http://localhost:4000/mcp

Note, you may have to do this before you start Claude Code (or restart Claude Code once you have done this). You can validate that it is working with the /mcp slash command.

Model your data

The following steps cover how to get data from a source, model that data in MooseStack (creating the relevant ClickHouse tables and other infrastructure), and then load the data into ClickHouse (either local or Boreal-hosted).

You can model your data manually or you can generate a data model from your data. This guide will walk through the generative approach.

This guide assumes you have direct access to the files in S3.

If the files are relatively small, we recommend building up a packages/moosestack-service/context/ directory to gather context (you should gitignore your context directory), e.g. with a data/ subdirectory for samples, a rules/ subdirectory for modeling guidelines, and a README.md.

Copy data from S3

There are many ways to copy data down from S3, e.g.:

Using the S3 CLI (once you’ve authenticated):

aws s3 ls s3://source-data/ 			# list files in S3

aws s3 cp s3://source-data/ . --recursive 	# copy data from S3 to the context directory

Using moose query (this only copies one file at a time):

cd packages/moosestack-service			# navigate to your MooseStack project

moose query "SELECT * FROM s3('s3://source-data/file-name.parquet', '<Access-Key-ID>', '<Secret-Access-Key>', 'parquet') LIMIT 10;" > context/data/file-name.txt # write the rows returned by that SQL query to a .txt file

Add other context relevant for data modeling

OLAP, and especially ClickHouse, benefits from rigorous data modeling. LLMs aren’t perfect at understanding the nuances of OLAP data modeling, so we created some docs as a guide: https://github.com/514-labs/olap-agent-ref (you can clone it into the context/rules directory).

cd packages/moosestack-service/context/rules
gh repo clone 514-labs/olap-agent-ref . # the gh CLI has less trouble with nested repos
Model your data manually or with your copilot

Create:

1. A data model object defining the data model of your data to be ingested
2. An OlapTable object, declaring that table in MooseStack
3. An export in the index at the root of the MooseStack project (so that MooseOLAP will create the table)

In practice this touches a data-model.ts file and index.ts.

We suggest you do this sequentially, e.g. with the following prompt pattern:

'<project-name>/packages/moosestack-service/app/ingest/models.ts' look at this file. It does two things: declares an interface "DataEvent", and then creates an IngestPipeline that declares a table, streaming topic and ingest API for that interface. I want to do this for the data that I'm ingesting, which I've extracted samples of to this directory: '<project-name>/packages/moosestack-service/context/data'. Let's go step by step. First, let's create the DataModel interface. Refer to the data sample here: '<project-name>/packages/moosestack-service/context/data/sample1.parquet', the data dictionary here: '<project-name>/packages/moosestack-service/context/data/data_dictionary.csv', and the guidelines for good OLAP data modeling here (e.g. tight typing, use of LowCardinality, use of Default Values, etc.): '<project-name>/packages/moosestack-service/context/rules'. Don't use a key unless you think it adds value over a standard `order by` field. Then, create the OlapTable object.

Make sure to then prompt the copilot to export the object in moosestack-service/index.ts too:

'<project-name>/packages/moosestack-service/app/index.ts' please make sure the above is exported in the index
Verify the tables were created correctly

The dev server should then pick up the new table, and the copilot should be able to confirm this with the MooseDev MCP, or by being prompted to use moose query:

Ensure the table was created by using `moose query` from the moosestack service directory

Or you can do this manually:

moose query "SELECT name, engine, total_rows FROM system.tables WHERE database = currentDatabase();"

It is also good practice to double-check the generated model against the sample data (LLMs can make assumptions about data types, even when presented with sample data):

check the type limitations against the sample data in /context/data

Repeat this for each table you want to model.

Bulk add data locally

Create a SQL file to load your data from the remote source into your local ClickHouse:

-- load-data.sql
INSERT INTO `local`.PlayerActivity
SELECT * FROM s3(
    's3://source-data/file.parquet',
    '<Access-Key-ID>',
    '<Secret-Access-Key>',
    'Parquet'
);

Make sure to apply any transformations needed to conform your S3 data to the data model you’ve created. ClickHouse will handle many of these conversions automatically. Notably, though:

  • Column renames have to be done in SQL.
  • Default values in ClickHouse only apply when the column is omitted from an insert. If the column was NULL in the source, you will have to cast it in the insert.
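For example, a sketch of such a conforming insert (the column names and transforms are illustrative, not from this tutorial's dataset):

```sql
-- Illustrative: conform source columns to the target model during the bulk load
INSERT INTO `local`.PlayerActivity
SELECT
    player_uuid AS player_id,           -- renames must be done in SQL
    coalesce(score, 0) AS score,        -- source NULLs won't trigger table DEFAULTs
    toDateTime(event_ts) AS event_time  -- explicit casts where types differ
FROM s3('s3://source-data/file.parquet', '<Access-Key-ID>', '<Secret-Access-Key>', 'Parquet');
```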

Execute the SQL file to load data into your local ClickHouse:

moose query -f load-data.sql

It will return:

Node Id: moosestack-service::2dadb086-d9fb-4e42-9462-680185fac0ef
Query 0 rows

This is expected: the insert doesn’t return any rows to the query client. To validate that it worked, use this query:

moose query "SELECT COUNT(*) FROM \`local\`.tablename"

You now have a local development environment ready to test and iterate on: your local MooseStack is up and running, your local ClickHouse is populated with real data, and the front-end is ready to test.

Chat with your data

Just go to http://localhost:3000: everything should be good to go. Chat away!

(Optionally) Customize the system prompt

Edit packages/web-app/src/features/chat/system-prompt.ts to customize how the AI behaves. The default is generic; you may want to tailor it to your data and use case:

export function getAISystemPrompt(): string {
  return `You are a helpful AI assistant that can query and analyze data. When users ask questions:
1. Use get_data_catalog first to understand what tables are available
2. Use query_clickhouse to run SQL queries and get results
3. Explain results clearly and concisely

You have access to a ClickHouse database with the following data:
- [Describe your tables and what they contain]
- [Add business context and metric definitions]

Be helpful, accurate, and transparent about what queries you're running.`;
}

Create custom API endpoints, and a custom front-end

There’s a bootstrapped Next.js application, and an Express app that you can use to add to your framework.

I like to point the LLM at a ShadCN component I am interested in, the Express / MooseStack documentation, and the Next.js application folder:

I want to add a Data Table component from ShadCN (here: https://ui.shadcn.com/docs/components/data-table.md) to the web app here: <project-name>/packages/web-app. I want it to serve data from the DataModel here: <project-name>/packages/moosestack-service/app/ingest/models.ts. Please create an Express API in this directory to serve this component, here: <project-name>/packages/moosestack-service/app/apis, as documented here: https://docs.fiveonefour.com/moose/app-api-frameworks/express/llm-ts.txt

You should see your frontend application update at http://localhost:3000.

Deploy to Boreal and Vercel

We’ll prepare the application for deployment by setting up authentication, then deploy the MooseStack application to Boreal, and the web-app to Vercel. You can set up authentication and hosting to your preferences; if we haven’t covered your preferred option in our docs, reach out in our Slack community: https://slack.moosestack.com/.

Authentication was covered at the start of this guide, but that covers backend-to-frontend authentication only (you should also add authentication to restrict access to your frontend).

Deploy MooseStack to Boreal

Go to boreal.cloud, and set up an account. Link your GitHub account. Create an organization. Create a new project.

Select "Import a MooseStack Project".

Set the path to the root of the MooseStack service:

packages/moosestack-service

And set the environment variables used in the project (following the same steps defined above):

MOOSE_INGEST_API_KEY
MOOSE_CONSUMPTION_API_KEY
MOOSE_ADMIN_TOKEN
MOOSE_WEB_APP_API_KEYS

Continue, selecting Boreal default hosting (or point it at your own managed ClickHouse and Redpanda instances if you like).

Click Deploy, and you’ll see your MooseStack application being deployed.

Deploy your Next.js frontend to Vercel

Authentication

⚠️ Important: The authentication set up above only ensures your backend and frontend can communicate securely. We are going to set environment variables here that give the front end access to the data. Accordingly, please ensure that you properly add authentication to your front end. Vercel offers this natively if your deployment is a preview deployment. If you want to productionize this, you may have to implement authentication using something like NextAuth, Auth0 or Clerk.

See the Vercel docs:

• Vercel auth: https://vercel.com/docs/deployment-protection/methods-to-protect-deployments/vercel-authentication
• Third-party auth: https://vercel.com/guides/application-authentication-on-vercel

Deployment

1. From the Vercel home page, add a new project.
2. Point it at this project, and set the root directory to packages/web-app.
3. Set the following environment variables:

ANTHROPIC_API_KEY: Your Anthropic API key for chat functionality
NEXT_PUBLIC_API_URL: Your Boreal endpoint URL (e.g., `https://your-project.boreal.cloud`)
API_KEY: Use the bearer token generated earlier

You can find your Boreal endpoint URL at the top of the project overview page.

Hydrate your production deployment

Your project is now deployed. You have a Vercel-hosted frontend and a Boreal-hosted backend, with tables, APIs etc. set up for you.

Your backend, however, is still unpopulated.

This section covers how to get data into prod.

Note: it assumes a bulk ingest from a Parquet file on S3 with direct insertion through SQL, like the rest of this tutorial. If you configured a recurring workflow, that would automate data ingest (depending on the trigger). If you set up an ingestion endpoint, you may need to send data to that endpoint.

Find your Boreal connection string / database details

It is in the Database tab of your deployment overview. Make sure to select the appropriate connection string type.

Connect your SQL client, and run the following ClickHouse SQL query:

-- Bulk load Parquet data from S3 into ClickHouse
INSERT INTO `<clickhouse-database>`.`<table-name>`  -- target ClickHouse database and table
SELECT *
FROM s3(
    's3://<bucket-name>/<path-to-file>.parquet',    -- S3 bucket and path to the Parquet file
    '<aws-access-key-id>',                          -- AWS Access Key ID
    '<aws-secret-access-key>',                      -- AWS Secret Access Key
    'Parquet'                                       -- file format
);

This will again return 0 rows; this is expected. You can validate that the transfer worked correctly as follows:

SELECT COUNT(*) FROM `<clickhouse-database>`.tablename

              2 - Adding chat to your existing Next.js application

              Assumptions: You have a monorepo Your application already has a Next.js service Your application already has a MooseStack service, or you are willing to create one You are using Express for your APIs (you can use other frameworks, they just won’t be outlined in this tutorial

              Time estimate: ~1 hour
              Source template: All code snippets reference github.com/514-labs/moosestack/tree/main/templates/typescript-mcp

              Overview & Architecture

              This part of the guide will walk your through adding a chat panel embedded in your Next.js app that can query your ClickHouse data using natural language. The chat uses Claude to interpret questions and call MCP tools that execute SQL queries against your database.

              ┌─────────────────────────────────────────────────────────────────────────┐│                           Your Next.js App                              ││  ┌─────────────┐    ┌──────────────────┐    ┌────────────────────────┐ ││  │  Chat UI    │───▶│  /api/chat route │───▶│  MCP Client            │ ││  │  Components │    │  (streams resp)  │    │  (@ai-sdk/mcp)         │ ││  └─────────────┘    └──────────────────┘    └───────────┬────────────┘ │└─────────────────────────────────────────────────────────┼──────────────┘                                                          │ HTTP + Bearer Token                                                          ▼┌─────────────────────────────────────────────────────────────────────────┐│          MooseStack Service (localhost:4000 or your Boreal URL)         ││  ┌──────────────────┐    ┌─────────────────┐    ┌───────────────────┐  ││  │  /tools endpoint │───▶│  MCP Server     │───▶│  ClickHouse       │  ││  │  (Express app)   │    │  (tool handlers)│    │  (your data)      │  ││  └──────────────────┘    └─────────────────┘    └───────────────────┘  │└─────────────────────────────────────────────────────────────────────────┘

              What this will walk through building:

• Chat UI — Popout, resizable panel with message input/output and tool result rendering
• API Route (/api/chat) — Handles chat requests, creates an MCP client, streams responses
• MCP Server (/tools) — Express app exposing the query_clickhouse and get_data_catalog tools
• Authentication — Bearer token flow between the frontend and the MCP server

              Backend Setup: MooseStack MCP Server

              Choose your path:

              • Option A: You don't have a MooseStack project yet → Add the full MooseStack service
              • Option B: You already have a MooseStack project → Add just the MCP Express app

              Option A: Add the full MooseStack service

              If you don't have MooseStack in your project yet:

              Copy the MooseStack service from the template
              # From your monorepo root
              mkdir -p packages
              cd packages
               
              # Clone just the moosestack-service folder
              git clone --depth 1 --filter=blob:none --sparse \
                https://github.com/514-labs/moosestack.git temp-moose
              cd temp-moose
              git sparse-checkout set templates/typescript-mcp/packages/moosestack-service
              cp -r templates/typescript-mcp/packages/moosestack-service ../moosestack-service
              cd ..
              rm -rf temp-moose
              Add the package to your workspace

              If you don't have a pnpm-workspace.yaml at your monorepo root, create one:

packages:
  - 'packages/*'
              Install dependencies
              cd packages/moosestack-service
              pnpm install
              Configure environment variables
              cp .env.example .env.local
              Generate an API key for authentication:
              moose generate hash-token

              This outputs two values:

              • ENV API KEY → Put this in packages/moosestack-service/.env.local as MCP_API_KEY
              • Bearer Token → Save this for your frontend config (Section 2.3)

              Your .env.local should look like:

              MCP_API_KEY=<paste-the-ENV-API-KEY-here>
              Add dev scripts to your root package.json
{
  "scripts": {
    "dev": "pnpm --parallel --stream -r dev",
    "dev:moose": "pnpm --filter moosestack-service dev",
    "dev:web": "pnpm --filter web-app dev"
  },
  "pnpm": {
    "onlyBuiltDependencies": [
      "@confluentinc/kafka-javascript",
      "@514labs/kafka-javascript"
    ]
  }
}

              Continue to Frontend Setup

              Option B: Add the MCP Express app to your existing MooseStack project

              If you already have a MooseStack project, you just need to add the MCP server Express app.

              Install additional dependencies
              cd packages/your-moosestack-service
              pnpm add @modelcontextprotocol/sdk@1.24.2 @514labs/express-pbkdf2-api-key-auth@^1.0.4 express@^5.1.0
              pnpm add -D @types/express@^5.0.3
              Copy the MCP server file

              Download app/apis/mcp.ts from the template:

              # From your moosestack project root
              mkdir -p app/apis
curl -o app/apis/mcp.ts \
  https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/moosestack-service/app/apis/mcp.ts
              Export the MCP server in your index.ts

              Add this line to your app/index.ts:

              export * from "./apis/mcp";
              Set up authentication

              Add MCP_API_KEY to your .env.local:

              moose generate hash-token

              Copy the ENV API KEY output to your .env.local:

              MCP_API_KEY=<paste-the-ENV-API-KEY-here>

              Save the Bearer Token for frontend configuration.
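Conceptually, the key/token pair works like a password hash: the server stores only a PBKDF2-derived hash (MCP_API_KEY), and the client presents the raw Bearer Token, which the server re-hashes and compares. The sketch below is illustrative only — the actual verification is handled by @514labs/express-pbkdf2-api-key-auth, and the salt handling and parameters (ITERATIONS, KEY_LEN, DIGEST) here are assumptions, not that package's real values:

```typescript
import { pbkdf2Sync, timingSafeEqual } from "crypto";

// Hypothetical parameters -- the real auth package defines its own.
const ITERATIONS = 100_000;
const KEY_LEN = 32;
const DIGEST = "sha256";

// At setup time: derive a hash from the raw token and store it server-side.
export function hashToken(token: string, salt: string): string {
  return pbkdf2Sync(token, salt, ITERATIONS, KEY_LEN, DIGEST).toString("hex");
}

// At request time: re-derive from the presented bearer token and compare
// in constant time, so the raw token never needs to be stored.
export function verifyToken(presented: string, salt: string, storedHash: string): boolean {
  const candidate = pbkdf2Sync(presented, salt, ITERATIONS, KEY_LEN, DIGEST);
  const stored = Buffer.from(storedHash, "hex");
  return stored.length === candidate.length && timingSafeEqual(candidate, stored);
}
```

This is why the two outputs of `moose generate hash-token` go to different places: the hash belongs on the server, the raw token on the client.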

              Frontend Setup

              Frontend Setup: Dependencies & Environment

              Install required packages

              Note: If you don't have shadcn/ui set up, you'll also need the UI components. See ui.shadcn.com/docs/installation.

              cd packages/web-app  # or your Next.js app directory
              pnpm add ai@5.0.106 @ai-sdk/anthropic@2.0.53 @ai-sdk/mcp@0.0.7 @ai-sdk/react@2.0.106
              pnpm add react-resizable-panels@^3.0.6 react-markdown@^10.1.0 remark-gfm@^4.0.1
              pnpm add lucide-react@^0.552.0
              Configure environment variables

              Create or update your .env.local:

              cp .env.example .env.local  # if you have an example file

              Add these variables:

ANTHROPIC_API_KEY=<your-anthropic-api-key>
MCP_API_TOKEN=<the-bearer-token-from-moose-generate-hash-token>
MCP_SERVER_URL=http://localhost:4000
• ANTHROPIC_API_KEY — Your Claude API key (from console.anthropic.com)
• MCP_API_TOKEN — Bearer token for MCP auth (output from moose generate hash-token)
• MCP_SERVER_URL — MooseStack server URL (http://localhost:4000 for local dev)
              Create environment variable helper

              Create src/env-vars.ts:

export function getMcpServerUrl(): string {
  const value = process.env.MCP_SERVER_URL;
  if (!value) {
    throw new Error("MCP_SERVER_URL environment variable is not set");
  }
  return value;
}

export function getAnthropicApiKey(): string {
  const value = process.env.ANTHROPIC_API_KEY;
  if (!value) {
    throw new Error("ANTHROPIC_API_KEY environment variable is not set");
  }
  return value;
}
              Create .env.development for defaults
              MCP_SERVER_URL=http://localhost:4000

              Frontend Setup: API Route

              The chat API route handles incoming messages, creates an MCP client to connect to your MooseStack server, and streams the AI response back to the frontend.

              Copy the chat feature files
              # From your Next.js app root (e.g., packages/web-app)
              mkdir -p src/features/chat
               
              # Download the core chat logic files
              curl -o src/features/chat/agent-config.ts \
                https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/features/chat/agent-config.ts
               
              curl -o src/features/chat/get-agent-response.ts \
                https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/features/chat/get-agent-response.ts
               
              curl -o src/features/chat/system-prompt.ts \
                https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/features/chat/system-prompt.ts
              Create the API route

              Create src/app/api/chat/route.ts:

import { NextRequest } from "next/server";
import { UIMessage } from "ai";
import { getAgentResponse } from "@/features/chat/get-agent-response";

interface ChatBody {
  messages: UIMessage[];
}

export async function POST(request: NextRequest) {
  try {
    const body: ChatBody = await request.json();
    const { messages } = body;

    if (!messages || !Array.isArray(messages)) {
      return new Response(
        JSON.stringify({ error: "Invalid request body", details: "messages must be an array" }),
        { status: 400, headers: { "Content-Type": "application/json" } },
      );
    }

    return await getAgentResponse(messages);
  } catch (error) {
    console.error("Chat error:", error);
    return new Response(
      JSON.stringify({
        error: "Internal server error",
        details: error instanceof Error ? error.message : "Unknown error",
      }),
      { status: 500, headers: { "Content-Type": "application/json" } },
    );
  }
}
              (Optional) Add status endpoint

              Create src/app/api/chat/status/route.ts to check if the Anthropic key is configured:

import { NextRequest } from "next/server";

export async function GET(request: NextRequest) {
  const hasAnthropicKey = !!process.env.ANTHROPIC_API_KEY;

  return new Response(
    JSON.stringify({
      anthropicKeyAvailable: hasAnthropicKey,
      status: hasAnthropicKey ? "ready" : "missing_key",
    }),
    { status: 200, headers: { "Content-Type": "application/json" } },
  );
}
(Optional) Customize the system prompt

              Edit src/features/chat/system-prompt.ts to customize how the AI behaves. The default is generic; you'll want to tailor it to your data and use case:

export function getAISystemPrompt(): string {
  return `You are a helpful AI assistant that can query and analyze data. When users ask questions:
1. Use get_data_catalog first to understand what tables are available
2. Use query_clickhouse to run SQL queries and get results
3. Explain results clearly and concisely

You have access to a ClickHouse database with the following data:
- [Describe your tables and what they contain]
- [Add business context and metric definitions]

Be helpful, accurate, and transparent about what queries you're running.`;
}
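As an example of tailoring, here is a hypothetical prompt for the aircraft tracking data shown in the appendix. The table description and counting rule are illustrative assumptions — substitute your own tables and metric definitions:

```typescript
// Hypothetical tailored prompt -- the data context lines are examples only.
export function getAISystemPrompt(): string {
  return `You are an analytics assistant for an aircraft tracking product. When users ask questions:
1. Use get_data_catalog first to understand what tables are available
2. Use query_clickhouse to run SQL queries and get results
3. Explain results clearly and concisely

Data context:
- AircraftTrackingData: aircraft position reports; "hex" is the unique aircraft identifier
- "flight" is the callsign; count distinct "hex" values when counting aircraft

Be helpful, accurate, and transparent about what queries you're running.`;
}
```

Concrete table names and metric definitions in the prompt tend to matter more than tone instructions: they are what keeps the model from guessing at schemas.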

              Frontend Setup: UI Components

              The chat UI consists of several components. Rather than copying each individually, grab the entire chat feature folder and the required layout components.

              Copy all chat UI components
              # From your Next.js app root
              cd src/features/chat
               
              # Download all chat components
              for file in chat-ui.tsx chat-input.tsx chat-output-area.tsx chat-button.tsx \
                          tool-invocation.tsx clickhouse-tool-invocation.tsx tool-data-catalog.tsx \
                          text-formatter.tsx reasoning-section.tsx source-section.tsx \
                          code-block.tsx suggested-prompt.tsx use-anthropic-status.ts; do
                curl -O "https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/features/chat/$file"
              done
              Copy the layout components
              mkdir -p src/components/layout
               
              curl -o src/components/layout/resizable-chat-layout.tsx \
                https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/components/layout/resizable-chat-layout.tsx
               
              curl -o src/components/layout/chat-layout-wrapper.tsx \
                https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/components/layout/chat-layout-wrapper.tsx
               
              curl -o src/components/layout/content-header.tsx \
                https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/components/layout/content-header.tsx
              Ensure you have required shadcn/ui components

              The chat UI uses these shadcn/ui components. If you don't have them, install via:

              npx shadcn@latest add button textarea scroll-area resizable collapsible badge
              Ensure you have the cn utility

              If you don't have src/lib/utils.ts:

import { type ClassValue, clsx } from "clsx";
import { twMerge } from "tailwind-merge";

export function cn(...inputs: ClassValue[]) {
  return twMerge(clsx(inputs));
}

              Integration & Testing

              Update your root layout

              Wrap your app with the ChatLayoutWrapper. Edit src/app/layout.tsx:

import { ChatLayoutWrapper } from "@/components/layout/chat-layout-wrapper";

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en" suppressHydrationWarning>
      <body>
        {/* If you have ThemeProvider, wrap ChatLayoutWrapper inside it */}
        <ChatLayoutWrapper>{children}</ChatLayoutWrapper>
      </body>
    </html>
  );
}

              Note: If you're using next-themes or another theme provider, wrap ChatLayoutWrapper inside it, not the other way around.

              Start both services

              # From monorepo root
              pnpm dev

              Or run them separately:

              # Terminal 1: MooseStack
              pnpm dev:moose
               
              # Terminal 2: Next.js
              pnpm dev:web

              Verify the MCP server is running

              curl http://localhost:4000/tools \
                -H "Content-Type: application/json" \
                -H "Authorization: Bearer <your-mcp-api-token>" \
                -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

              You should see a response listing query_clickhouse and get_data_catalog tools.
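The exact payload depends on your server version, but a successful response follows the JSON-RPC tools/list shape, roughly like this (descriptions and schemas elided):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      { "name": "query_clickhouse", "description": "...", "inputSchema": { "type": "object" } },
      { "name": "get_data_catalog", "description": "...", "inputSchema": { "type": "object" } }
    ]
  }
}
```

If you instead get a 401, jump to the Troubleshooting section below.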

              Test the chat

              1. Open your app at http://localhost:3000
              2. Click the chat button (floating button in bottom-right)
              3. Ask a question like "What tables are available?"
              4. The AI should call get_data_catalog and show you the results

              Troubleshooting

              "MCP_SERVER_URL environment variable is not set"

              Make sure you have:

              • .env.local with MCP_SERVER_URL=http://localhost:4000
              • Or .env.development as a fallback
              "Unauthorized" or 401 errors from MCP server
              1. Verify MCP_API_KEY in moosestack-service/.env.local matches the hash from moose generate hash-token
              2. Verify MCP_API_TOKEN in web-app/.env.local is the Bearer Token (not the hash)
              3. Check the token is being sent: look for Authorization: Bearer ... in network requests
              CORS errors

              The architecture avoids CORS issues because:

              • The Next.js API route (/api/chat) is same-origin with your frontend
              • The API route makes server-side requests to MooseStack with the Bearer token

              If you're still seeing CORS errors, ensure your chat UI is calling /api/chat, not the MooseStack server directly.
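The token forwarding amounts to the API route attaching an Authorization header to its server-side request to MooseStack. A minimal sketch of the idea (the helper name is illustrative — the template's actual MCP client setup lives in get-agent-response.ts):

```typescript
// Illustrative only: how a server-side route attaches the bearer token
// when calling the MooseStack /tools endpoint. The browser never sees
// this token; it stays on the server.
export function buildMcpHeaders(token: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${token}`,
  };
}
```

Because this header is added server-side, the browser only ever talks to the same-origin /api/chat route, which is what keeps CORS out of the picture.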

              Chat panel doesn't appear
              1. Check browser console for errors
              2. Verify ChatLayoutWrapper is wrapping your app in layout.tsx
              3. Verify all shadcn/ui components are installed
              "ANTHROPIC_API_KEY environment variable is not set"
              1. Add your key to .env.local
              2. Restart the Next.js dev server (env vars only load on startup)
              Tool calls fail with ClickHouse errors
              1. Make sure MooseStack dev server is running (pnpm dev:moose)
              2. Check that ClickHouse has tables: moose ls should show your data models
              3. Verify you've run data ingestion or the generator workflow

              Deployment

              See Deploy to Boreal and Vercel.

              Appendix: Data context as code

ClickHouse allows you to embed table- and column-level metadata as comments.

With MooseOlap, you can set these table- and column-level descriptions in your data models. For example:

export interface AircraftTrackingData {
  /** Unique aircraft identifier */
  hex: string;

  // no comment for this column
  transponder_type: string;

  /** callsign, the flight name or aircraft registration as 8 chars */
  flight: string;

  /** aircraft registration pulled from database */
  r: string;

  /** unique aircraft identifier */
  aircraft_type?: string;

  /** bitfield for certain database flags */
  dbFlags: number;
}

The /** JSDoc */ comments at the column level are embedded in your ClickHouse database as column comments. This additional context is available to your chat and retrievable with the same tool calls that retrieve data.
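One way to inspect these comments (or let the chat retrieve them) is ClickHouse's system.columns system table. A sketch of the introspection query, wrapped as a helper whose SQL you could pass to query_clickhouse — the table name you pass in is up to you:

```typescript
// Builds a ClickHouse introspection query that returns each column's
// name, type, and embedded comment. system.columns is a standard
// ClickHouse system table.
export function columnCommentsQuery(table: string): string {
  return `SELECT name, type, comment
FROM system.columns
WHERE table = '${table}'
ORDER BY position`;
}
```

In a real application you would parameterize or escape the table name rather than interpolating it directly; the string version here just shows the shape of the query.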