This guide will help you decide whether a data-connected chat feature is worth building, and then walk you through building one with MooseStack.
Navigation tip: If you're evaluating whether this is worth doing, read the Overview section below. If you're building, skip to the Builder Guide section. If you want to jump straight to the implementation, see the Tutorial section.
“Chat in your app” is a conversational interface embedded directly into your product that can answer questions by querying your actual data. Users ask in natural language, and the system responds with results grounded in live queries and the context of your schema, metrics, and permissions.
This works because modern LLMs can call tools: they can inspect table metadata and run structured queries through a controlled interface (in this guide, an MCP server over ClickHouse), then explain the output in plain language.
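To make "call tools" concrete, here is an illustrative sketch of the tool metadata an MCP server advertises to the model. The `ToolDescriptor` shape mirrors the MCP tools/list response; the tool name and schema below are assumptions modeled on the template's `query_clickhouse` tool, not its exact definition.

```typescript
// Illustrative MCP-style tool descriptor (an assumption, not the template's
// exact code): the model reads this metadata, decides when to call the tool,
// and supplies arguments matching inputSchema.
interface ToolDescriptor {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, unknown>;
    required?: string[];
  };
}

const queryTool: ToolDescriptor = {
  name: "query_clickhouse",
  description:
    "Run a read-only SQL query against ClickHouse and return rows as JSON.",
  inputSchema: {
    type: "object",
    properties: {
      sql: { type: "string", description: "SELECT statement to execute" },
    },
    required: ["sql"],
  },
};
```

The server executes the call on the model's behalf and returns the result, which the model then explains in plain language.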
What we don’t mean: a generic chatbot that summarizes docs, a RAG-style assistant fed with your knowledge base, or a support bot optimized for ticket deflection. We mean chat that can actually access and query your data: it can inspect schemas, run SQL, compute aggregates, and explain real results inside the workflows your app already supports. That data access is the focus of this guide.
A data-connected chat interface makes analytics more available and more usable: users ask questions directly, and the system answers by querying your data rather than forcing dashboard navigation.
Data-connected chat works best when users are trying to answer concrete questions from live, structured data, especially when the questions change from moment to moment and you cannot realistically pre-build every dashboard and drill-down. In practice, the highest-leverage use cases cluster into three buckets.
Internal analytics and business intelligence. Chat is a fast layer over your existing metrics and dashboards. Executives can ask follow-ups like “what drove the change?” or “is this concentrated in one region?” without waiting on an analyst. Business users can generate ad-hoc cuts of the data (by segment, time window, plan tier) without learning SQL or navigating a maze of filters. Analysts benefit too: chat is an efficient starting point for exploration when they are first familiarizing themselves with a dataset, testing hypotheses, or quickly validating whether a deeper analysis is worth doing.
Customer-facing analytics products. If your product already has analytics value, chat can make that value feel immediate. SaaS customers can ask about usage, adoption, performance, and trends in their own workspace without needing a bespoke dashboard for every role. E-commerce operators can explore sales patterns, inventory movement, and customer behavior with natural follow-ups like “only repeat customers” or “compare to last month.” In financial and fintech contexts, users can explore transaction history, spending categories, anomalies, and portfolio performance, with the system doing the heavy lifting of turning intent into queries and summaries. The common thread: customers get a flexible interface that adapts to their questions, while you avoid building an ever-expanding backlog of one-off analytics views.
Operational workflows. Many teams have data trapped in operational systems, where answers exist but are slow to retrieve. Chat can serve as a query interface for support and ops workflows: “show me this customer’s recent orders,” “what changed after the last deployment,” or “which shipments are delayed and why.” DevOps and platform teams can use chat for chatops-style exploration of metrics and performance data. Supply chain and inventory teams can query stock levels, supplier status, and fulfillment timelines without jumping between tools. In these contexts, the goal is not “support chat,” but rapid, auditable data lookup and aggregation: the system should be able to show what it queried and why, so operators can trust and validate the result.
Across all of these, the pattern is the same: chat is most valuable when it is grounded in the systems of record, fast enough to support iterative exploration, and integrated into the workflows where decisions actually get made.
Getting to “working” is relatively quick because the core plumbing is now mostly standardized: an LLM that can call tools, an MCP server that exposes your database safely, and an OLAP backend that can answer analytical questions fast. MooseStack packages those pieces so you can stand up a baseline end-to-end without spending weeks on glue code, and the Next.js UI in this guide is just a reference client (the backend works with any frontend).
The rest of the effort is still worth it because the quality of a data chat experience is determined by product-specific details: how your data is modeled, how business terms map to tables and metrics, what permissions and tenancy rules apply, and how results should be presented inside your existing workflows. Once the baseline is live, you can iterate efficiently on accuracy, latency, context, and UX to capture the remaining value without rebuilding the foundation.
Chat is worth investing in once your product has meaningful data gravity: users regularly ask questions that require slicing, aggregation, or comparison, and the existing UI forces them through dashboards, filters, or support channels to get answers. If you already maintain analytics views or a metrics layer, but users still need help finding or interpreting results, chat can reduce that friction immediately.
It is especially compelling when your team is fielding repeat questions, building one-off dashboards for narrow use cases, or struggling to productize a long tail of analytical queries. In those cases, chat becomes a flexible interface that scales with your schema, rather than another surface area you have to constantly redesign.
This guide is about chat that queries real production data in your OLAP system, not a generic assistant or RAG over documentation. The core loop is: the user asks a question, the model calls tools, those tools run structured queries against an OLAP store (ClickHouse), and the model explains the results in product language.
If you use MooseStack (MCP server + ClickHouse integration) you can get a baseline working quickly. The work that remains, and where quality comes from, is your product-specific layer: modeling tables so the model can query them reliably, providing the right schema and metric context, enforcing tenancy and permissions, and integrating chat into your existing UI patterns so it feels like part of your product.
This is most valuable for teams shipping data-heavy products where users increasingly expect a chat interface, but only trust it when answers are grounded in real queries against production data. For engineers, the payoff is leverage: users can explore and answer routine “what’s going on?” questions without tapping you (or your data team) on the shoulder, and when something looks off you can trace the underlying tool calls and SQL instead of debugging answers generated by a black box.
This is an execution guide for shipping a data-connected chat feature with MooseStack and a Next.js reference client. It helps you make the key upfront decisions, get to a working baseline quickly (via the tutorial), then iterate toward production quality by improving context, adding sources, and embedding chat into your app experience (not leaving it as a demo UI).
The decisions below define the shape of your chat system: what data the agent can access, how fast queries will run, what context it has to generate correct results, which tools it can call, which model provider you depend on, and how auth and access controls work as you move from internal to customer-facing use. This guide defaults to a simple, opinionated template application (internal-first deployment, ClickHouse with rich metadata, SQL-backed tools via MCP, Claude as the model) and shows where to extend or harden it as requirements grow.
What data is chat allowed to access?
What new sources need to be ingested into ClickHouse to support those questions?
For sources already in ClickHouse / MooseStack, is their metadata sufficiently rich for the LLM to use (e.g. table- and column-level comments)?
What data must be optimized to meet latency targets?
How can you enrich the data to reduce errors? How does this stay correct as schemas evolve?
What capabilities does the model need besides reading metadata and querying data?
Which provider is the default for v1?
Is v1 internal, customer-facing, or both?
How does identity flow to the data access layer?
Where do you enforce access controls?
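On that last question, one common answer is to enforce access controls inside the tool handler rather than trusting the model to write tenant-safe SQL. The following is a hedged sketch of that pattern, not the template's implementation; the `tenant_id` column name and the query-wrapping strategy are illustrative assumptions.

```typescript
// Illustrative guard for a query tool handler (assumed names, not the
// template's code): reject non-SELECT statements, then wrap the model's
// query so the tenancy predicate is applied outside it and cannot be
// forgotten or removed by the model. Note this simple wrapping requires
// the inner query to project the tenant column.
function guardQuery(
  sql: string,
  tenantId: string,
  tenantColumn = "tenant_id",
): string {
  const trimmed = sql.trim().replace(/;+\s*$/, "");
  // Only a single SELECT statement is allowed.
  if (!/^select\b/i.test(trimmed) || trimmed.includes(";")) {
    throw new Error("Only single SELECT statements are allowed");
  }
  // Naive quote escaping for the illustration; use parameter binding in practice.
  const safeTenant = tenantId.replace(/'/g, "''");
  return `SELECT * FROM (${trimmed}) AS q WHERE q.${tenantColumn} = '${safeTenant}'`;
}
```

The design choice here is that identity flows from your app's session to the tool handler, so the LLM never sees (or chooses) the tenant scope.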
In this tutorial, you’ll bootstrap the MooseStack MCP template, load real Parquet data from S3 into ClickHouse, validate queries via the MCP-enabled chat UI, then deploy the backend to Boreal and the Next.js app to Vercel.
| If you already have a Next.js app that you want to add this chat to, see 2 - Adding chat to your existing Next.js application. |
|---|
This tutorial covers the end-to-end workflow for bootstrapping the MooseStack MCP template with real Parquet data sourced from S3. It has sample sources for you to use in creating the demo application.
It walks through gathering context in `packages/moosestack-service/context/`, generating the TypeScript ingest pipelines, bulk-loading local ClickHouse, and validating via the MCP-enabled chat UI.

Install Moose and initialize your project:

```shell
bash -i <(curl -fsSL https://fiveonefour.com/install.sh) moose
moose init <project-name> typescript-mcp # initialize your project
```
```shell
cd <project-name>
pnpm install # install dependencies
```

Create `.env` files:

```shell
cp packages/moosestack-service/.env.{example,local}
cp packages/web-app/.env.{example,local}
```

Create API Key authentication tokens:
```shell
cd packages/moosestack-service
moose generate hash-token # use output for the API Key & Token below
```

Set environment variables for the API:

- Set the API key in `packages/moosestack-service/.env.local` to the ENV API KEY generated by `moose generate hash-token`
- Set the token in `packages/web-app/.env.local` to the Bearer Token generated by `moose generate hash-token`

If you want to use the chat in the application (which will have MCP-based access to the local ClickHouse managed by MooseDev), make sure to set your Anthropic key as an environment variable:

```shell
echo "ANTHROPIC_API_KEY=your_api_key_here" >> packages/web-app/.env.local
```

If you don’t have an Anthropic API key, you can get one here: Claude Console
Your demo app is now set up, although it is not yet populated with data. Get it running locally.
Make sure Docker is running:

```shell
docker version
```

From the root of your project, run:

```shell
pnpm dev # run the MooseStack and web app services
```

Alternatively, run MooseStack and your frontend separately. From the root of your project, run:

```shell
pnpm dev:moose # Start MooseStack service only
pnpm dev:web # Start web app only
```

You can also run `moose dev` from your moose project directory, `packages/moosestack-service`.
Your web application is available at http://localhost:3000. Don’t be alarmed!

This is a blank canvas for you to build whatever user-facing analytics you want. The chat is already configured: click the chat icon in the bottom right to open it. Our engineer GRoy put some work into that chat panel; you’ll find that it is resizable, has differentiated scrolling, and more. Feel free to use and modify it however you like.
Note: whilst the chat will be functional, the tools available to it won’t be able to retrieve any data, since you haven’t yet added data to your MooseStack project. In the next section, we’ll explore and model the data.
Once your MooseDev server is up and running, your development MCP server is ready to be connected to your copilot. For documentation on each copilot, see the docs. By way of example, here’s how you configure Claude Code:
```shell
claude mcp add --transport http moose-dev http://localhost:4000/mcp
```

Note: you may have to do this before you start Claude Code (or restart Claude Code once you have done this). You can validate that it is working with the /mcp slash command.
The following steps cover how to get data from a source, model that data in MooseStack (creating the relevant ClickHouse tables and other infrastructure), and then load the data into ClickHouse (either local or Boreal-hosted).
You can model your data manually or you can generate a data model from your data. This guide will walk through the generative approach.
This guide assumes you have direct access to the files in S3.
If the files are relatively small, we recommend building up a `packages/moosestack-service/context/` directory to gather context (you should gitignore your context directory).

There are many ways to copy data down from S3. Using the S3 CLI (once you’ve authenticated):

```shell
aws s3 ls s3://source-data/ # list files in S3
aws s3 cp s3://source-data/ . --recursive # copy data from S3 to context directory
```

Using `moose query` (this only copies one file at a time):

```shell
cd packages/moosestack-service # navigate to your MooseStack project
moose query "SELECT * FROM s3('s3://source-data/file-name.parquet', '<Access-Key-ID>', '<Secret-Access-Key>', 'parquet') LIMIT 10;" > context/data/file-name.txt # write the query result to the .txt file
```

OLAP, and especially ClickHouse, benefits from rigorous data modeling. LLMs aren’t perfect at understanding the nuances of OLAP data modeling, so we created some docs as a guide: https://github.com/514-labs/olap-agent-ref (you can clone it into the context/rules directory).
```shell
cd packages/moosestack-service/context/rules
gh repo clone 514-labs/olap-agent-ref . # the gh CLI has less trouble with nested repos
```

Next, create your data model.
We suggest you do this sequentially, e.g. with the following prompt pattern:
```
'<project-name>/packages/moosestack-service/app/ingest/models.ts' look at this file. It does two things: declares an interface "DataEvent", and then creates an IngestPipeline that declares a table, streaming topic and ingest API for that interface. I want to do this for the data that I'm ingesting, that I've extracted samples of to this directory: '<project-name>/packages/moosestack-service/context/data'. Let's go step by step. First, let's create the DataModel interface. Refer to the data sample here: '<project-name>/packages/moosestack-service/context/data/sample1.parquet', and the data dictionary here: '<project-name>/packages/moosestack-service/context/data/data_dictionary.csv' and guidelines for good OLAP data modeling here (e.g. tight typing, use of LowCardinality, use of Default Values, etc) '<project-name>/packages/moosestack-service/context/rules'. Don't use a key unless you think it adds value over a standard `order by` field. Then, create the OlapTable object.
```

Make sure to then prompt the copilot to export the object from `moosestack-service/index.ts` too:

```
'<project-name>/packages/moosestack-service/app/index.ts' please make sure the above is exported in the index
```

The dev server should then pick up the new table, and the copilot should be able to confirm this with the MooseDev MCP, or by being prompted to use `moose query`:
```
Ensure the table was created by using `moose query` from the moosestack service directory
```

Or you can do this manually:

```shell
moose query "SELECT name, engine, total_rows FROM system.tables WHERE database = currentDatabase();"
```

It is also good practice to double-check the generated model against the sample data (LLMs can make assumptions about data types, even when presented with sample data):

```
check the type limitations against the sample data in /context/data
```

Repeat this for each table you want to model.
Create a SQL file to load your data from your remote source into your local ClickHouse:

```sql
-- load-data.sql
INSERT INTO `local`.PlayerActivity
SELECT *
FROM s3(
  's3://source-data/file.parquet',
  '<Access-Key-ID>',
  '<Secret-Access-Key>',
  'Parquet'
);
```

Make sure to properly apply any transformations to conform your S3 data to the data model you’ve created. ClickHouse will do many of these transformations naturally.

Run the file:

```shell
moose query -f load-data.sql
```

It will return:

```
Node Id: moosestack-service::2dadb086-d9fb-4e42-9462-680185fac0ef Query 0 rows
```

This is expected: the query doesn’t return any rows of data to the query client. To validate that it worked, use this query:

```shell
moose query "SELECT COUNT(*) FROM \`local\`.tablename"
```

You now have a local development environment ready to test and iterate on: your local MooseStack is up and running, local ClickHouse is populated with real data, and the frontend is ready to test.
Just go to http://localhost:3000: everything should be good to go. Chat away!
Edit `packages/web-app/src/features/chat/system-prompt.ts` to customize how the AI behaves. The default is generic; you may want to tailor it to your data and use case:

```typescript
export function getAISystemPrompt(): string {
  return `You are a helpful AI assistant that can query and analyze data.

When users ask questions:
1. Use get_data_catalog first to understand what tables are available
2. Use query_clickhouse to run SQL queries and get results
3. Explain results clearly and concisely

You have access to a ClickHouse database with the following data:
- [Describe your tables and what they contain]
- [Add business context and metric definitions]

Be helpful, accurate, and transparent about what queries you're running.`;
}
```

There’s a bootstrapped Next.js application, and an Express app that you can use to add APIs to your framework.
I like to point the LLM at a ShadCN component I am interested in, the Express / MooseStack documentation, and the Next.js application folder:

```
I want to add a Data Table component from ShadCN (here: https://ui.shadcn.com/docs/components/data-table.md) to the web app here: <project-name>/packages/web-app. I want it to serve data from DataModel here: <project-name>/packages/moosestack-service/app/ingest/models.ts. Please create an express api in this directory to serve this component, here: <project-name>/packages/moosestack-service/app/apis, as documented here: https://docs.fiveonefour.com/moose/app-api-frameworks/express/llm-ts.txt
```

You should see your frontend application update at http://localhost:3000.
We’ll prepare the application for deployment by setting up authentication, then deploy the MooseStack application to Boreal and the web app to Vercel. You can set up authentication and hosting to your preferences; if we haven’t covered your preferred option in our docs, reach out in our Slack community: https://slack.moosestack.com/.
Authentication was covered at the start of this guide, but that covers backend-to-frontend authentication only (you should also add authentication to restrict access to your frontend).
Go to boreal.cloud, and set up an account. Link your Github account. Create an organization. Create a new project:
Select import a MooseStack Project:
Set the path to the root of the MooseStack service:

```
packages/moosestack-service
```

And set the Environment Variables used in the project (following the same steps defined above):

- MOOSE_INGEST_API_KEY
- MOOSE_CONSUMPTION_API_KEY
- MOOSE_ADMIN_TOKEN
- MOOSE_WEB_APP_API_KEYS

Continue, selecting Boreal default hosting (or point it at your own managed ClickHouse and Redpanda instances if you like).
Click Deploy, and you’ll see your MooseStack application being deployed.
⚠️ Important: The authentication set up above just ensures your backend and frontend can communicate securely. We are going to set environment variables here that give the frontend access to the data. Accordingly, please ensure that you are properly adding authentication to your frontend. Vercel offers this natively if your deployment is a preview deployment. If you want to productionize this, you may have to implement authentication using something like NextAuth, Auth0 or Clerk.
See Vercel docs
Set these environment variables:

- `ANTHROPIC_API_KEY`: Your Anthropic API key for chat functionality
- `NEXT_PUBLIC_API_URL`: Your Boreal endpoint URL (e.g., `https://your-project.boreal.cloud`)
- `API_KEY`: Use the bearer token generated earlier

You can find your Boreal endpoint URL at the top of the project overview page:
Your project is now deployed. You have a Vercel hosted frontend. You have a Boreal hosted backend, with tables, APIs etc. set up for you.
Your backend, however, is still unpopulated.
This section will cover how to get data into prod.
Note: it assumes a bulk ingest from a Parquet file on S3 with direct insertion through SQL, like the rest of this tutorial. If you configured a recurring workflow, data ingest would be automated (depending on the trigger). If you set up an ingestion endpoint, you may need to send data to that endpoint instead.
It is in the Database tab of your deployment overview:
Make sure to select the appropriate connection string type:
```sql
-- Bulk load Parquet data from S3 into ClickHouse
INSERT INTO `<clickhouse-database>`.`<table-name>` -- Target ClickHouse database and table
SELECT *
FROM s3(
  's3://<bucket-name>/<path-to-file>.parquet', -- S3 bucket and path to the Parquet file
  '<aws-access-key-id>',                       -- AWS Access Key ID
  '<aws-secret-access-key>',                   -- AWS Secret Access Key
  'Parquet'                                    -- File format
);
```

This will again return 0 rows. This is expected. You can validate that the transfer worked correctly as follows:

```sql
SELECT COUNT(*) FROM `<clickhouse-database>`.tablename
```

| Assumptions: You have a monorepo. Your application already has a Next.js service. Your application already has a MooseStack service, or you are willing to create one. You are using Express for your APIs (you can use other frameworks, they just won’t be outlined in this tutorial). |
|---|
Time estimate: ~1 hour
Source template: All code snippets reference github.com/514-labs/moosestack/tree/main/templates/typescript-mcp
This part of the guide will walk you through adding a chat panel embedded in your Next.js app that can query your ClickHouse data using natural language. The chat uses Claude to interpret questions and call MCP tools that execute SQL queries against your database.
```
┌─────────────────────────────────────────────────────────────────────────┐
│ Your Next.js App                                                        │
│ ┌─────────────┐    ┌──────────────────┐    ┌────────────────────────┐   │
│ │ Chat UI     │───▶│ /api/chat route  │───▶│ MCP Client             │   │
│ │ Components  │    │ (streams resp)   │    │ (@ai-sdk/mcp)          │   │
│ └─────────────┘    └──────────────────┘    └───────────┬────────────┘   │
└─────────────────────────────────────────────────────────┼───────────────┘
                                                          │ HTTP + Bearer Token
                                                          ▼
┌─────────────────────────────────────────────────────────────────────────┐
│ MooseStack Service (localhost:4000 or your Boreal URL)                  │
│ ┌──────────────────┐    ┌─────────────────┐    ┌───────────────────┐    │
│ │ /tools endpoint  │───▶│ MCP Server      │───▶│ ClickHouse        │    │
│ │ (Express app)    │    │ (tool handlers) │    │ (your data)       │    │
│ └──────────────────┘    └─────────────────┘    └───────────────────┘    │
└─────────────────────────────────────────────────────────────────────────┘
```

What this will walk through building:
| Chat UI | Resizable popout panel with message input/output, tool result rendering |
|---|---|
| API Route (/api/chat) | Handles chat requests, creates MCP client, streams responses |
| MCP Server (/tools) | Express app exposing query_clickhouse and get_data_catalog tools |
| Authentication | Bearer token flow between frontend and MCP server |
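To make the authentication piece concrete: the Next.js API route runs server-side and attaches the Bearer token when it talks to the MCP server, so the token never reaches the browser. The sketch below is illustrative, assuming the tutorial's `MCP_API_TOKEN` / `MCP_SERVER_URL` env var names rather than the template's exact code.

```typescript
// Illustrative sketch (assumed names, not the template's implementation):
// build the headers the server-side route attaches to MCP requests.
function buildMcpRequestInit(token: string): { headers: Record<string, string> } {
  return {
    headers: {
      "Content-Type": "application/json",
      // The token stays on the server; the browser only ever calls /api/chat.
      Authorization: `Bearer ${token}`,
    },
  };
}

// Usage inside a server-side route (sketch):
// await fetch(`${process.env.MCP_SERVER_URL}/tools`, {
//   method: "POST",
//   ...buildMcpRequestInit(process.env.MCP_API_TOKEN!),
//   body: requestBody,
// });
```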
Choose your path:
If you don't have MooseStack in your project yet:

```shell
# From your monorepo root
mkdir -p packages
cd packages

# Clone just the moosestack-service folder
git clone --depth 1 --filter=blob:none --sparse \
  https://github.com/514-labs/moosestack.git temp-moose
cd temp-moose
git sparse-checkout set templates/typescript-mcp/packages/moosestack-service
cp -r templates/typescript-mcp/packages/moosestack-service ../moosestack-service
cd ..
rm -rf temp-moose
```

If you don't have a pnpm-workspace.yaml at your monorepo root, create one:
```yaml
packages:
  - 'packages/*'
```

Install dependencies and generate tokens:

```shell
cd packages/moosestack-service
pnpm install
cp .env.example .env.local
moose generate hash-token
```

This outputs two values:

- ENV API KEY: goes in `packages/moosestack-service/.env.local` as `MCP_API_KEY`
- Bearer Token: save it for the frontend configuration

Your `.env.local` should look like:

```
MCP_API_KEY=<paste-the-ENV-API-KEY-here>
```

Add these scripts (and pnpm build settings) to your root `package.json`:

```json
{
  "scripts": {
    "dev": "pnpm --parallel --stream -r dev",
    "dev:moose": "pnpm --filter moosestack-service dev",
    "dev:web": "pnpm --filter web-app dev"
  },
  "pnpm": {
    "onlyBuiltDependencies": [
      "@confluentinc/kafka-javascript",
      "@514labs/kafka-javascript"
    ]
  }
}
```

Continue to Frontend Setup.
If you already have a MooseStack project, you just need to add the MCP server Express app.
```shell
cd packages/your-moosestack-service
pnpm add @modelcontextprotocol/sdk@1.24.2 @514labs/express-pbkdf2-api-key-auth@^1.0.4 express@^5.1.0
pnpm add -D @types/express@^5.0.3
```

Download app/apis/mcp.ts from the template:

```shell
# From your moosestack project root
mkdir -p app/apis
curl -o app/apis/mcp.ts \
  https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/moosestack-service/app/apis/mcp.ts
```

Add this line to your app/index.ts:

```typescript
export * from "./apis/mcp";
```

Add MCP_API_KEY to your .env.local:

```shell
moose generate hash-token
```

Copy the ENV API KEY output to your .env.local:

```
MCP_API_KEY=<paste-the-ENV-API-KEY-here>
```

Save the Bearer Token for frontend configuration.
Note: If you don't have shadcn/ui set up, you'll also need the UI components. See ui.shadcn.com/docs/installation.
```shell
cd packages/web-app # or your Next.js app directory
pnpm add ai@5.0.106 @ai-sdk/anthropic@2.0.53 @ai-sdk/mcp@0.0.7 @ai-sdk/react@2.0.106
pnpm add react-resizable-panels@^3.0.6 react-markdown@^10.1.0 remark-gfm@^4.0.1
pnpm add lucide-react@^0.552.0
```

Create or update your .env.local:

```shell
cp .env.example .env.local # if you have an example file
```

Add these variables:

```
ANTHROPIC_API_KEY=<your-anthropic-api-key>
MCP_API_TOKEN=<the-bearer-token-from-moose-generate-hash-token>
MCP_SERVER_URL=http://localhost:4000
```

| Variable | Description | Where to get it |
|---|---|---|
| ANTHROPIC_API_KEY | Your Claude API key | console.anthropic.com |
| MCP_API_TOKEN | Bearer token for MCP auth | Output from moose generate hash-token |
| MCP_SERVER_URL | MooseStack server URL | http://localhost:4000 for local dev |
Create src/env-vars.ts:
```typescript
export function getMcpServerUrl(): string {
  const value = process.env.MCP_SERVER_URL;
  if (!value) {
    throw new Error("MCP_SERVER_URL environment variable is not set");
  }
  return value;
}

export function getAnthropicApiKey(): string {
  const value = process.env.ANTHROPIC_API_KEY;
  if (!value) {
    throw new Error("ANTHROPIC_API_KEY environment variable is not set");
  }
  return value;
}
```

Optionally, create a `.env.development` for defaults:

```
MCP_SERVER_URL=http://localhost:4000
```

The chat API route handles incoming messages, creates an MCP client to connect to your MooseStack server, and streams the AI response back to the frontend.
```shell
# From your Next.js app root (e.g., packages/web-app)
mkdir -p src/features/chat

# Download the core chat logic files
curl -o src/features/chat/agent-config.ts \
  https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/features/chat/agent-config.ts
curl -o src/features/chat/get-agent-response.ts \
  https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/features/chat/get-agent-response.ts
curl -o src/features/chat/system-prompt.ts \
  https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/features/chat/system-prompt.ts
```

Create src/app/api/chat/route.ts:
```typescript
import { NextRequest } from "next/server";
import { UIMessage } from "ai";
import { getAgentResponse } from "@/features/chat/get-agent-response";

interface ChatBody {
  messages: UIMessage[];
}

export async function POST(request: NextRequest) {
  try {
    const body: ChatBody = await request.json();
    const { messages } = body;
    if (!messages || !Array.isArray(messages)) {
      return new Response(
        JSON.stringify({ error: "Invalid request body", details: "messages must be an array" }),
        { status: 400, headers: { "Content-Type": "application/json" } },
      );
    }
    return await getAgentResponse(messages);
  } catch (error) {
    console.error("Chat error:", error);
    return new Response(
      JSON.stringify({
        error: "Internal server error",
        details: error instanceof Error ? error.message : "Unknown error",
      }),
      { status: 500, headers: { "Content-Type": "application/json" } },
    );
  }
}
```

Create src/app/api/chat/status/route.ts to check if the Anthropic key is configured:
```typescript
import { NextRequest } from "next/server";

export async function GET(request: NextRequest) {
  const hasAnthropicKey = !!process.env.ANTHROPIC_API_KEY;
  return new Response(
    JSON.stringify({
      anthropicKeyAvailable: hasAnthropicKey,
      status: hasAnthropicKey ? "ready" : "missing_key",
    }),
    { status: 200, headers: { "Content-Type": "application/json" } },
  );
}
```

Edit src/features/chat/system-prompt.ts to customize how the AI behaves. The default is generic; you'll want to tailor it to your data and use case:
```typescript
export function getAISystemPrompt(): string {
  return `You are a helpful AI assistant that can query and analyze data.

When users ask questions:
1. Use get_data_catalog first to understand what tables are available
2. Use query_clickhouse to run SQL queries and get results
3. Explain results clearly and concisely

You have access to a ClickHouse database with the following data:
- [Describe your tables and what they contain]
- [Add business context and metric definitions]

Be helpful, accurate, and transparent about what queries you're running.`;
}
```

The chat UI consists of several components. Rather than copying each individually, grab the entire chat feature folder and the required layout components.
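One way to keep the prompt's table descriptions from drifting out of date is to generate them from a catalog structure instead of hard-coding them. The sketch below is illustrative: the `CatalogTable` shape is an assumption, not the actual output format of the template's `get_data_catalog` tool.

```typescript
// Illustrative catalog shape (an assumption for this sketch).
interface CatalogTable {
  name: string;
  comment: string;
  columns: { name: string; type: string; comment?: string }[];
}

// Render a schema-context section for the system prompt, so table and
// column descriptions come from one source of truth.
function renderSchemaContext(tables: CatalogTable[]): string {
  return tables
    .map((t) => {
      const cols = t.columns
        .map((c) => `  - ${c.name} (${c.type})${c.comment ? `: ${c.comment}` : ""}`)
        .join("\n");
      return `- ${t.name}: ${t.comment}\n${cols}`;
    })
    .join("\n");
}
```

You could then splice `renderSchemaContext(...)` into the template string returned by `getAISystemPrompt` in place of the bracketed placeholders.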
```shell
# From your Next.js app root
cd src/features/chat

# Download all chat components
for file in chat-ui.tsx chat-input.tsx chat-output-area.tsx chat-button.tsx \
  tool-invocation.tsx clickhouse-tool-invocation.tsx tool-data-catalog.tsx \
  text-formatter.tsx reasoning-section.tsx source-section.tsx \
  code-block.tsx suggested-prompt.tsx use-anthropic-status.ts; do
  curl -O "https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/features/chat/$file"
done
```

Download the layout components:

```shell
mkdir -p src/components/layout
curl -o src/components/layout/resizable-chat-layout.tsx \
  https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/components/layout/resizable-chat-layout.tsx
curl -o src/components/layout/chat-layout-wrapper.tsx \
  https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/components/layout/chat-layout-wrapper.tsx
curl -o src/components/layout/content-header.tsx \
  https://raw.githubusercontent.com/514-labs/moosestack/main/templates/typescript-mcp/packages/web-app/src/components/layout/content-header.tsx
```

The chat UI uses these shadcn/ui components. If you don't have them, install via:
```shell
npx shadcn@latest add button textarea scroll-area resizable collapsible badge
```

cn utility: if you don't have src/lib/utils.ts, create it:

```typescript
import { type ClassValue, clsx } from "clsx";
import { twMerge } from "tailwind-merge";

export function cn(...inputs: ClassValue[]) {
  return twMerge(clsx(inputs));
}
```

Wrap your app with the ChatLayoutWrapper. Edit src/app/layout.tsx:

```tsx
import { ChatLayoutWrapper } from "@/components/layout/chat-layout-wrapper";

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en" suppressHydrationWarning>
      <body>
        {/* If you have ThemeProvider, wrap ChatLayoutWrapper inside it */}
        <ChatLayoutWrapper>{children}</ChatLayoutWrapper>
      </body>
    </html>
  );
}
```

Note: If you're using next-themes or another theme provider, wrap ChatLayoutWrapper inside it, not the other way around.
```shell
# From monorepo root
pnpm dev
```

Or run them separately:

```shell
# Terminal 1: MooseStack
pnpm dev:moose

# Terminal 2: Next.js
pnpm dev:web
```

Test the MCP endpoint directly:

```shell
curl http://localhost:4000/tools \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-mcp-api-token>" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
```

You should see a response listing the query_clickhouse and get_data_catalog tools.
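The same probe can be scripted in TypeScript, for example as a CI health check. The request body below is standard JSON-RPC 2.0; the endpoint URL and env var names are assumptions taken from this tutorial's local setup.

```typescript
// Build the JSON-RPC 2.0 body for an MCP tools/list call (same payload as
// the curl example in this guide).
function buildToolsListRequest(id = 1): string {
  return JSON.stringify({ jsonrpc: "2.0", id, method: "tools/list" });
}

// Usage against a running MooseStack service (sketch; assumes local env vars):
// const res = await fetch("http://localhost:4000/tools", {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: `Bearer ${process.env.MCP_API_TOKEN}`,
//   },
//   body: buildToolsListRequest(),
// });
```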
Then open http://localhost:3000 and ask a question about your data; the model should call get_data_catalog and show you the results.

If the chat can't reach the MCP server, make sure you have:
- `.env.local` with `MCP_SERVER_URL=http://localhost:4000` (or `.env.development` as a fallback)
- `MCP_API_KEY` in moosestack-service/.env.local matching the hash from `moose generate hash-token`
- `MCP_API_TOKEN` in web-app/.env.local set to the Bearer Token (not the hash)
- `Authorization: Bearer ...` present in network requests

The architecture avoids CORS issues because:
- the chat API route (`/api/chat`) is same-origin with your frontend

If you're still seeing CORS errors, ensure your chat UI is calling `/api/chat`, not the MooseStack server directly.
If the chat panel doesn't appear or returns no data, check that:

- `ChatLayoutWrapper` is wrapping your app in layout.tsx
- your environment variables are set in `.env.local`
- the MooseStack service is running (`pnpm dev:moose`)
- `moose ls` shows your data models

See Deploy to Boreal and Vercel.
ClickHouse allows you to embed table- and column-level metadata.

With MooseOlap, you can set these table- and column-level descriptions in your data models, for example:
```typescript
export interface AircraftTrackingData {
  /** Unique aircraft identifier */
  hex: string;
  // no comment for this column
  transponder_type: string;
  /** callsign, the flight name or aircraft registration as 8 chars */
  flight: string;
  /** aircraft registration pulled from database */
  r: string;
  /** unique aircraft identifier */
  aircraft_type?: string;
  /** bitfield for certain database flags */
  dbFlags: number;
}
```

The /** JSDoc */ comments at the column level will now be embedded in your ClickHouse database. This additional context will be available to your chat, and retrievable with the same tool calls that retrieve data.
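Under the hood, column comments like these end up as ClickHouse `COMMENT` clauses (queryable from system.columns). The helper below is a hedged sketch of that mapping, not MooseOlap's actual code generation:

```typescript
// Illustrative sketch (not MooseOlap's implementation): render one column
// of a CREATE TABLE statement, attaching the JSDoc text as a ClickHouse
// COMMENT clause when present.
function renderColumnDdl(name: string, type: string, comment?: string): string {
  // Escape single quotes for the SQL string literal.
  const escaped = comment?.replace(/'/g, "\\'");
  return `\`${name}\` ${type}${escaped ? ` COMMENT '${escaped}'` : ""}`;
}
```

This is why enriching your models with comments pays off twice: humans see them in the code, and the chat's catalog tool sees them in the database.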