Introduction to Consumption APIs
Consumption APIs make it easy to build type-safe endpoints for surfacing data from your OLAP database. These APIs can help power user-facing analytics, dashboards and other front-end components, or even enable AI tools to interact with your data.
As a developer, you’ll write simple, strongly-typed functions that automatically handle:
- URL parameter parsing and type conversion
- SQL query construction and execution
- Response formatting
- OpenAPI documentation generation
No need to write boilerplate for parameter validation, type conversion, or error handling - the framework handles these for you.
Getting Started
File and Folder Structure
Create your API endpoints by adding .ts (TypeScript) or .py (Python) files to the /apis folder. Each file automatically becomes an API endpoint:
- getFoo.ts
- getBar.ts
- get_foo.py
- get_bar.py

Your files are automatically mapped to /consumption/ endpoints:

- getFoo.ts → /consumption/getFoo
- getBar.ts → /consumption/getBar
- get_foo.py → /consumption/get_foo
- get_bar.py → /consumption/get_bar
Creating a New API Endpoint
The fastest way to create a new Consumption API endpoint is using the Moose CLI:
npx moose-cli consumption init getBar   # TypeScript
moose-cli consumption init get_bar      # Python
This command will:
- Create a new file (getBar.ts for TypeScript, get_bar.py for Python) in your /apis directory
- Scaffold the basic API structure with type definitions
- Add example query parameters and a SQL query
Basic API Template
The generated template is a basic API endpoint that you can customize to your needs:
import { createConsumptionApi } from "@514labs/moose-lib";

// This file is where you can define your API templates for consuming your data
interface QueryParams {}

// createConsumptionApi uses compile-time code generation to generate a parser for QueryParams
export default createConsumptionApi<QueryParams>(
  async (params, { client, sql }) => {
    return client.query.execute(sql`SELECT 1`);
  }
);
# This file is where you can define your API templates for consuming your data
# All query_params are passed in as strings and are used within the sql tag to parameterize your queries
from pydantic import BaseModel, Field
from moose_lib import MooseClient

class QueryParams(BaseModel):
    # Define your query parameters here
    pass

def run(client: MooseClient, params: QueryParams):
    return client.query.execute("SELECT 1", {})
You can then customize this template by:
- Defining your query parameters
- Writing your SQL query
- Adding any necessary data transformations
Implementing the API Logic
Each API endpoint is defined by a single function that handles incoming requests. This function receives typed query parameters and returns database query results.
import {
  createConsumptionApi,
  ConsumptionHelpers as CH,
} from "@514labs/moose-lib";
import { tags } from "typia";

// Define expected parameters and their types
interface QueryParams {
  orderBy: "totalRows" | "rowsWithText" | "maxTextLength" | "totalTextLength";
  limit?: number;
  startDay?: number & tags.Type<"int32"> & tags.Minimum<1> & tags.Maximum<31>;
  endDay?: number & tags.Type<"int32"> & tags.Minimum<1> & tags.Maximum<31>;
}

// createConsumptionApi uses compile-time code generation to generate a parser for QueryParams
export default createConsumptionApi<QueryParams>(
  async (
    { orderBy = "totalRows", limit = 5, startDay = 1, endDay = 31 },
    { client, sql }
  ) => {
    const query = sql`
      SELECT
        dayOfMonth,
        ${CH.column(orderBy)}
      FROM BarAggregated_MV
      WHERE
        dayOfMonth >= ${startDay}
        AND dayOfMonth <= ${endDay}
      ORDER BY ${CH.column(orderBy)} DESC
      LIMIT ${limit}
    `;

    // Set the return type to the expected query result shape
    const data = await client.query.execute<{
      dayOfMonth: number;
      totalRows?: number;
      rowsWithText?: number;
      maxTextLength?: number;
      totalTextLength?: number;
    }>(query);

    return data;
  }
);
Key components:
- QueryParams interface defines the expected URL parameters and their types
- createConsumptionApi helper provides type safety and automatic parameter parsing
- Built-in client and sql utilities for safe query construction
- ConsumptionHelpers (CH) for secure parameter injection of SQL identifiers
from moose_lib import MooseClient
from pydantic import BaseModel, Field
from typing import Optional

# Query params are defined as Pydantic models and are validated automatically
class QueryParams(BaseModel):
    order_by: Optional[str] = Field(
        default="total_rows",
        pattern=r"^(total_rows|rows_with_text|max_text_length|total_text_length)$",
        description="Must be one of: total_rows, rows_with_text, max_text_length, total_text_length"
    )
    limit: Optional[int] = Field(
        default=5,
        gt=0,
        le=100,
        description="Must be between 1 and 100"
    )
    start_day: Optional[int] = Field(
        default=1,
        gt=0,
        le=31,
        description="Must be between 1 and 31"
    )
    end_day: Optional[int] = Field(
        default=31,
        gt=0,
        le=31,
        description="Must be between 1 and 31"
    )

# The run function is where you can define your API logic
def run(client: MooseClient, params: QueryParams):
    start_day = params.start_day
    end_day = params.end_day
    limit = params.limit
    order_by = params.order_by

    query = f"""
        SELECT
            day_of_month,
            {order_by}
        FROM BarAggregated_MV
        WHERE day_of_month >= {start_day}
        AND day_of_month <= {end_day}
        ORDER BY {order_by} DESC
        LIMIT {limit}
    """

    return client.query.execute(query, {"order_by": order_by, "start_day": start_day, "end_day": end_day, "limit": limit})
Key components:
- QueryParams class defines the expected URL parameters and their types
- Type hints and default values provide automatic parameter parsing
- Built-in MooseClient for database interactions
- Safe query construction using parameterized queries
Query Parameters
Query parameters allow your APIs to accept dynamic inputs through URL parameters. These parameters are automatically parsed and type-converted before reaching your handler function.
Use an interface to define the expected parameters and their types. You can use Union types to allow multiple valid values or define more complex types.
// Define expected parameters and their types
interface QueryParams {
  order_by: "total_rows" | "rows_with_text" | "max_text_length" | "total_text_length"; // String union type
  limit?: number; // Optional number parameter
  start_day?: number; // Optional number parameter
  end_day?: number; // Optional number parameter
}

// URL: /consumption/getBar?order_by=total_rows&limit=5&start_day=1&end_day=31 OR /consumption/getBar
// Automatically provides:
{
  order_by: "total_rows",
  limit: 5,
  start_day: 1,
  end_day: 31
}
Benefits:
- Automatic type conversion from URL strings
- Runtime type validation
- IDE autocompletion
- Type safety throughout your codebase
Python Consumption APIs use Pydantic to define and validate query parameters:
from pydantic import BaseModel, Field
from typing import Optional

# Query params are defined as Pydantic models and are validated automatically
class QueryParams(BaseModel):
    order_by: Optional[str] = "total_rows"
    limit: Optional[int] = 5
    start_day: Optional[int] = 1
    end_day: Optional[int] = 31

# URL: /consumption/get_bar?order_by=total_rows&limit=5&start_day=1&end_day=31 OR /consumption/get_bar
# Automatically provides:
params = QueryParams(order_by="total_rows", limit=5, start_day=1, end_day=31)
Benefits:
- Automatic type conversion from URL strings
- Runtime type validation
- Type hints for IDE support
- Default values for optional parameters
- Clean, declarative parameter definition
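You can observe this coercion directly with plain Pydantic, independent of Moose: values that arrive from the URL as strings are converted to the declared field types. A minimal sketch:

```python
from typing import Optional
from pydantic import BaseModel

class QueryParams(BaseModel):
    order_by: Optional[str] = "total_rows"
    limit: Optional[int] = 5
    start_day: Optional[int] = 1
    end_day: Optional[int] = 31

# URL query parameters always arrive as strings;
# Pydantic coerces "10" into the int 10 for the limit field,
# and unspecified fields fall back to their defaults
params = QueryParams(**{"order_by": "max_text_length", "limit": "10"})
print(params.limit, params.start_day)  # → 10 1
```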
Advanced Type Validation
Moose leverages Typia to enforce runtime validation on query parameters using plain TypeScript interfaces. This enables you to use all the advanced Typia tags in your query parameter interfaces to enhance your type definitions and enforce runtime validations.
How It Works
- Auto-Generated Validators: By annotating your TypeScript interface with Typia tags (e.g., Type, Format, ExclusiveMinimum), Moose automatically generates the code needed to validate query parameters at runtime.
- Type-Safe Constraints: For instance, specifying userId: number & typia.tags.Type<"uint32"> ensures that the userId query parameter must be a 32-bit unsigned integer.
Example
Below is a practical example demonstrating how to use Typia tags within your consumption API endpoint to enforce validations on query parameters:
import { createConsumptionApi } from "@514labs/moose-lib";
import { tags } from "typia";

interface QueryParams {
  orderBy: "totalRows" | "rowsWithText" | "maxTextLength" | "totalTextLength";
  limit?: number & tags.Type<"int32"> & tags.Minimum<1>; // limit must be a 32-bit integer greater than 0
  startDay?: number & tags.Type<"int32"> & tags.Minimum<1> & tags.Maximum<31>; // startDay must be a 32-bit integer between 1 and 31
  endDay?: number & tags.Type<"int32"> & tags.Minimum<1> & tags.Maximum<31>; // endDay must be a 32-bit integer between 1 and 31
}
Other examples of Typia tags:
import { tags } from "typia";

interface MoreExamples {
  email: string & tags.Format<"email">; // ensures the parameter is a valid email address
  uuid: string & tags.Format<"uuid">; // ensures the parameter is a valid UUID
  exclusiveMinimum: number & tags.ExclusiveMinimum<17>; // ensures the parameter is greater than 17
  minimum: number & tags.Minimum<1> & tags.Maximum<31>; // ensures the parameter is within the specified range
  uint32: number & tags.Type<"uint32">; // ensures the parameter is a 32-bit unsigned integer
  double: number & tags.Type<"double">; // ensures the parameter is a double
}
View the Typia documentation for more information on the available tags.
Moose supports runtime validation of query parameters using Pydantic. You can use the Field class to specify default values and validations:
from pydantic import BaseModel, Field
from typing import Optional

class QueryParams(BaseModel):
    order_by: Optional[str] = Field(
        default="total_rows",
        pattern=r"^(total_rows|rows_with_text|max_text_length|total_text_length)$",
        description="Must be one of: total_rows, rows_with_text, max_text_length, total_text_length"
    )
    limit: Optional[int] = Field(
        default=5,
        gt=0,
        le=100,
        description="Must be between 1 and 100"
    )
    start_day: Optional[int] = Field(
        default=1,
        gt=0,
        le=31,
        description="Must be between 1 and 31"
    )
    end_day: Optional[int] = Field(
        default=31,
        gt=0,
        le=31,
        description="Must be between 1 and 31"
    )
You can import Pydantic types to use in your query parameters:
from pydantic import BaseModel, Field, PositiveInt  # PositiveInt ensures the parameter is a positive integer
from typing import Optional

class QueryParams(BaseModel):
    order_by: Optional[str] = Field(
        default="total_rows",
        pattern=r"^(total_rows|rows_with_text|max_text_length|total_text_length)$",
        description="Must be one of: total_rows, rows_with_text, max_text_length, total_text_length"
    )
    limit: Optional[PositiveInt] = Field(
        default=5,
        le=100,
        description="Must be between 1 and 100"
    )
    start_day: Optional[PositiveInt] = Field(
        default=1,
        le=31,
        description="Must be between 1 and 31"
    )
    end_day: Optional[PositiveInt] = Field(
        default=31,
        le=31,
        description="Must be between 1 and 31"
    )
For more information on Pydantic, see the Pydantic documentation.
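You can see these Field constraints in action outside of Moose by instantiating the model directly; out-of-range values raise a ValidationError before your handler code ever runs. A minimal sketch:

```python
from typing import Optional
from pydantic import BaseModel, Field, ValidationError

class QueryParams(BaseModel):
    limit: Optional[int] = Field(default=5, gt=0, le=100)
    start_day: Optional[int] = Field(default=1, gt=0, le=31)

# In-range values pass through unchanged
ok = QueryParams(limit=10, start_day=15)

# Out-of-range values are rejected with a ValidationError
try:
    QueryParams(limit=500)
except ValidationError:
    print("limit=500 rejected")  # → limit=500 rejected
```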
Safe SQL Construction
The framework provides utilities for safely constructing SQL queries with dynamic parameters:
import { ConsumptionHelpers as CH } from "@514labs/moose-lib";

// Using template literals with the sql helper
const query = sql`
  SELECT
    dayOfMonth,
    ${CH.column(orderBy)}
  FROM BarAggregated_MV
  WHERE
    dayOfMonth >= ${startDay}
    AND dayOfMonth <= ${endDay}
  ORDER BY ${CH.column(orderBy)} DESC
  LIMIT ${limit}
`;

// ConsumptionHelpers (CH) prevent SQL injection
CH.column(orderBy) // Safely interpolates column names
The framework provides safe query construction through parameterization:
# The run function is where you can define your API logic
def run(client: MooseClient, params: QueryParams):
    start_day = params.start_day
    end_day = params.end_day
    limit = params.limit
    order_by = params.order_by

    query = f"""
        SELECT
            day_of_month,
            {order_by}
        FROM BarAggregated_MV
        WHERE day_of_month >= {start_day}
        AND day_of_month <= {end_day}
        ORDER BY {order_by} DESC
        LIMIT {limit}
    """

    return client.query.execute(query, {"order_by": order_by, "start_day": start_day, "end_day": end_day, "limit": limit})
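Column names cannot be bound as ordinary query parameters, so a common safeguard when interpolating identifiers into an f-string is to check them against a whitelist first. This is an illustrative pattern rather than a Moose API (the Pydantic pattern constraint in the example above serves the same purpose):

```python
# Hypothetical helper: only identifiers from a known set may be interpolated
ALLOWED_COLUMNS = {"total_rows", "rows_with_text", "max_text_length", "total_text_length"}

def safe_column(name: str) -> str:
    """Return name unchanged if it is a known column; otherwise fail loudly."""
    if name not in ALLOWED_COLUMNS:
        raise ValueError(f"invalid column: {name!r}")
    return name

# Known columns interpolate cleanly; anything else raises before reaching SQL
query = f"SELECT day_of_month, {safe_column('total_rows')} FROM BarAggregated_MV"
print(query)  # → SELECT day_of_month, total_rows FROM BarAggregated_MV
```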
Consumption APIs support only GET requests to ensure optimized data retrieval.
Tips & Best Practices
- Testing Your APIs: Use tools like Postman, Thunder Client, or cURL to test your endpoints.
- Utilize Default Values: Leverage defaults in your interfaces or dataclasses to reduce manual parsing.
- Write Safe SQL: Always use the provided SQL helpers or parameterized queries to avoid injection vulnerabilities.
- Automatic API Documentation: Take advantage of the auto-generated OpenAPI specification, hosted on your local dev server at http://localhost:5001/openapi.yaml, for interactive API documentation and easier client integration.
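For quick manual testing, you can assemble a request URL in Python with the standard library; the base URL here assumes the getBar endpoint from the examples above and a local dev server, so adjust the host and port to your own setup:

```python
from urllib.parse import urlencode

# Assumed endpoint and port; substitute your own API's address
base = "http://localhost:4000/consumption/getBar"
params = {"order_by": "total_rows", "limit": 5, "start_day": 1, "end_day": 31}

# urlencode converts each value to a string and joins with "&"
url = f"{base}?{urlencode(params)}"
print(url)
# → http://localhost:4000/consumption/getBar?order_by=total_rows&limit=5&start_day=1&end_day=31
```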
Automatically Generated OpenAPI Documentation
The framework automatically generates an up-to-date OpenAPI specification during local development. This spec provides a comprehensive overview of all consumption API endpoints, including:
- Endpoint Paths & HTTP Methods: Lists each API endpoint and the HTTP method (GET) it supports.
- Request Parameters: Details on the expected query parameters, including types, default values, and validation rules.
- Response Formats: Information about the structure of the responses and potential status codes.
How to Use the OpenAPI Spec
Make sure you are running your local development server before accessing the OpenAPI specification. Run moose dev to start it.
- Visit the Specification URL: The OpenAPI specification is hosted on your local development server on port 5001. Simply visit http://localhost:5001/openapi.yaml in your browser to access the YAML file that outlines your API.
- Integrate with API Tools: Point your API client tool of choice (such as Swagger UI, Postman, or Thunder Client) at http://localhost:5001/openapi.yaml to automatically generate an interactive UI where you can explore and test your API endpoints.
- Generate Client Code: Leverage the specification to automatically generate client libraries and SDKs. Many development frameworks and tools, such as OpenAPI Generator, support importing OpenAPI specs to streamline client integration.
- Documentation is Automatically Updated: The documentation is regenerated each time you save your API files while your local development server is running, so new endpoints and changes to parameter validation are reflected immediately.