TanStack AI: Building Type-Safe, Provider-Agnostic AI Applications
TanStack AI is a new open-source SDK for building AI features in JavaScript. It follows the same approach as TanStack Query and Router, offering a unified, type-safe way to use different AI providers without getting locked into one vendor or a proprietary format.
It is still in alpha, but it already looks promising. The focus is on clean architecture, strong TypeScript support, and the freedom to fit into your current stack instead of forcing a specific framework or provider.
Core principles of TanStack AI
TanStack AI's design centers on several key principles that differentiate it from other AI SDKs in the market.
The SDK uses a pluggable adapter system that makes it provider-agnostic. You can write your application logic once and switch between OpenAI, Anthropic, Google Gemini, or Llama without rewriting code. This prevents vendor lock-in and gives you the flexibility to choose the best model for your specific needs, whether based on cost, performance, or features.
Type safety is a first-class concern throughout the SDK. TanStack AI provides robust, end-to-end type safety with intelligent autocompletion for models, provider-specific options, and tool schemas. Your IDE catches potential errors at compile time rather than runtime, which drastically improves the developer experience and reduces bugs.
The framework-agnostic architecture means TanStack AI provides client libraries for React, SolidJS, and vanilla JavaScript, with more frameworks like Svelte planned. The core logic remains independent of any specific UI framework, staying true to TanStack's philosophy of integrating with your existing stack.
Even in alpha, the vision extends beyond JavaScript with server support for PHP and Python, indicating a long-term goal of creating a universal standard for AI application development.
Setting up a streaming chat endpoint
The foundation of any AI chat application is server-side logic that securely communicates with an AI provider. TanStack AI simplifies this with its core chat function, which handles the complex orchestration of sending messages to an AI model and streaming responses back.
Here's what a basic streaming chat endpoint looks like:
import { chat, toStreamResponse } from '@tanstack/ai';
import { openai } from '@tanstack/ai-openai';

export async function POST(request: Request) {
  const { messages, conversationId } = await request.json();

  const stream = chat({
    adapter: openai(),
    messages,
    model: 'gpt-4o',
    conversationId,
  });

  return toStreamResponse(stream);
}
The chat function accepts a configuration object where you specify the adapter (in this case OpenAI), the conversation history, and which model to use. The function returns an AsyncIterable stream rather than waiting for the complete response. This streaming approach is crucial for user experience, as it allows displaying the AI's response word-by-word as it generates, rather than making users wait for the entire message.
The toStreamResponse utility handles the complex task of creating a proper HTTP Response object with the correct headers for Server-Sent Events streaming. This abstraction removes the need to manually manage streaming protocols.
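Because chat returns a plain AsyncIterable, you are not forced to use toStreamResponse. The following is a minimal sketch, assuming only what the example above shows, that consumes the stream directly and logs each chunk; the chunk shape is treated as opaque here:

import { chat } from '@tanstack/ai';
import { openai } from '@tanstack/ai-openai';

export async function POST(request: Request) {
  const { messages } = await request.json();

  const stream = chat({
    adapter: openai(),
    messages,
    model: 'gpt-4o',
  });

  // chat() returns an AsyncIterable, so each streamed chunk can be
  // inspected as it arrives instead of being forwarded to the client.
  for await (const chunk of stream) {
    console.log(chunk);
  }

  return new Response('done');
}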
Provider-specific options with full type safety
One of TanStack AI's most powerful features is its deep TypeScript integration that extends beyond simple argument checking to provider-specific capabilities.
Different AI providers offer unique features that aren't available across all models. For example, OpenAI's reasoning models provide insights into the model's thought process. TanStack AI allows you to access these features through a providerOptions object:
const stream = chat({
  adapter: openai(),
  messages,
  model: 'gpt-4o',
  conversationId,
  providerOptions: {
    // options scoped to the selected provider and model
    reasoning: {
      effort: 'medium',
      summary: 'detailed',
    },
  },
});
The type safety here goes beyond basic validation. When you start typing inside providerOptions, your IDE knows exactly which options are available for the selected provider and model. If you switch to a model that doesn't support the reasoning option, TypeScript immediately flags it as a compile-time error, preventing runtime failures.
This level of type safety is significant because in other libraries, you'd need to constantly consult documentation to know which options work with which models. Mistakes would only surface when your application crashes. TanStack AI brings this validation directly into your development workflow.
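As a rough sketch of what that buys you in practice, swapping providers mostly means swapping the adapter. The @tanstack/ai-anthropic package name, anthropic() adapter, and model string below are assumptions made for illustration; only the shape of the chat() call comes from the earlier examples.

import { chat, toStreamResponse } from '@tanstack/ai';
import { anthropic } from '@tanstack/ai-anthropic'; // assumed package name

export async function POST(request: Request) {
  const { messages } = await request.json();

  const stream = chat({
    adapter: anthropic(),       // swap the adapter...
    messages,
    model: 'claude-sonnet-4-5', // ...and pick a model that adapter's types allow
    // providerOptions: { reasoning: { effort: 'medium' } },
    // ^ an OpenAI-specific option like this would now be flagged at compile time
  });

  return toStreamResponse(stream);
}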
Building a reactive client interface
The client-side implementation uses the @tanstack/ai-react package, which provides a useChat hook for managing messages, loading states, and server communication.
Here's how a React component might use the useChat hook:
import { fetchServerSentEvents, useChat } from '@tanstack/ai-react';
import { useState } from 'react';

export function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents('/api/chat'),
  });

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (input.trim() && !isLoading) {
      sendMessage(input);
      setInput('');
    }
  };

  return (
    <div>
      <div>
        {messages.map((message) => (
          <div key={message.id}>
            <strong>{message.role === 'assistant' ? 'AI:' : 'You:'}</strong>
            <p>{message.content}</p>
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Message TanStack AI..."
        />
        <button type="submit" disabled={isLoading}>Send</button>
      </form>
    </div>
  );
}
The useChat hook returns everything needed to build the UI. The messages array contains every message in the current chat session, from both user and assistant, and updates automatically as the AI streams its response. The sendMessage function optimistically adds the user's message to the messages array before sending it to the server. The isLoading boolean indicates when the hook is waiting for a response, which is ideal for disabling input or showing loading indicators.
The fetchServerSentEvents connection helper knows how to handle the streaming response from the server. You simply point it at your API endpoint, and it manages the Server-Sent Events protocol automatically.
Tool usage and function calling
Most LLMs have a fundamental limitation: their knowledge is static and becomes outdated. Ask about recent events and they answer from training data with a fixed cutoff date, which often means outdated or plainly incorrect information. TanStack AI addresses this with first-class support for tool usage, allowing AI models to call functions to accomplish tasks.
Defining tool schemas with Zod
Tool definitions describe to the AI what the tool does, when to use it, and what inputs it requires. TanStack AI uses Zod for schema validation, ensuring data passed to tools is correctly formatted:
import { toolDefinition } from '@tanstack/ai';
import { z } from 'zod';

export const searchInternetDef = toolDefinition({
  name: 'search_internet',
  description: 'Search the internet for current information using Tavily.',
  inputSchema: z.object({
    query: z.string().describe('The search query.'),
    maxResults: z.number().optional().describe('Maximum number of results.'),
  }),
});
The name provides a unique identifier that the AI uses when deciding to call the tool. The description is critically important because it's a natural language explanation that the AI uses to understand when to use this tool. A well-written description is key to reliable tool performance.
The inputSchema uses Zod to define expected inputs. The .describe() method adds descriptions for each parameter, giving the AI additional context about what each input represents.
This definition is isomorphic, meaning it can be used on both server and client.
Implementing server-side tool logic
The actual execution code runs on the server for security, especially when dealing with third-party APIs and secret keys:
export const searchInternet = searchInternetDef.server(
  async ({ query, maxResults = 5 }) => {
    const response = await fetch('https://api.tavily.com/search', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${process.env.TAVILY_API_KEY}`,
      },
      body: JSON.stringify({
        query,
        max_results: maxResults,
        include_answer: true,
      }),
    });

    if (!response.ok) {
      throw new Error(`Tavily API error: ${response.statusText}`);
    }

    return await response.json();
  }
);
The .server() method extends the tool definition with the execution function. The function receives arguments that the AI decided to use, which have already been validated against the Zod schema. You can destructure them directly and trust their types.
The function returns the result from the external API, which TanStack AI automatically passes back to the AI model. The model then uses this information to formulate its final answer.
Integrating tools into chat flows
Adding tools to the chat flow requires passing them to both the server and client.
On the server, add the tool implementation to the tools array:
import { chat, toStreamResponse } from '@tanstack/ai';
import { openai } from '@tanstack/ai-openai';
import { searchInternet } from './tools';

export async function POST(request: Request) {
  const { messages, conversationId } = await request.json();

  const stream = chat({
    adapter: openai(),
    messages,
    model: 'gpt-4o',
    conversationId,
    // the server receives the full implementation, including the execute logic
    tools: [searchInternet],
  });

  return toStreamResponse(stream);
}
On the client, pass the tool definition (not the implementation) to the useChat hook:
import { useChat, fetchServerSentEvents } from '@tanstack/ai-react';
import { searchInternetDef } from '../lib/tools';

export function Chat() {
  const { messages, sendMessage, isLoading } = useChat({
    connection: fetchServerSentEvents('/api/chat'),
    // only the definition is shared with the client, never the server logic
    tools: [searchInternetDef],
  });

  // ... rest of component
}
This gives the client-side hook enough information for type safety without exposing server-side logic.
How tool orchestration works
When you ask a question like "Who is the current F1 champion?", TanStack AI orchestrates a complex multi-step process automatically.
The client sends the message to the server, which forwards the conversation and tool definitions to the AI provider. The AI model analyzes the request and realizes its internal knowledge might be outdated. Seeing the search_internet tool is available for getting "current information," the model responds with a request to call the tool rather than directly answering.
TanStack AI's server-side logic intercepts this tool call request, executes the function with the AI's chosen parameters, and retrieves the search results. These results go back to the AI model as additional context. The model now has fresh, up-to-date information and uses it to generate the final answer, which streams back to the client.
This entire orchestration happens automatically. You define the tools and their implementations, and TanStack AI handles the complex back-and-forth between your application and the AI provider.
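As a mental model, the cycle can be pictured as a loop. The sketch below is purely illustrative; none of the names in it are TanStack AI APIs, it only restates the sequence described above.

// Conceptual sketch of the tool-call loop the framework runs for you.
// callModel and executeTool are hypothetical placeholders, not real APIs.
type Message = { role: 'user' | 'assistant' | 'tool'; content: string };
type ToolCall = { type: 'tool_call'; name: string; args: unknown };
type TextReply = { type: 'text'; text: string };

declare function callModel(
  messages: Message[],
  tools: unknown[],
): Promise<TextReply | ToolCall>;
declare function executeTool(name: string, args: unknown): Promise<unknown>;

async function runToolLoop(messages: Message[], tools: unknown[]) {
  while (true) {
    // 1. Forward the conversation plus tool definitions to the provider.
    const reply = await callModel(messages, tools);

    // 2. If the model answered directly, that text streams back to the client.
    if (reply.type === 'text') return reply.text;

    // 3. Otherwise the model requested a tool call: execute it on the server...
    const result = await executeTool(reply.name, reply.args);

    // 4. ...and append the result so the model can use it on the next turn.
    messages.push({ role: 'tool', content: JSON.stringify(result) });
  }
}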
Advanced features and future capabilities
TanStack AI includes several advanced features beyond basic chat and tool usage. The SDK supports client-side tools that run in the browser, hybrid tools that can execute on both client and server, and tool approval flows for implementing human-in-the-loop processes.
The library also includes an agentic cycle management system for building more complex, autonomous AI agents that can plan and execute multi-step tasks. These features position TanStack AI as more than a simple wrapper around provider APIs; it aims to be a comprehensive framework for building sophisticated AI systems.
Final thoughts
TanStack AI enters a space already served by tools like Vercel’s AI SDK, but it stands out with a stronger open-source mindset, multi-language support, and very strong type safety.
Its adapter-based design makes it easier to switch providers and avoid lock-in. The deep TypeScript integration also helps catch mistakes earlier and improves autocomplete compared with looser SDKs.
TanStack’s track record matters here. The same focus on developer experience that made TanStack Query and Router popular shows up in TanStack AI’s API.
It is still in alpha, but it is moving in a direction that could make it a go-to SDK for adding AI to apps. More competition here is good news for developers, since it leads to better, more flexible tools.