The @corti/ai-sdk-adapter package provides utilities to connect Corti’s A2A (Agent-to-Agent) agents with the Vercel AI SDK. It converts between AI SDK’s UI message format and Corti’s A2A format, so you can use familiar patterns like useChat to build chat interfaces powered by Corti agents.
The adapter provides three main functions:
- convertToParams() — converts CortiUIMessage[] to A2A MessageSendParams
- toUIMessageStream() — converts an A2A stream to a UI message stream
- createA2AClientFactory() — creates an A2A client factory configured with Corti authentication
## Installation
```bash
npm install @corti/ai-sdk-adapter @a2a-js/sdk ai
```
## Quick start
### Server: streaming API route (Next.js)
Create an API route that receives messages from the client, converts them to A2A format, streams the response from a Corti agent, and returns a UI message stream.
```ts
import {
  convertToParams,
  toUIMessageStream,
  createA2AClientFactory,
} from '@corti/ai-sdk-adapter';
import type { CortiUIMessage, ExpertCredential } from '@corti/ai-sdk-adapter';
import { CortiClient } from '@corti/lib';
import { createUIMessageStreamResponse } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: CortiUIMessage[] } = await req.json();

  // Optional: define credentials for MCP servers
  const credentials: ExpertCredential[] = [
    {
      mcp_name: 'my-server',
      token: process.env.MCP_TOKEN!,
      type: 'bearer' as const,
    },
  ];

  // Build A2A params from UI messages
  const params = convertToParams(messages, credentials);

  // Create A2A client factory and send message stream
  const corti = new CortiClient({ /* your Corti client config */ });
  const factory = createA2AClientFactory(corti);
  const agentUrl = await corti.agents.getCardUrl('YOUR_AGENT_ID');
  const client = factory.createFromUrl(agentUrl.toString(), '');
  const a2aStream = client.sendMessageStream(params);

  // Convert to UI stream
  const uiStream = toUIMessageStream(a2aStream, {
    callbacks: {
      onStart: () => console.log('Stream started'),
      onEvent: (event) => console.log('Event:', event),
      onFinish: (state) => console.log('Final state:', state),
      onError: (error) => console.error('Error:', error),
    },
  });

  return createUIMessageStreamResponse({ stream: uiStream });
}
```
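With the route in place, you can smoke-test it without a UI. The snippet below is a sketch that assumes a dev server at http://localhost:3000 serving the route at /api/chat; it posts one user message in the AI SDK UI message shape and prints the raw stream:

```ts
// Standalone smoke test (run with e.g. `npx tsx smoke-test.ts`).
// Assumes the route above is served at http://localhost:3000/api/chat.
const res = await fetch('http://localhost:3000/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [
      { id: '1', role: 'user', parts: [{ type: 'text', text: 'Hello, agent!' }] },
    ],
  }),
});

// The response body is a UI message stream; print chunks as they arrive.
for await (const chunk of res.body!.pipeThrough(new TextDecoderStream())) {
  process.stdout.write(chunk);
}
```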
### Client: React chat component
Use the Vercel AI SDK useChat hook (from @ai-sdk/react) with the CortiUIMessage type to render agent responses.
```tsx
'use client';

import { useState } from 'react';
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import type { CortiUIMessage } from '@corti/ai-sdk-adapter';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage, status } = useChat<CortiUIMessage>({
    transport: new DefaultChatTransport({ api: '/api/chat' }),
  });

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    // Ignore empty input and double submits while a response is in flight
    if (!input.trim() || status === 'streaming' || status === 'submitted') return;
    sendMessage({ text: input });
    setInput('');
  };

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong>{' '}
          {/* Render only the text parts of each message */}
          {m.parts.map((p) => (p.type === 'text' ? p.text : '')).join(' ')}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={(e) => setInput(e.target.value)} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```
## How it works
### Context and task continuity
The adapter automatically manages conversation context and task continuity:
- contextId — maintains conversation context across multiple messages. Automatically inferred from the last assistant message.
- taskId — continues an existing task when the agent requires more input. Only included when the last assistant message has state: 'input-required'.
- Credentials — only sent on the first message (when no taskId is present).
The convertToParams() function handles all of this automatically — you just pass the messages array from useChat.
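For intuition, the inference behaves roughly like the sketch below. The metadata field names (contextId, taskId, state) are illustrative assumptions here, not the adapter's actual internals:

```ts
// Illustrative sketch only: the real convertToParams() and the exact
// CortiUIMessage metadata shape may differ.
type AssistantMetadata = { contextId?: string; taskId?: string; state?: string };
type SketchMessage = { role: 'user' | 'assistant'; metadata?: AssistantMetadata };

function inferContinuity(messages: SketchMessage[]) {
  // Find the most recent assistant message
  const lastAssistant = [...messages].reverse().find((m) => m.role === 'assistant');

  // Reuse the conversation context the agent established, if any
  const contextId = lastAssistant?.metadata?.contextId;

  // Continue the existing task only when the agent is waiting for more input
  const taskId =
    lastAssistant?.metadata?.state === 'input-required'
      ? lastAssistant?.metadata?.taskId
      : undefined;

  // Credentials are only attached when starting fresh (no taskId)
  return { contextId, taskId, includeCredentials: taskId === undefined };
}
```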
### Stream callbacks
The toUIMessageStream() function accepts a StreamConversionOptions object with optional callbacks to monitor stream progress:
```ts
const uiStream = toUIMessageStream(a2aStream, {
  callbacks: {
    onStart: () => {
      // Called when streaming begins
    },
    onEvent: (event) => {
      // Called on each new event from the stream
    },
    onFinish: (state) => {
      // Called when stream completes with the final task status
    },
    onError: (error: Error) => {
      // Called if an error occurs during streaming
    },
    onAbort: () => {
      // Called when the stream is aborted by the client
    },
  },
});
```
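In practice these callbacks are handy for server-side logging or persistence. A minimal sketch, assuming a persistFinalState helper of your own (it is not part of the adapter):

```ts
// persistFinalState is a hypothetical helper you would implement yourself.
declare function persistFinalState(state: unknown): Promise<void>;

const monitoredStream = toUIMessageStream(a2aStream, {
  callbacks: {
    onFinish: (state) => {
      // Store the final task status once streaming completes
      void persistFinalState(state);
    },
    onError: (error: Error) => {
      // Surface streaming failures in your server logs
      console.error('A2A stream failed:', error);
    },
  },
});
```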
## Runtime support
This package supports:
- Node.js (18+)
- Edge runtimes (Vercel Edge, Cloudflare Workers)
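On Next.js, for example, the API route above can opt into the Edge runtime with a single export:

```ts
// app/api/chat/route.ts
// Standard Next.js App Router convention for selecting the Edge runtime
export const runtime = 'edge';
```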
## Resources