Introducing AI SDK 3.0 with Generative UI support

Stream React Components from LLMs to deliver richer user experiences

Last October, we launched v0.dev, a generative UI design tool that converts text and image prompts to React UIs and streamlines the design engineering process.

Today, we are open sourcing v0's Generative UI technology with the release of the Vercel AI SDK 3.0. Developers can now move beyond plaintext and markdown chatbots to give LLMs rich, component-based interfaces.

Visit our demo for a first impression or read the documentation for a preview of the new APIs.

A new user experience for AI

Products like ChatGPT have made a profound impact: they help users write code, plan travel, translate, summarize text, and so much more. However, LLMs have faced two important UX challenges:

  • Limited or imprecise knowledge

  • Plain text / markdown-only responses

With the introduction of Tools and Function Calling, developers have been able to build more robust applications that fetch real-time data.

These applications, however, have been challenging to write and are still lacking in richness and interactivity.
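For context, here is a minimal sketch of that pre-3.0 flow using the OpenAI Node SDK. The tool name and schema are illustrative, not from this post: the model emits a structured tool call, but the app still has to run the tool, hand the result back to the model, and render whatever text comes out.

import OpenAI from 'openai'

const openai = new OpenAI()

// The model decides to call the (illustrative) get_city_weather tool
// and returns a structured tool call instead of an answer.
const completion = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'What is the weather in SF?' }],
  tools: [{
    type: 'function',
    function: {
      name: 'get_city_weather',
      description: 'Get the current weather for a city',
      parameters: {
        type: 'object',
        properties: { city: { type: 'string', description: 'the city' } },
        required: ['city']
      }
    }
  }]
})

// The app must execute the tool itself, send the result back to the model,
// and ultimately display the response as plain text or markdown.
const toolCall = completion.choices[0].message.tool_calls?.[0]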

Thanks to our experience in developing v0 with React Server Components (RSC), we've arrived at a simple abstraction that can solve both these problems.

A new developer experience for AI

With AI SDK 3.0, you can now map LLM responses to streaming React Server Components.

Let's start with the most basic example, streaming text without retrieval or up-to-date information.

import { render } from 'ai/rsc'
import OpenAI from 'openai'

const openai = new OpenAI()

async function submitMessage(userInput) {
  // Mark this function as a React Server Action so it can be called from the client.
  'use server'

  // render() streams the model's response to the client, passing plain text
  // through the `text` renderer as it arrives.
  return render({
    provider: openai,
    model: 'gpt-4',
    messages: [
      { role: 'system', content: 'You are an assistant' },
      { role: 'user', content: userInput }
    ],
    text: ({ content }) => <p>{content}</p>,
  })
}

Let's now solve both of the original problems: retrieve live weather data and render a custom UI for it. If your model supports OpenAI-compatible Functions or Tools, you can use the new render method to map specific tool calls to React Server Components.

import { render } from 'ai/rsc'
import OpenAI from 'openai'
import { z } from 'zod'

const openai = new OpenAI()

async function submitMessage(userInput) { // e.g. 'What is the weather in SF?'
  'use server'

  return render({
    provider: openai,
    model: 'gpt-4-0125-preview',
    messages: [
      { role: 'system', content: 'You are a helpful assistant' },
      { role: 'user', content: userInput }
    ],
    // Plain text responses still stream through the text renderer.
    text: ({ content }) => <p>{content}</p>,
    tools: {
      get_city_weather: {
        description: 'Get the current weather for a city',
        parameters: z.object({
          city: z.string().describe('the city')
        }).required(),
        // When the model calls this tool, stream UI instead of text:
        // show a loading state, then replace it with the final component.
        // Spinner, Weather, and getWeather are defined by your application.
        render: async function* ({ city }) {
          yield <Spinner />
          const weather = await getWeather(city)
          return <Weather info={weather} />
        }
      }
    }
  })
}
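On the client, the node returned by the Server Action can be rendered like any other piece of React state. The Chat component and the './actions' import path below are assumptions for illustration, not part of the SDK:

'use client'

import { useState } from 'react'
// Hypothetical module that exports the submitMessage Server Action above.
import { submitMessage } from './actions'

export function Chat() {
  const [messages, setMessages] = useState([])
  const [input, setInput] = useState('')

  return (
    <div>
      {/* Each entry is a streamed React node that updates in place as the model responds. */}
      {messages.map((ui, i) => (
        <div key={i}>{ui}</div>
      ))}
      <form
        onSubmit={async (e) => {
          e.preventDefault()
          const ui = await submitMessage(input)
          setMessages((current) => [...current, ui])
          setInput('')
        }}
      >
        <input value={input} onChange={(e) => setInput(e.target.value)} />
      </form>
    </div>
  )
}

Because the tool's UI is rendered on the server, the data fetching stays on the server and the client receives only the streamed result.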

Towards the AI-native web

With Vercel AI SDK 3.0, we're simplifying how you integrate AI into your apps. By using React Server Components, you can now stream UI components directly from LLMs without the need for heavy client-side JavaScript. This means your apps can be more interactive and responsive, without compromising on performance.

This update makes it easier to build and maintain AI-powered features, helping you focus on creating great user experiences. We're excited to see what you ship.
