Vercel is the easiest and fastest way to ship LLM experiences. Vercel AI comes with a handful of helpers to rapidly scaffold chat- and completion-based experiences:
```tsx
"use client";

import { useChat } from "ai/react";

export default function MyComponent() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    // ...
  );
}
```
Vercel recommends leveraging OpenAI's streaming transport to work around Serverless timeout limitations. While this pattern is a perfect fit for experiences built on direct prompting, it falls short when data must be fetched and preprocessed before prompting.
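To see why streaming sidesteps timeouts, here is a minimal, framework-free sketch (the `tokenStream` and `collect` helpers are illustrative names, not part of any SDK): the response starts flowing as soon as the first token is available, so the client receives data long before the full completion is done.

```typescript
// Illustrative only: simulates a token-by-token LLM response using
// the Web Streams API (available globally in Node 18+ and browsers).
function tokenStream(tokens: string[]): ReadableStream<string> {
  return new ReadableStream({
    start(controller) {
      // Each enqueue reaches the consumer immediately; the connection
      // stays active instead of idling until the full answer is ready.
      for (const t of tokens) controller.enqueue(t);
      controller.close();
    },
  });
}

// Consume the stream incrementally, as a chat UI would.
async function collect(stream: ReadableStream<string>): Promise<string> {
  const reader = stream.getReader();
  let out = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    out += value;
  }
  return out;
}
```

The limitation mentioned above follows directly: if you must fetch and preprocess data *before* the first token can be enqueued, that upfront work still counts against the Serverless timeout, and streaming alone does not help.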
Today's Build Week example showcases a GitHub Profile Generator application that combines user input with the user's GitHub data to prompt OpenAI, leveraging Defer's @defer/client/next integration:
```tsx
"use client";

import { useDeferRoute } from "@defer/client/next";
import generateGitHubProfile from "@/defer/generateGitHubProfile";

export default function Index() {
  // trigger our Defer function from our Client Component
  const [generate, { loading, result }] =
    useDeferRoute<typeof generateGitHubProfile>("/api/githubProfile");

  // ...

  return (
    /* ... */
  );
}
```
Learn more about the @defer/client/next helpers in our demo, ready to deploy to Vercel and available on GitHub.
This first Build Week has been an opportunity to share the problems Defer solves for Serverless and LLM applications, as well as for building complex no-code user experiences.