Defer is reaching the end of its service life on May 1, 2024. Please reach out to support@defer.run for further information.
Engineering
February 29th, 2024

Build Week #1, Day 4:
Build long-running OpenAI experiences

Charly Poly, CEO

OpenAI x Defer


Vercel x OpenAI x Defer: beyond streaming

Vercel is the easiest and fastest way to ship LLM experiences. Vercel AI comes with a set of handy helpers to rapidly scaffold chat- and completion-based experiences:

```tsx
'use client';
import { useChat } from 'ai/react';

export default function MyComponent() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    // ...
  );
}
```

Vercel recommends leveraging OpenAI's streaming transport to overcome Serverless timeout limitations. While this pattern is a perfect fit for experiences based on direct prompting, it falls short when data must be fetched and preprocessed before prompting.
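To see why streaming sidesteps the timeout, here is a stripped-down sketch of the transport idea using only Web-standard APIs (no SDK, and not code from the demo): the response body is a `ReadableStream`, so bytes start flowing to the client before the full completion exists. Vercel AI's streaming helpers wrap this same mechanism around the OpenAI SDK.

```typescript
// Hypothetical illustration: stream pre-tokenized chunks to the client.
// In a real route, the chunks would come from an OpenAI streaming response.
function streamTokens(tokens: string[]): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      for (const token of tokens) {
        // each chunk reaches the client as soon as it is enqueued,
        // so the platform never waits on a single long-running response
        controller.enqueue(encoder.encode(token));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```

The catch: this only works when the prompt is ready immediately. Any slow fetch-and-preprocess step still runs *before* the first byte is sent, which is where the timeout problem comes back.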


Today's Build Week example showcases a GitHub Profile Generator application that combines a user's input with their GitHub data to prompt OpenAI, leveraging Defer's @defer/client/next integration:

```tsx
"use client";
import { useDeferRoute } from "@defer/client/next";
import generateGitHubProfile from "@/defer/generateGitHubProfile";

export default function Index() {
  // trigger our Defer function from our Client Component
  const [generate, { loading, result }] =
    useDeferRoute<typeof generateGitHubProfile>("/api/githubProfile");

  // ...

  return (
    /* ... */
  );
}
```
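On the server side, the work behind that route is an ordinary async function that fetches and shapes the GitHub data before prompting OpenAI. The sketch below is illustrative, not the demo's actual code: the `buildPrompt` helper, its wording, and the `GitHubData` shape are assumptions, and the Defer wrapping is shown in a comment.

```typescript
// defer/generateGitHubProfile.ts — hypothetical sketch of the core logic.
// In the demo, this function would be wrapped with defer() from
// "@defer/client" and polled by useDeferRoute on the client.

interface GitHubData {
  name: string;
  bio: string;
  topRepos: string[];
}

// Preprocessing step: merge the user's input with their GitHub data into a
// single prompt. This is the part that doesn't fit a streaming route — the
// data must be fetched and shaped *before* prompting OpenAI.
export function buildPrompt(userInput: string, data: GitHubData): string {
  return [
    `Write a GitHub profile README for ${data.name}.`,
    `Bio: ${data.bio}`,
    `Notable repositories: ${data.topRepos.join(", ")}`,
    `Tone requested by the user: ${userInput}`,
  ].join("\n");
}

export async function generateGitHubProfile(
  username: string,
  userInput: string,
) {
  // fetch() against the public GitHub API (global in Node 18+)
  const res = await fetch(`https://api.github.com/users/${username}`);
  const user = await res.json();
  const prompt = buildPrompt(userInput, {
    name: user.name ?? username,
    bio: user.bio ?? "",
    topRepos: [], // e.g. a second call to /users/:username/repos
  });
  // ...then call OpenAI with `prompt` and return the completion.
  return prompt;
}

// In the demo, the default export would be: defer(generateGitHubProfile)
```

Because this runs as a Defer background function, it can take as long as the GitHub and OpenAI calls need, with no serverless timeout in the request path.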

Learn more about the @defer/client/next helpers in our demo, ready to deploy to Vercel and available on GitHub.
