Defer is reaching the end of its service life on May 1, 2024. Please reach out to support@defer.run for further information.
Defer for LLM

From LLM experiments to products

Defer makes it easy to experiment with LLM features within your applications and scale them for production.

Your application + LLM

Integrate LLM features into your application with pipelines and workflows that live alongside your application's code.
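For instance, a workflow is a plain TypeScript function wrapped with defer() and exported from your project's defer/ folder; a minimal sketch, with a hypothetical translateWithLLM helper standing in for your LLM call:

// defer/translateProduct.ts — deferred functions live in a defer/
// folder, right next to the rest of your application's code
import { defer } from "@defer/client";
import { translateWithLLM } from "../lib/llm"; // hypothetical LLM helper

async function translateProduct(productID: string, locale: string) {
  // runs in the background on Defer's infrastructure,
  // not in your web process
  return translateWithLLM(productID, locale);
}

export default defer(translateProduct);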

Effortlessly scale your workload

Easily distribute your LLM workload across thousands of executions, with no duration limit, to power your embeddings or data classification.
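Fanning out is a matter of calling a deferred function in a loop; a minimal sketch, assuming the deferred generateEmbeddings function shown further down and a hypothetical listWorkspaceIDs helper:

import generateEmbeddings from "./defer/generateEmbeddings";
import { listWorkspaceIDs } from "./lib/workspaces"; // hypothetical helper

// each call enqueues one independent background execution on Defer;
// executions run in parallel and are not subject to a duration limit
for (const workspaceID of await listWorkspaceIDs()) {
  await generateEmbeddings(workspaceID); // resolves once enqueued, not once run
}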

Integrated with Vercel

Connect Defer directly to your Vercel applications to power delightful LLM experiences.
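A minimal sketch of triggering a workflow from a Next.js API route on Vercel, assuming the deferred generateEmbeddings function shown below:

// pages/api/embeddings.ts — a Next.js API route deployed on Vercel
import type { NextApiRequest, NextApiResponse } from "next";
import generateEmbeddings from "../../defer/generateEmbeddings";

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse,
) {
  // enqueue the long-running workflow on Defer and respond immediately,
  // keeping the serverless function well under its timeout
  await generateEmbeddings(req.body.workspaceID as string);
  res.status(200).json({ enqueued: true });
}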

Iterate & ship.

Productize your LLM workflows.

import { defer } from "@defer/client";
import { supabaseClient } from "./lib/supabase";
import { openAiClient } from "./lib/openai";
import { getRecentlyUpdatedDocuments } from "../lib";
import refreshCategories from "./refreshCategories";

async function generateEmbeddings(workspaceID: string) {
  // 1. retrieve the newly created or updated documents from the database
  const documents = await getRecentlyUpdatedDocuments(workspaceID);

  for (const document of documents) {
    // (OpenAI recommends replacing newlines with spaces for best results)
    const input = document.replace(/\n/g, " ");

    // 2. generate an embedding for each document
    const embeddingResponse = await openAiClient.createEmbedding({
      model: "text-embedding-ada-002",
      input,
    });

    // 3. store the embedding as a vector in our Supabase Vector database
    await supabaseClient.from("documents").insert({
      content: document,
      workspaceID,
      embedding: embeddingResponse.data.data[0].embedding,
    });
  }

  // 4. enqueue another execution that will extract clusters
  //    to refresh categories
  //    (see https://cookbook.openai.com/examples/clustering)
  await refreshCategories(workspaceID);
}

export default defer(generateEmbeddings, {
  concurrency: 5, // limit concurrency to stay within OpenAI rate limits
  retry: 2, // retry to handle transient external failures
});

Generate embeddings for Supabase Vector

Prepare your application's data with Defer's long-running workflows to generate OpenAI embeddings.
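From your application code, the same deferred function can then be invoked in several ways; a sketch based on @defer/client's delay and awaitResult helpers:

import { awaitResult, delay } from "@defer/client";
import generateEmbeddings from "./defer/generateEmbeddings";

// fire-and-forget: enqueue a background execution and move on
await generateEmbeddings("workspace_123");

// schedule the same workflow to run in one hour
const generateEmbeddingsLater = delay(generateEmbeddings, "1h");
await generateEmbeddingsLater("workspace_123");

// or block until the execution finishes and retrieve its result
const generateEmbeddingsWithResult = awaitResult(generateEmbeddings);
await generateEmbeddingsWithResult("workspace_123");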

Empowering the best LLM teams

Truewind
AI-powered bookkeeping and finance
"Defer has been a great execution platform for Truewind to run LLM-related workflows. [...] It allows us to focus on shipping features rather than reinventing wheels."
Founding Staff Engineer, Truewind
Ellis
Reach out to leads, like you know them
"Inference takes a long time and APIs are unreliable. Defer allows us to run expensive inference jobs in their service so we can focus on shipping."
Co-Founder, Ellis

Background jobs reinvented

Copyright ©2024 Defer Inc.