Processing documents with AI involves compute-intensive, long-running processes that can fail at many points along the way:
Defer's workflow pattern helps by dividing the whole process into smaller sub-tasks, each with its own rate limits and retry configuration, for optimal use of processing power.
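The per-task retry idea can be sketched in plain TypeScript. This is an illustrative stand-in, not Defer's actual API: `withRetry` and its options are hypothetical names showing how each sub-task could carry its own retry policy.

```typescript
// Hypothetical sketch: give each sub-task its own retry policy,
// in the spirit of per-task retry configuration in a job runner.
type RetryOptions = { retries: number; delayMs: number };

async function withRetry<T>(
  task: () => Promise<T>,
  { retries, delayMs }: RetryOptions
): Promise<T> {
  let lastError: unknown;
  // One initial attempt plus `retries` additional attempts.
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await task();
    } catch (err) {
      lastError = err;
      // Wait before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

A flaky transcription call wrapped this way would be retried a bounded number of times instead of failing the whole workflow.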
Today's Build Week demo showcases a workflow that consumes large meeting video files and extracts the audio as text using OpenAI Whisper.
This Defer workflow parallelizes the video processing while respecting OpenAI Whisper's 20MB file upload limit and sharing the OpenAI account's rate limit across tasks:
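The two constraints above (the 20MB upload limit and bounded parallelism) can be sketched as follows. This is not the actual workflow code: `transcribeChunk` is a stand-in for the real Whisper API call, and the helper names are illustrative.

```typescript
// Illustrative sketch: split extracted audio into chunks under the
// upload limit, then process chunks with bounded parallelism.
const MAX_CHUNK_BYTES = 20 * 1024 * 1024; // 20MB upload limit from the post

// Split a buffer into consecutive chunks, each at most `maxBytes` long.
function splitIntoChunks(buf: Buffer, maxBytes: number): Buffer[] {
  const chunks: Buffer[] = [];
  for (let offset = 0; offset < buf.length; offset += maxBytes) {
    chunks.push(buf.subarray(offset, offset + maxBytes));
  }
  return chunks;
}

// Run `fn` over `items` with at most `limit` tasks in flight,
// preserving result order.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T, index: number) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // synchronous claim, safe in single-threaded JS
      results[i] = await fn(items[i], i);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}
```

In the real workflow, each chunk would be handed to a transcription sub-task; capping the concurrency is one way to stay within a shared account rate limit.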
The meeting video transcript workflow above is available on GitHub with one-click deployment to Vercel.
This first Build Week has been an opportunity to share the issues Defer solves for Serverless and LLM applications, as well as for building complex no-code user experiences.