Ahmad Awais
CEO, Langbase.com
"Did we really just do what we really just did?" was the feeling when I built the first version of a new computing primitive in the LLM space. It's called a Pipe: a Prompt Instructions Personalization Engine. A pipe is language- and framework-agnostic; it's like a serverless function, and it works anywhere with virtually any LLM, even a small mixture-of-experts model. Simple, right? Not exactly rocket science, not groundbreaking, but oh-so-powerful.
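To make the idea concrete, here is a minimal sketch of a pipe as a serverless-style function, modeled from the description above rather than from Langbase's actual API. The names (`createPipe`, `PipeConfig`, `echoModel`) are hypothetical, and the model is a stub standing in for any real LLM provider.

```typescript
// Illustrative sketch only: a "pipe" bundles prompt instructions and
// per-user personalization around a model call. All names here are
// hypothetical; a real pipe would call an actual LLM provider.

type PipeConfig = {
  instructions: string;                    // system-level prompt instructions
  personalize?: (user: string) => string;  // optional per-user context
};

type Model = (prompt: string) => Promise<string>;

// createPipe returns a plain async function: deployable anywhere a
// serverless function runs, agnostic of framework and of the model behind it.
function createPipe(config: PipeConfig, model: Model) {
  return async (userInput: string, user = "anonymous"): Promise<string> => {
    const context = config.personalize ? config.personalize(user) : "";
    const prompt = [config.instructions, context, userInput]
      .filter(Boolean)
      .join("\n");
    return model(prompt);
  };
}

// Stub model: echoes the assembled prompt so the composition is visible.
const echoModel: Model = async (prompt) => `MODEL SAW:\n${prompt}`;

const supportPipe = createPipe(
  {
    instructions: "You are a helpful support agent.",
    personalize: (user) => `User name: ${user}`,
  },
  echoModel,
);

supportPipe("How do I reset my password?", "Ada").then(console.log);
```

Because the returned value is just an async function, swapping the model or redeploying the pipe to another runtime changes nothing about how callers use it.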
It scales so well I could hardly believe it myself: 984 million tokens generated in three weeks. It's an undeniably brilliant full-stack AI developer experience.
In this talk, I'll explain why an LLM Pipe is genuinely a winning bet and how it will change the way we build gen-AI use cases with LLMs. I'll also discuss how you can create your own custom AI operating system on top of Langbase.
By the end of this talk, you'll walk away prepared to build your own generative AI LLM web apps in less than five minutes. You'll feel confident building any AI use case, from a simple chatbot like ChatGPT to a complex RAG engine, with stellar developer experience (DX). You'll be decisive in crafting apps with lightning-fast iteration velocity and high performance by default. Let's go!