Podcast

Predibase Founders Dev Rishi and Travis Addair

Posted on

June 11, 2024

Listen to this Predibase episode on Spotify, Apple, Amazon, and Podcast Addict | Watch on YouTube.

Today, Madrona Partner Vivek Ramaswami hosts Dev Rishi and Travis Addair, co-founders of 2023 IA40 Rising Star, Predibase. Predibase is a platform for developers to easily fine-tune and serve any open-source model. They recently released LoRA Land, a collection of 25 highly performant open-source fine-tuned models, and launched the open-source LoRA eXchange, or LoRAX, which allows users to pack hundreds of task-specific models onto a single GPU.

In this episode, Dev and Travis share their perspectives on what it takes to commercialize open-source projects, how to build a product your customers actually want, and how to differentiate in a very noisy AI infrastructure market. This is a must-listen for anyone building in AI.

This transcript was automatically generated and edited for clarity.

Vivek: To kick us off, maybe you can start by telling us a little bit about the founding story of Predibase. Dev, you were a PM at Google in the Bay Area for about five years. Travis, you were a senior software engineer at Uber in Seattle for about four years. How did you meet and co-found Predibase in 2021?

Travis: The company originally started out of Uber. Our co-founder, Piero, was working at the time as an AI researcher on Uber’s AI team on a project called Ludwig. He and I worked together. He came to me in March of 2020 saying he thought there was an interesting opportunity to productize a lot of the ideas from Ludwig around making deep learning accessible, and to bring that to other industries and organizations, and to personas beyond data scientists: folks like analysts and engineers as well.

It started getting a lot more serious in the summer of 2020. We started saying, “We definitely need someone who isn’t purely an engineer. Let’s try to find someone who knows how to build good products and can help think through some of the go-to-market issues,” and that’s how we ultimately came to meet Dev. Every other conversation I’d had up to that point was very bidirectional: “What are you guys doing? What do you think the opportunity is?” It was a lot of selling on our side. Dev came in with a presentation: “Here’s how I think you can turn this into a business.” I was like, “Okay, this guy’s pretty serious.”

Dev: On my side, before Predibase I had spent time as a PM at Google, where I worked on a number of different teams. I saw how Google did machine learning internally. For a while, I was jaded on this idea of making deep learning and machine learning a lot easier. I’d seen it, we had tried to do it at Google, and it hadn’t gone as well as we wanted. I originally got in touch with Piero and Travis through a mutual contact because I was introduced as a skeptic on the space. The thing I had said was, “I’ve seen a lot of people die on this hill of trying to make machine learning a lot more accessible.”

I remember in the first meeting I had with Piero and Travis, I didn’t immediately understand what the vision was, but then I tried the projects they had open sourced, and they walked me through a presentation of how they were thinking about it. That’s when it clicked, and I knew there was something here that could be a more differentiated approach. We all got to know each other over the summer of 2020, and we officially started the company at the beginning of 2021.

Vivek: It’s funny how different the deep learning and machine learning space was three or four years ago compared to where we are today. It’s almost hard for people to imagine anyone being jaded about it when you see all of the optimism today. I’m curious: what was the aha moment when things really clicked? You mentioned, Travis, that the original vision was around Ludwig and these open-source projects. Often, there’s a lot of difficulty in going from a cool open-source project to “Is this a business?” So what was that aha moment? When did you start to see things really click?

Travis: What convinced me about the idea of making deep learning accessible was an extrapolation of what I perceived to be a trend happening in the industry, on two different fronts. One was the consolidation of data infrastructure from data swamps, data lakes, and unstructured data towards more canonical systems of record. Data was getting better and more organized. The other was on the modeling side: model architectures were consolidating as well, towards transformer-based architectures. There were only a handful of models that people were using in production and fine-tuning, as opposed to having to build novel model architectures from scratch with low-level tools like PyTorch.

My bet was that eventually these things were going to converge, and training or fine-tuning models on proprietary data would become a one-click type operation in a lot of cases, where you’re not having to think very critically about how many layers the model has or do a lot of intense manual data engineering. It was going to be much more about the business use case, the level of the problem that someone like an analyst or an engineer would work at, as opposed to a data scientist.

I definitely believe this has been proven out to some extent over time. Of course, I’m sure we’ll get into it, but with large language models, it happened in a bit more of a lightning flash as opposed to a slow burn. But that trend is definitely continuing.

Vivek: It’s interesting that you all started talking in the summer of 2020 and then officially formed the company. When we think about the start of this most recent AI era, it really began when ChatGPT was released at the end of 2022 and everyone started talking about large language models and AI. It’s so interesting talking to companies that were founded before that moment. How has the original vision, formed in that pre-LLM era, changed or not changed since ChatGPT was released? What has and hasn’t changed at Predibase in terms of how you’re thinking about the future and the vision for the company?

Dev: It’s funny, because a cheeky answer to this question could be that our vision hasn’t necessarily changed that much, but our tactics have changed entirely. Our initial vision was a platform for anyone to be able to build deep learning models. We started with data analysts as an audience in 2021. We had an entire interface built around a SQL-like engine that allowed them to use deep learning models immediately. That’s where the name Predibase came from: Predibase is short for predictive database, and the idea was to take the same types of constructs we have in database systems and bring them to deep learning and machine learning.

At the end of that year, we found that analysts weren’t our niche. The people who wanted to use us were more like developers and engineers, the types of people who say, “Hey, just give me some good technical documentation and I’ll figure this out.” That’s where we started to shift from an audience standpoint. Our vision became: how do we make deep learning accessible to this type of audience? When we pitched deep learning to a lot of organizations in 2021, we framed its value propositions as, “Hey, you can start to work with unstructured data like text and images. You get better performance at scale. And oh yeah, you can use some of these models pre-trained, so you don’t need as much data to get your initial results.”

The biggest change is that the last value proposition, which was third on our list, has become the thing people care about most. If I think about what was popular in our platform in 2022: people’s eyes sort of glazed over when we said deep learning was cool, but what they liked was a dropdown menu where you could select different text models, one of which was a heavyweight model called BERT. You could use it pre-trained, so they didn’t need as much data to get started. They loved the idea that they could actually fine-tune it inside the platform. At the time, it was just one feature among many on the platform.

In 2023, when large language models arrived in a large way, we started to think about what we wanted our platform to be. One of our very first takes, and maybe my very first take specifically, was that LLMs were just another dropdown item in the menu: you had BERT and all these other deep learning models, and now we’d add Llama, for example. But we needed to recognize that the market had changed how it thought about machine learning. It was no longer thinking about training models first and getting results after; it was thinking about prompting, fine-tuning, and then re-prompting a fine-tuned model. Our tactics shifted significantly. We went through what we considered a product pivot in 2023 to better support large language models. Funnily enough, it’s still in service of the same vision: how do we make deep learning accessible for developers?

Vivek: It’s almost as if the vision has stayed the same, but the market has come to you, in some ways like, “Hey, we were talking about deep learning and machine learning before it was ‘cool’ in the current context.” Large language models have opened up a whole new sphere of who you can market to.

Dev: We definitely got lucky there. We had to meet the market halfway. We had to make sure we were responsive and not trying to meet the market using our old form factors or our old tactics. The market did come to us, essentially saying, “Hey, of the three value propositions you mentioned, one or two really matter. How do you center your offering around that?” That’s been one of the most helpful things for the business. As a startup, I find one of the biggest challenges is getting people to care. How do you get anyone, whether at another startup or a large enterprise, to spend 45 minutes with you? One of the nice things that’s happened over the last year and a half is we no longer have to explain why deep learning matters. We frame it as LLMs and being able to fine-tune them.

Vivek: The big thing for startups, and a lot of the founders who listen to this podcast will attest to the same, is that you have this great idea in your head, and you see the tech maybe before anybody else does, but then there’s the question of, well, why should anyone care? How do you go from “This is a cool open-source project” to “This is something we can commercialize, and people and businesses are willing to pay for it”?

Dev: A lot of open-source projects are very popular on GitHub, and I do think a subset of those are probably best left as open-source software. They’re frameworks that make something easy but don’t necessarily need the full depth of a commercial, enterprise-ready product around them. No one knows more about the challenges of taking some of these open-source frameworks and running the infrastructure than Travis. He’s been working on it directly, both for Predibase and with users who have tried to do it independently. Travis can talk about the challenges of running these open-source frameworks without a commercial offering, and why we thought there was a real commercial business to be built around them.

Travis: When it comes to open source, and particularly open-core models, the easiest argument to make is that at Uber we had a team of 50 to 100 engineers working on building infrastructure for training and serving models. The cost of that is quite significant, even for a company like Uber. For companies that don’t consider this part of their core business (maybe they consider it core infrastructure, but it’s not differentiated IP for them), you could invest in building an entire team around it, or you could just pay a company like Predibase to help solve those challenges. With our most recent project, LoRAX, for example, there’s a good open-source piece of software that can be productized and productionized and used in situations where you need high availability, low latency, and all those sorts of things. That’s the layer on top. We have internal software running on top of Kubernetes, across multiple data centers, to optimize the availability, latency, and throughput of the system beyond what’s in the open source.

That’s inevitable when you’re talking about something that’s going to be used day in and day out, thousands of times a day: what start off seeming like long-tail issues, like a failed request or a service being down for some period of time, become mission-critical at certain points. That’s where there’s a good opportunity to appeal to organizations that need that. There’s a good synergy: they need those particular levels of service, we’re able to offer them, and that’s something they’re willing to pay for at that point.

Vivek: It sounds like, unlike many open-source projects where you launch the project and then say, “Hey, let’s see if people are willing to pay or not,” here you knew from the very start that there was a willingness to pay, given your time at Uber and having seen this at scale. Let’s start with the open-source project and the product that’s out there, get people to try it, and there’s definitely a willingness to pay. It’s a different angle. With many open-source projects, you wait for a lot of people to use them, ask, “Hey, are people willing to pay or not?” and then have that debate at that point.

Travis: Dev actually has a good analogy about the front of the kitchen, back of the kitchen when it comes to this sort of thing. I think that serving definitely is this very front-facing thing where you have to get every detail right, and those minor differences make a huge difference in terms of the overall value of what you’re offering. So yeah, I’m not sure, Dev, if you wanted to maybe speak more to that.

Dev: I have two analogies I’ll throw out. The first is about whether there is a commercially viable business around an open-source project. The way I think about it: for us, Ludwig and LoRAX are the engine, and what we’re trying to do is sell the car. There are some people, maybe advanced auto manufacturers, who just want an engine to put in their tractor or some other kind of setup. Most people want to buy the fully functioning car, something with doors that unlock, a steering wheel, and other things along those lines. In our world, that’s the ability to connect to enterprise data sources, deploy into a virtual private cloud, and get observability into deployments that you don’t necessarily get from running the open-source projects directly, and finally to connect that engine to a gas line, which in our world is the underlying GPUs and cloud infrastructure this all runs on.

The second analogy is for how we think about the product. The other piece you always have to figure out is how much the core problem the open-source product solves really matters to the end customer, and what the visibility around it is. Some things can be done in the back of the house of a kitchen: maybe someone has an internal pipeline for doing something. It doesn’t need to be pretty or production-ready. It could be so much better with a commercial product built on top of the open source, but it’s not mission-critical, and they’re not going to lose customers and users if it doesn’t work flawlessly. Think especially of those internal pipelines.

The front of the house, to me, is taking, for example, fine-tuned models and serving them very well: things that go in front of customers and user traffic. This is the part of the restaurant that you want to make sure serves folks really, really well. I’ll need to figure out how to combine the car analogy and the restaurant analogy, but to me, the car analogy is how we figure out the commercial viability around the open-source projects, and the restaurant analogy is how you think about whether an open-source project is important enough to justify that commercial viability.

Vivek: I love it. Don’t be surprised if we steal both of those analogies for some of our own companies, because it’s a really important distinction: selling the car versus the engine, and the front of the house versus the back of the house. At the end of the day, all of these things roll up to what’s most important for customers.

One of the things I love when we talk about the front of the house, or what people see, is that on your website you have a great tagline: “Bigger isn’t always better.” With the explosion of GPT and everything around it, we’ve been hearing about these models with billions of parameters. For a while, it was bigger is better: we’ve got to create the best model, and how many parameters is GPT-5 going to be? In your view, why is bigger not always better? Specifically, for models and the customers you serve, where do you find that bigger is not better?

Dev: My favorite customer quote is, “Generalized intelligence is great, but I don’t need my point-of-sale system to recite French poetry.” Customers have this intuition that bigger isn’t better, and we don’t always have to convince them much. They sort of hate the idea that they’re paying for a general-purpose, high-capacity model to do something rote: “I want it to classify my calls and tell me whether I requested a follow-up or not.” It’s a very common type of task people might do using GPT-4. Today, they have a model that can do everything from that to French poetry to writing code. There’s this intuition that when you’re using a large model like that, you’re paying for all that excess capacity, in literal dollars but also in latency, reliability, and deferred ownership.

When I talk to customers, they’re very enamored with this idea of smaller, task-specific models that they can own and deploy, right-sized to their task. What they need is a good solution for something very, very narrow. The catch for customers is: can those small models do as well as the big models? It’s a fair question. If you’ve played around with some of these open-source models, especially the base model versions, you have the intuition that they don’t do as well as the big models as soon as you start to prompt them. That’s where we’ve invested a lot of our research time: figuring out what actually allows a small model to punch above its weight and be as good as a large model.

What we’ve unlocked might not be a massive secret: it’s been around data and fine-tuning. What we found is that if you fine-tune a much smaller model (a seven-billion-parameter model, a two-billion-parameter model, probably one or two orders of magnitude smaller than some of the bigger models people are using), you can get to parity with or even outperform the larger models, and you can do it much more cost-effectively and a lot faster, so you don’t have to wait for that spinner you often see with some of these larger models.

Travis: A big aspect of this, which every organization should think about, is the type of task they’re trying to do with the model. If you’re primarily interested in very open-ended tasks, say a chat application where the user can ask anything from generating French poetry to solving a math problem, you do need a lot of capacity. That’s why ChatGPT is as successful as it is: there’s no knowing a priori what type of question the user is going to have in mind, so you need something very general purpose.

When you productionize something behind an API, it’s just an endpoint: you’re calling it with “classify this document,” and you’re going to call that over and over again, thousands of times. You don’t need all those extra parameters. At that point, the complexity of your task determines the capacity of the model you need. The less capacity you need, the smaller the model you can use, and the lower the latency. People should be evaluating this on a task-by-task basis.

Vivek: I feel like we’re now at the moment where the balance is swinging back to: I might not need this really, really massive model with a trillion parameters; I need something that works for me and my use case. Dev, you mentioned customers and what they’ve been saying to you. Take us to the first customer. How did your first customer come through the door? How did you land them?

Dev: I wish there were a very repeatable lesson for founders here, but sometimes your first customer is a little bit of luck mixed with a little internal network and elbow grease. I remember we started the company in March 2021, and I had no idea how we were supposed to get our first customer. The common advice people gave us, and I think it’s correct, is that your first few customers are probably in your network. Looking at my network initially, I didn’t really know where to start digging. Then we got an inbound from a large healthcare company based here in the US. They were curious because they had seen Ludwig out there. They weren’t even active users at the time, but they had seen Ludwig solve a very specific use case that Uber had published a case study around, which was customer service automation. They wanted to know if Ludwig could be applied inside their organization.

I’ll never forget that the very first customer meeting we had was with this organization. We started in March; this meeting was in April. It was an hour-long meeting where we walked them through what Ludwig had been for a few minutes, but also our vision and what we were building with Predibase. The meeting ended with, “If you guys have a design partner signup sheet, just put our name on that list.” That was the end of our very first customer meeting, and it came in because of that open-source credibility. I walked away thinking, “Are they all that easy?” They’re not, just as a very quick recap. The first one for us did come in a way through network, but really through organic open-source inbound.

From there, what we’ve found helpful is the repeatable lesson of content that attacks a use case somebody cares about and lets them come to us. That’s still a pretty effective channel now as we land our next sets of customers.

Vivek: See, folks, it’s just that easy: start the company and a month later someone’s going to ping you. Having that channel, as you mentioned, is one of the great benefits of open source: people can start playing with it, someone may find there’s a lot of intrinsic value, and they say, “Hey, how do we go from where we are today to doing even more with this?”

Zooming out a little, I would say that to the outside observer, at least today compared to a few years ago, the AI infrastructure space has become very, very crowded. There seem to be a lot of infra companies building at the inference layer and the training layer, and doing things around fine-tuning. Often, to the outside observer and even sometimes to the inside observer, it’s hard to tell what’s real, what’s working, what’s different, and whether we need all these products.

In some ways, it’s really healthy, because it gives people a lot of options, and when we’re early in the AI era, as we are right now, you need a lot of those options and there’s a lot of space to build. How do you both think about it? On one hand, there’s the day-to-day, maybe hand-to-hand combat against some products more than others; on the other, there’s the long term: how do we resonate, stay above the noise, and build for the long haul?

Dev: This market is extremely competitive, and there is a lot of noise in the system. In terms of staying above the noise: especially when you don’t have that hour with a customer, you need to build a brand where people who look at you for maybe a few seconds or a minute can make an assessment of, “Is this worth my time?” The only way we’ve seen that work is to do work that advances the ecosystem yourself.

What I’m saying is you need to do something that can be a narrow slice but is somewhat novel, a differentiated take. There are two ways we’ve thought about doing this. The first: people have always liked the idea that small, fine-tuned, task-specific models will be able to dominate these larger models. I spoke with a customer who said something that stuck with me: “I want to believe that these small task-specific models actually will be the way my organization goes, and I want to use open source, but I just don’t know if it actually works.”

The world out there today is a lot of anecdotal experiments and memes on Twitter and elsewhere. One of the first things we did was benchmark and publish our results. In February, we put out a launch called LoRA Land (a mix of La La Land and a play on LoRA fine-tuning, which is how we did the process), where we took 29 datasets and fine-tuned Mistral-7B against them. We initially wanted to see how much fine-tuning helps over the base model. What we actually found was that fine-tuned Mistral-7B was at parity with, or in many cases outperformed, GPT-4, even when we did some prompt engineering and tried to find the prompts that worked best for both.
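For readers who want to see what that looks like in code, below is a minimal sketch of LoRA fine-tuning using Hugging Face’s peft library. It is illustrative only; the rank, target modules, and other hyperparameters are placeholder choices, not the recipe Predibase used for LoRA Land.

```python
# Minimal LoRA fine-tuning sketch using Hugging Face peft.
# The base model name is real; the rank, target modules, and other
# hyperparameters are illustrative assumptions, not the LoRA Land recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)  # used to tokenize the task dataset
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# LoRA freezes the 7B base weights and trains small low-rank update
# matrices injected into selected layers (here, the attention projections).
config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling applied to the update
    target_modules=["q_proj", "v_proj"],  # which modules receive adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# ...train with a standard Trainer / training loop on the task dataset...

# The artifact you ship is just the adapter (tens of megabytes),
# not a full copy of the 7B base model.
model.save_pretrained("mistral-7b-task-adapter")
```

The last line is the part that matters for what follows: each fine-tuned task produces a tiny adapter rather than a new multi-gigabyte model.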

That became a moment where we went semi-viral. We were at the top of Reddit for a little while. We had a partnership with Hugging Face and Mistral to re-share it; Yann at Meta also re-shared it, I think. It was a way for us to start putting some data into the industry. We also released all of these models as open source, and we even built a little interactive playground where people could play with the models firsthand and see what the model performance would actually look like.

From there, we’ve scaled this out 10x. We’ve now trained not just 27 models but over 270, because we stopped benchmarking only Mistral. People would ask, “What about Llama 2? What about Microsoft’s Phi?” We’ve added a number of different models, like Google’s Gemma, to the list, and we’ve started building out our own internal benchmarking to understand what the fine-tuning leaderboard looks like. We’ve put this content and these models out there, and there will be more on that very soon. That was one way we thought about advancing the ecosystem.

The second way, honestly, is building novel frameworks that didn’t exist before. The best example of this is LoRAX. I’ll let Travis speak to LoRAX as the lead author and creator of the framework, but one of the things that made it very popular was that we attacked a problem in a way no one else had been thinking about, and that really helped us cut above the noise.

Travis: To Dev’s point, attacking a narrow slice of the market is the only way I’ve found to stay above the noise. The reality is, pre-LLMs, so much of our focus was on getting people to care, even to understand what the value proposition was, and now everyone cares. Therefore, there are tons of people in the market competing for attention, and many of them are much better capitalized than we are. We’re talking about companies with hundreds or, in some cases, thousands of employees working on this stuff.

The challenge on the product and engineering side was to find something that, while they were technically capable of doing it, we could do better through sheer focus and execution. We saw an opportunity with multi-LoRA inference in the second half of last year. It was definitely on people’s radars; there were some early blog posts about it and some research happening at institutions like the University of Washington and UC Berkeley, but no one had productized anything in this space. We launched LoRAX in November of 2023 and tried to make clear that this was a paradigm shift: instead of productionizing one general-purpose model, organizations could productionize hundreds or thousands of very narrow, task-specific models. But that raised the essential question: how do you do that at scale in a way that doesn’t break the bank? The previous conventional wisdom was that every model needs a GPU at a bare minimum; if you have hundreds of models and hundreds of GPUs, you’re paying tens of thousands of dollars per month.

Breaking down that conventional wisdom was the first way we saw to attack this problem, the goal being to establish ourselves as a thought leader in that particular space: building very task-specific models in a way that’s cost-effective and scalable. LoRA Land was a way of building on top of that, saying, “Now that we have this foundational layer with LoRAX, here’s what you can do with it.” That demo of swapping between all these different adapters, each better than GPT-4 at its specific task, with sub-second latency, started to prove to people that there’s actually something real here. Not to diminish research, but it wasn’t just research; it’s something you can use in your organization today.
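To make the multi-adapter idea concrete, here is a rough sketch of what querying a LoRAX deployment can look like: a single base model loaded on one GPU, with the LoRA adapter selected per request. The server URL and adapter names are hypothetical placeholders, and the request shape is modeled on LoRAX’s generate endpoint; check the LoRAX docs for the current API.

```python
# Sketch: querying one LoRAX deployment with a different LoRA adapter per
# request. A single base model is loaded on one GPU; the adapter_id
# parameter selects which fine-tuned adapter serves each request.
# The URL and adapter names below are hypothetical placeholders.
from typing import Optional

import requests

LORAX_URL = "http://localhost:8080/generate"

def generate(prompt: str, adapter_id: Optional[str] = None) -> str:
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": 64}}
    if adapter_id is not None:
        # LoRAX hot-swaps the requested adapter onto the shared base model.
        payload["parameters"]["adapter_id"] = adapter_id
    resp = requests.post(LORAX_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["generated_text"]

# Hundreds of task-specific models, one GPU: vary the adapter per call.
print(generate("Classify the sentiment: 'Great service!'",
               adapter_id="acme/sentiment-adapter"))
print(generate("Summarize this support ticket: ...",
               adapter_id="acme/ticket-summary-adapter"))
```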

Vivek: I love reading the Predibase benchmarking reports and seeing how these various models do. There’s almost a sense of fun every time a model comes out: “How’s it going to do? How well does it perform relative to all these benchmarks and all these other open-source models out there?” And because you’re so close to this and have a great perspective: Llama 3 just launched, and Meta is crushing the open-source model game right now; obviously, they’re spending a lot of money, resources, and time on this, and it seems to be resonating really well initially. I’m curious: how do you see the open-source model world playing out over time? Will we have a handful of providers of large open-source models (Llama, Mistral, Google), or will there be a long tail of developers and many different types of open-source models and providers?

Travis: My opinion is that it will break down a little bit by model class. These larger foundational models, particularly the ones being open-sourced for general applicability rather than built internally on proprietary data as IP, are constrained by the resources of larger organizations. It’s not something that’s generally accessible to a small group of enthusiasts today. That might change in the future, but right now the two big barriers are that you need lots and lots of compute and lots and lots of data, and both are difficult to come by.

I do think that, at least for the foreseeable future, we’ll have to lean on some of these larger organizations, like the Metas of the world, to provide those foundational models. Where I have a lot of optimism about the ability of the less well-capitalized, the GPU-poor, to make inroads is on the fine-tuning side. As fine-tuning matures from the research and data-efficiency points of view, there’s an opportunity to create much better-tailored models for specific tasks that still have general applicability and can be valuable to lots of organizations beyond the individual that made them.

Once that becomes true, there’s a whole new space of creators, similar to how content creators make art or videos or music, able to create fine-tunes that attack very specific problems and have people consume them at scale. That’s a very interesting opportunity that I believe is on the horizon. We’ve seen it to some extent in the computer vision space with diffusion models, where diffusion LoRAs for style transfer are becoming mainstream and communities are forming around finding different LoRAs that adjust the way models generate images. I definitely think that moment is coming for large language models as well, where this work moves beyond individual organizations with lots of data to something more open, transparent, and community-driven.

Vivek: Well, this leads me to my next question, which is for both of you, what do you think is over-hyped and under-hyped in AI today?

Dev: Over-hyped today: chatbots. A lot of organizations started to see value in GenAI after GPT-3. The very first thought organizations had was, “I need ChatGPT for my enterprise.” We were talking to some companies about a year ago, and they would say, “I need ChatGPT for my enterprise.” I’d ask, “Great, what does that mean to you?” They’d say, “Don’t know. I need to be able to ask the same ChatGPT-style questions, but over my internal corpus.”

A lot of the early AI use cases have been about building a chatbot you can ask questions over documents, largely because the interface that went viral was ChatGPT: the ability to do this in a consumer setting. The way I think about GenAI models is that you essentially have an unlimited army of high-school-educated workers that can do different workflows. If you had an unlimited army of knowledge workers, is the most interesting thing you’d apply them to really just better Q&A and better chat? I struggle to think that’s the case. Instead, I think a lot of the value is going to be in automation workflows. We’ve used the back-of-kitchen analogy, but there are also back-office tasks, ones that are repetitive and mundane: how do I automate document processing? How do I automate replying to these items? We’ve started to see this become more of a thing.

That’s where a lot of the future of AI is going to go. The over-hyped sentiment is all these organizations saying, “I want ChatGPT for my enterprise.” They should probably start thinking a lot more about, “How do I use the fact that I have access to this large talent pool that has general-purpose knowledge and can be fine-tuned to do a particular task very well?” I think of it like a college specialization: I can take this high-school-level agent, give it a college specialization in how Predibase does customer support, for example, and put it to work. That’s the biggest delta I see between what’s hyped in the market today and where I think a lot of the production workloads are going over the next 12 to 24 months.

Travis: I liken it to saying that it’s the boring AI that’s really under-hyped right now, but that’s where most of the value is going to come from. In any hype cycle, there’s this overly optimistic view that we’re going to get 1,000x productivity improvements because we’ll replace every knowledge worker with some AI system. Already, the reality we’re starting to see unfold is that it’s not that easy. It’s never that easy, by the nature of these things: the 80/20 law. Getting the little details right ends up being where the majority of the effort is spent, but those details matter.

We’re still quite a way from generalized intelligence and chat interfaces being able to do everything, like replace coders. But it’s very real that we can get material productivity and efficiency improvements on the order of 20% here, 50% there, on very specific parts of the business. It’s going to be through these very narrow applications, to Dev’s point: “We have a system here that requires humans to manually read all these documents. What if we can automate that into something that turns them into JSON or a SQL table, so people can run a few quick sanity checks and send the result downstream to the next system?” Those are the sorts of things I can see having a very meaningful impact on the bottom line of businesses, and those are the things that are actually attainable.
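As a rough illustration of that kind of back-office workflow, here is a small sketch of document-to-JSON extraction with sanity checks before anything goes downstream. The field names, prompt, and generate callable are hypothetical stand-ins for whatever model endpoint is actually deployed.

```python
# Sketch of the "boring AI" workflow described above: extract structured
# fields from a document with an LLM, sanity-check them, then hand the
# record to the next system. generate() is a placeholder for whatever
# model endpoint (e.g., a fine-tuned adapter) is actually deployed.
import json
from typing import Callable

REQUIRED_FIELDS = {"invoice_id", "vendor", "total", "due_date"}

PROMPT = (
    "Extract the following fields from the document as JSON with keys "
    "invoice_id, vendor, total, due_date.\n\nDocument:\n{doc}\n\nJSON:"
)

def extract_invoice(doc: str, generate: Callable[[str], str]) -> dict:
    raw = generate(PROMPT.format(doc=doc))
    record = json.loads(raw)  # fails loudly if the model did not emit JSON

    # Quick sanity checks before anything goes downstream.
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    if float(record["total"]) < 0:
        raise ValueError("total must be non-negative")
    return record

# Humans then review only the records that fail these checks,
# instead of reading every document by hand.
```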

Vivek: That’s part of the fun of the hype cycle. Now that the initial euphoria has worn off, what are the really interesting things to build from here? Let’s get into the nitty-gritty and figure out something beyond the initial “okay, let’s go build this chatbot on top of GPT.” There’s so much more you can do with all of this. Let’s end with this: you both came from iconic companies, as did Piero, and you’ve obviously seen a lot of really interesting things and been around some great people, and you’re all first-time founders. If you had to give one tip to a first-time founder, what would it be? Travis, we’ll start with you, and Dev, we’ll end with you.

Travis: The biggest learning for me is that you’re always a bit too ambitious when you start out, about what you think you can do as an individual or as an organization. Oftentimes, particularly if you’ve spent most of your time as an individual contributor, you have this idea that “if I’m a good engineer and we hire 10 good engineers, then we’ll be 10x more productive, and we can do all these amazing things.” The gap between doing something and doing it well enough that people want to buy it and rely on it in production every day is quite big. So be very narrow in the type of problem you tackle early on: “Let’s do something highly specific that maybe doesn’t have a very big TAM in and of itself, but get it working perfectly, and then start to think about where we go from there.” That’s definitely been the biggest learning for me.

Dev: One tip: be wary anytime someone suggests you should just pick something that’s strategically important to your business, like picking your go-to-market motion or picking what you want your differentiator to be. A real risk for first-time founders is that you sit on a couch and ask, “Hey, what could we do that would be really interesting?” That’s a trap that’s easy to fall into, because it doesn’t take in the lens of what customers actually care about. A lot of first-time founders are very smart. They worked at iconic tech companies, and they say, “I saw this happen at, let’s say, Google, or I saw this happen at Uber, so the right way for the future is X, Y, Z.”

That’s a really good starting point, but it needs to be baked into what you’re hearing directly from customers, 100% of the time. It’s very easy to pick something you think would be interesting, cool, and differentiated that customers don’t care about. The thing you’re suggesting might be the right idea, but in a different framing or a different form factor, and you won’t know whether you’re right until somebody is willing to pay an invoice and send you money for it. Make sure you hold that as your primary objective function.

The last bit of advice I liked a lot: everyone who’s done startups emphasizes the importance of velocity. It’s very easy to mix up velocity with “I need to pull 16-hour days and write a lot of code.” To me, velocity is building highly iteratively. How do you get feedback as soon as possible? The easiest way to do that, to Travis’s point, is cutting scope. One of my favorite bits of advice, which is a bit controversial, is that nothing takes longer than a week. I like it because it forces you to think, “How am I going to take whatever I’m building this week and make sure I understand, at the end of the week, whether it’s actually worth doing, whether it’s pointing me in the right direction, whether it’s delivering customer value?”

Both tips are about baking in feedback and listening to customers, and understanding that you want to optimize for that feedback cycle. That’s probably the only way you’ll get to where you’re going.

Vivek: Great advice from both of you. And it’s super exciting to see everything going on at Predibase, at least from the outside. I’m sure it’s even 10 times more exciting and incredible from the inside. Congrats to both of you on all the momentum and really excited to see where things go. Thanks, Dev and Travis.
