The pace of progress in artificial intelligence right now is incredibly exciting. Large pre-trained foundation models such as GPT-4, Claude, LLaMA, and Stable Diffusion are disrupting everything from search to protein folding to nearly every category of design. We suspect these models will become integral parts of the vast majority of products over time, much as relational databases have become ubiquitous components of virtually all applications.
However, this market is moving so fast that it can be difficult to keep up. What are the emerging use cases? What is the state of research in this space? How are foundation models actually built into applications? What common challenges do teams face when productizing them?
We recently put together an internal presentation for the investing team at Innovation Endeavors to answer many of these questions. While plenty of resources on foundation models exist online, we had not seen a comprehensive overview of the space like this one, so we wanted to share it externally in case others find it useful.
If you’re building in this space, we’d love to chat with you. Feel free to email us at davis (at) innovationendeavors.com.