Nick is a Principal on the investment team. His interests and investment focus are at the intersection of advanced engineering, computer science, and the life sciences.
Previously, Nick spent time at Tempus Labs, where he focused on commercial strategy and operations. Prior to that, he worked at Spark Therapeutics and as a strategy consultant at Bain & Company, where he served clients across a range of industries with a primary focus on the life sciences. Nick has a BS in bioengineering from the University of Illinois at Urbana-Champaign.
Originally hailing from Chicago, Nick very much enjoys the northern California climate.
A leap in genomic medicine: why we invested in Exsilio Therapeutics
Much has been discussed over the last few years about the burgeoning world of digital biology. So many tools are emerging that are beginning to make their mark on the way we discover and develop new medicines. At Innovation Endeavors, we think some of the greatest achievements and impact of these tools will be in the life sciences.
This post, however, is not about what AI can do. Rather, it draws parallels between computation and biology, and between software and programmable therapeutics, and it announces our latest investment, Exsilio Therapeutics.
We’ve previously contemplated this analogy between biology and software. Biology computes. It can be much more expressive than an LLM, trained by its evolutionary history and able to operate diverse “functions” across many “instances.” And more and more, biology is becoming programmable, meaning we can model and design structures, encode those in the language of DNA, and produce entire constructs, all in the span of hours.
Moderna (and BioNTech and others) showcased this power on the world stage — going from a sequenced virus to a therapeutic candidate in weeks and then to an approved vaccine in record time. The speed and scale of impact was truly one of the most amazing achievements in the history of new medicine development. This was largely enabled by the emergence of modular components that are the hallmark of genetic medicines, in particular, in their case, an mRNA molecule (for which Katalin Karikó and Drew Weissman shared the 2023 Nobel Prize in Physiology or Medicine for foundational contributions to this class of medicines) and a lipid nanoparticle delivery vehicle.
For a long time now, academics and industry scientists have highlighted the potential of programmable therapeutics. Robert Plenge has written good pieces (here, here) on this idea going back almost a decade, and Peter Marks has been drumming up measured excitement about the possibility of expedited development paths for medicines with previously validated components.
Over the last two decades, the field of genomic medicines has continued to accelerate and expand. The excitement stems from the ability to deliver functional cures for extremely challenging and debilitating diseases. LUXTURNA became the first directly administered gene therapy approved in the US in 2017, and last year brought the approval of the first treatment utilizing CRISPR gene editing technology. With these experiences in hand and many more clinical and commercial stage products, the FDA recently released draft guidance on platform technologies, intended to deliver the development, manufacturing, and regulatory efficiencies we hope programmable medicines will bring.
With that context, we’re excited to announce our investment today in a team and company building a platform to develop programmable genetic elements, Exsilio Therapeutics. They take inspiration from naturally occurring, programmable elements to develop constructs with the ability to insert whole genes. The approach aims to solve many of the current challenges of gene editing approaches — all-RNA payloads that can deliver gene-sized constructs into precise safe harbor sites in disease-relevant cells with a repeatable and titratable approach. This stands to bring forth a new pillar in genomic medicines.
The team bridges in silico and wet lab-based experimentation and is led by Chairman and Interim CEO, Tal Zaks, MD, PhD. As Tal says well, “mRNA-based medicines allow for a software-like approach to creating new medicines.” From his experience as the former Chief Medical Officer at Moderna, he has seen this firsthand. We completely agree and are excited to join our partners Novartis Venture Fund, Delos Capital, OrbiMed, Insight Partners, JP Morgan Life Sciences, CRISPR Therapeutics, Invus, Arc Ventures, and Deep Insight.
As is often the case, challenges in the world of therapeutic development are distinct but related. Cetus and Genentech, which have legacies in the industrial world, burst onto the scene in the 1970s with the revolution in recombinant DNA technologies, and since then, drug modalities have become increasingly varied and complex with each passing year. Today, over 50% of new drugs are produced biologically. We now not only have small molecules, recombinant proteins, and monoclonal antibodies but also myriad therapeutic modalities: drugs with compositions that range from nucleic acids to whole cells, with various chemical or polymeric formulations. Adding to the complexity, the dynamic range of production volume now spans n-of-1 therapies to rare populations to global vaccines, requiring more flexible production scales.
As therapies become ever more complex, we’ll need meaningful manufacturing innovation to produce them. Moreover, Covid-19 highlighted the fragility of our existing biomanufacturing infrastructure. All this to say, we have significant challenges to overcome, and we need manufacturing paradigms that are robust, flexible, and scalable.
Today, we want to share some of what we’ve learned about what it takes to actually make all of that stuff and, ultimately, to design products with manufacturing in mind. We’ll first share some additional context around the challenges. Then, we’ll share some innovative approaches we’ve seen folks taking to tackle these challenges and what we hope to see from an investing standpoint.
One last caveat: While we speak generally here about “fermentation” and “biomanufacturing,” it’s worth noting that this can mean a lot of very different things (see this quick laundry list for orientation).
Unsurprisingly, venture investment has grown as well, with synthetic biology startups raising nearly $18 billion in 2021, nearly as much as in all prior years since 2009 combined. Nearly 80% of these dollars have gone towards companies developing specific products, often in food and nutrition or health and medicine. Another 15% of these dollars went towards organism engineering platforms. Almost none went to biomanufacturing infrastructure or bioprocess development innovations.
As a result, application-focused companies need to hire teams of process engineers and fermentation specialists not only to manage scale-up risk but often also to put large amounts of steel in the ground themselves. This operational approach requires considerable specialized expertise, the ability to manage long lead times, and large amounts of capital — all of which are hard to come by for startups. Organism development companies rely on application-focused companies to shoulder the capital expense and manage technical scale-up risk, both putting their fate in the hands of others and making it hard to capture value.
Paired with evolving market conditions and unforgiving economic targets, these scaling challenges create a tough landscape for growth-stage companies and a fundamental bottleneck for the field as a whole. As a result, we think that some of the most interesting opportunities come from making a dent in these challenges.
Context – pharma
The last two decades have been remarkable. The precision with which we can intervene in disease has greatly expanded. Clinicians now have access to once-unimaginable interventions that significantly modify disease progression rather than simply ameliorating symptoms.
Antisense oligonucleotides were one of the first next-generation modalities to enter the clinic, though the field has had fits and starts after the first therapy (Fomivirsen) was approved in 1998 — only to be pulled from the market a few years later. Over the succeeding 20 years, we learned a lot about stability and delivery and now see significant potential in this programmable therapy.
The first autologous adoptive cell therapies were approved (Kymriah and Yescarta in quick succession) in 2017. These therapies take patients’ own immune cells and supercharge them to attack cancer cells. These treatments cured deadly cancers in a significant subset of patients and show durable responses more than 10 years after treatment.
Spark Therapeutics received the first US approval for a directly administered gene therapy with its treatment targeting a rare form of inherited retinal disease in 2017. This treatment halts the progression of a debilitating disease that ultimately leads to blindness.
Now, what do all these have in common other than being incredible treatments? They are some of the most expensive therapies on the market due to their lengthy and complex manufacturing processes. Yescarta and Kymriah have a list price of $375k and $475k, respectively. Luxturna has a list price of $425k per eye. Spinraza has a list price of $750k in the first year, followed by $375k per year thereafter.
Significant investments are being made in approaches that enable better manufacturing to deliver on the promise of these therapies. If we want these treatments to be available more broadly, we need methods and tools that not only decrease the cost of research and development, but fundamentally of manufacturing.
As usual, brilliant people are getting creative
Over the last year or so, we have seen a whole suite of folks come up with creative ways to tackle these problems:
Build, finance, or get better at what we already know how to do. In the industrial biology world, this tends to look like batch-based, stirred-tank stainless steel bioreactors. Tackling the capacity bottleneck solves a critical problem for young companies, first by allowing folks to shift Capex to Opex and second by radically accelerating time to market. We’ve seen several variations on this theme. For example:
Pair capacity with hard-won expertise, offering process optimization services together with capacity as a CDMO. For example, BioBrew brings decades of expertise from AB InBev to aspiring young biology companies. Perfect Day also just announced the launch of a similar play, Nth bio, to help precision fermentation companies with tech and scale-up services. Resilience is building end-to-end manufacturing services to broaden access to complex medicines and is gobbling up smaller, legacy players to centralize expertise. There are also several newer entrants in the space at various scales, like Planetary and Boston Bioworks.
Plug-in tools or services for existing systems — For example, BioRaptor offers process optimization tools that can be run on in-house systems.
Focus on infrastructure finance — For example, Synonym focuses on financing capacity that others can then operate.
Develop new technology that unlocks more economical, flexible, or otherwise better production methods.
Continuous fermentation — Since the 1950s, batch or fed-batch fermentation has been the standard design for most biomanufacturing approaches. However, too much downtime between batches significantly limits overall productivity. We have seen some companies take (variations on) continuous approaches and realize significant economic gains. For continuous approaches to make sense at scale, companies must tackle challenges like evolutionary drift and find new solutions to manage contamination — Pow.bio is an early company focused here.
Cell-free systems — Cell-free systems possess advantages relative to living organisms in the right contexts. Because they aren’t living organisms, they don’t require central metabolism and, therefore, can be much more efficient relative to inputs. They can also produce products that would be toxic for a host cell. Cell-free expression systems have been used as research tools for more than 50 years but are now increasingly practical for myriad other applications. Solugen and Debut are examples of companies working in the industrial biotech sector. For those interested in learning more about cell-free, we recommend this blog post by our friends at KdT. However, this is not limited to the industrial sector. Swiftscale Biologics, acquired by Resilience, is using a cell-free approach to produce biologics (i.e., proteins used as therapeutics).
Engineered cells for more complex pathways and products — Often, highly engineered systems are the best route for robust expression and product profiles. Research out of Jay Keasling’s lab recently demonstrated biosynthesis of the highly complex plant natural products vinblastine and vincristine, demonstrating a path to scalable production of over 3,000 different related molecules in engineered yeast. GRO Biosciences has developed a platform that leverages a genomically recoded organism to allow for scalable, site-specific incorporation of non-standard amino acids in protein therapeutics. Asimov and 64x Bio design mammalian cell lines that enable scalable production of components for advanced modalities such as cell and gene therapies and antibodies.
Reframing the problem to solve for biology challenges — Scale-up is notoriously challenging as conditions aren’t consistent from small, benchtop bioreactors to thousands-of-liters tanks. This change in the environmental conditions can lead to long process development timelines and, in the worst case, a process that doesn’t scale at the economics required to be successful. One interesting method for side-stepping the problem is the transition from scale-up to scale-out. Rather than continuously scaling capacity, teams leverage the same process scaled across many bioreactors, which limits technical risk and may improve speed to market. Additionally, scaling manufacturing of autologous cell therapies is a major challenge. In this process, cells are taken from a patient, processed ex vivo, and then returned to the same patient. This is a major hurdle for the scalability of this therapeutic approach, and significant investment is flowing into companies like Ori Biotech. An alternative approach is in situ cell reprogramming, which converts the challenges and complexity of autologous cell manufacturing into a problem of viral/nonviral delivery.
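The productivity argument for continuous fermentation above can be made concrete with back-of-the-envelope arithmetic. A minimal sketch, where every number (titer, batch length, turnaround time, dilution rate) is an illustrative assumption, not data from any company mentioned:

```python
# Illustrative comparison of batch vs. continuous volumetric productivity.
# All parameters are hypothetical, chosen only to show the structure of the math.

def batch_productivity(titer_g_per_l, batch_hours, turnaround_hours):
    """Average g/L/h once inter-batch downtime (cleaning, sterilization,
    refill, regrowth) is amortized over the full cycle."""
    return titer_g_per_l / (batch_hours + turnaround_hours)

def continuous_productivity(steady_titer_g_per_l, dilution_rate_per_h):
    """At steady state, volumetric productivity = titer x dilution rate."""
    return steady_titer_g_per_l * dilution_rate_per_h

batch = batch_productivity(titer_g_per_l=100, batch_hours=72, turnaround_hours=24)
cont = continuous_productivity(steady_titer_g_per_l=40, dilution_rate_per_h=0.05)

print(f"batch:      {batch:.2f} g/L/h")
print(f"continuous: {cont:.2f} g/L/h")
```

Under these toy numbers, the continuous process wins on volumetric productivity despite a much lower steady-state titer, because no tank-hours are lost to turnaround — which is exactly why downstream processing and contamination control, not the math, are the hard parts.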
Some observations
Fundamentally, the success of any given biological product (and product company) depends on finding an economical way to manufacture that product. Today, because many product companies lack access to economical manufacturing tools, services, and infrastructure, teams are forced to in-house this work, often hiring process engineering teams and using precious venture dollars to put steel in the ground. If every startup needs expertise in initial product development, strain engineering, and manufacturing at various scales, time and cost to market will become unmanageable for many startups. This is especially true for startups in spaces like food with narrow margins.
As a result, for product companies to be successful, we will need to radically increase access to manufacturing capacity, especially at intermediate scales. In the world of industrial biology, there are simply too few CDMOs available to produce non-pharma products at viable economics regardless of scale. Most CMOs were built 20-50 years ago for pharma, making them over-engineered for folks’ requirements (and, generally, not viable for products with price points lower than about $100/kg). Early on, companies often produce initial product demonstrations in-house and/or work with academic institutions to secure available capacity. Thus, many aspiring companies hit their first real capacity wall at the pilot scale when they find that few facilities are available to begin with, even fewer are food grade, even fewer manufacture domestically, and the very few that are available require years of advance planning to secure.
In the pharma world, companies face similar challenges. While it may be easier to secure capacity in a production environment, the gap between lab and production environments has been characterized as the “valley of death.”
In short: Whether food, industrial, or pharma, time and cost to market are every young bio company’s biggest enemies — and today’s CDMO options do not equip startups nearly well enough against these formidable foes.
Downstream, downstream, downstream
While easily forgotten, downstream processing (DSP) is mission-critical and intimately coupled with both strain development and manufacturing processes. DSP accounts for roughly 60% of the cost of producing a biological drug and has not improved or scaled at the same rate as upstream processing. Additionally, while upstream processing is specific to each product, the component parts are mostly the same (e.g., cells, media, bioreactor); in downstream processing, there is significantly more variability depending on the product being produced. Accordingly, it is critical to factor DSP costs into a techno-economic analysis (TEA) early and make sure that chosen products and manufacturing methods are designed to work with the necessary downstream processes.
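To see why DSP must be budgeted early, here is a deliberately minimal TEA-style cost sketch. Every number is a placeholder assumption; a real TEA would model each unit operation separately.

```python
# Minimal techno-economic analysis (TEA) sketch showing how downstream
# processing (DSP) share and recovery losses compound into final cost.
# All inputs are invented placeholders.

def cost_per_kg(upstream_cost, dsp_fraction, overall_yield):
    """Fully loaded cost per kg of finished product.

    upstream_cost : $/kg of crude product leaving the fermenter
    dsp_fraction  : share of total cost attributable to DSP (e.g., 0.6)
    overall_yield : fraction of crude product recovered after purification
    """
    total = upstream_cost / (1 - dsp_fraction)  # gross up for the DSP share
    return total / overall_yield                # recovery losses inflate cost

# With DSP at ~60% of total cost and 80% recovery, a $20/kg upstream
# cost becomes $62.50/kg of finished product.
print(cost_per_kg(upstream_cost=20, dsp_fraction=0.6, overall_yield=0.8))
```

The takeaway mirrors the paragraph above: a product that looks viable on upstream economics alone can be wiped out by purification costs, so DSP assumptions belong in the model from day one.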
A young company’s job? To drive the way the road is, not the way it should be
To be successful, companies need to get a lot of things to converge: the early biology, strain or cell line development, process development, manufacturing at scale, and downstream processing — as well as, of course, making sure all of the above are tailored to the specific commercial and regulatory environments for their products of interest.
This is incredibly complex and requires startups to convene a long laundry list of tools and partners around the table, which are expensive and/or require long lead times to secure. Furthermore, if the whole purpose of R&D is to develop something that can scale, you had better have ways of testing how decisions made during development will affect performance at scale.
This is really hard. Robust TEAs are essential and should be upgraded as folks learn more about their processes over time. Companies need to plan for manufacturing and downstream processing significantly in advance and ensure they have the expertise to do so readily available in whichever way makes economic sense (might be an in-house team or a great set of advisors). As a potential starting point, here is a great framework and tool developed by Michael Lynch.
Venture investing — what we’re excited to see
These integrated technical and operational challenges are existential for young companies. Therefore, we are convinced that there are meaningful companies to be built that expand the menu of manufacturing options and accelerate feedback loops between early R&D and scaled-up production. We see opportunities in a few areas:
→ Differentiated technical manufacturing approaches for big classes of products
These need to be relevant for a large enough portion of the market and drive at least 10x, if not 100x, economic improvements. For example, we’re intrigued by the promise of continuous fermentation for industrial projects, assuming you can find ways to scalably wrangle the challenges of contamination and genetic drift.
Alternatively, we see considerable opportunity for variations on today’s methods that drive innovation in other dimensions — e.g. flexibility and time-to-market. For example, in some cases, scale-out manufacturing paired with automation can turn a science problem into an engineering one, offering folks faster time to market and more flexibility in terms of scaling up and down.
→ Product-focused companies that are leveraging existing infrastructure in a thoughtful way
We touched on this above, but the idea of allowing currently unproductive infrastructure to be better utilized (e.g. ethanol infrastructure in the midwest) is intriguing to us, provided folks are up for the metabolic engineering challenges involved.
→ Improved sensing technologies to enable more precise process engineering
While much has been written about the power of various “omics” data in developing more performant strains or cell lines and optimizing manufacturing processes for those strains/cell lines, we remain hugely limited in what we can economically measure.
Finding ways to collect a much wider range of data that speak to evolutionary fitness, metabolic function, and ultimately performance at scale would be hugely valuable. Matterworks is an example of a young company working on faster and much more comprehensive analysis of the analytes present in a biological sample. Whereas today maybe a dozen high-priority analytes are measured at various timepoints, Matterworks is developing a much higher-fidelity view of the process.
Relatedly, finding ways to expand what we can measure in real-time during an individual fermentation (and, thus, enable much more precise development and control of manufacturing processes) would make a big difference. Several interesting approaches are being tested in this space, e.g. cell-based sensing or Raman spectroscopy.
→ Generally making it easier to get stuff out of the lab and into the world, even with today’s methods
For many of the reasons we’ve already touched on, barriers to taking an early product or process from academia into a commercial setting are incredibly high, and doing so requires specialized expertise (even though the playbook for doing so can be fairly repeatable).
As a result, companies that make this process more frictionless and democratize access to manufacturing capacity (even using today’s methods!) can add a lot of value. These can range from simply building more CDMOs, to financing the construction of new manufacturing infrastructure, to other innovative business models that create more streamlined scale-up pipelines for young companies.
We’re not yet sure what’s venture backable here; the main challenges we foresee center around long-term defensibility and margin maintenance. However, we believe this work is critical for the field and are excited to talk to folks taking creative new approaches.
Special thanks to:
Special thank you to Shannon Hall and Ouwei Wang (Pow.bio), Darren Platt (Demetrix), Alex Patist (Geltor), Billy Hagstrom, Jared Wegner & Tyler Autera (Bluestem Bio), Dan Beacom and Chris Guske for the conversations that have contributed to this newsletter. And, as always, thank you to all the folks whose work we cite for the work you do to push the field forward.
Bio Endeavors: It turns out we need to make (a huge amount of) stuff
Life has found a way to, well, live. While scientists and philosophers alike have long debated what it means to be alive, the idea that life is a self-sustained system capable of undergoing Darwinian evolution is remarkably powerful. Simply put, to be alive is to be part of an engine that selects traits with survival advantages over many, many generations. Over time, living systems have become extraordinarily complex. In our last post, we highlighted this complexity in the context of plant biology and the chemical diversity that evolution has naturally generated.
Finding solutions to major engineering biology problems today — developing new medicines, growing foods in novel ways, developing climate-resilient crops — is really challenging. Engineers design solutions through a rigorous optimization process. In the context of biology, however, designing is not always possible because we don’t know all the rules. As a result, we mostly discover. But, as we’ve seen over the last 3.7 billion years, evolutionary processes are uniquely powerful tools. In the context of building biological systems, what can we learn and utilize?
One of the earliest cited examples of co-opting functionality from microbes (excluding ancient practices for alcohol and cheese) is the discovery of a fungus during World War II. While stationed on the Solomon Islands in the South Pacific, the US Army was baffled by the troublesome deterioration of their tents, clothing, and other textiles. After taking samples and then screening over 14,000 molds recovered from the site, researchers at Natick Army Research Laboratories found the culprit: a strain of fungus, now known as Trichoderma reesei, which produced extracellular enzymes (i.e., cellulases) that degraded their textiles.

This turned out to be a massively important discovery in the quest to convert biomass to biofuels: the enzymes that T. reesei produced degraded not only textiles but also recalcitrant biomass (lignocellulose), releasing sugars that could be further converted into fuel. However, performance was still not good enough for industrial production, so in the 1970s, researchers began work to further increase the efficiency of T. reesei. What followed were some of the original directed evolution methods, whereby researchers introduced random mutagenesis by blasting the bugs with radiation and then screened the mutants in a functional assay. Using these experiments, they were able to roughly triple the performance of the bug, creating the foundation for the gold standard strains we have today.
Today, directed evolution experiments are ingrained in the scientific canon: notably, Frances Arnold shared the Nobel Prize in 2018 for her ingenious approach of directed evolution to engineer enzymes. And yet, history has only scratched the surface of what directed evolution experiments can do. New tools are beginning to revolutionize what is possible. For example:
New “writing” tools are expanding the types of problems we can tackle: Increasingly, we can be more targeted with how we edit enzymes, microbes, or populations of microbes to generate diversity. Or we can engineer entirely novel genetic circuits that enable new objective functions, for example, making a bug unable to survive unless it produces a specific compound.
New “reading” tools help us select winners: With better sensing technologies (e.g., biosensors, spectrometry, single cell multi-omics), we can identify and select across a greater range of phenotypes with higher throughput. In the case of strain engineering in industrial biomanufacturing, this is particularly important for selecting a phenotype other than simple growth.
New “optimization” tools help us drive continuous improvement: With novel bioreactors and automation, we can run a much higher number of evolutionary experiments in parallel and precisely adjust environmental conditions for each to drive improved performance over time. Additionally, improved computational approaches can help us learn from these experiments and optimize across many variables simultaneously. Iteratively running these experiments can help us guide evolution through a multidimensional design space with unprecedented precision.
So what? If early directed evolution experiments were like a simple iterative algorithm, tomorrow’s directed evolution experiments can be much more like a black-box ML approach. We are quickly moving towards a world where we can program complex objective functions into biological systems, and build methods that let those systems solve the problems for us. Today, we are exploring this topic of directed evolution and how we might use new methods to solve problems across two applications: industrial synthetic biology and drug discovery.
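The "simple iterative algorithm" framing above can be sketched in a few lines of code. This is a toy mutate-screen-select loop: the "sequence" is a bit string and the screen is a synthetic fitness function standing in for a real functional assay. All names and parameters here are hypothetical illustrations, not any company's method.

```python
import random

# Toy directed-evolution loop: mutagenize, screen, keep the winners, repeat.

TARGET = [1] * 20  # hypothetical optimal "sequence"

def screen(seq):
    """Synthetic assay: fraction of positions matching the target."""
    return sum(a == b for a, b in zip(seq, TARGET)) / len(TARGET)

def mutagenize(seq, rate=0.05):
    """Random point mutations, like the radiation-based mutagenesis above."""
    return [bit ^ 1 if random.random() < rate else bit for bit in seq]

def evolve(generations=30, pop_size=200, keep=20):
    random.seed(0)  # deterministic for illustration
    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=screen, reverse=True)
        parents = pop[:keep]                       # select winners
        pop = [mutagenize(random.choice(parents))  # amplify + mutagenize
               for _ in range(pop_size)]
    return max(screen(s) for s in pop)

print(evolve())  # climbs toward 1.0 under these toy settings
```

The new "writing," "reading," and "optimization" tools described above upgrade each piece of this loop: richer ways to generate variation in `mutagenize`, higher-throughput and more informative versions of `screen`, and smarter outer loops than simple truncation selection.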
Industrial synthetic biology: Engineering for stability and scale
Historical context: In industrial synthetic biology, we design microbes as factories for producing molecules of interest: everything from heme in Impossible Foods’ burgers to proteases for fabric detergent to cellulases like those discovered at Natick Army Research Laboratories. Microbes make great factories because they self-replicate, scale relatively quickly, and can be tuned to replace many existing, dirty manufacturing processes. The idea is straightforward: we should just program bugs to produce whatever we need in large industrial fermenters.
Companies have been working on this (seemingly simple) problem for decades. Genentech started the recombinant protein trend in the late 70s with somatostatin followed by insulin, and the rigorous engineering discipline picked up a lot of momentum at the turn of the century with the emergence of genomics and systems biology. More recently, companies like Cargill, IFF, Amyris, Ginkgo, and Zymergen play important roles in the ecosystem. Nevertheless, economic targets, especially in existing commodity markets, demand massive scale and leave little margin for error. Unfortunately, evolution poses a big problem in industrial biology. Microbial populations move quickly to do what life does best: rapidly evolve for survival, often in unpredictable ways that are highly sensitive to changing conditions (e.g., a scale-up from benchtop to >100k L bioreactors) and to the detriment of the economic targets (e.g., titer, doubling time) we set.
Unfortunately, bottom-up rational design tools (i.e., leveraging known parts and traits) have been unable to manage the complex tradeoffs between fitness under varying conditions and the production of the things we care about. Andrew Horwitz of Sestina Bio has a great blog post that discusses this tradeoff in greater detail.
Optimizing with evolution in mind (a brief review of helpful literature): Excitingly, new approaches are emerging that steer evolution in ways that better account for these tradeoffs (at both individual and population levels). To frame the environment and design space we’re operating in when we engineer living systems, Castle et al. introduced several concepts.
Most saliently, they describe the evotype, the evolutionary disposition of the system, which is a critical design component. Per the figure shown above, the evotype is a function of the “fitneity” of the system (i.e., reproductive fitness x desired utility), the phenotype (measured by desired utility), and the variation probability distribution (i.e., how likely this “fitneity” phenotype is). This landscape needs to be designed and traversed in biological systems design to ensure a stable and desired phenotype is reached. In a more tactical review, Sánchez et al. provide an overview of how tools of directed evolution might be applied to engineer microbial ecosystems. Most of our directed evolution experiments today work in the context of a single bug or enzyme, but engineering ecosystems is incredibly relevant for industrial synthetic biology, where we work on the scale of ecosystems and need robust population dynamics. Similar to the conceptual framework described in Castle et al., this work imagines how to traverse ecological landscapes to arrive at robust, stable, and desired phenotypes. Importantly, they describe the characteristics required for engineering these systems as:
Phenotypic variations are distributed along a vector of selection that is heritable.
Mechanisms that introduce variability and explore structure-function landscapes.
A community size that enables stochastic sampling for between-population variability.
Enough experimental time to reach a stable, fixed population.
The benefit of using evolutionary processes to solve these problems is that they can deliver traits that sit at fitness maxima and are therefore stabilized and maintained by selection. This is in contrast to designed traits, which are often eroded by selection. This is an important characteristic of these populations because of the requirements to scale up production. Ultimately, the larger the fermenter, the more generations of selection, and the more potential for the populations of strains to veer off course. Processes that design for evolutionary dynamics can be more robust to the pressures of scale.
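The erosion of designed traits under selection can be illustrated with a toy two-variant competition model; the 5% growth penalty and generation count below are invented purely to show the dynamic, not measurements from any real process.

```python
# Toy illustration of the fitness-vs-utility tradeoff described above.
# Two hypothetical strain variants compete over generations of growth.

def share_after(generations, fitness_a, fitness_b, start_a=0.5):
    """Fraction of the population that is variant A after n generations,
    given per-generation relative fitness values."""
    a, b = start_a, 1 - start_a
    for _ in range(generations):
        a, b = a * fitness_a, b * fitness_b
        total = a + b
        a, b = a / total, b / total
    return a

# A "designed" high producer with a 5% growth penalty, competing against
# a faster-growing low producer. Over the ~40 generations it can take to
# fill a production-scale tank, the high producer is steadily displaced.
print(share_after(40, fitness_a=0.95, fitness_b=1.0))  # ~0.11
```

A trait that sits at a fitness maximum would have `fitness_a >= fitness_b`, in which case its share holds or grows — which is exactly why evolution-aware design aims to make the desired phenotype the fittest one.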
Drug discovery: Programming evolution to solve for function
Historical context: We use the term "drug discovery" rather than "drug design" for understandable reasons. Historically, biomedical researchers have discovered small molecule drugs through a serendipitous game of brute force. Once we identify a target, we usually throw a bunch (read: millions) of molecules at it to see what sticks. This whole process is referred to as high-throughput screening. We have had the most success working on the canonical drug targets with well-defined and distinct small molecule binding pockets in their active sites. These canonical targets are sometimes referred to as the druggable proteome, though definitions of druggable are imprecise and constantly changing (e.g., as with kinases).
On the more in silico side, we’ve seen great work on generative chemistry from investigators like Connor Coley, Regina Barzilay, and Gisbert Schneider, who have built hybrid computational-experimental methods. These show strong promise in generating novel chemical matter more efficiently than traditional approaches. That said, bottom-up, rational design approaches are still early in generating results that translate to the clinic, especially for challenging, undruggable targets. There are a variety of reasons for this:
Native context matters and remains complex to accurately model:
Biology is deeply context-dependent in how it is expressed, and therefore not well served by reductionist approaches. Despite the emergence of concepts like digital biology as an analogy for the principles of synthetic biology, it is important to note that biology is distinctly non-digital5. Cells and organisms are incredibly efficient machines, but their constituent parts are not: the emergent properties of these systems are highly sophisticated, while the core components are sloppy and error-prone. So studying the components in isolation, as we mostly do in high-throughput screening with biochemical assays, makes the data easier to generate but limits translatability to the clinic.
Ideally, you’d be able to isolate the target you want to study but have the ability to view the effect in a native cellular context. Eikon Therapeutics, for example, directly visualizes protein kinetics in cell-based models and can utilize this approach in high-throughput screening assays (read more in our blog posts here and here).
Structure-based drug design assumes a well-defined and modellable structure:
AlphaFold by DeepMind has made an enormous leap forward in our ability to predict a protein’s structure from its amino acid sequence alone (if you really want to dive down a thrilling science rabbit hole, it’s worth digging into the post by DeepMind and following all the linked resources). This achievement has opened up tons of amazing downstream applications, but it hasn’t solved drug discovery. Cue Derek Lowe with a more sobering take on the AlphaFold achievement and what it means for drug discovery.
A large part of this is because AlphaFold and other structure-based approaches cannot capture regions of proteins that are unstructured (estimated at 37–50% of the proteome) and certainly struggle with targets that don’t have well-defined binding pockets. Many undruggable targets fall into this category, including transcription factors, non-enzymatic proteins, protein-protein interactions, and phosphatases, among others. Estimates place the share of undruggable targets at ~85% (though this number is constantly changing; kinases were once considered undruggable and are now a well-trodden class of targets).
Chemical diversity is really important, but it’s hard to find and presents a synthesis challenge:
Today’s high-throughput screening libraries are biased toward historical successes (as are the in silico models trained on these data). There is a reinforcement loop where success begets more success, and as a result these libraries aren’t especially diverse, particularly for undruggable targets without historical precedent. Additionally, synthetic accessibility is a persistent challenge in generative small molecule drug design. Natural products are a great source of chemical diversity (as we’ve alluded to with plant chemistry) but are often challenging to source or synthesize. Artemisinin, the anti-malarial drug, is a famous example of the challenges of scaling up synthesis.
Evolution-driven approaches as an alternative: As in the prior example, where our understanding of biology falls short, evolution provides a potential solution. Evolution-driven approaches to these problems are in some ways analogous to a biological black-box ML algorithm: set an objective function for the microbe (for example, inhibition of a target), parameterize the search space, and let the microbe evolve to solve for that function, often in novel or non-intuitive ways.
In an interesting example, Sarkar et al. built a system that does just this. They programmed an objective function into a microbe by requiring that the microbe inhibit an undruggable target, protein tyrosine phosphatase 1B (PTP1B), to survive6. They also gave the bug the tools for the job: in this case, adding pathways to enable the biosynthesis of terpenoids, a large class of natural products7.
The project was a great proof of concept of this approach. The investigators:
Identified two previously unknown terpenoid inhibitors of PTP1B (the drug target).
Found a novel binding mode that differs from a previously characterized allosteric inhibitor (and thus not one we would have known to design for); this mode was further characterized in subsequent work, which showed engagement of a disordered region of the target.
What we’re excited to see: Leveraging evolutionary processes in these ways represents an exciting future direction of drug discovery, combining trends of natural product chemistry, functional assays, allostery, and undruggable targets — equipping bugs with the tools of plants to augment medicinal chemists. We’re a long way off from fully automating medicinal chemistry, but bugs and biosynthesis may represent an interesting way to get there.