• Biopharma’s success rate in bringing drugs to market has long been abysmal. Can new tools help rewrite that troubled past?

    In 2011, a team of researchers at British drugmaker AstraZeneca had a problem they were looking to solve.

    For years, drug discovery and development were a wasteland for innovation. Novel drugs largely fell into one of two categories — monoclonal antibodies and small molecules — and new therapeutic modalities were hard to come by. After a rush of promising approvals in the late 1990s — including then-Biogen’s CD20-targeting antibody breakthrough Rituxan — the field stagnated and attrition rates stayed sky-high. What exactly is the industry doing wrong? AstraZeneca asked itself.

    The drugmaker’s solution was “simple,” Paul Morgan, then an AstraZeneca R&D director for drug metabolism and pharmacokinetics, recalled. AstraZeneca’s resulting “Five R” framework looks basic — even overly simplistic — in hindsight: right target, right patient, right tissue, right safety, right commercial potential. But those simple tenets helped set the stage for the industry’s future in drug development.

    “They’re all common sense, but I think what that sort of signifies is a drive by AstraZeneca and many other companies to fix the low hanging fruit,” Morgan, now leading preclinical development at Sosei Heptares, told Endpoints News.

    The results for AstraZeneca were undeniable: After averaging an atrocious 4% success rate from drug nomination through Phase III completion between 2005 and 2010, the drugmaker’s numbers jumped to 19% between 2012 and 2016.

    But when the low-hanging fruit is gone, what’s left to pick?

    Nearly a decade after AstraZeneca set out that framework, success rates from Phase I studies to approval across the industry remain bafflingly low — hovering somewhere around 10% despite record highs in the number of molecules approved each year. The slog from preclinical R&D (prior to Phase I) through Phase III is referred to by some researchers as “the valley of death” — and making the leap into Phase I, in particular, is a huge challenge. In oncology, the prospects are even worse.

    Just like in 2011, the industry is due for a breakthrough, and it may now have the technology to help it get there. Artificial intelligence and machine learning have allowed companies to lean on previously unthinkable computational power to crunch predictive models and novel drug designs out of a mountain of data. Meanwhile, the rise of gene editing tools like CRISPR has opened the door to running massive screening experiments that would normally have taken thousands of man-hours.

    There are skeptics out there, most arguing that the tools are only as good as the researchers wielding them. Drug development is a process that deserves an intellectual breakthrough as much as a technological breakthrough, they say.

    What is undeniably true, with so much money and unmet clinical need on the line, is that biopharma can’t continue to lose nine in 10 identified molecules to the development process. Adding new tools to the toolbox in preclinical R&D — the pivotal crossroads before those drugs go into humans — could help ease that burden.

    Building the right molecule
    Drug development has always been a hit-or-miss proposition, but around the turn of the century, there was a slate of massive R&D successes that helped set the precedent for how a molecule is made.

    To build a molecule, developers have to ask a set of common questions: Can we develop a drug that binds to a targeted protein without binding to others? Can it be both soluble and permeable into human cells? Can it be stored and transported without degrading under light or the body’s enzymes? Will it be safe? Will it be effective? Those are just a few of the basic questions, but the answers are often contradictory, requiring developers to sometimes make big concessions to get a functional molecule across the finish line.
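
    Much of that early triage now happens in silico, before anything is synthesized. As a loose illustration — not any one company’s actual pipeline — a developer might run candidate structures through Lipinski-style “rule of five” checks for oral drug-likeness using the open-source RDKit toolkit; the example molecules below are well-known drugs standing in for real leads:

    ```python
    # A minimal sketch of rule-of-five property triage with RDKit.
    # The candidates here are familiar drugs, stand-ins for real leads.
    from rdkit import Chem
    from rdkit.Chem import Descriptors, Lipinski

    CANDIDATES = {
        "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
        "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
    }

    def rule_of_five_violations(mol):
        """Return the Lipinski criteria this molecule violates."""
        checks = {
            "mol weight > 500": Descriptors.MolWt(mol) > 500,
            "logP > 5": Descriptors.MolLogP(mol) > 5,
            "H-bond donors > 5": Lipinski.NumHDonors(mol) > 5,
            "H-bond acceptors > 10": Lipinski.NumHAcceptors(mol) > 10,
        }
        return [name for name, failed in checks.items() if failed]

    for name, smiles in CANDIDATES.items():
        mol = Chem.MolFromSmiles(smiles)
        print(name, "violations:", rule_of_five_violations(mol) or "none")
    ```

    Filters like these are only the opening move; solubility, permeability and metabolic-liability models pile on afterward, which is where the contradictions start to bite.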

    What’s more, building that “perfect” molecule from scratch requires scientists to take some educated guesses about the proteins they are targeting and how their tailored molecule will function, tempered by trial and error in the lab. In the early days, that routine seemed to be working — but it wasn’t foolproof, and it took forever.

    Take the development of Roche’s Rituxan, for instance. The drug was more than 20 years in the making: Biogen developed it on the back of multiple breakthroughs that started with the Nobel Prize-winning monoclonal antibody work of César Milstein and Georges Köhler at the MRC Laboratory of Molecular Biology and continued through the discovery of the CD20 protein on B cells in 1988. Rituxan’s breakthrough changed the game for drug development, but the initial wave was short-lived.

    The same monoclonal antibody design that worked for Biogen eventually grew larger and more broadly cytotoxic as researchers macheted their way through targets, Morgan said. Small molecules as a field flourished, but the industry lagged behind in developing novel therapeutic modalities — think checkpoint inhibitors and gene therapies — that could change the plan of attack for difficult diseases.

    “We have lots of drugs; this has been successful to some degree using traditional methods — but if you think about it, a lot of these drugs have what’s called a narrow therapeutic index, or they have properties that prevent them from reaching their full potential,” said Karen Akinsanya, Schrödinger’s chief biomedical scientist and head of discovery R&D. “These are real problems that drug discoverers and developers have to deal with.”

    A new wave of modalities in the 2010s blew the doors off how drugmakers can tackle disease, but it created another problem: What’s the right molecule and how do you make it? That’s where AI and machine learning could have the best possible chance at disrupting the industry.

    Bring in the machines
    In terms of their capacity to crunch numbers far beyond what the average research team can manage, artificial intelligence and machine learning have changed the game.

    Discovery outfits with AI mission statements have turned into superstars overnight, many with the promise that their proprietary robot brain trusts will eventually create the world’s first “AI-discovered” molecule. Some in the industry, however, are skeptical of that promise — mostly due to a bottleneck in the massive amounts of information it takes for those AI platforms to do their work.

    “AI alone cannot do this, because the diversity of chemical space is too enormous,” said Schrödinger CEO Ramy Farid. “No matter how much hype there is, it can’t. It’s just theoretically not possible, and it makes no sense.”
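
    Back-of-envelope math makes Farid’s point concrete: a single scaffold with six substitution points and 1,000 fragment choices per point already defines 10^18 molecules, and published estimates put drug-like chemical space as a whole at roughly 10^60. A two-line sanity check, with illustrative numbers:

    ```python
    # Back-of-envelope: combinatorial libraries outrun brute-force screening fast.
    positions, fragments = 6, 1_000
    library_size = fragments ** positions            # 1e18 molecules
    years_at_1m_per_day = library_size / 1e6 / 365   # ~2.7 billion years
    print(f"{library_size:.1e} molecules, {years_at_1m_per_day:.1e} years to screen")
    ```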

    But for companies that have centered their work around the better mousetrap of their deep learning platforms, a major breakthrough is out there waiting.

    That’s certainly the goal for outfits like Exscientia, a UK-based startup hoping to use its deep learning platform to screen hundreds of thousands of potential molecules for therapeutic use, CEO Andrew Hopkins said. Where Exscientia stands apart is its work in using that platform for public health initiatives — seen in its partnership with the Bill & Melinda Gates Foundation back in December.

    In public health, the high costs of R&D coupled with little profit motive have left an investment crater, from both industry and national health authorities. That funding gap has received a fresh look given Covid-19 and the potential of future pandemics. By employing tools like machine learning to speed the discovery of new molecules, Hopkins believes, that long-neglected field could see a fresh influx of innovation.

    “If you can actually change the economics, if you can make things far more productive, then what you’ll also do is open up the opportunity for what you can do in public health,” Hopkins said. “As a society, our toolbox is greater than ever. I think the solution to this is societal, and industry and governments need to come together to realize that we need to solve this market failure — otherwise the cost of the next pandemic is going to be large.”

    But sifting through molecule designs isn’t the only potential use for these platforms, experts say. Teams like Dyno Therapeutics recently used their own neural network to identify tens of thousands of viable capsid designs for AAV-delivered gene therapies. That’s a big deal in a field in which imperfect capsids can cause major side effects in patients. Dyno churned out that huge number of viable designs not by plugging gargantuan amounts of data into its platform, co-founder Sam Sinai said, but by selectively engaging experimental data and allowing the computer to learn as it goes.

    “The collection of all of these experiments we are doing goes into the same machine learning brain, and what it learns is the key rules that are required to design every capsid for a particular purpose,” he said. “We looked at how different ways of looking at the same data, or even ignoring data that we had, can help certain machine learning models in their ability to model the space that we are trying to go into.”
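
    Sinai’s description matches what the machine learning literature calls model-guided design: fit a model to what has been measured, propose new variants, test the most promising, and fold the results back in. The sketch below is a generic toy of that loop — the randomized “assay” and ridge-regression surrogate are stand-ins, not Dyno’s actual platform:

    ```python
    # A toy model-guided design loop in the spirit of capsid engineering.
    # The "assay" stands in for a wet-lab measurement; the ridge-regression
    # surrogate stands in for a neural network. Nothing here is Dyno's method.
    import numpy as np

    rng = np.random.default_rng(0)
    ALPHABET, LENGTH = 4, 20                            # toy sequence space
    true_weights = rng.normal(size=(LENGTH, ALPHABET))  # hidden fitness landscape

    def one_hot(seqs):
        return np.eye(ALPHABET)[seqs].reshape(len(seqs), -1)

    def assay(seqs):
        """Stand-in for a noisy experimental fitness measurement."""
        return one_hot(seqs) @ true_weights.ravel() + rng.normal(0, 0.1, len(seqs))

    # Start from a small random library, then iterate: fit, propose, measure.
    library = rng.integers(0, ALPHABET, size=(64, LENGTH))
    scores = assay(library)
    for round_no in range(5):
        X = one_hot(library)
        # Ridge-regression surrogate trained on everything measured so far
        w = np.linalg.solve(X.T @ X + 0.01 * np.eye(X.shape[1]), X.T @ scores)
        candidates = rng.integers(0, ALPHABET, size=(2000, LENGTH))
        top = candidates[np.argsort(one_hot(candidates) @ w)[-32:]]  # best predicted
        library = np.vstack([library, top])
        scores = np.concatenate([scores, assay(top)])
        print(f"round {round_no}: best measured fitness {scores.max():.2f}")
    ```

    The learning-as-it-goes part is the loop itself: each round’s measurements improve the surrogate before the next batch is designed.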

    Elsewhere, Schrödinger’s team is using AI to look for big advancements in image-based processing — essentially getting a much better look at drug targets, as seen in DeepMind’s AlphaFold breakthrough late last year in predicting how proteins “fold.” But Akinsanya professed skepticism about just how far AI and machine learning can go. Not only are the data limitations deep, she said, but not all phases of the discovery process will benefit in the same meaningful way as imaging might.

    “Those types of applications are going to be disrupted by machine learning, but when it comes to designing compounds it’s a very different thing,” she said. “You need to know your application, and — this is where investors are perhaps getting overly excited — as soon as you say the word along with an application, somehow that makes it better. I don’t think that’s necessarily true in all applications.”

    In the oncology wasteland
    If failure rates from preclinical R&D onward are gruesome across the industry, oncology is one of the worst performers.

    Targeting tumors is especially difficult because of the toxic side effects common to most cancer drugs, which walk a tightrope: destroying cancerous cells without destroying the healthy tissue that surrounds them. The potential answers to that problem are numerous, whether engineering the perfect T cell to laser-focus on tumors or using the body’s own pathways — a goal in natural killer cell platforms, for instance — to do the dirty work.

    Part of the problem is that most cancer drugs simply don’t focus on the right targets, said Frank Stegmeier, a Novartis veteran and chief scientific officer at private biotech KSQ Therapeutics. Or drugs may hit the right target but also a group of off-targets, leading to catastrophic toxicity concerns. That’s not a problem unique to oncology, either. At the root level, researchers simply don’t have enough granular detail about the targets they’re aiming for — but CRISPR could already be changing that.

    KSQ is one of a group of players in the early-stage oncology space using CRISPR/Cas “screens” to identify genes correlated with specific diseases. The hypothesis in a CRISPR screen is simple and broad: Certain physiological effects are connected to the expression of certain genes. By systematically “knocking out” genes, researchers can speedily weed out the few needles in a haystack that would previously have taken thousands of man-hours to find.
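
    The readout of a pooled knockout screen is count data: each guide RNA’s abundance before and after the cells grow under selection. A bare-bones, generic way to score it — not KSQ’s pipeline, and with invented counts and gene names — is to average the guides’ log-fold changes per gene:

    ```python
    # Toy scoring of a pooled CRISPR knockout screen. Guides targeting a gene
    # the tumor cells depend on should drop out between day 0 and day 21.
    # All counts and gene names here are invented for illustration.
    import math
    from collections import defaultdict

    # guide -> (gene, read count at day 0, read count at day 21)
    guide_counts = {
        "GENE_A_sg1": ("GENE_A", 1050, 110),
        "GENE_A_sg2": ("GENE_A", 980, 90),
        "GENE_B_sg1": ("GENE_B", 1010, 995),
        "GENE_B_sg2": ("GENE_B", 940, 1020),
    }

    def log2_fold_change(day0, day21, pseudocount=1.0):
        """Log2 ratio of abundances; a pseudocount guards against zeros."""
        return math.log2((day21 + pseudocount) / (day0 + pseudocount))

    per_gene = defaultdict(list)
    for guide, (gene, d0, d21) in guide_counts.items():
        per_gene[gene].append(log2_fold_change(d0, d21))

    # Average across a gene's guides; strongly negative = candidate dependency.
    for gene, lfcs in sorted(per_gene.items(), key=lambda kv: sum(kv[1])):
        print(f"{gene}: mean log2FC = {sum(lfcs) / len(lfcs):+.2f}")
    ```

    A real analysis would normalize for sequencing depth and benchmark against non-targeting control guides, but the logic — depleted guides flag candidate dependencies — is the same.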

    “One of the things I’ve learned in my 20 years in biology is when I thought I understood biology, I would do the next experiment, and it would prove me wrong,” Stegmeier said. “We go in with no assumptions. We say, I don’t know what the best target is, let’s test all 20,000 in parallel and let the data tell us what the best way of treating the disease is.”

    In KSQ’s case, that powerful platform has earned the interest of top drugmaker Takeda, which partnered with the biotech in January to hunt for NK cell therapies. The power of CRISPR screens, however, isn’t limited to industry. In academia, scientists are using the same tools to identify targets en masse — and at least one ambitious project could prove a game-changer in oncology research.

    The Broad and Wellcome Sanger institutes are using CRISPR as part of their Cancer Dependency Map initiative, which aims to map up to 20,000 tumor types as part of a public-private consortium intended to match the Human Genome Project in scale.

    According to Jesse Boehm, Broad’s lead on the map project, researchers at both institutes plan to use CRISPR as a broad screening tool for cancer “dependencies” — genes correlated with the creation and proliferation of tumor cells — in tens of thousands of genotypes, and to publish the genomic data for the industry’s reference. The resulting guideposts would function much like a clinical Google Maps, Boehm said, pointing clinicians in the right direction as they hunt down weaknesses in a patient’s tumor.

    It’s a mission statement in the tone of the collaborative project that sequenced the human genome, and it could set oncology up for a massive breakthrough. With one in six deaths worldwide tied to cancer, the public health need is massive; however, it won’t happen soon, and it won’t be cheap.

    “There is a sense of building this foundational data set today so that the right drugs can be developed over the next 20 or 25 years,” Boehm said.

    The cost of the project could fall anywhere between $30 million and $50 million per year for the next 10 years, Boehm said, a non-negligible figure for academic research. However, the NIH has funded similar projects before — including the pilot program for the map’s predecessor, The Cancer Genome Atlas (TCGA), which cost $100 million for its first three years and continued for another 12.

    Collaboration and innovation
    Having a good read on a novel target is one of the few surefire things that can get Big Pharma interested in a discovery outfit’s work.

    That’s part of the story behind the success of Japan’s Sosei Heptares, which has built an ever-expanding portfolio around G protein-coupled receptors linked to a range of GI and neurological diseases. Those targets have brought on big-name licensing partners of the caliber of GlaxoSmithKline, AstraZeneca, Novartis and Genentech. The company also had a pact that AbbVie inherited through its Allergan acquisition, but the Illinois drugmaker recently walked away from it.

    While he’s a newcomer to Sosei’s team, Morgan has used his experience refining AstraZeneca’s drug development rulebook to build a preclinical program with higher chances of success. How does his team do that? By developing a granular understanding of its chosen target, knowing the patient population well and working with Big Pharma partners to handle the late-stage regulatory work that gets a drug over the finish line.

    “At its core, there is much more appropriate thought given to the choice of the target, target validation and choosing the modality that is likely to have the best chance against that target in the patient population,” Morgan said.

    Understanding the target patient population is another big key to developing a successful molecule, and major breakthroughs in genomic sequencing — combined with tools like CRISPR and machine learning — have given researchers more insight than ever into potential patients, often before a molecule ever makes it into humans.

    At private biotech Deep Genomics, researchers are riding a boom in RNA sequencing data to churn out thousands of viable blocking oligonucleotides for RNAi therapeutics. According to founder and CEO Brendan Frey, the company’s AI platform — its “special power,” as he calls it — allows it to go from target validation to candidate nomination within 12 months and to take half of its discovered molecules to the “optimized lead” stage. Those figures, he said, are unprecedented across the industry.
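
    At its simplest, designing a blocking oligo means tiling antisense sequences along a target transcript and filtering on basic properties before any model ranks them. The sketch below shows only that naive first step — a made-up transcript, arbitrary thresholds, and nothing resembling Deep Genomics’ actual platform:

    ```python
    # Naive antisense-oligo candidate generation: slide a 20-nt window along a
    # target mRNA, take the reverse complement, and filter on GC content.
    # The transcript sequence and cutoffs are invented for illustration.
    COMPLEMENT = str.maketrans("ACGU", "UGCA")

    def reverse_complement(rna):
        return rna.translate(COMPLEMENT)[::-1]

    def gc_fraction(seq):
        return (seq.count("G") + seq.count("C")) / len(seq)

    def candidate_oligos(transcript, k=20, gc_min=0.40, gc_max=0.60):
        """Yield (position, antisense oligo) pairs passing a simple GC filter."""
        for i in range(len(transcript) - k + 1):
            target = transcript[i : i + k]
            if gc_min <= gc_fraction(target) <= gc_max:
                yield i, reverse_complement(target)

    mrna = "AUGGCUAGCUAGGGAUCCCGUAAGCUGACUGGCAUCGAUCGGAUCCAAGCUUGCGGCCGCA"
    for pos, oligo in candidate_oligos(mrna):
        print(f"pos {pos:2d}: {oligo}")
    ```

    Everything past this naive enumeration — off-target risk, chemistry, predicted effect on splicing or expression — is where platforms like Frey’s stake their claim.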

    In November, BioMarin jumped on board Deep Genomics’ work, hoping to leverage the company’s “AI workbench” to identify oligonucleotide candidates for four rare disease targets.

    Another tool drug developers are using in the transitional stage is the pharmacodynamic biomarker assay in animal models, which can give researchers insight into how molecules may fare on human safety and efficacy before they ever get there. AI has played a role in fleshing out those assays as well, crunching genome data and reams of experimental safety and efficacy data to turn out predictive markers that can actually forecast a molecule’s chance of success. Meanwhile, CRISPR screening has offered up more predictive genomic data, giving researchers a better map for molecule development.
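
    As a rough, generic illustration of that kind of predictive modeling — fabricated features and outcomes, not any company’s validated assay — one could fit a simple classifier mapping preclinical biomarker readouts to historical program outcomes:

    ```python
    # A bare-bones sketch of outcome prediction from preclinical readouts:
    # synthetic biomarker features, synthetic labels, a logistic-regression
    # baseline. Real pipelines train on curated historical program data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 400
    # Invented features: e.g. target engagement, exposure margin, off-target score
    X = rng.normal(size=(n, 3))
    # Invented ground truth: success odds rise with engagement and safety margin
    logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 2.0 * X[:, 2]
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
    # Per-molecule probability of success, usable for portfolio triage
    print("P(success) for first 3 held-out molecules:",
          model.predict_proba(X_test[:3])[:, 1].round(2))
    ```

    The payoff, as with AstraZeneca’s five Rs, is killing doomed molecules before they reach the clinic rather than after.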

    “Now that we have these tools to really understand human biology at a level of depth that we’ve not had at any time in history, I think our ability to pick targets when we combine all these tools is just unprecedented,” Akinsanya said. “It’s super exciting.”
