There’s only one honest answer to the question “How long does it take to develop a new drug?”, and that’s “Too @#$! long”.
In the same way, the only honest answer to “What are the average chances for a drug candidate’s success?” is “Too @#$! low”.
The combination of those two factors is the root of pretty much all the drug industry’s problems – everything else would get a lot easier to deal with if we could ease up on those two a bit.
That being the case, there are plenty of people out there who are ready to tell you that they can do something about it. They fall all along the sliding scales of realistic/delusional, well-meaning/predatory, etc. What none of them have been able to do, so far, is make much of a dent in either of those big questions. Improvements do come along, but get balanced out by complications somewhere else, which is why the industry has been spending more and more over the years to maintain roughly similar levels of drug productivity. But this means that whatever new technology comes along, particularly if it’s not that well understood, can get wedged into a PowerPoint deck and sold to people who are hoping that the Next Big Thing has finally arrived.
Artificial Intelligence, in its various forms, is currently the hottest plateful of fried dough being served. It covers a lot of ground, has a lot of potential, and no one in the audience is likely to really understand the details: perfect. I follow this field with great interest, and despite my skepticism, I'm not betting against it. That said, I do have my limits, and they were reached by a slide deck from a consulting company, sent along by a longtime reader of the blog. For instance, one of the slides says:
"The drug discovery process typically involves the identification of hundreds of compounds and their subsequent elimination in further rounds of testing. AI has the potential to help pharma companies discover drugs faster and more cheaply by narrowing the list of therapeutic targets."
OK, those are two different things. You have hundreds of compounds against a target you know about; narrowing the list of therapeutic targets is what you do before you make all those. That's followed up by one of those big funnel-looking things, showing how projects narrow down to one approved drug. It's got your Phase III, Phase II, Phase I, Pre-clinical... and before that, it has a big honking block of space labeled "Drug Discovery: thousands of molecules screened. 3-6 years". Next up is the same funnel after the laying on of hands – that massive hunk at the beginning is now a tiny sliver, because "automatic drug discovery" has reduced that screening phase to "3-6 days", shaving six years off your timelines.
Hooey. Screening “thousands of compounds” does not take you six years, believe me. You can do a million in six weeks. The whole compound screening step is just another early thing in preclinical space; I’ve never seen a successful project in which it was a rate-limiting step. But “shave a few weeks off something at the very beginning” isn’t as compelling an offer, is it? Looking at the companies they’re touting, I note that one of them is Atomwise, whose tendencies towards overstatement I’ve written about here and here. Others (new to me) are BenevolentAI and twoXAR. I will be very happy to see how these folks make out; I really don’t want to give the impression that I want them to fail. I mean, I do this for a living, too, and I would very much like to be able to do it better. We need some help over here! But we do not need some more hype over here – that’s my point.
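Just to put that arithmetic in one place, here's a quick back-of-envelope sketch; the million-in-six-weeks throughput is my own round working number, not anything from the deck:

    # Back-of-envelope screening arithmetic. The throughput figure is a
    # rough working assumption, not a quote from anyone's slide deck.
    compounds_per_campaign = 1_000_000
    campaign_days = 6 * 7  # six weeks

    compounds_per_day = compounds_per_campaign / campaign_days  # ~24,000/day

    for library_size in (2_000, 20_000, 200_000):
        days = library_size / compounds_per_day
        print(f"{library_size:>8,} compounds: about {days:.1f} days of screening")

At that rate, "thousands of compounds" is a matter of hours to days, not years, which is the whole point.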
Now, I should mention that I know people who are up to their collarbones in computational chemistry, in several places around the industry. And I'm told, by some of them, that there are methods that show real promise in advancing drug discovery, which will certainly be good news if true. But I'm also told by everyone involved that at the moment these methods are extremely computationally intensive, even with the best equipment available, so you're not going to run a virtual screening effort with X-kazillion random compounds this way. Not yet. A fully operational quantum computing platform would presumably come in handy, once a great big coding team has written modeling software to take advantage of it. But neither that hardware nor that software exists yet.
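To give a feel for what "extremely computationally intensive" means in practice, here's a rough scaling estimate; every number in it is my own guess, chosen only to show the order of magnitude:

    # Rough scale of a physics-based virtual screen. All numbers are guesses
    # for illustration; real per-compound costs vary enormously by method.
    gpu_hours_per_compound = 1.0       # assumed cost of a rigorous calculation
    library_size = 10_000_000          # a modest "random compound" library
    gpus = 1_000                       # a generous cluster

    total_gpu_hours = gpu_hours_per_compound * library_size
    wall_clock_days = total_gpu_hours / gpus / 24
    print(f"{total_gpu_hours:,.0f} GPU-hours, roughly {wall_clock_days:,.0f} days on {gpus:,} GPUs")

Even with those generous assumptions, you're looking at the better part of a year of cluster time, which is why nobody is screening random libraries this way yet.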
Rather than calculating from the ground up, I think that the BenevolentAI people are, like many others before them, mining the list of existing drugs for repurposing opportunities and digging through the published literature for connections that may have escaped earlier observers. I feel sure that there must be quite a few of those, and I'd have to think that AI/machine learning/deep learning/whathaveyou is going to be a good way to find them. But that's no easy task, either, considering that (at a guess) about 30% of the medical literature is useless or worse. Humans are needed to curate the data set that you're feeding your software, and that's a labor-intensive step. It's still easier than what the Atomwises of the world are trying to do, though.
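For what it's worth, the simplest version of that literature-mining idea is just co-occurrence counting over a curated corpus. Here's a toy sketch; the abstracts and term lists are made up for illustration, and a real effort would use far more sophisticated models on top of that labor-intensive human curation:

    from collections import Counter
    from itertools import product

    # Toy curated corpus; in reality, assembling and cleaning this is the
    # labor-intensive, human-driven part.
    abstracts = [
        "metformin reduced tumor growth in a pancreatic cancer model",
        "thalidomide shows activity in patients with multiple myeloma",
        "metformin improves glycemic control in type 2 diabetes",
    ]
    drugs = ["metformin", "thalidomide"]
    diseases = ["pancreatic cancer", "multiple myeloma", "type 2 diabetes"]

    # Count drug/disease co-mentions as crude repurposing leads.
    co_mentions = Counter()
    for text in abstracts:
        text = text.lower()
        for drug, disease in product(drugs, diseases):
            if drug in text and disease in text:
                co_mentions[(drug, disease)] += 1

    for (drug, disease), n in co_mentions.most_common():
        print(f"{drug} <-> {disease}: {n} co-mention(s)")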
None of this is impossible. Some of this may even happen fairly soon, and smaller parts may even be happening now. But I will lay money that it's not all happening as we speak, which is what consultants everywhere would like their audiences to believe. The train is pulling out, the ship is sailing, everyone else knows about this (so why don't you?). The proper attitude for the real hard sell is mild surprise that your clients haven't heard the good news that you're bringing them: the revolution's here, guys! No one told you? I have developed antibodies to this over the years. In my own experience, scientific revolutions do not announce themselves on polished PowerPoint slides.
By Derek Lowe