A team of researchers from Stanford University and Google has released a paper highlighting a deep learning approach they say shows promise in the field of drug discovery. What they found, essentially, is that more data covering more biological processes seems like a good recipe for uncovering new drugs.
Importantly, the paper doesn’t claim a major breakthrough that will revolutionize the pharmaceutical industry today. It simply shows that analyzing a whole lot of data across a whole lot of different target processes — in this case, 37.8 million data points across 259 tasks — works measurably better for discovering possible drugs than analyzing smaller datasets or building models that each target a single task. (Read the Google blog post for a higher-level, but still very condensed, explanation.)
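The idea behind training on many tasks at once — multitask learning — is that a single network shares most of its layers across all of the prediction targets, with only a small task-specific output head per target. Below is a minimal sketch of that structure in NumPy; the layer sizes and the fingerprint input are hypothetical illustrations, not the paper’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
n_features = 1024   # e.g. a molecular fingerprint length
n_hidden = 64       # size of the shared representation
n_tasks = 259       # one "active/inactive" output head per assay task

# Shared trunk: one hidden layer whose weights are learned jointly
# across every task, so all 259 tasks inform the same representation.
W_shared = rng.normal(0, 0.01, (n_features, n_hidden))
b_shared = np.zeros(n_hidden)

# Per-task heads: each task only gets its own small output layer.
W_tasks = rng.normal(0, 0.01, (n_tasks, n_hidden))
b_tasks = np.zeros(n_tasks)

def predict(x):
    """Return one activity probability per task for a single compound."""
    h = np.maximum(0, x @ W_shared + b_shared)   # ReLU shared trunk
    logits = W_tasks @ h + b_tasks               # one logit per task
    return 1 / (1 + np.exp(-logits))             # sigmoid per head

# A random binary "fingerprint" standing in for a real compound encoding.
fingerprint = rng.integers(0, 2, n_features).astype(float)
probs = predict(fingerprint)
print(probs.shape)  # (259,)
```

The contrast with single-task modeling is that a separate model per assay would see only that assay’s (often small) dataset, while here the shared trunk is fit against all of the data at once — which is the effect the paper measures.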
But when talking about a process in drug discovery that can take years and cost drug companies billions of dollars that ultimately make their way into…