Deep neural networks (DNNs) have proved to be highly promising tools for analyzing large amounts of data, and they could speed up research in various scientific fields. Over the past few years, for instance, computer scientists have trained DNN-based models to analyze chemical data and identify promising chemicals for various applications.

Researchers at the Massachusetts Institute of Technology (MIT) recently investigated the neural scaling behavior of large DNN-based models trained to generate advantageous compositions and learn interatomic potentials. Their paper, published in Nature Machine Intelligence, shows how quickly the performance of these models improves as their size and the amount of data they are trained on are increased.
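Neural scaling behavior is commonly summarized as a power law, where test loss falls off as model size or dataset size grows. The snippet below is a minimal illustration of that general idea, not the MIT team's method: the parameter counts and losses are synthetic, and the exponent is simply recovered by a log-log linear fit.

```python
import numpy as np

# Illustrative scaling-law form: loss L(N) ~ a * N**(-alpha),
# where N is model size. All numbers here are synthetic.
sizes = np.array([1e6, 1e7, 1e8, 1e9])   # hypothetical parameter counts
losses = 2.0 * sizes ** -0.1             # synthetic losses with alpha = 0.1

# A power law is a straight line in log-log space, so a linear
# regression on the logs recovers the scaling exponent.
slope, intercept = np.polyfit(np.log(sizes), np.log(losses), 1)
alpha = -slope
print(f"estimated scaling exponent alpha ≈ {alpha:.2f}")
```

Fits like this are how scaling studies extrapolate: once the exponent is estimated from smaller runs, it predicts how much extra data or model capacity a target performance would require.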