Deep Learning
Explainability and Model Development
My research in deep learning started from a seemingly simple yet profound question:
Can a neural network be schematized as a graph, and how does its functionality relate to the spectral properties of such a graph? Remarkably, following this line, it becomes possible to rank neurons by relevance for
Structural Pruning, and to reformulate the learning process through dynamical systems to build more adaptive recurrent architectures. More recently, my focus has shifted towards
Graph Neural Networks (GNNs) in the context of molecular physics, a field where mathematical and physical intuition can mutually reinforce each other. This includes developing techniques to approximate the behavior of Message Passing Neural Networks (MPNNs) used in molecular potentials, as well as designing architectures tailored to the
force matching problem, a central challenge in learning accurate and transferable interatomic force fields from high-accuracy simulation data. I also have some
applied and more theoretical projects in progress;
follow me for new papers on the subject!
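To give a flavor of the graph view of networks mentioned above, here is a minimal sketch of one possible instantiation (an illustration with hypothetical choices, not the exact method from my papers): score a layer's neurons by eigenvector centrality of the weighted bipartite graph defined by the weight matrix, then prune the least central ones.

```python
import numpy as np

def spectral_neuron_scores(W, n_iter=100):
    """Score the output neurons of one layer by eigenvector centrality.

    W: (n_out, n_in) weight matrix, viewed as a weighted bipartite graph
    between input and output neurons. Returns one relevance score per
    output neuron (larger = more central = kept longer under pruning).
    """
    A = np.abs(W)  # edge weights = |w_ij|
    n_out, n_in = A.shape
    # Bipartite adjacency: inputs on one side, outputs on the other.
    G = np.zeros((n_in + n_out, n_in + n_out))
    G[:n_in, n_in:] = A.T
    G[n_in:, :n_in] = A
    # Power iteration for the leading (Perron) eigenvector.
    v = np.ones(n_in + n_out) / np.sqrt(n_in + n_out)
    for _ in range(n_iter):
        v = G @ v
        v /= np.linalg.norm(v)
    return v[n_in:]  # centrality of the output neurons

# Structurally prune the 2 least central neurons of a toy layer.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
scores = spectral_neuron_scores(W)
keep = np.argsort(scores)[2:]       # drop the 2 lowest-ranked neurons
W_pruned = W[np.sort(keep)]
print(W_pruned.shape)               # (6, 16)
```

Removing whole rows of the weight matrix (rather than individual entries) is what makes this structural pruning: the pruned layer is simply a smaller dense layer.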
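The force matching objective itself is easy to state: the model energy E(R) defines forces F = -∇E, and training minimizes the mean squared error against reference forces. A minimal NumPy illustration, with a toy harmonic pair potential standing in for a learned MPNN potential (the potential and the "reference" data here are purely illustrative):

```python
import numpy as np

def energy(positions, k=1.0, r0=1.0):
    """Toy pair potential: harmonic springs between all atom pairs.
    Stands in for a learned potential E_theta(R)."""
    diff = positions[:, None, :] - positions[None, :, :]
    # Add eye() to the squared distances so the diagonal is well-defined.
    r = np.sqrt((diff ** 2).sum(-1) + np.eye(len(positions)))
    pair = 0.5 * k * (r - r0) ** 2
    np.fill_diagonal(pair, 0.0)
    return 0.5 * pair.sum()  # each pair counted once

def forces(positions, eps=1e-5):
    """F = -dE/dR via central finite differences (autodiff in practice)."""
    F = np.zeros_like(positions)
    for i in range(positions.shape[0]):
        for d in range(3):
            p_plus, p_minus = positions.copy(), positions.copy()
            p_plus[i, d] += eps
            p_minus[i, d] -= eps
            F[i, d] = -(energy(p_plus) - energy(p_minus)) / (2 * eps)
    return F

def force_matching_loss(F_pred, F_ref):
    """Mean squared force error over atoms and Cartesian components."""
    return np.mean((F_pred - F_ref) ** 2)

R = np.random.default_rng(1).normal(size=(4, 3))   # 4 atoms in 3D
F_pred = forces(R)
F_ref = F_pred + 0.01       # stand-in for ab initio reference forces
print(force_matching_loss(F_pred, F_ref))  # ≈ 1e-4 by construction
```

In practice the reference forces come from high-accuracy (e.g. ab initio) simulations, and matching forces rather than energies alone gives 3N supervision signals per configuration instead of one.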