Deep Learning
How a student becomes a teacher: learning and forgetting through spectral methods
This study explores the teacher-student paradigm in machine learning, focusing on overparameterized student networks trained on data generated by a fixed teacher network. It introduces an optimization scheme based on the spectral representation of the linear transfer of information between layers. This approach identifies a stable student substructure that mirrors the teacher's effective complexity, and shows that pruning the remaining nodes, ranked by their optimized eigenvalues, does not degrade performance, a behavior reminiscent of a second-order phase transition with universality traits.
Lorenzo Giambagli, Lorenzo Buffoni, Lorenzo Chicchi, Duccio Fanelli
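As a minimal illustration of the teacher-student setup described above, one can train an overparameterized student on labels produced by a small, fixed teacher. This is a sketch under assumed details (network widths, activations, and the training loop are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Fixed "teacher": a small network that generates the targets.
teacher = nn.Sequential(nn.Linear(10, 4), nn.Tanh(), nn.Linear(4, 1))
for p in teacher.parameters():
    p.requires_grad_(False)

# Overparameterized "student": many more hidden nodes than the teacher.
student = nn.Sequential(nn.Linear(10, 64), nn.Tanh(), nn.Linear(64, 1))

x = torch.randn(2048, 10)   # random inputs
y = teacher(x)              # labels come from the fixed teacher

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(student(x), y)
    loss.backward()
    opt.step()
```

In the paper's spectral formulation, each hidden node additionally carries a trainable eigenvalue, and the optimized eigenvalue magnitudes reveal which student nodes form the stable substructure that mirrors the teacher.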
A Bridge between Dynamical Systems and Machine Learning: Engineered Ordinary Differential Equations as Classification Algorithm (EODECA)
EODECAs merge machine learning with dynamical systems to improve the interpretability and transparency of neural networks. Built on continuous ordinary differential equations, they combine high classification accuracy with insight into how the data are processed, addressing the opacity of traditional deep learning models. The approach is a step toward more comprehensible machine learning models.
Raffaele Marino, Lorenzo Giambagli, Lorenzo Chicchi, Lorenzo Buffoni, Duccio Fanelli
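A minimal sketch of the underlying idea, classification by integrating an engineered ODE, under assumed details (explicit Euler integration, a generic learned vector field, and a linear readout are illustrative choices, not the EODECA construction):

```python
import torch
import torch.nn as nn

class ODEClassifier(nn.Module):
    """Toy ODE-based classifier: integrate dx/dt = f(x), read out the final state."""
    def __init__(self, dim, n_classes, steps=20, dt=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())  # learned vector field
        self.readout = nn.Linear(dim, n_classes)
        self.steps, self.dt = steps, dt

    def forward(self, x):
        # Explicit Euler integration of the ODE.
        for _ in range(self.steps):
            x = x + self.dt * self.f(x)
        return self.readout(x)

model = ODEClassifier(dim=784, n_classes=10)
logits = model(torch.randn(32, 784))   # e.g. flattened MNIST digits
```

Because the trajectory of each sample can be inspected at every integration step, models of this kind are easier to interrogate than an opaque feedforward stack.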
Complex Recurrent Spectral Network
The Complex Recurrent Spectral Network (C-RSN) is a novel AI model that more accurately mimics biological neural processes using localized non-linearity, complex eigenvalues, and separated memory/input functionalities. It demonstrates dynamic, oscillatory behavior akin to biological cognition and effectively classifies data, as shown in tests with the MNIST dataset.
Lorenzo Chicchi, Lorenzo Giambagli, Lorenzo Buffoni, Raffaele Marino, Duccio Fanelli
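To see why complex eigenvalues yield the oscillatory behavior mentioned above, consider a linear recurrence whose update matrix realizes a single complex-conjugate eigenpair (the numbers below are illustrative assumptions, not trained C-RSN values):

```python
import numpy as np

# A complex-conjugate eigenpair lambda = r * exp(+/- i*theta) realized as a
# real 2x2 rotation-scaling block: the state oscillates while decaying (r < 1).
r, theta = 0.98, 0.3
A = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])
trajectory = [x]
for _ in range(100):
    x = A @ x          # linear recurrent update x_{t+1} = A x_t
    trajectory.append(x)
# Each component oscillates with period 2*pi/theta and decays at rate r,
# i.e. damped oscillations reminiscent of biological neural rhythms.
```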
Non-parametric analysis of the Hubble Diagram with Neural Networks
This study introduces a neural network-based method for non-parametric analysis of the Hubble diagram, extended to high redshifts. Validated on simulated data, the method agrees with a flat ΛCDM (Lambda cold dark matter) model (ΩM ≈ 0.3) up to z ≈ 1-1.5 but deviates at higher redshifts. It also finds ΩM values that increase with redshift, pointing to a possible evolution of dark energy.
Lorenzo Giambagli, Duccio Fanelli, Guido Risaliti, Matilde Signorini
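At its core, such a non-parametric fit regresses the distance modulus on redshift with a small network; the sketch below uses placeholder data and an assumed architecture and loss, not the paper's setup:

```python
import torch
import torch.nn as nn

# Toy non-parametric fit of the Hubble diagram: distance modulus mu vs redshift z.
model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# In practice z, mu, sigma_mu would come from supernova/quasar catalogs.
z = torch.rand(500, 1) * 5.0
mu = 44.0 + 5.0 * torch.log10(z + 0.1)   # placeholder data, not real cosmology
sigma_mu = 0.3 * torch.ones_like(mu)

for step in range(3000):
    opt.zero_grad()
    # chi^2-style loss: measurement errors weight each point.
    loss = (((model(z) - mu) / sigma_mu) ** 2).mean()
    loss.backward()
    opt.step()
```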
Recurrent Spectral Network (RSN): Shaping a discrete map to reach automated classification
The Recurrent Spectral Network (RSN) is an automated classification method that shapes a discrete-map dynamical system so that input data are steered toward class-specific targets. Its effectiveness is demonstrated on both a simple model and a standard image-processing dataset.
Lorenzo Chicchi, Duccio Fanelli, Lorenzo Giambagli, Lorenzo Buffoni, Timoteo Carletti
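A hedged sketch of classification by a trained discrete map (the map, the targets, and the toy two-class problem below are illustrative assumptions, not the RSN construction):

```python
import torch
import torch.nn as nn

# Discrete map x_{t+1} = F(x_t), trained so each input flows to its class target.
F = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
targets = torch.tensor([[1.0, 0.0], [0.0, 1.0]])   # one target state per class

def iterate(x, steps=10):
    for _ in range(steps):
        x = F(x)
    return x

opt = torch.optim.Adam(F.parameters(), lr=1e-3)
x0 = torch.randn(256, 2)
labels = (x0[:, 0] > 0).long()                     # toy two-class problem
for step in range(1000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(iterate(x0), targets[labels])
    loss.backward()
    opt.step()

# Classify by whichever target the trajectory lands closest to.
pred = torch.cdist(iterate(x0), targets).argmin(dim=1)
```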
Spectral pruning of fully connected layers
Training neural networks in spectral space focuses on optimizing eigenvalues and eigenvectors instead of individual weights, introducing an effective implicit bias that enables node pruning without sacrificing performance.
Lorenzo Buffoni, Enrico Civitelli, Lorenzo Giambagli, Lorenzo Chicchi, Duccio Fanelli
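A minimal sketch of eigenvalue-based node pruning, assuming nodes are ranked by the magnitude of their trained eigenvalues (the threshold and tensor shapes are illustrative):

```python
import torch

# Suppose each hidden node j carries a trained eigenvalue lam[j]: nodes whose
# eigenvalue is negligible contribute little and can be removed.
lam = torch.randn(64)                        # stand-in for trained eigenvalues
W_in, W_out = torch.randn(64, 784), torch.randn(10, 64)

keep = lam.abs() > lam.abs().quantile(0.5)   # keep the top half by |lambda|
W_in_pruned = W_in[keep]                     # drop rows feeding pruned nodes
W_out_pruned = W_out[:, keep]                # drop columns reading pruned nodes
lam_pruned = lam[keep]
print(f"kept {int(keep.sum())} of {len(lam)} hidden nodes")
```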
Machine learning in spectral domain
We introduce a method for training deep neural networks that operates in spectral space rather than in the traditional space of node-to-node weights. It adjusts the eigenvalues and eigenvectors of the inter-layer transfer operators and improves on standard training methods at an equivalent number of parameters.
Lorenzo Giambagli, Lorenzo Buffoni, Timoteo Carletti, Walter Nocentini, Duccio Fanelli
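A hedged sketch of a spectral linear layer. The parametrization below, in which each effective weight couples a trainable eigenvector entry with the eigenvalues of the two nodes it connects, follows the spirit of the spectral construction; initialization and shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SpectralLinear(nn.Module):
    """Linear layer parametrized by eigenvalues and eigenvector entries
    of an inter-layer transfer operator (sketch of the spectral scheme)."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.phi = nn.Parameter(torch.randn(n_out, n_in) * 0.01)  # eigenvector entries
        self.lam_in = nn.Parameter(torch.ones(n_in))              # input-node eigenvalues
        self.lam_out = nn.Parameter(torch.zeros(n_out))           # output-node eigenvalues
        self.bias = nn.Parameter(torch.zeros(n_out))

    def forward(self, x):
        # Effective weights: eigenvector entry scaled by the eigenvalue gap.
        W = self.phi * (self.lam_in.unsqueeze(0) - self.lam_out.unsqueeze(1))
        return x @ W.T + self.bias
```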
Mobility-based prediction of SARS-CoV-2 spreading
This paper analyzes the effectiveness of containment measures for SARS-CoV-2, using mobility data to gauge their impact. A deep learning model predicts virus spread scenarios in Italy, showing how these measures help flatten the infection curve and estimating the time required for their noticeable effects.
Lorenzo Chicchi, Lorenzo Giambagli, Lorenzo Buffoni, Duccio Fanelli
Training of sparse and dense deep neural networks: Fewer parameters, same performance
This study presents a variant of spectral learning for deep neural networks, where adjusting two sets of eigenvalues for each layer mapping significantly enhances network performance with fewer trainable parameters. This method, inspired by homeostatic plasticity, offers a computationally efficient alternative to conventional training, achieving comparable results with a simpler parameter setup. It also enables the creation of sparser networks with impressive classification abilities.
Lorenzo Chicchi, Lorenzo Giambagli, Lorenzo Buffoni, Timoteo Carletti, Marco Ciavarella, Duccio Fanelli
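Continuing the hedged SpectralLinear sketch above, the parameter saving of this eigenvalue-only variant is easy to see: freezing the eigenvector entries leaves n_in + n_out trainable eigenvalues per layer instead of n_in * n_out weights (again an illustration, not the paper's code):

```python
# Eigenvalue-only training: freeze the (random) eigenvector entries so that
# only the two sets of eigenvalues per layer remain trainable.
layer = SpectralLinear(784, 64)
layer.phi.requires_grad_(False)

trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)   # 784 + 64 + 64 (two eigenvalue sets plus bias) vs 784 * 64 weights
```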