Theory
Kernel shape renormalization explains output-output correlations in finite Bayesian one-hidden-layer networks
Finite-width one-hidden-layer networks display nontrivial output-output correlations that vanish in the lazy-training infinite-width limit. This manuscript explains this observation using kernel shape renormalization in the proportional limit of Bayesian deep learning.
Paolo Baglioni, Lorenzo Giambagli, Alessandro Vezzani, Raffaella Burioni, Pietro Rotondo, Rosalba Pacelli
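A schematic way to read the claim (the notation is mine, not taken from the paper): for a network with $D$ outputs $f_1,\dots,f_D$, the infinite-width Bayesian prior makes distinct outputs independent, whereas kernel shape renormalization in the proportional limit replaces the scalar kernel rescaling with a matrix one,

$$
\langle f_c(x)\, f_{c'}(x') \rangle_{\infty} = \delta_{cc'}\, K(x,x'),
\qquad
\langle f_c(x)\, f_{c'}(x') \rangle_{\text{prop.}} \simeq Q_{cc'}\, K(x,x'),
$$

where the off-diagonal entries of the $D \times D$ matrix $Q$ are what encode the finite-width output-output correlations.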
Machine learning in spectral domain
We introduce a new method for training deep neural networks that operates in the spectral space rather than the traditional space of node-to-node weights. The eigenvalues and eigenvectors of the transfer operators that define each layer are adjusted during training, yielding improved performance over standard methods with an equivalent number of parameters.
Lorenzo Giambagli, Lorenzo Buffoni, Timoteo Carletti, Walter Nocentini, Duccio Fanelli
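A minimal sketch of how such a spectral parametrization might look in code (an illustrative reading, not the authors' reference implementation; the class name, initialization, and normalization are my assumptions):

```python
import torch
import torch.nn as nn

class SpectralLinear(nn.Module):
    """Illustrative sketch: a dense layer whose weights are generated from
    the eigenvalues and an eigenvector block of a transfer operator,
    instead of being free parameters themselves."""
    def __init__(self, n_in, n_out, train_eigenvectors=True):
        super().__init__()
        # One trainable eigenvalue per input node and per output node.
        self.lam_in = nn.Parameter(torch.randn(n_in))
        self.lam_out = nn.Parameter(torch.randn(n_out))
        # Off-diagonal eigenvector block coupling input nodes to output nodes.
        phi = torch.randn(n_out, n_in) / n_in ** 0.5
        self.phi = nn.Parameter(phi, requires_grad=train_eigenvectors)

    def forward(self, x):
        # Input-to-output block of A = Phi Lambda Phi^{-1}:
        # w_ji = phi_ji * (lam_i - lam_j).
        w = self.phi * (self.lam_in.unsqueeze(0) - self.lam_out.unsqueeze(1))
        return x @ w.t()
```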
Training of sparse and dense deep neural networks: Fewer parameters, same performance
This study presents a variant of spectral learning for deep neural networks in which two sets of eigenvalues are adjusted for each layer mapping, significantly enhancing network performance with fewer trainable parameters. The method, inspired by homeostatic plasticity, offers a computationally efficient alternative to conventional training, achieving comparable results with a reduced set of trainable parameters. It also enables the creation of sparser networks with strong classification performance.
Lorenzo Chicchi, Lorenzo Giambagli, Lorenzo Buffoni, Timoteo Carletti, Marco Ciavarella, Duccio Fanelli
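Reusing the hypothetical SpectralLinear sketch above: freezing the eigenvector block and training only the two eigenvalue sets leaves a layer mapping n_in to n_out nodes with n_in + n_out trainable parameters instead of n_in * n_out, which is the kind of reduction the abstract refers to (the exact training protocol and sparsification procedure follow the paper, not this snippet).

```python
# Freeze the eigenvector block; only the two eigenvalue sets remain trainable.
layer = SpectralLinear(784, 10, train_eigenvectors=False)
n_trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(n_trainable)  # 794 = 784 + 10, versus 7840 free weights in a dense layer
```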