Training of sparse and dense deep neural networks: Fewer parameters, same performance
This study presents a variant of spectral learning for deep neural networks in which two sets of eigenvalues are adjusted for each inter-layer mapping, significantly improving performance while reducing the number of trainable parameters. Inspired by homeostatic plasticity, the method offers a computationally efficient alternative to conventional training, achieving comparable accuracy with a far smaller parameter set. It also enables the construction of sparser networks with strong classification performance.
Lorenzo Chicchi, Lorenzo Giambagli, Lorenzo Buffoni, Timoteo Carletti, Marco Ciavarella, Duccio Fanelli
PDF
Cite
Source Document
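The core idea, training only the eigenvalues of each layer mapping while the eigenvectors stay fixed, can be illustrated with a minimal PyTorch sketch. This is an illustrative assumption, not the paper's exact parametrization: the class name `SpectralLinear`, the names `phi`, `lam_in`, and `lam_out`, and the diag-dense-diag factorization are all my own choices, with a frozen random matrix standing in for the fixed eigenvector basis.

```python
import torch
import torch.nn as nn

class SpectralLinear(nn.Module):
    """Linear layer with a fixed random mixing matrix (stand-in for the
    frozen eigenvectors); only two eigenvalue vectors are trained."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Fixed, non-trainable mixing matrix, registered as a buffer so it
        # is saved with the model but excluded from gradient updates.
        self.register_buffer(
            "phi", torch.randn(out_features, in_features) / in_features ** 0.5
        )
        # Two trainable eigenvalue sets, one per side of the layer mapping.
        self.lam_in = nn.Parameter(torch.ones(in_features))
        self.lam_out = nn.Parameter(torch.ones(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight: diag(lam_out) @ phi @ diag(lam_in),
        # built by broadcasting instead of explicit diagonal matrices.
        w = self.lam_out.unsqueeze(1) * self.phi * self.lam_in.unsqueeze(0)
        return x @ w.t()

layer = SpectralLinear(784, 128)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 784 + 128 = 912 trainable parameters,
                  # versus 784 * 128 = 100352 for a dense nn.Linear
```

The sketch makes the parameter saving concrete: the trainable count grows as the sum of the layer widths rather than their product, which is the "fewer parameters" trade-off the title refers to.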