The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks
Authors: Daniel A. Roberts, Sho Yaida; contributions by Boris Hanin. Language: English. Hardback – 25 May 2022
Price: 430.75 lei
Old price: 468.21 lei
-8% New
Express Points: 646
Estimated price in other currencies:
82.43€ • 86.72$ • 68.42£
In stock
Economy delivery: 26 December 2024 - 09 January 2025
Express delivery: 11-17 December for 51.24 lei
Order line: 021 569.72.76
Specifications
ISBN-13: 9781316519332
ISBN-10: 1316519333
Pages: 472
Dimensions: 184 x 261 x 26 mm
Weight: 1.03 kg
Edition: New
Publisher: Cambridge University Press
Series: Cambridge University Press
Place of publication: Cambridge, United Kingdom
Contents
Preface; 0. Initialization; 1. Pretraining; 2. Neural networks; 3. Effective theory of deep linear networks at initialization; 4. RG flow of preactivations; 5. Effective theory of preactivations at initialization; 6. Bayesian learning; 7. Gradient-based learning; 8. RG flow of the neural tangent kernel; 9. Effective theory of the NTK at initialization; 10. Kernel learning; 11. Representation learning; ∞. The end of training; ε. Epilogue; A. Information in deep learning; B. Residual learning; References; Index.
Reviews
'In the history of science and technology, the engineering artifact often comes first: the telescope, the steam engine, digital communication. The theory that explains its function and its limitations often appears later: the laws of refraction, thermodynamics, and information theory. With the emergence of deep learning, AI-powered engineering wonders have entered our lives — but our theoretical understanding of the power and limits of deep learning is still partial. This is one of the first books devoted to the theory of deep learning, and lays out the methods and results from recent theoretical approaches in a coherent manner.' Yann LeCun, New York University and Chief AI Scientist at Meta
'For a physicist, it is very interesting to see deep learning approached from the point of view of statistical physics. This book provides a fascinating perspective on a topic of increasing importance in the modern world.' Edward Witten, Institute for Advanced Study
'This is an important book that contributes big, unexpected new ideas for unraveling the mystery of deep learning's effectiveness, in unusually clear prose. I hope it will be read and debated by experts in all the relevant disciplines.' Scott Aaronson, University of Texas at Austin
'It is not an exaggeration to say that the world is being revolutionized by deep learning methods for AI. But why do these deep networks work? This book offers an approach to this problem through the sophisticated tools of statistical physics and the renormalization group. The authors provide an elegant guided tour of these methods, interesting for experts and non-experts alike. They write with clarity and even moments of humor. Their results, many presented here for the first time, are the first steps in what promises to be a rich research program, combining theoretical depth with practical consequences.' William Bialek, Princeton University
'This book's physics-trained authors have made a cool discovery, that feature learning depends critically on the ratio of depth to width in the neural net.' Gilbert Strang, Massachusetts Institute of Technology
Description
This volume develops an effective theory approach to understanding deep neural networks of practical relevance.