Deep Learning Architectures: A Mathematical Approach (Springer Series in the Data Sciences)

Author: Ovidiu Calin
Language: English | Hardback – 14 Feb 2020
This book describes how neural networks operate from a mathematical point of view. As a result, neural networks can be interpreted both as universal function approximators and as information processors. The book bridges the gap between the ideas and concepts of neural networks, which are nowadays used at an intuitive level, and the precise modern mathematical language, presenting the best practices of the former while enjoying the robustness and elegance of the latter.
This book can be used in a graduate course in deep learning, with the first few parts being accessible to senior undergraduates. In addition, the book will be of wide interest to machine learning researchers interested in a theoretical understanding of the subject.
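The claim that networks are universal function approximators can be made precise. As an illustration (a standard Cybenko-style statement, not necessarily the exact formulation used in the book): for every continuous function $f : [0,1]^n \to \mathbb{R}$, every continuous sigmoidal activation $\sigma$, and every $\varepsilon > 0$, there exist an integer $N$, weights $w_i \in \mathbb{R}^n$, biases $b_i \in \mathbb{R}$, and coefficients $\alpha_i \in \mathbb{R}$ such that
\[
  \sup_{x \in [0,1]^n} \Big| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma(w_i^\top x + b_i) \Big| < \varepsilon,
\]
that is, a single hidden layer with enough units can approximate any continuous function on a compact domain to arbitrary accuracy.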
 
 


All formats and editions:

Paperback (1) – Springer International Publishing, 14 Feb 2021: 432.41 lei, delivery in 6-8 weeks
Hardback (1) – Springer International Publishing, 14 Feb 2020: 517.17 lei, delivery in 3-5 weeks (express: +72.94 lei, 4-10 days)

From the series Springer Series in the Data Sciences

Price: 517.17 lei

Old price: 646.46 lei
-20% New

Express points: 776

Estimated price in other currencies:
99.06€  107.39$  82.34£

Book in stock

Economy delivery: 11-25 November
Express delivery: 25-31 October for 82.93 lei

Orders by phone: 021 569.72.76

Specifications

ISBN-13: 9783030367206
ISBN-10: 3030367207
Pages: 760
Illustrations: XXX, 760 p., 207 illus., 35 illus. in color
Dimensions: 178 x 254 x 45 mm
Weight: 1.84 kg
Edition: 1st ed. 2020
Publisher: Springer International Publishing
Collection: Springer
Series: Springer Series in the Data Sciences

Place of publication: Cham, Switzerland

Contents

Introductory Problems
Activation Functions
Cost Functions
Finding Minima Algorithms
Abstract Neurons
Neural Networks
Approximation Theorems
Learning with One-dimensional Inputs
Universal Approximators
Exact Learning
Information Representation
Information Capacity Assessment
Output Manifolds
Neuromanifolds
Pooling
Convolutional Networks
Recurrent Neural Networks
Classification
Generative Models
Stochastic Networks
Hints and Solutions


Reviews

“This book is useful to students who have already had an introductory course in machine learning and are further interested to deepen their understanding of the machine learning material from the mathematical point of view.” (T. C. Mohan, zbMATH 1441.68001, 2020)

Biographical note

Ovidiu Calin, a graduate of the University of Toronto, is a professor at Eastern Michigan University and a former visiting professor at Princeton University and the University of Notre Dame. He has delivered numerous lectures at universities in Japan, Hong Kong, Taiwan, and Kuwait over the last 15 years. His publications include over 60 articles and 8 books in the fields of machine learning, computational finance, stochastic processes, variational calculus, and geometric analysis.


Back cover text

This book describes how neural networks operate from a mathematical point of view. As a result, neural networks can be interpreted both as universal function approximators and as information processors. The book bridges the gap between the ideas and concepts of neural networks, which are nowadays used at an intuitive level, and the precise modern mathematical language, presenting the best practices of the former while enjoying the robustness and elegance of the latter.
This book can be used in a graduate course in deep learning, with the first few parts being accessible to senior undergraduates. In addition, the book will be of wide interest to machine learning researchers interested in a theoretical understanding of the subject.
 
 
 


Features

Contains a fair number of end-of-chapter exercises
Full solutions provided for all exercises
Appendices cover the background topics needed for the book's exposition
