
Embedded Deep Learning: Algorithms, Architectures and Circuits for Always-on Neural Network Processing

Authors: Bert Moons, Daniel Bankman, Marian Verhelst
Language: English · Hardback – 3 Nov 2018
This book covers algorithmic and hardware implementation techniques that enable embedded deep learning. The authors describe synergistic design approaches at the application, algorithm, computer-architecture, and circuit levels that help reduce the computational cost of deep learning algorithms. The impact of these techniques is demonstrated in four silicon prototypes for embedded deep learning.
  • Gives a wide overview of effective solutions for energy-efficient neural networks on battery-constrained wearable devices;
  • Discusses the optimization of neural networks for embedded deployment at all levels of the design hierarchy – applications, algorithms, hardware architectures, and circuits – supported by real silicon prototypes;
  • Elaborates on how to design efficient Convolutional Neural Network processors, exploiting parallelism and data reuse, sparse operations, and low-precision computations;
  • Supports the introduced theory and design concepts with four real silicon prototypes. The implementation and measured performance of each physical realization are discussed in detail to illustrate and highlight the introduced cross-layer design concepts.
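To make the low-precision-computation theme above concrete, here is a minimal, book-independent sketch in Python of symmetric uniform weight quantization, one of the standard techniques for reducing multiply-accumulate cost (the function name and values are illustrative, not taken from the book):

```python
def quantize_uniform(w, bits):
    # Symmetric uniform quantizer: map floats onto 2**bits integer levels.
    # Small integer codes replace full-precision weights, which is what
    # lets a processor use cheaper low-precision arithmetic.
    scale = max(abs(x) for x in w) / (2 ** (bits - 1) - 1)
    codes = [round(x / scale) for x in w]           # integer weight codes
    return [c * scale for c in codes], scale        # dequantized view + step size

w = [0.70, -0.21, 0.05, -0.98]
wq, s = quantize_uniform(w, bits=4)                 # 4-bit weights
# every dequantized weight lies within half a quantization step of the original
print(all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, wq)))
```

Halving the bit width roughly quadratically reduces multiplier energy, which is why precision scaling features so prominently in energy-scalable accelerator design.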

All formats and editions

Format: Price · Delivery
Paperback (1): 568.63 lei · 6-8 weeks
  Springer International Publishing – 19 Jan 2019: 568.63 lei · 6-8 weeks
Hardback (1): 767.80 lei · 6-8 weeks
  Springer International Publishing – 3 Nov 2018: 767.80 lei · 6-8 weeks

Price: 767.80 lei

Old price: 936.35 lei

-18% New

Express Points: 1152

Estimated price in foreign currency:
146.93€ · 154.57$ · 121.65£

Book printed on demand

Economy delivery: 14-28 January '25

Order line: 021 569.72.76

Specifications

ISBN-13: 9783319992228
ISBN-10: 3319992228
Pages: 272
Illustrations: XVI, 206 p. 124 illus., 92 illus. in color.
Dimensions: 155 x 235 x 17 mm
Weight: 0.49 kg
Edition: 1st ed. 2019
Publisher: Springer International Publishing
Series: Springer
Place of publication: Cham, Switzerland

Contents

Chapter 1. Embedded Deep Neural Networks
Chapter 2. Optimized Hierarchical Cascaded Processing
Chapter 3. Hardware-Algorithm Co-optimizations
Chapter 4. Circuit Techniques for Approximate Computing
Chapter 5. ENVISION: Energy-Scalable Sparse Convolutional Neural Network Processing
Chapter 6. BINAREYE: Digital and Mixed-signal Always-on Binary Neural Network Processing
Chapter 7. Conclusions, Contributions and Future Work
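Chapters 5 and 6 cover processors for sparse and binary neural networks. As a hedged, book-independent illustration of why binary networks map so well to hardware, here is a Python sketch of the classic XNOR-popcount trick: when weights and activations are constrained to +1/-1 and bit-packed (1 encoding +1, 0 encoding -1), a dot product reduces to an XNOR and a population count (the function and encoding here are illustrative, not the book's implementation):

```python
def binary_dot(a_bits, b_bits, n):
    # a_bits, b_bits: n values packed one per bit; bit=1 encodes +1, bit=0 encodes -1.
    # XNOR marks positions where the signs match; popcount counts the matches;
    # rescaling recovers the +/-1 dot product: matches - mismatches = 2*matches - n.
    mask = (1 << n) - 1
    matches = bin(~(a_bits ^ b_bits) & mask).count("1")
    return 2 * matches - n

# a = [+1, -1, +1, +1] -> 0b1101 (bit i = element i), b = [-1, -1, +1, +1] -> 0b1100
print(binary_dot(0b1101, 0b1100, 4))  # 2, same as (+1)(-1)+(-1)(-1)+(+1)(+1)+(+1)(+1)
```

Replacing multiply-accumulates with single-gate XNORs and a popcount is what makes always-on binary-network inference feasible within microwatt-level power budgets.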


About the authors

Dr. ir. Bert Moons received the B.S., M.S., and PhD degrees in Electrical Engineering from KU Leuven, Leuven, Belgium, in 2011, 2013, and 2018, respectively. He performed his PhD research at ESAT-MICAS as an IWT-funded Research Assistant, focusing on energy-scalable and run-time adaptable digital circuits for embedded deep learning applications. He has authored more than 15 conference and journal publications, was a Visiting Research Student in the Murmann Mixed-Signal Group at Stanford University, and received the SSCS Predoctoral Achievement Award in 2018. He is currently with Synopsys as a hardware design architect for the DesignWare EV6x Embedded Vision and Deep Learning processors.
Daniel Bankman received the S.B. degree in electrical engineering from the Massachusetts Institute of Technology, Cambridge, MA, in 2012, and the M.S. degree from Stanford University, Stanford, CA, in 2015. Since 2012, he has been working toward the Ph.D. degree at Stanford University, focusing on mixed-signal processing for machine learning. He has held internship positions with Analog Devices and Intel. His research interests include algorithms, architectures, and circuits for energy-efficient learning and inference in smart devices. He received the Texas Instruments Stanford Graduate Fellowship in 2012, the Numerical Technologies Founders Prize in 2013, and the John von Neumann Student Research Award in 2015 and 2017.
Prof. Dr. ir. Marian Verhelst is a professor at the MICAS laboratories (MICro-electronics And Sensors) of the Electrical Engineering Department of KU Leuven. Her research focuses on embedded machine learning, energy-efficient hardware accelerators, self-adaptive circuits and systems, and low-power sensing and processing. Before taking up her professorship, she received her PhD from KU Leuven cum ultima laude, was a visiting scholar at the Berkeley Wireless Research Center (BWRC) of UC Berkeley, and worked as a research scientist at Intel Labs, Hillsboro, OR. Prof. Verhelst is a member of the DATE conference executive committee, and has served on the ESSCIRC and ISSCC technical program committees and the ISSCC executive committee. She is an SSCS Distinguished Lecturer, was a member of the Young Academy of Belgium, has served as an associate editor for TCAS-II and JSSC, and is a member of the STEM advisory committee to the Flemish Government. She holds an ERC Grant from the European Union.



