
Automated Deep Learning Using Neural Network Intelligence: Develop and Design PyTorch and TensorFlow Models Using Python

Author: Ivan Gridin
Language: English · Paperback – 21 June 2022
Optimize, develop, and design PyTorch and TensorFlow models for a specific problem using the Microsoft Neural Network Intelligence (NNI) toolkit. This book includes practical examples illustrating automated deep learning approaches and provides techniques to facilitate your deep learning model development.

The first chapters cover the basics of the NNI toolkit and methods for solving hyper-parameter optimization tasks. You will understand the black-box function maximization problem in NNI and learn how to prepare a TensorFlow or PyTorch model for hyper-parameter tuning, launch an experiment, and interpret the results. The book then dives into optimization tuners and the search algorithms they are based on: evolutionary search, annealing search, and Bayesian optimization. Neural Architecture Search (NAS) is covered next, and you will learn how to develop deep learning models from scratch. Both multi-trial and one-shot approaches to automatic neural network design are presented. The book teaches you how to construct a search space and launch an architecture search using state-of-the-art exploration strategies: Efficient Neural Architecture Search (ENAS) and Differentiable Architecture Search (DARTS). You will learn how to automate the construction of a neural network architecture for a particular problem and dataset. The book also covers the model compression and feature engineering methods that are essential in automated deep learning, as well as performance techniques for building large-scale distributed training platforms using NNI.

After reading this book, you will know how to use the full toolkit of automated deep learning methods. The techniques and practical examples presented in this book will allow you to bring your neural network routines to a higher level.
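The hyper-parameter tuning workflow described above treats training as a black-box function to be maximized over a search space of trials. A minimal sketch of that idea in plain Python (a toy random-search tuner; all names and the toy objective are illustrative, not NNI's actual API, though the search-space dictionaries follow the spirit of NNI's `{"_type": ..., "_value": ...}` JSON format):

```python
import random

# Hypothetical stand-in for the black-box function a tuner maximizes:
# it receives one hyper-parameter trial and returns a score.
def train_and_evaluate(params):
    # Toy objective that peaks at lr == 0.01 and batch_size == 32.
    lr_score = 1.0 - abs(params["lr"] - 0.01) * 10
    bs_score = 1.0 - abs(params["batch_size"] - 32) / 64
    return lr_score + bs_score

# Search space in the spirit of NNI's JSON format; names are illustrative.
search_space = {
    "lr": {"_type": "choice", "_value": [0.1, 0.01, 0.001]},
    "batch_size": {"_type": "choice", "_value": [16, 32, 64]},
}

def sample(space, rng):
    """Draw one trial: pick a value for every hyper-parameter."""
    return {name: rng.choice(spec["_value"]) for name, spec in space.items()}

def random_search(space, objective, n_trials=20, seed=0):
    """Run n_trials independent trials and keep the best one."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = sample(space, rng)
        score = objective(params)  # one "trial"
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search(search_space, train_and_evaluate)
```

NNI's tuners (evolutionary, annealing, Bayesian) replace the random sampling here with smarter exploration strategies, while the trial/report loop stays conceptually the same.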

What You Will Learn
  • Know the basic concepts of optimization tuners, search space, and trials
  • Apply different hyper-parameter optimization algorithms to develop effective neural networks
  • Construct new deep learning models from scratch
  • Execute the automated Neural Architecture Search to create state-of-the-art deep learning models
  • Compress the model to eliminate unnecessary deep learning layers
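The compression bullet above refers to model pruning, covered in Chapter 6. A minimal sketch of the underlying idea, magnitude-based (L1-norm) weight pruning, in plain Python (independent of NNI's actual pruner API; the function name is illustrative):

```python
def l1_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights.

    weights:  flat list of floats
    sparsity: fraction in [0, 1] of weights to remove
    Returns (pruned_weights, mask) where mask[i] == 0 marks a pruned weight.
    """
    n_prune = int(len(weights) * sparsity)
    # Indices of the n_prune smallest |w| values.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = set(order[:n_prune])
    mask = [0 if i in pruned else 1 for i in range(len(weights))]
    return [w * m for w, m in zip(weights, mask)], mask

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
pruned_w, mask = l1_prune(w, sparsity=0.5)
# mask -> [1, 0, 1, 0, 1, 0]: the three smallest-magnitude weights are zeroed
```

NNI's pruners apply this kind of criterion layer by layer via a configuration list, producing masks that can then be used to speed up the compressed model.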

Who This Book Is For 

Intermediate to advanced data scientists and machine learning engineers involved in deep learning and practical neural network development

Price: 334.03 lei

Old price: 417.53 lei (-20%) · New

Express points: 501

Estimated price in other currencies:
€63.93 · $66.49 · £53.58

In stock

Economy delivery: 20 February – 6 March

Order line: 021 569.72.76

Specifications

ISBN-13: 9781484281482
ISBN-10: 1484281489
Pages: 384
Illustrations: XVII, 384 p. 159 illus., 128 illus. in color.
Dimensions: 178 x 254 x 26 mm
Weight: 0.7 kg
Edition: 1st ed.
Publisher: Apress
Series: Apress
Place of publication: Berkeley, CA, United States

Contents

Chapter 1: Introduction to Neural Network Intelligence
Chapter 2: Hyperparameter Optimization
Chapter 3: Hyperparameter Optimization Under Shell
Chapter 4: Multi-Trial Neural Architecture Search
Chapter 5: One-Shot Neural Architecture Search
Chapter 6: Model Pruning
Chapter 7: NNI Recipes


About the Author

Ivan Gridin is a machine learning expert from Moscow who has worked on distributed high-load systems and implemented a range of machine learning approaches in practice. One of his primary research areas is the design and analysis of predictive time series models. He has a strong mathematical background in probability theory, random processes, time series analysis, machine learning, deep learning, and optimization, and has published books on genetic algorithms and time series analysis.


Features

  • Covers the application of the latest scientific advances in neural network design
  • Presents a clear and visual representation of neural architecture search concepts
  • Shows how to take PyTorch and TensorFlow models to an advanced level