
An Information-Theoretic Approach to Neural Computing: Perspectives in Neural Computing

Authors: Gustavo Deco, Dragan Obradovic
Language: English | Hardback – 8 Feb 1996
Neural networks provide a powerful new technology to model and control nonlinear and complex systems. In this book, the authors present a detailed formulation of neural networks from the information-theoretic viewpoint. They show how this perspective provides new insights into the design theory of neural networks. In particular, they show how these methods may be applied to the topics of supervised and unsupervised learning, including feature extraction, linear and nonlinear independent component analysis, and Boltzmann machines. Readers are assumed to have a basic understanding of neural networks, but all the relevant concepts from information theory are carefully introduced and explained. Consequently, readers from several different scientific disciplines, notably cognitive scientists, engineers, physicists, statisticians, and computer scientists, will find this a valuable introduction to the topic.
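The Infomax principle mentioned in this description (and in Chapter 3 of the contents below) can be illustrated with a minimal sketch. This is not code from the book; the single linear unit, Gaussian data, and variable names are illustrative assumptions. In this setting, maximising the mutual information between input and output for fixed output noise reduces to extracting the direction of largest variance:

# Illustrative sketch (not from the book): for a single linear unit
# y = w^T x with Gaussian input and fixed-variance output noise, the
# Infomax principle (maximise I(x; y)) under ||w|| = 1 reduces to
# maximising the output variance w^T C w, i.e. the first principal component.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated 2-D Gaussian data (assumed example data).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
x = rng.standard_normal((5000, 2)) @ A.T

C = np.cov(x, rowvar=False)                 # input covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)        # eigendecomposition of C
w_infomax = eigvecs[:, np.argmax(eigvals)]  # direction of maximal variance

# Differential entropy of the Gaussian output y = w^T x:
# h(y) = 0.5 * log(2 * pi * e * Var[y]); for fixed noise, larger output
# variance means larger entropy and hence larger mutual information.
var_y = w_infomax @ C @ w_infomax
h_y = 0.5 * np.log(2 * np.pi * np.e * var_y)

print("Infomax weight vector:", w_infomax)
print("Output variance:", var_y, "output entropy:", h_y)

The nonlinear and non-Gaussian cases treated in the book require different estimators of entropy and mutual information; the sketch above only covers the simplest linear-Gaussian situation.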

All formats and editions (price / express delivery):

Paperback (1): Springer – 17 Sep 2011, 62633 lei, 6-8 weeks
Hardback (1): Springer – 8 Feb 1996, 63258 lei, 6-8 weeks

From the series Perspectives in Neural Computing

Price: 63258 lei

Old price: 79073 lei
-20% New

Express points: 949

Estimated price in other currencies:
12107€ 12772$ 10089£

Print-on-demand title

Economy delivery: 03-17 January 25


Specifications

ISBN-13: 9780387946665
ISBN-10: 0387946667
Pages: 262
Illustrations: XIV, 262 p.
Dimensions: 155 x 235 x 21 mm
Weight: 0.58 kg
Edition: 1996
Publisher: Springer
Collection: Springer
Series: Perspectives in Neural Computing

Place of publication: New York, NY, United States

Target audience

Research

Contents

1 Introduction
2 Preliminaries of Information Theory and Neural Networks
  2.1 Elements of Information Theory
  2.2 Elements of the Theory of Neural Networks
Part I: Unsupervised Learning
3 Linear Feature Extraction: Infomax Principle
4 Independent Component Analysis: General Formulation and Linear Case
5 Nonlinear Feature Extraction: Boolean Stochastic Networks
6 Nonlinear Feature Extraction: Deterministic Neural Networks
Part II: Supervised Learning
7 Supervised Learning and Statistical Estimation
8 Statistical Physics Theory of Supervised Learning and Generalization
9 Composite Networks
10 Information Theory Based Regularizing Methods
References