
General-Purpose Optimization Through Information Maximization: Natural Computing Series

Author: Alan J. Lockett
Language: English | Hardback – 17 Aug 2020
This book examines the mismatch between discrete programs, which lie at the center of modern applied mathematics, and the continuous space phenomena they simulate. The author considers whether we can imagine continuous spaces of programs, and asks what the structure of such spaces would be and how they would be constituted. He proposes a functional analysis of program spaces focused through the lens of iterative optimization.
The author begins with the observation that optimization methods such as Genetic Algorithms, Evolution Strategies, and Particle Swarm Optimization can be analyzed as Estimation of Distribution Algorithms (EDAs) in that they can be formulated as conditional probability distributions. The probabilities themselves are mathematical objects that can be compared and operated on, and thus many methods in Evolutionary Computation can be placed in a shared vector space and analyzed using techniques of functional analysis. The core ideas of this book expand from that concept, eventually incorporating all iterative stochastic search methods, including gradient-based methods. Inspired by work on Randomized Search Heuristics, the author covers all iterative optimization methods and not just evolutionary methods. The No Free Lunch Theorem is viewed as a useful introduction to the broader field of analysis that comes from developing a shared mathematical space for optimization algorithms. The author brings in intuitions from several branches of mathematics such as topology, probability theory, and stochastic processes and provides substantial background material to make the work as self-contained as possible.
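To make the framing of optimizers as conditional probability distributions concrete, the sketch below implements a plain Gaussian estimation-of-distribution algorithm: each generation is sampled from a distribution whose parameters are re-estimated from the best points found so far, so the optimizer maps the search history to a distribution over the next population. This is a minimal illustration of that reading, not the author's formalism; the function name gaussian_eda and all parameter choices are assumptions made for the example.

```python
import numpy as np

def gaussian_eda(f, dim=5, pop_size=50, elite_frac=0.2, generations=100, seed=0):
    """Minimal Gaussian EDA sketch: the optimizer as a conditional distribution.

    Every generation samples the next population from a Gaussian whose mean and
    covariance are estimated from the elite of the previous population, i.e. the
    next population is drawn conditionally on the search history so far.
    """
    rng = np.random.default_rng(seed)
    mean, cov = np.zeros(dim), np.eye(dim)                        # initial search distribution
    n_elite = max(2, int(elite_frac * pop_size))
    for _ in range(generations):
        pop = rng.multivariate_normal(mean, cov, size=pop_size)   # sample population given history
        fitness = np.apply_along_axis(f, 1, pop)                  # evaluate the objective
        elite = pop[np.argsort(fitness)[:n_elite]]                # keep the best points (minimization)
        mean = elite.mean(axis=0)                                 # re-estimate the distribution...
        cov = np.cov(elite, rowvar=False) + 1e-6 * np.eye(dim)    # ...and keep it positive definite
    return mean

# Usage: minimize the sphere function f(x) = sum(x_i ** 2); the returned mean
# should end up near the origin.
best = gaussian_eda(lambda x: float(np.sum(x ** 2)))
```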

The book will be valuable for researchers in the areas of global optimization, machine learning, evolutionary theory, and control theory.


All formats and editions

Format | Price | Delivery
Paperback (1) | 1259.54 lei | 6-8 weeks
  Springer Berlin, Heidelberg – 18 Aug 2021 | 1259.54 lei | 6-8 weeks
Hardback (1) | 1265.99 lei | 6-8 weeks
  Springer Berlin, Heidelberg – 17 Aug 2020 | 1265.99 lei | 6-8 weeks

Part of the Natural Computing Series

Price: 1265.99 lei

Old price: 1582.49 lei
Discount: -20%

Express points: 1899

Estimated price in foreign currency:
€242.33 | $254.38 | £200.18

Print-on-demand title

Economy delivery: 30 January – 13 February 25


Specifications

ISBN-13: 9783662620069
ISBN-10: 3662620065
Illustrations: XVIII, 561 p.
Dimensions: 155 x 235 mm
Weight: 0.98 kg
Edition: 1st ed. 2020
Publisher: Springer Berlin, Heidelberg
Collection: Springer
Series: Natural Computing Series

Place of publication: Berlin, Heidelberg, Germany

Contents

Introduction
Review of Optimization Methods
Functional Analysis of Optimization
A Unified View of Population-Based Optimizers
Continuity of Optimizers
The Optimization Process
Performance Analysis
Performance Experiments
No Free Lunch Does Not Prevent General Optimization
The Geometry of Optimization and the Optimization Game
The Evolutionary Annealing Method
Evolutionary Annealing in Euclidean Space
Neuroannealing
Discussion and Future Work
Conclusion
Appendix A: Performance Experiment Results
Appendix B: Automated Currency Exchange Trading

Biographical note

Alan J. Lockett received his PhD in 2012 from the University of Texas at Austin under the supervision of Risto Miikkulainen, where his research topics included estimation of temporal probabilistic models, evolutionary computation theory, and learning neural network controllers for robotics. After a postdoc at IDSIA (Lugano) with Jürgen Schmidhuber, he now works for CS Disco in Houston.

Features

- The book will be valuable for researchers in the areas of global optimization, machine learning, evolutionary theory, and control theory
- Optimization is a fundamental problem that recurs across scientific disciplines and is pervasive in informatics research, from statistical machine learning to probabilistic models to reinforcement learning
- In the final main chapter the author observes that the basic mathematical objects developed to account for stochastic optimization have applications far beyond optimization; he treats them as stimulus-response systems, the key intuition coming from the Optimization Game