
Reinforcement Learning and Dynamic Programming Using Function Approximators: Automation and Control Engineering

Authors: Lucian Busoniu, Robert Babuska, Bart De Schutter, Damien Ernst
Language: English | Hardback – 29 Apr 2010
From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamic systems, its practical value was limited by algorithms that lacked the capacity to scale up to realistic problems.
However, in recent years, dramatic developments in Reinforcement Learning (RL), the model-free counterpart of DP, have changed our understanding of what is possible. These developments led to the creation of reliable methods that can be applied even when a mathematical model of the system is unavailable, allowing researchers to solve challenging control problems in engineering, as well as in a variety of other disciplines, including economics, medicine, and artificial intelligence.
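To make the model-free idea concrete, here is a minimal tabular Q-learning sketch (illustrative only, not code from the book): the agent learns from sampled transitions (s, a, r, s') alone, with no transition model. The Gym-style `env` interface, the state/action counts, and the constants are all assumptions for the example.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning sketch; assumes a Gym-style env whose
    reset() returns a state index and step(a) returns (s', r, done, info)."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done, _ = env.step(a)
            # model-free update: only the sampled transition (s, a, r, s') is used
            target = r + gamma * np.max(Q[s_next]) * (not done)
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```

No transition probabilities appear anywhere in the update, which is exactly why such methods apply when a mathematical model of the system is unavailable.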
Reinforcement Learning and Dynamic Programming Using Function Approximators provides a comprehensive and unparalleled exploration of the field of RL and DP. With a focus on continuous-variable problems, this seminal text details essential developments that have substantially altered the field over the past decade. In its pages, pioneering experts provide a concise introduction to classical RL and DP, followed by an extensive presentation of the state-of-the-art and novel methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, they elaborate on their work with illustrative examples and insightful comparisons. Three individual chapters are dedicated to representative algorithms from each of the major classes of techniques: value iteration, policy iteration, and policy search. The features and performance of these algorithms are highlighted in extensive experimental studies on a range of control applications.
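For readers who want a concrete anchor for the first of these classes, here is a minimal value iteration sketch for a small finite MDP (a toy illustration, not code from the book; the transition array P, rewards R, and discount factor are invented numbers):

```python
import numpy as np

# Toy MDP: 3 states, 2 actions. P[s, a, s'] are assumed transition
# probabilities and R[s, a] assumed rewards -- illustrative only.
P = np.array([
    [[0.9, 0.1, 0.0], [0.2, 0.8, 0.0]],
    [[0.0, 0.9, 0.1], [0.0, 0.2, 0.8]],
    [[0.1, 0.0, 0.9], [0.8, 0.0, 0.2]],
])
R = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 0.5]])
gamma = 0.9  # discount factor

V = np.zeros(3)
for _ in range(1000):
    # Bellman optimality backup:
    # V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s,a,s') V(s') ]
    V_new = np.max(R + gamma * (P @ V), axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:  # stop when the backup converges
        break
    V = V_new

policy = np.argmax(R + gamma * (P @ V), axis=1)  # greedy policy from V
```

Approximate value iteration, as treated in the book, replaces the exact table V with a parametric approximator once the state space becomes large or continuous.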
The recent development of applications involving complex systems has led to a surge of interest in RL and DP methods and the subsequent need for a quality resource on the subject. For graduate students and others new to the field, this book offers a thorough introduction to both the basics and emerging methods. And for those researchers and practitioners working in the fields of optimal and adaptive control, machine learning, artificial intelligence, and operations research, this resource offers a combination of practical algorithms, theoretical analysis, and comprehensive examples that they will be able to adapt and apply to their own work.
Access the authors' website at www.dcsc.tudelft.nl/rlbook/ for additional material, including computer code used in the studies and information concerning new developments.

From the series Automation and Control Engineering

Price: 768.93 lei

Old price: 937.71 lei
-18% New

Express points: 1153

Estimated price in foreign currency:
147.16€  152.86$  122.24£

Printed to order

Economy delivery: 01-15 February 25


Specifications

ISBN-13: 9781439821084
ISBN-10: 1439821089
Pages: 280
Illustrations: 74 b/w images, 15 tables and 200+
Dimensions: 156 x 234 x 23 mm
Weight: 0.52 kg
Edition: New
Publisher: CRC Press
Collection: CRC Press
Series: Automation and Control Engineering


Target audience

Graduate students and researchers in control engineering, machine learning/artificial intelligence, and robotics.

Contents

1. Introduction. Dynamic programming and reinforcement learning. Focus of this book. Book outline.
2. Basics of dynamic programming and reinforcement learning. Introduction. Markov decision processes. Value iteration. Policy iteration. Direct policy search. Conclusions. Bibliographical notes.
3. Dynamic programming and reinforcement learning in large and continuous spaces. Introduction. The need for approximation in large and continuous spaces. Approximate value iteration. Approximate policy iteration. Finding value function approximators automatically. Approximate policy search. Comparison of approximate value iteration, policy iteration, and policy search. Conclusions. Bibliographical notes.
4. Q-value iteration with fuzzy approximation. Introduction. Fuzzy Q-iteration. Analysis of fuzzy Q-iteration. Optimizing the membership functions. Experimental studies. Conclusions. Bibliographical notes.
5. Online and continuous-action least-squares policy iteration (a generic sketch of the LSPI evaluation step follows this list). Introduction. Least-squares policy iteration. LSPI with continuous-action approximation. Online LSPI. Using prior knowledge in online LSPI. Experimental studies. Conclusions. Bibliographical notes.
6. Direct policy search with adaptive basis functions. Introduction. Policy search with adaptive basis functions. Experimental studies. Conclusions. Bibliographical notes.
References. Glossary.
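As a rough illustration of the machinery behind chapter 5, here is a generic LSTD-Q policy-evaluation step of the kind used inside standard LSPI (a sketch under stated assumptions, not the book's own implementation): with a linear parametrization of the Q-function, the weights solve a small linear system built from samples. The feature map `phi`, the `policy` callable, and the regularization constant are assumptions.

```python
import numpy as np

def lstdq(samples, phi, policy, n_feat, gamma=0.95, reg=1e-6):
    """One LSTD-Q evaluation step for a linearly parametrized Q-function.
    `samples` is an iterable of (s, a, r, s') tuples; `phi(s, a)` returns
    a length-n_feat feature vector; `policy(s)` gives the current action."""
    A = reg * np.eye(n_feat)   # small ridge term keeps A invertible
    b = np.zeros(n_feat)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, policy(s_next))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)  # weights of the linear Q-function
```

In full LSPI, this evaluation step alternates with greedy policy improvement until the weight vector stops changing.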

Biographical note

Robert Babuska, Lucian Busoniu, and Bart de Schutter are with the Delft University of Technology. Damien Ernst is with the University of Liege.

Description

While Dynamic Programming (DP) has helped solve control problems involving dynamic systems, its value was limited by algorithms that lacked practical scale-up capacity. In recent years, developments in Reinforcement Learning (RL), DP's model-free counterpart, have changed this. Focusing on continuous-variable problems, this unparalleled work provides an introduction to classical RL and DP, followed by a presentation of current methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, it offers illustrative examples that readers will be able to adapt to their own work.