
Foundations of Deep Reinforcement Learning: Addison-Wesley Data & Analytics Series

Author: Laura Graesser, Keng Wah Loon
Language: English | Paperback – 31 Dec 2019

From the Addison-Wesley Data & Analytics Series

Price: 255.84 lei

Old price: 319.81 lei
-20% New

Express Points: 384

Estimated price in foreign currency:
€48.96 | $51.70 | £40.74

Book available

Economy delivery: 21 December 2024 – 04 January 2025
Express delivery: 07–13 December for 38.10 lei


Specifications

ISBN-13: 9780135172384
ISBN-10: 0135172381
Pages: 360
Dimensions: 231 x 176 x 17 mm
Weight: 0.64 kg
Publisher: ADDISON-WESLEY
Collection: Pearson Professional
Series: Addison-Wesley Data & Analytics Series


Biographical Note

Laura Graesser is a research software engineer working in robotics at Google. She holds a master's degree in computer science from New York University, where she specialized in machine learning.
Wah Loon Keng is an AI engineer at Machine Zone, where he applies deep reinforcement learning to industrial problems. He has a background in both theoretical physics and computer science.

Table of Contents

Foreword xix
Preface xxi
Acknowledgments xxv
About the Authors xxvii

Chapter 1: Introduction to Reinforcement Learning 1
  1.1 Reinforcement Learning 1
  1.2 Reinforcement Learning as MDP 6
  1.3 Learnable Functions in Reinforcement Learning 9
  1.4 Deep Reinforcement Learning Algorithms 11
  1.5 Deep Learning for Reinforcement Learning 17
  1.6 Reinforcement Learning and Supervised Learning 19
  1.7 Summary 21

Part I: Policy-Based and Value-Based Algorithms 23

Chapter 2: REINFORCE 25
  2.1 Policy 26
  2.2 The Objective Function 26
  2.3 The Policy Gradient 27
  2.4 Monte Carlo Sampling 30
  2.5 REINFORCE Algorithm 31
  2.6 Implementing REINFORCE 33
  2.7 Training a REINFORCE Agent 44
  2.8 Experimental Results 47
  2.9 Summary 51
  2.10 Further Reading 51
  2.11 History 51
Chapter 3: SARSA 53
  3.1 The Q- and V-Functions 54
  3.2 Temporal Difference Learning 56
  3.3 Action Selection in SARSA 65
  3.4 SARSA Algorithm 67
  3.5 Implementing SARSA 69
  3.6 Training a SARSA Agent 74
  3.7 Experimental Results 76
  3.8 Summary 78
  3.9 Further Reading 79
  3.10 History 79
Chapter 4: Deep Q-Networks (DQN) 81
  4.1 Learning the Q-Function in DQN 82
  4.2 Action Selection in DQN 83
  4.3 Experience Replay 88
  4.4 DQN Algorithm 89
  4.5 Implementing DQN 91
  4.6 Training a DQN Agent 96
  4.7 Experimental Results 99
  4.8 Summary 101
  4.9 Further Reading 102
  4.10 History 102
Chapter 5: Improving DQN 103
  5.1 Target Networks 104
  5.2 Double DQN 106
  5.3 Prioritized Experience Replay (PER) 109
  5.4 Modified DQN Implementation 112
  5.5 Training a DQN Agent to Play Atari Games 123
  5.6 Experimental Results 128
  5.7 Summary 132
  5.8 Further Reading 132

Part II: Combined Methods 133

Chapter 6: Advantage Actor-Critic (A2C) 135
  6.1 The Actor 136
  6.2 The Critic 136
  6.3 A2C Algorithm 141
  6.4 Implementing A2C 143
  6.5 Network Architecture 148
  6.6 Training an A2C Agent 150
  6.7 Experimental Results 157
  6.8 Summary 161
  6.9 Further Reading 162
  6.10 History 162
Chapter 7: Proximal Policy Optimization (PPO) 165
  7.1 Surrogate Objective 165
  7.2 Proximal Policy Optimization (PPO) 174
  7.3 PPO Algorithm 177
  7.4 Implementing PPO 179
  7.5 Training a PPO Agent 182
  7.6 Experimental Results 188
  7.7 Summary 192
  7.8 Further Reading 192
Chapter 8: Parallelization Methods 195
  8.1 Synchronous Parallelization 196
  8.2 Asynchronous Parallelization 197
  8.3 Training an A3C Agent 200
  8.4 Summary 203
  8.5 Further Reading 204
Chapter 9: Algorithm Summary 205

Part III: Practical Details 207

Chapter 10: Getting Deep RL to Work 209
  10.1 Software Engineering Practices 209
  10.2 Debugging Tips 218
  10.3 Atari Tricks 228
  10.4 Deep RL Almanac 231
  10.5 Summary 238
Chapter 11: SLM Lab 239
  11.1 Algorithms Implemented in SLM Lab 239
  11.2 Spec File 241
  11.3 Running SLM Lab 246
  11.4 Analyzing Experiment Results 247
  11.5 Summary 249
Chapter 12: Network Architectures 251
  12.1 Types of Neural Networks 251
  12.2 Guidelines for Choosing a Network Family 256
  12.3 The Net API 262
  12.4 Summary 271
  12.5 Further Reading 271
Chapter 13: Hardware 273
  13.1 Computer 273
  13.2 Data Types 278
  13.3 Optimizing Data Types in RL 280
  13.4 Choosing Hardware 285
  13.5 Summary 285

Part IV: Environment Design 287

Chapter 14: States 289
  14.1 Examples of States 289
  14.2 State Completeness 296
  14.3 State Complexity 297
  14.4 State Information Loss 301
  14.5 Preprocessing 306
  14.6 Summary 313
Chapter 15: Actions 315
  15.1 Examples of Actions 315
  15.2 Action Completeness 318
  15.3 Action Complexity 319
  15.4 Summary 323
  15.5 Further Reading: Action Design in Everyday Things 324
Chapter 16: Rewards 327
  16.1 The Role of Rewards 327
  16.2 Reward Design Guidelines 328
  16.3 Summary 332
Chapter 17: Transition Function 333
  17.1 Feasibility Checks 333
  17.2 Reality Check 335
  17.3 Summary 337

Epilogue 338
Appendix A: Deep Reinforcement Learning Timeline 343
Appendix B: Example Environments 345
  B.1 Discrete Environments 346
  B.2 Continuous Environments 350
References 353
Index 363