Model-Based Reinforcement Learning: From Data to Continuous Actions with a Python-based Toolbox (IEEE Press Series on Control Systems Theory and Applications)
Author: M. Farsi | Language: English | Hardback, 8 Dec 2022
Price: 734.95 lei
Old price: 807.64 lei
-9% New
Express Points: 1102
Estimated price in other currencies:
140.64€ • 148.30$ • 117.08£
Printed on demand
Economy delivery: 11-25 January 25
Specifications
ISBN-13: 9781119808572
ISBN-10: 111980857X
Pages: 272
Dimensions: 152 x 229 x 16 mm
Weight: 0.52 kg
Publisher: Wiley
Series: IEEE Press Series on Control Systems Theory and Applications
Place of publication: Hoboken, United States
Contents
About the Authors xi
Preface xiii
Acronyms xv
Introduction xvii
1 Nonlinear Systems Analysis 1
1.1 Notation 1
1.2 Nonlinear Dynamical Systems 2
1.2.1 Remarks on Existence, Uniqueness, and Continuation of Solutions 2
1.3 Lyapunov Analysis of Stability 3
1.4 Stability Analysis of Discrete Time Dynamical Systems 7
1.5 Summary 10
Bibliography 10
2 Optimal Control 11
2.1 Problem Formulation 11
2.2 Dynamic Programming 12
2.2.1 Principle of Optimality 12
2.2.2 Hamilton-Jacobi-Bellman Equation 14
2.2.3 A Sufficient Condition for Optimality 15
2.2.4 Infinite-Horizon Problems 16
2.3 Linear Quadratic Regulator 18
2.3.1 Differential Riccati Equation 18
2.3.2 Algebraic Riccati Equation 23
2.3.3 Convergence of Solutions to the Differential Riccati Equation 26
2.3.4 Forward Propagation of the Differential Riccati Equation for Linear Quadratic Regulator 28
2.4 Summary 30
Bibliography 30
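Chapter 2 develops the linear quadratic regulator (LQR) through the differential and algebraic Riccati equations (Sections 2.3.1-2.3.3). As a rough illustration of the kind of computation involved, the sketch below solves a continuous-time LQR problem with SciPy; the system matrices are hypothetical and the code is not taken from the book or its toolbox.

```python
# Illustrative sketch only: continuous-time LQR via the algebraic Riccati
# equation (cf. Sections 2.3.2-2.3.3). Example matrices are hypothetical.
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator dynamics x_dot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state cost
R = np.array([[1.0]])  # input cost

# Solve A'P + PA - P B R^{-1} B' P + Q = 0 for the stabilizing P
P = solve_continuous_are(A, B, Q, R)

# Optimal state feedback u = -K x, with K = R^{-1} B' P
K = np.linalg.solve(R, B.T @ P)
print("P =\n", P)
print("K =", K)
```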
3 Reinforcement Learning 33
3.1 Control-Affine Systems with Quadratic Costs 33
3.2 Exact Policy Iteration 35
3.2.1 Linear Quadratic Regulator 39
3.3 Policy Iteration with Unknown Dynamics and Function Approximations 41
3.3.1 Linear Quadratic Regulator with Unknown Dynamics 46
3.4 Summary 47
Bibliography 48
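Chapter 3 covers exact policy iteration and its LQR special case (Section 3.2.1). The sketch below shows a generic Kleinman-style policy iteration for continuous-time LQR, alternating policy evaluation (a Lyapunov equation) with policy improvement; it is an assumed illustration of the general technique, not the book's algorithm or code.

```python
# Illustrative sketch only: policy iteration for continuous-time LQR
# (Kleinman-style iteration, in the spirit of Section 3.2.1).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-1.0, -0.5]])  # hypothetical stable example
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))  # initial stabilizing policy u = -K x
for _ in range(20):
    Acl = A - B @ K
    # Policy evaluation: solve Acl' P + P Acl + Q + K' R K = 0
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B' P
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-9:
        break
    K = K_new
print("Converged gain K =", K)
```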
4 Learning of Dynamic Models 51
4.1 Introduction 51
4.1.1 Autonomous Systems 51
4.1.2 Control Systems 51
4.2 Model Selection 52
4.2.1 Gray-Box vs. Black-Box 52
4.2.2 Parametric vs. Nonparametric 52
4.3 Parametric Model 54
4.3.1 Model in Terms of Bases 54
4.3.2 Data Collection 55
4.3.3 Learning of Control Systems 55
4.4 Parametric Learning Algorithms 56
4.4.1 Least Squares 56
4.4.2 Recursive Least Squares 57
4.4.3 Gradient Descent 59
4.4.4 Sparse Regression 60
4.5 Persistence of Excitation 60
4.6 Python Toolbox 61
4.6.1 Configurations 62
4.6.2 Model Update 62
4.6.3 Model Validation 63
4.7 Comparison Results 64
4.7.1 Convergence of Parameters 65
4.7.2 Error Analysis 67
4.7.3 Runtime Results 69
4.8 Summary 73
Bibliography 75
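Chapter 4 surveys parametric model-learning algorithms, including least squares and recursive least squares (Sections 4.4.1-4.4.2). The following is a minimal, self-contained sketch of a standard recursive least squares update for a model written in terms of basis functions; the function name, regressor, and synthetic data are assumptions for illustration and do not reflect the book's toolbox API.

```python
# Illustrative sketch only: recursive least squares (RLS) for fitting theta
# in y ~ phi(x)' theta (cf. Section 4.4.2). Not the book's toolbox API.
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One RLS step with forgetting factor lam.

    theta : (n,)   current parameter estimate
    P     : (n, n) current covariance matrix
    phi   : (n,)   regressor (basis functions at the current sample)
    y     : float  measured output
    """
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)                      # gain vector
    theta = theta + (K * (y - phi.T @ theta.reshape(-1, 1))).ravel()
    P = (P - K @ phi.T @ P) / lam                              # covariance update
    return theta, P

# Usage on a synthetic linear model y = [1.0, -2.0] . phi + noise
rng = np.random.default_rng(0)
theta, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(200):
    phi = rng.normal(size=2)
    y = phi @ np.array([1.0, -2.0]) + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
print("estimated theta =", theta)  # approaches [1.0, -2.0]
```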
5 Structured Online Learning-Based Control of Continuous-Time Nonlinear Systems 77
5.1 Introduction 77
5.2 A Structured Approximate Optimal Control Framework 77
5.3 Local Stability and Optimality Analysis 81
5.3.1 Linear Quadratic Regulator 81
5.3.2 SOL Control 82
5.4 SOL Algorithm 83
5.4.1 ODE Solver and Control Update 84
5.4.2 Identified Model Update 85
5.4.3 Database Update 85
5.4.4 Limitations and Implementation Considerations 86
5.4.5 Asymptotic Convergence with Approximate Dynamics 87
5.5 Simulation Results 87
5.5.1 Systems Identifiable in Terms of a Given Set of Bases 88
5.5.2 Systems to Be Approximated by a Given Set of Bases 91
5.5.3 Comparison Results 98
5.6 Summary 99
Bibliography 99
6 A Structured Online Learning Approach to Nonlinear Tracking with Unknown Dynamics 103
6.1 Introduction 103
6.2 A Structured Online Learning for Tracking Control 104
6.2.1 Stability and Optimality in the Linear Case 108
6.3 Learning-based Tracking Control Using SOL 111
6.4 Simulation Results 112
6.4.1 Tracking Control of the Pendulum 113
6.4.2 Synchronization of Chaotic Lorenz System 114
6.5 Summary 115
Bibliography 118
7 Piecewise Learning and Control with Stability Guarantees 121
7.1 Introduction 121
7.2 Problem Formulation 122
7.3 The Piecewise Learning and Control Framework 122
7.3.1 System Identification 123
7.3.2 Database 124
7.3.3 Feedback Control 125
7.4 Analysis of Uncertainty Bounds 125
7.4.1 Quadratic Programs for Bounding Errors 126
7.5 Stability Verification for Piecewise-Affine Learning and Control 129
7.5.1 Piecewise Affine Models 129
7.5.2 MIQP-based Stability Verification of PWA Systems 130
7.5.3 Convergence of ACCPM 133
7.6 Numerical Results 134
7.6.1 Pendulum System 134
7.6.2 Dynamic Vehicle System with Skidding 138
7.6.3 Comparison of Runtime Results 140
7.7 Summary 142
Bibliography 143
8 An Application to Solar Photovoltaic Systems 147
8.1 Introduction 147
8.2 Problem Statement 150
8.2.1 PV Array Model 151
8.2.2 DC-DC Boost Converter 152
8.3 Optimal Control of PV Array 154
8.3.1 Maximum Power Point Tracking Control 156
8.3.2 Reference Voltage Tracking Control 162
8.3.3 Piecewise Learning Control 164
8.4 Application Considerations 165
8.4.1 Partial Derivative Approximation Procedure 165
8.4.2 Partial Shading Effect 167
8.5 Simulation Results 170
8.5.1 Model and Control Verification 173
8.5.2 Comparative Results 174
8.5.3 Model-Free Approach Results 176
8.5.4 Piecewise Learning Results 178
8.5.5 Partial Shading Results 179
8.6 Summary 182
Bibliography 182
9 An Application to Low-level Control of Quadrotors 187
9.1 Introduction 187
9.2 Quadrotor Model 189
9.3 Structured Online Learning with RLS Identifier on Quadrotor 190
9.3.1 Learning Procedure 191
9.3.2 Asymptotic Convergence with Uncertain Dynamics 195
9.3.3 Computational Properties 195
9.4 Numerical Results 197
9.5 Summary 201
Bibliography 201
10 Python Toolbox 205
10.1 Overview 205
10.2 User Inputs 205
10.2.1 Process 206
10.2.2 Objective 207
10.3 SOL 207
10.3.1 Model Update 208
10.3.2 Database 208
10.3.3 Library 210
10.3.4 Control 210
10.4 Display and Outputs 211
10.4.1 Graphs and Printouts 213
10.4.2 3D Simulation 213
10.5 Summary 214
Bibliography 214
A Appendix 215
A.1 Supplementary Analysis of Remark 5.4 215
A.2 Supplementary Analysis of Remark 5.5 222
Index 223