Deterministic and Stochastic Optimal Control: Stochastic Modelling and Applied Probability, Book 1
Authors: Wendell H. Fleming, Raymond W. Rishel | Language: English | Paperback – 3 Feb 2012
| All formats and editions | Price | Delivery |
|---|---|---|
| Paperback (1): Springer – 3 Feb 2012 | 1105.06 lei | 6-8 weeks |
| Hardback (1): Springer – 17 Nov 1975 | 1110.72 lei | 6-8 weeks |
From the series Stochastic Modelling and Applied Probability
Price: 1105.06 lei
Old price: 1347.63 lei
-18% New
Express points: 1658
Estimated price in other currencies:
211.46€ • 221.23$ • 175.66£
Printed to order
Economy delivery 3-17 April
Specifications
ISBN-13: 9781461263821
ISBN-10: 1461263824
Pages: 236
Illustrations: XI, 222 p.
Dimensions: 155 x 235 x 12 mm
Weight: 0.34 kg
Edition: Softcover reprint of the original 1st ed. 1975
Publisher: Springer
Collection: Springer
Series: Stochastic Modelling and Applied Probability
Place of publication: New York, NY, United States
Target audience: Research

Contents
- I. The Simplest Problem in Calculus of Variations: 1. Introduction. 2. Minimum Problems on an Abstract Space—Elementary Theory. 3. The Euler Equation; Extremals. 4. Examples. 5. The Jacobi Necessary Condition. 6. The Simplest Problem in n Dimensions.
- II. The Optimal Control Problem: 1. Introduction. 2. Examples. 3. Statement of the Optimal Control Problem. 4. Equivalent Problems. 5. Statement of Pontryagin’s Principle. 6. Extremals for the Moon Landing Problem. 7. Extremals for the Linear Regulator Problem. 8. Extremals for the Simplest Problem in Calculus of Variations. 9. General Features of the Moon Landing Problem. 10. Summary of Preliminary Results. 11. The Free Terminal Point Problem. 12. Preliminary Discussion of the Proof of Pontryagin’s Principle. 13. A Multiplier Rule for an Abstract Nonlinear Programming Problem. 14. A Cone of Variations for the Problem of Optimal Control. 15. Verification of Pontryagin’s Principle.
- III. Existence and Continuity Properties of Optimal Controls: 1. The Existence Problem. 2. An Existence Theorem (Mayer Problem, U Compact). 3. Proof of Theorem 2.1. 4. More Existence Theorems. 5. Proof of Theorem 4.1. 6. Continuity Properties of Optimal Controls.
- IV. Dynamic Programming: 1. Introduction. 2. The Problem. 3. The Value Function. 4. The Partial Differential Equation of Dynamic Programming. 5. The Linear Regulator Problem. 6. Equations of Motion with Discontinuous Feedback Controls. 7. Sufficient Conditions for Optimality. 8. The Relationship between the Equation of Dynamic Programming and Pontryagin’s Principle.
- V. Stochastic Differential Equations and Markov Diffusion Processes: 1. Introduction. 2. Continuous Stochastic Processes; Brownian Motion Processes. 3. Ito’s Stochastic Integral. 4. Stochastic Differential Equations. 5. Markov Diffusion Processes. 6. Backward Equations. 7. Boundary Value Problems. 8. Forward Equations. 9. Linear System Equations; the Kalman-Bucy Filter. 10. Absolutely Continuous Substitution of Probability Measures. 11. An Extension of Theorems 5.1, 5.2.
- VI. Optimal Control of Markov Diffusion Processes: 1. Introduction. 2. The Dynamic Programming Equation for Controlled Markov Processes. 3. Controlled Diffusion Processes. 4. The Dynamic Programming Equation for Controlled Diffusions; a Verification Theorem. 5. The Linear Regulator Problem (Complete Observations of System States). 6. Existence Theorems. 7. Dependence of Optimal Performance on y and ?. 8. Generalized Solutions of the Dynamic Programming Equation. 9. Stochastic Approximation to the Deterministic Control Problem. 10. Problems with Partial Observations. 11. The Separation Principle.
- Appendices: A. Gronwall-Bellman Inequality. B. Selecting a Measurable Function. C. Convex Sets and Convex Functions. D. Review of Basic Probability. E. Results about Parabolic Equations. F. A General Position Lemma.