Bandit Algorithms
Authors: Tor Lattimore, Csaba Szepesvári · Language: English · Hardback – 15 July 2020
Price: 284.89 lei
Old price: 356.11 lei
-20% · New
Express points: 427
Estimated price in other currencies:
54.52€ • 56.57$ • 45.57£
Printed on demand
Economy delivery: 15–29 March
Order line: 021 569.72.76
Specifications
ISBN-13: 9781108486828
ISBN-10: 1108486827
Pages: 536
Dimensions: 182 x 252 x 32 mm
Weight: 1.04 kg
Publisher: Cambridge University Press
Series: Cambridge University Press
Place of publication: New York, United States
Contents
1. Introduction; 2. Foundations of probability; 3. Stochastic processes and Markov chains; 4. Finite-armed stochastic bandits; 5. Concentration of measure; 6. The explore-then-commit algorithm; 7. The upper confidence bound algorithm; 8. The upper confidence bound algorithm: asymptotic optimality; 9. The upper confidence bound algorithm: minimax optimality; 10. The upper confidence bound algorithm: Bernoulli noise; 11. The Exp3 algorithm; 12. The Exp3-IX algorithm; 13. Lower bounds: basic ideas; 14. Foundations of information theory; 15. Minimax lower bounds; 16. Asymptotic and instance dependent lower bounds; 17. High probability lower bounds; 18. Contextual bandits; 19. Stochastic linear bandits; 20. Confidence bounds for least squares estimators; 21. Optimal design for least squares estimators; 22. Stochastic linear bandits with finitely many arms; 23. Stochastic linear bandits with sparsity; 24. Minimax lower bounds for stochastic linear bandits; 25. Asymptotic lower bounds for stochastic linear bandits; 26. Foundations of convex analysis; 27. Exp3 for adversarial linear bandits; 28. Follow the regularized leader and mirror descent; 29. The relation between adversarial and stochastic linear bandits; 30. Combinatorial bandits; 31. Non-stationary bandits; 32. Ranking; 33. Pure exploration; 34. Foundations of Bayesian learning; 35. Bayesian bandits; 36. Thompson sampling; 37. Partial monitoring; 38. Markov decision processes.
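To give a flavour of the algorithms the contents cover, here is a minimal sketch of the UCB1-style index rule studied in chapter 7, run on a Bernoulli bandit. This is an illustration written for this listing, not code from the book; the function name `ucb1` and its parameters (`means`, `horizon`, `seed`) are hypothetical choices for the example.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Run a UCB1-style rule on a Bernoulli bandit with the given arm means.

    Illustrative sketch only (not the book's code). Returns the number of
    pulls each arm received over `horizon` rounds.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k    # pulls per arm
    totals = [0.0] * k  # summed rewards per arm
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # pull each arm once to initialise the estimates
        else:
            # index = empirical mean + exploration bonus sqrt(2 log t / n_i)
            arm = max(
                range(k),
                key=lambda i: totals[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]),
            )
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts

pulls = ucb1([0.3, 0.7], horizon=2000)
# the higher-mean arm accumulates the bulk of the pulls
```

The exploration bonus shrinks as an arm is pulled more often, so the rule concentrates play on the empirically best arm while still occasionally revisiting the others, which is the regret trade-off the book's chapters 7–10 analyse.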
Reviews
'This year marks the 68th anniversary of 'multi-armed bandits' introduced by Herbert Robbins in 1952, and the 35th anniversary of his 1985 paper with me that advanced multi-armed bandit theory in new directions via the concept of 'regret' and a sharp asymptotic lower bound for the regret. This vibrant subject has attracted important multidisciplinary developments and applications. Bandit Algorithms gives it a comprehensive and up-to-date treatment, and meets the need for such books in instruction and research in the subject, as in a new course on contextual bandits and recommendation technology that I am developing at Stanford.' Tze L. Lai, Stanford University
'This is a timely book on the theory of multi-armed bandits, covering a very broad range of basic and advanced topics. The rigorous treatment combined with intuition makes it an ideal resource for anyone interested in the mathematical and algorithmic foundations of a fascinating and rapidly growing field of research.' Nicolò Cesa-Bianchi, University of Milan
'The field of bandit algorithms, in its modern form, and driven by prominent new applications, has been taking off in multiple directions. The book by Lattimore and Szepesvári is a timely contribution that will become a standard reference on the subject. The book offers a thorough exposition of an enormous amount of material, neatly organized in digestible pieces. It is mathematically rigorous, but also pleasant to read, rich in intuition and historical notes, and without superfluous details. Highly recommended.' John Tsitsiklis, Massachusetts Institute of Technology
Biographical note
Description
A comprehensive and rigorous introduction for graduate students and researchers, with applications in sequential decision-making problems.