Reinforcement Learning with History Lists
Author: Stephan Timmer · Language: English · Paperback – 11 Aug 2015
A very general framework for modeling uncertainty in learning environments is given by Partially Observable Markov Decision Processes (POMDPs). In a POMDP setting, the learning agent infers a policy for acting optimally in all possible states of the environment while receiving only observations of these states. The basic idea for coping with partial observability is to include memory in the representation of the policy. Perfect memory is provided by the belief space, i.e., the space of probability distributions over environmental states. However, computing policies defined on the belief space requires a considerable amount of prior knowledge about the learning problem and is expensive in terms of computation time.

The author, Stephan Timmer, presents a reinforcement learning algorithm for solving POMDPs based on short-term memory. In contrast to belief states, short-term memory is not capable of representing optimal policies, but it is far more practical and requires no prior knowledge about the learning problem. It can be shown that the algorithm can also be used to solve large Markov Decision Processes (MDPs) with continuous, multi-dimensional state spaces.
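For intuition: the exact (but expensive) approach mentioned above would track a belief b over hidden states, updated after each action a and observation o as b'(s') ∝ O(o | s', a) Σ_s T(s' | s, a) b(s), which presupposes known transition and observation models T and O. The sketch below instead illustrates the short-term-memory idea: plain tabular Q-learning keyed on a fixed-length list of recent observations and actions. This is a generic illustration of the approach, not Timmer's specific algorithm; the Gymnasium-style env interface, the history length, and all hyperparameters are assumptions.

```python
import random
from collections import defaultdict, deque

def history_q_learning(env, history_len=3, episodes=500,
                       alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning over fixed-length history lists.

    The agent's 'state' is a tuple of its most recent observations and
    actions (a short-term memory) rather than a belief distribution.
    `env` is assumed to follow the Gymnasium-style API with a discrete
    action space: reset() -> (obs, info), step(a) -> (obs, reward,
    terminated, truncated, info).
    """
    Q = defaultdict(float)                       # (history, action) -> value
    actions = range(env.action_space.n)

    for _ in range(episodes):
        obs, _ = env.reset()
        hist = deque([obs], maxlen=history_len)  # bounded short-term memory
        done = False
        while not done:
            key = tuple(hist)
            # Epsilon-greedy selection over the history key, not the true state.
            if random.random() < epsilon:
                action = random.choice(list(actions))
            else:
                action = max(actions, key=lambda a: Q[(key, a)])
            obs, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            hist.append((action, obs))           # slide the memory window
            next_key = tuple(hist)
            best_next = 0.0 if done else max(Q[(next_key, a)] for a in actions)
            # Standard Q-learning backup on the history-augmented state.
            Q[(key, action)] += alpha * (reward + gamma * best_next - Q[(key, action)])
    return Q
```

Because the history window is bounded, two different underlying states can map to the same history key; this is exactly the sense in which short-term memory cannot represent optimal policies in general, while remaining model-free and cheap to compute.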
Price: 435.02 lei
Old price: 543.77 lei
-20% New
83.25€ • 86.48$ • 69.15£
Print-on-demand title
Economy delivery 03-17 February 25
Specifications
ISBN-10: 3838106210
Pages: 160
Dimensions: 152 x 229 x 9 mm
Weight: 0.22 kg
Publisher: Südwestdeutscher Verlag für Hochschulschriften
Place of publication: Germany