“School of Cognitive Sciences”
Paper IPM / Cognitive Sciences / 11508
Abstract:
This paper provides a new Fuzzy Reinforcement Learning (FRL) algorithm based on a critic-only architecture. The proposed algorithm, called Fuzzy Sarsa Learning (FSL), tunes the parameters of the conclusion parts of the Fuzzy Inference System (FIS) online. FSL is based on Sarsa, an on-policy method that approximates the Action Value Function (AVF). In each rule, actions are selected according to the proposed modified Softmax action selection, so that the final inferred action selection probability in FSL is equivalent to the standard Softmax formula. We prove the existence of fixed points for the proposed Approximate Action Value Iteration (AAVI). We then show that FSL satisfies the necessary conditions guaranteeing the existence of stationary points for it, and that these coincide with the fixed points of the AAVI. We prove that the weight vector of FSL with a stationary action selection policy converges to a unique value. We also compare by simulation the performance of FSL and Fuzzy Q-Learning (FQL) in terms of learning speed and action quality. Moreover, we show by another example the convergence of FSL and the divergence of FQL when both algorithms use a stationary policy.
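As a concrete illustration, the following is a minimal sketch of one FSL update step, assuming a zero-order Takagi-Sugeno FIS whose conclusion parts hold discrete candidate actions, each with a tunable q-value. The variable names (phi, q, tau, alpha, gamma), the per-rule sampling loop, and the hard-coded firing strengths are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_rules, n_actions = 4, 3
q = np.zeros((n_rules, n_actions))   # tunable conclusion parameters: one q-value per (rule, candidate action)
alpha, gamma, tau = 0.1, 0.95, 1.0   # learning rate, discount factor, Softmax temperature

def softmax(v, tau):
    # Standard Softmax over a rule's local q-values (shifted for numerical stability).
    z = np.exp((v - v.max()) / tau)
    return z / z.sum()

def select_actions():
    # Per-rule Softmax selection: each rule samples one of its candidate actions.
    return np.array([rng.choice(n_actions, p=softmax(q[i], tau)) for i in range(n_rules)])

def global_q(phi, acts):
    # Approximate action value: firing-strength-weighted sum of the chosen rule q-values.
    return float(np.dot(phi, q[np.arange(n_rules), acts]))

# One on-policy (Sarsa) update from a transition (x, a, r, x'):
phi   = np.array([0.5, 0.3, 0.2, 0.0])  # normalized firing strengths at state x (assumed given by the FIS)
acts  = select_actions()                # per-rule actions chosen at x
phi2  = np.array([0.1, 0.4, 0.4, 0.1])  # firing strengths at the next state x'
acts2 = select_actions()                # next per-rule actions, sampled by the same policy (on-policy)
r = 1.0                                 # observed reward

td_error = r + gamma * global_q(phi2, acts2) - global_q(phi, acts)
q[np.arange(n_rules), acts] += alpha * td_error * phi  # each rule's chosen q moves in proportion to its firing strength
```

Note the design point the abstract emphasizes: the next action is sampled by the same Softmax policy rather than by a greedy max over q-values as in FQL, so the update is on-policy, which is what the convergence argument under a stationary policy relies on.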