2011-02-01
Systems Control With Generalized Probabilistic Fuzzy-Reinforcement Learning
Publication
IEEE Transactions on Fuzzy Systems, Volume 19, Issue 1, pp. 51–64
Reinforcement learning (RL) is a valuable learning method when a system requires the selection of control actions whose consequences emerge over long periods and for which input-output training data are not available. In most combinations of fuzzy systems and RL, the environment is assumed to be deterministic. In many problems, however, the consequence of an action may be uncertain or stochastic in nature. In this paper, we propose a novel RL approach that combines the universal-function-approximation capability of fuzzy systems with probability distributions over the possible consequences of an action. The proposed generalized probabilistic fuzzy RL (GPFRL) method is a modified version of the actor-critic (AC) learning architecture. Learning is enhanced by introducing a probability measure into the learning structure, where an incremental gradient-descent weight-updating algorithm provides convergence. Our results show that the proposed approach is robust under probabilistic uncertainty while also having an enhanced learning speed and good overall performance.
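To illustrate the kind of learning scheme the abstract describes, the following is a minimal sketch of a fuzzy actor-critic update with a stochastic (softmax) action distribution and incremental gradient-descent weight updates. It is not the paper's GPFRL algorithm: the class name `FuzzyActorCritic`, the Gaussian membership functions, the learning rates, and the TD(0) update form are all illustrative assumptions.

```python
# Illustrative sketch only: a fuzzy actor-critic with probabilistic action
# selection and gradient-descent weight updates, NOT the exact GPFRL method.
import numpy as np

class FuzzyActorCritic:
    def __init__(self, rule_centres, rule_widths, n_actions,
                 alpha_actor=0.05, alpha_critic=0.1, gamma=0.95):
        self.centres = np.asarray(rule_centres, dtype=float)     # (n_rules, state_dim)
        self.widths = np.asarray(rule_widths, dtype=float)       # (n_rules, state_dim)
        self.actor_w = np.zeros((len(self.centres), n_actions))  # per-rule action preferences
        self.critic_w = np.zeros(len(self.centres))              # per-rule value weights
        self.alpha_a, self.alpha_c, self.gamma = alpha_actor, alpha_critic, gamma

    def firing_strengths(self, state):
        """Normalised Gaussian rule activations (fuzzy basis functions)."""
        d = (np.asarray(state, dtype=float) - self.centres) / self.widths
        phi = np.exp(-0.5 * np.sum(d ** 2, axis=1))
        return phi / (phi.sum() + 1e-12)

    def action_probabilities(self, state):
        """Blend per-rule preferences by firing strength, then softmax."""
        phi = self.firing_strengths(state)
        prefs = phi @ self.actor_w
        e = np.exp(prefs - prefs.max())
        return phi, e / e.sum()

    def step(self, state, action, reward, next_state):
        """One TD(0) actor-critic update driven by the temporal-difference error."""
        phi, probs = self.action_probabilities(state)
        v = phi @ self.critic_w
        v_next = self.firing_strengths(next_state) @ self.critic_w
        td_error = reward + self.gamma * v_next - v
        # Critic: incremental gradient-descent step on the squared TD error.
        self.critic_w += self.alpha_c * td_error * phi
        # Actor: shift probability toward the taken action in proportion to the TD error.
        grad = -probs
        grad[action] += 1.0
        self.actor_w += self.alpha_a * td_error * np.outer(phi, grad)
        return td_error
```

In use, an action would be drawn from `action_probabilities` (e.g. `np.random.choice(n_actions, p=probs)`) and `step` called after each transition; the softmax over rule-weighted preferences stands in, loosely, for the probability measure over action consequences described in the abstract.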
Additional Metadata | |
---|---|
Persistent URL | doi.org/10.1109/TFUZZ.2010.2081994, hdl.handle.net/1765/23932 |
Series | ERIM Article Series (EAS) |
Journal | IEEE Transactions on Fuzzy Systems |
Organisation | Erasmus Research Institute of Management |
Citation | Hinojosa, W., Nefti, S., & Kaymak, U. (2011). Systems Control With Generalized Probabilistic Fuzzy-Reinforcement Learning. IEEE Transactions on Fuzzy Systems, 19(1), 51–64. doi:10.1109/TFUZZ.2010.2081994 |