Rule-base Reduction in Fuzzy Rule Interpolation-based Q-learning
License
Copyright (c) 2015 by the authors
This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
The method called Fuzzy Rule Interpolation-based Q-learning (FRIQ-learning for short) applies a fuzzy rule interpolation (FRI) method as the reasoning engine within Q-learning. This method was introduced previously by the authors, together with a rule-base construction extension for FRIQ-learning that can build the requested FRI fuzzy model from scratch in a reduced size by following an incremental creation strategy. A rule-base created this way may still contain rules that were significant during the construction process but play no important role in the final rule-base. There can also be rules that have become redundant, i.e., whose conclusions can be reproduced by fuzzy rule interpolation from the other rules in the finished rule-base. The goal of this paper is to introduce methods that automatically find and remove such redundant and unnecessary rules from the rule-base, using variations of newly developed decremental rule-base reduction strategies. The paper also includes an application example demonstrating the applicability of the methods on a well-known reinforcement learning benchmark: the cart-pole simulation.
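The abstract does not spell out the reduction algorithm itself. As a rough illustration only, the following Python sketch shows one way a decremental reduction pass could work: each rule is temporarily removed, the remaining rules are used to reproduce its conclusion by interpolation, and the rule is dropped for good if the reproduction error stays within a tolerance. The function names, the tolerance parameter, and the simple Shepard-style distance-weighted interpolation used as a stand-in for the actual FRI engine are all assumptions, not the authors' method.

```python
import numpy as np

def fri_infer(antecedents, consequents, x, p=2.0):
    """Shepard-style distance-weighted interpolation, used here only as a
    stand-in for the paper's FRI reasoning engine (an assumption)."""
    d = np.linalg.norm(antecedents - x, axis=1)
    if np.any(d == 0):
        # Exact antecedent match: return that rule's conclusion directly.
        return consequents[np.argmin(d)]
    w = 1.0 / d**p
    return np.dot(w, consequents) / np.sum(w)

def reduce_rule_base(antecedents, consequents, tol=0.05):
    """Decremental reduction sketch: a rule is redundant if the remaining
    rules reproduce its conclusion via interpolation within `tol`."""
    keep = list(range(len(antecedents)))
    for i in range(len(antecedents)):
        if i not in keep:
            continue
        others = [j for j in keep if j != i]
        if len(others) < 2:
            break  # keep at least two rules for interpolation
        est = fri_infer(antecedents[others], consequents[others], antecedents[i])
        if abs(est - consequents[i]) <= tol:
            keep.remove(i)  # rule i can be interpolated from the others
    return keep

# Example: rule 1 lies on the line between rules 0 and 2, so it is dropped.
ants = np.array([[0.0], [0.5], [1.0]])
cons = np.array([0.0, 0.5, 1.0])
print(reduce_rule_base(ants, cons))  # e.g. [0, 2]
```

Note that scanning the rules in a fixed order, as above, makes the result order-dependent; the "variations" of the strategy mentioned in the abstract presumably differ in how candidate rules are selected and validated.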