Residual Gradient Fuzzy Actor-Critic Learning-Based Control for Two-Tank Level System
Keywords:
Reinforcement Learning, Fuzzy Logic Control, Residual Gradient, Two-Tank Level Control

Abstract
The paper presents a novel application of the residual gradient fuzzy actor-critic learning (RGFACL) algorithm to nonlinear level control of a two-tank system. In contrast to traditional fuzzy actor-critic methods based on direct gradient updates, which are commonly susceptible to instability and convergence issues, the RGFACL algorithm presented in this work employs a residual gradient formulation to ensure a more accurate and stable learning process. Moreover, the algorithm adaptively tunes both the premise (input) and consequent (output) parameters of the fuzzy inference systems simultaneously, increasing the expressiveness and flexibility of the control strategy. To the best of our knowledge, this is the first implementation of the RGFACL approach on a two-tank benchmark system. The simulation results demonstrate that the RGFACL algorithm achieves improved transient response, reduced overshoot, and enhanced robustness compared to a traditional PID controller. The RGFACL algorithm successfully handles abrupt setpoint changes, demonstrating its ability to adapt to nonlinear, time-varying operating conditions. The results confirm the efficacy of applying the RGFACL algorithm to complex nonlinear environments.
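To make the distinction drawn in the abstract concrete, the sketch below contrasts a direct-gradient TD update (which treats the bootstrap target as a constant) with a residual-gradient update (which differentiates the squared TD error through both the current and successor value estimates). This is a minimal illustration with a linear value approximator, not the paper's fuzzy implementation; the names `phi_s`, `w`, `alpha`, and `gamma` are illustrative assumptions.

```python
import numpy as np

def td_error(w, phi_s, phi_s_next, r, gamma):
    # TD error: delta = r + gamma * V(s') - V(s), with V(s) = w . phi(s)
    return r + gamma * np.dot(w, phi_s_next) - np.dot(w, phi_s)

def direct_gradient_update(w, phi_s, phi_s_next, r, gamma, alpha):
    # Direct (semi-)gradient: the bootstrap target r + gamma*V(s') is held fixed,
    # so only the gradient of V(s) appears. This is the variant the abstract
    # notes can be unstable.
    delta = td_error(w, phi_s, phi_s_next, r, gamma)
    return w + alpha * delta * phi_s

def residual_gradient_update(w, phi_s, phi_s_next, r, gamma, alpha):
    # Residual gradient: true gradient descent on 0.5 * delta^2, differentiating
    # through both V(s) and the bootstrap term gamma * V(s').
    delta = td_error(w, phi_s, phi_s_next, r, gamma)
    return w - alpha * delta * (gamma * phi_s_next - phi_s)

# Illustrative one-step comparison (hypothetical features and reward)
w = np.zeros(3)
phi_s = np.array([1.0, 0.0, 0.5])
phi_s_next = np.array([0.0, 1.0, 0.5])
w_direct = direct_gradient_update(w, phi_s, phi_s_next, r=1.0, gamma=0.9, alpha=0.1)
w_resid = residual_gradient_update(w, phi_s, phi_s_next, r=1.0, gamma=0.9, alpha=0.1)
```

In the RGFACL setting, the same residual-gradient principle is applied to the tunable premise and consequent parameters of the fuzzy actor and critic rather than to a fixed linear feature vector.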