Metropolis criterion based Q-learning flow control for high-speed networks
Citation: Li, X., Jing, Y., Dimirovski, G. M., & Zhang, S. (2008). Metropolis criterion based Q-learning flow control for high-speed networks. In Proceedings of the 17th IFAC World Congress (pp. 11995–12000). http://dx.doi.org/10.3182/20080706-5-KR-1001.02030
A Metropolis criterion based Q-learning flow controller is proposed for the congestion problems in high-speed networks. Because of their uncertainties and highly time-varying dynamics, it is difficult to obtain complete and accurate information about high-speed networks. The Q-learning algorithm, which requires no mathematical model of the network, is therefore particularly well suited to this setting: it obtains the optimal Q-values through interaction with the environment and uses them to improve its behavior policy. The Metropolis criterion from the simulated annealing algorithm is used to balance exploration against exploitation in Q-learning. Through the learning procedure, the proposed controller learns to take the best action for regulating source flow, achieving high throughput and a low packet loss ratio. Simulation results show that the proposed method improves network performance and effectively avoids congestion.
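The two building blocks named in the abstract, Metropolis-criterion action selection and the tabular Q-learning update, can be sketched as follows. This is a minimal generic illustration, not the paper's actual controller: the function names, the temperature handling, and the default step-size and discount parameters are assumptions for the sketch, and the flow-control environment itself (states, actions, rewards) is left abstract.

```python
import math
import random

def metropolis_select(q_row, temperature, rng=random):
    """Choose an action for one state via the Metropolis criterion.

    A candidate action is proposed at random. If its Q-value is at
    least that of the greedy action it is taken outright; otherwise
    it is accepted with probability exp((Q_prop - Q_greedy) / T).
    High temperatures therefore favour exploration, low temperatures
    exploitation, mirroring simulated annealing.
    """
    greedy = max(range(len(q_row)), key=lambda a: q_row[a])
    proposal = rng.randrange(len(q_row))
    delta = q_row[proposal] - q_row[greedy]
    if delta >= 0 or rng.random() < math.exp(delta / max(temperature, 1e-12)):
        return proposal
    return greedy

def q_update(q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning backup:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    q[s][a] += alpha * (reward + gamma * max(q[s_next]) - q[s][a])
```

In a full controller loop the temperature would be annealed over time (for example T ← λT with 0 < λ < 1 after each step), so the source-rate policy explores early on and converges to greedy exploitation as the Q-values settle.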