Abstract:
Distributed Traffic Engineering (TE) adopts decentralized routing and offers inherent advantages over centralized TE in rapidly responding to dynamic network flows. However, because they rely solely on local network observations, traditional distributed TE methods often lack the routing evidence needed to generate optimal policies. Furthermore, reinforcement learning, commonly used in distributed TE to avoid the need for labeled data, struggles to accumulate valuable routing experience due to its trial-and-error exploration. To address these challenges, we propose a novel Distributed Intelligent Traffic Engineering (DITE) framework that empowers geographically distributed agents to rapidly and adaptively optimize routing policies for dynamic network flows with low communication overhead. Specifically, to overcome the limitation of optimizing local routing from only partial network observations, we enrich the routing evidence of each agent with sparse network-wide information. To enable agents to efficiently learn cooperative routing policies, we explore a new paradigm for training routing agents that seamlessly integrates imitation learning with reinforcement learning. Additionally, to capture the implicit spatiotemporal features within the routing evidence, we customize a transformer-based agent that uses the attention mechanism to establish the relationships between agent states and routing policies. Extensive comparative evaluations on three real network topologies and traffic traces demonstrate the superior performance of DITE in achieving distributed TE. Compared to existing distributed TE approaches, DITE achieves a notable reduction of 35.95%–42.35% in maximum link utilization and a reduction of 23.62%–68.39% in end-to-end latency. Moreover, the experimental results indicate that DITE is robust to network failures and traffic changes.
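For reference, the maximum link utilization (MLU) figure reported in the abstract is the standard TE objective: the highest load-to-capacity ratio over all links in the network. The snippet below is a minimal illustrative sketch of that metric only, not the authors' DITE implementation; the example link loads and capacities are hypothetical.

```python
import numpy as np

def max_link_utilization(link_loads, link_capacities):
    """Return the maximum link utilization (MLU): the largest
    load/capacity ratio across all links. Lower MLU means the
    traffic is spread more evenly and no single link is a hotspot."""
    loads = np.asarray(link_loads, dtype=float)
    caps = np.asarray(link_capacities, dtype=float)
    return float(np.max(loads / caps))

# Hypothetical 4-link example (loads and capacities in Mbps):
# the third link carries 120 of 200 Mbps, the second 75 of 100 Mbps,
# so the MLU is 0.75 and the second link is the bottleneck.
print(max_link_utilization([30, 75, 120, 40], [100, 100, 200, 100]))  # 0.75
```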
Source: IEEE Transactions on Network Science and Engineering
ISSN: 2327-4697
Year: 2025
Impact Factor: 6.700 (JCR@2023)