TCP performance over bufferless Optical Burst Switched (OBS) networks can be significantly degraded by misinterpretation of the network congestion status (referred to as false congestion detection). It has been reported that burst retransmission within the OBS domain can improve TCP throughput by hiding burst-loss events from the upper TCP layer, which effectively reduces congestion-window fluctuation at the expense of introducing additional delay. However, this additional delay may degrade the performance of delay-based TCP implementations, which are sensitive to packet round-trip time when estimating the network congestion status. In this paper, a novel implementation of TCP Vegas that adopts a threshold-based mechanism is proposed for identifying the network congestion status in OBS networks. Analytical models are developed to evaluate the throughput of conventional TCP Vegas and threshold-based Vegas over OBS networks with burst retransmission. Simulations are conducted to validate the analytical models and to compare threshold-based Vegas with a number of legacy TCP implementations, such as TCP SACK and TCP Reno. The analytical model can also be used to obtain a threshold value that yields optimal steady-state TCP throughput.
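To illustrate the idea, the following is a minimal sketch of TCP Vegas's rate-based congestion estimate, extended with a simple RTT threshold that discounts delay attributable to in-network burst retransmission. The function name, parameter names, the specific `RTT_THRESHOLD` rule, and the alpha/beta values are illustrative assumptions, not the paper's exact mechanism.

```python
ALPHA = 1.0  # Vegas lower threshold (estimated segments queued in the network)
BETA = 3.0   # Vegas upper threshold
RTT_THRESHOLD = 1.5  # assumed rule: RTT samples above base_rtt * RTT_THRESHOLD
                     # are attributed to OBS burst-retransmission delay,
                     # not congestion (hypothetical value)

def vegas_update(cwnd, base_rtt, sampled_rtt):
    """Return the next congestion window (in segments)."""
    if sampled_rtt > base_rtt * RTT_THRESHOLD:
        # Inflated RTT likely caused by burst retransmission in the OBS
        # domain: hold cwnd rather than treating it as congestion.
        return cwnd
    expected = cwnd / base_rtt             # throughput if no queuing
    actual = cwnd / sampled_rtt            # measured throughput
    diff = (expected - actual) * base_rtt  # estimated queued segments
    if diff < ALPHA:
        return cwnd + 1   # path underutilized: grow linearly
    if diff > BETA:
        return cwnd - 1   # queue building: back off
    return cwnd           # within the target operating region
```

The threshold check runs before the standard Vegas rate comparison, so an RTT spike caused by burst retransmission leaves the window unchanged instead of triggering a spurious decrease.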
Bibliographical note: KAUST Repository Item, exported on 2020-10-01.
ASJC Scopus subject areas
- Computer Networks and Communications
- Electrical and Electronic Engineering