Many autonomous systems (e.g., unmanned and autonomous vehicles) rely on computer vision algorithms to gather information about the surrounding environment and accordingly perform decision-making tasks such as tracking, detection, and segmentation. The performance of these systems can severely degrade under adverse weather conditions such as rain and fog, which can lead to erroneous and potentially fatal decisions. There is therefore an urgent need to restore clear, rain-free background scenes from degraded videos in a timely manner. In this paper, we propose a minimal-delay de-raining pipeline that eliminates noisy and unwanted rain artifacts from a live video feed. We adopt a disentangled approach: we first design a standalone deep learning Convolutional AutoEncoder (CAE) network to reconstruct noise-free images, and then upgrade this architecture by appending a sequential Generative Adversarial Network (GAN)-based module that enhances the quality of the image frames. Experimental results on rain-streak and raindrop datasets show that the proposed CAE architecture with GAN-based enhancement outperforms a benchmark CNN on several image quality metrics. Further comparative simulations show that our approach achieves performance close to that of an attentive standalone GAN de-raining approach, with significant savings in time complexity.