Distributed learning has recently attracted considerable interest owing to its ability to exploit distributed resources, at end users and network edges, to cooperatively train a global model. However, the performance of such schemes is undermined by practical constraints, such as non-uniformly distributed data across participating users. This challenge is especially prevalent in practical systems where data is collected from diverse locations and users under varying conditions. Hence, this paper proposes a novel active federated learning framework that mitigates the effects of non-Independent and Identically Distributed (non-IID) data by integrating Active Learning into the Federated Learning (FL) framework. The proposed solution significantly improves FL performance in terms of both classification accuracy and convergence time. It is evaluated on two real-world datasets: on the MNIST dataset, it reduces convergence time by 61% and increases classification accuracy by 4%, while on the Heartbeat dataset it improves accuracy by 19.4%.
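The abstract only names the mechanism, so the following is a minimal, hypothetical sketch of how active learning can be combined with federated averaging: before each local update, a client scores its samples with the current global model and trains only on the most uncertain ones (highest predictive entropy, one common active-learning criterion). All function names, hyperparameters, and the synthetic non-IID setup are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def local_update(w, X, y, lr=0.1, epochs=5, budget=20):
    # Active-learning step (assumed criterion): keep the `budget` samples
    # the current global model is least certain about (highest entropy).
    p = softmax(X @ w)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    idx = np.argsort(entropy)[-budget:]
    Xs, ys = X[idx], y[idx]
    k = w.shape[1]
    for _ in range(epochs):
        p = softmax(Xs @ w)
        # Gradient of multinomial cross-entropy for a linear classifier.
        grad = Xs.T @ (p - np.eye(k)[ys]) / len(ys)
        w = w - lr * grad
    return w

def fedavg_round(w, clients):
    # Plain FedAvg aggregation: uniform average of client models.
    updates = [local_update(w.copy(), X, y) for X, y in clients]
    return np.mean(updates, axis=0)

# Synthetic non-IID setup: each client only holds a subset of classes.
d, k = 10, 3
true_w = rng.normal(size=(d, k))

def make_client(classes, n=200):
    X = rng.normal(size=(n, d))
    y = (X @ true_w).argmax(axis=1)
    mask = np.isin(y, classes)
    return X[mask], y[mask]

clients = [make_client([0, 1]), make_client([1, 2]), make_client([0, 2])]

w = np.zeros((d, k))
for _ in range(30):
    w = fedavg_round(w, clients)

X_test = rng.normal(size=(500, d))
y_test = (X_test @ true_w).argmax(axis=1)
acc = ((X_test @ w).argmax(axis=1) == y_test).mean()
print(f"test accuracy: {acc:.2f}")
```

The intuition matching the abstract: by letting each client focus its scarce local updates on the samples the global model handles worst, the aggregated model compensates for skewed per-client label distributions and tends to converge in fewer rounds than training on uniformly sampled local data.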