Abstract
This paper studies fish growth trajectory tracking using Q-learning under a representative bioenergetic growth model of Nile tilapia (Oreochromis niloticus). The fish growth rate varies in practice and cannot be easily estimated due to complex aquaculture conditions and variable environmental factors. Moreover, the growth trajectory tracking problem is difficult for most model-based control approaches because of the nonlinear couplings and interactions between multiple inputs, such as temperature, dissolved oxygen, and un-ionized ammonia, and because of the model uncertainty of the fish growth system. We formulate growth trajectory tracking as a sampled-data optimal control problem, cast as a Markov decision process with discrete state-action pairs, built on simulated growth-trajectory data that mimic the real aquaculture environment. We propose two Q-learning algorithms that learn the optimal control policy from the simulated fish growth trajectories, from the juvenile stage until the desired market weight, in the aquaculture environment. The first Q-learning scheme learns the optimal feeding control policy for the growth rate of fish cultured in cages, while the second updates, online, the optimal feeding control policy together with an optimal temperature profile for the growth rate of fish cultured in tanks. The simulation results demonstrate that both Q-learning control strategies achieve good trajectory tracking performance with lower feeding rates and help compensate for environmental changes in the manipulated variables and for the bioenergetic model uncertainties of fish growth in the aquaculture environment. The proposed Q-learning control policies achieve 1.7% and 6.6% relative trajectory tracking errors of the average total fish weight in land-based tanks and in floating cages, respectively. Furthermore, the combined feeding and temperature control policy reduces the relative feeding quantity, and thus food waste, by 11% in land-based tanks compared to floating cages, where the water temperature is maintained at the ambient temperature of 29.7°C.
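For readers unfamiliar with the tabular Q-learning setup described above, the following is a minimal sketch of how a feeding-rate policy could be learned against a reference growth trajectory. The state discretization, reward weighting, feeding-rate range, and one-step growth model below are illustrative placeholders, not the paper's bioenergetic model or exact formulation.

```python
import numpy as np

# Hypothetical discretization: weight-deviation states and feeding-rate actions.
n_states, n_actions = 50, 10
feeding_rates = np.linspace(0.0, 0.03, n_actions)   # fraction of body weight per day (assumed range)
Q = np.zeros((n_states, n_actions))

alpha, gamma, epsilon = 0.1, 0.95, 0.1               # learning rate, discount factor, exploration rate

def discretize(weight, reference):
    """Map the relative deviation from the reference trajectory to a state index (illustrative)."""
    deviation = np.clip((weight - reference) / reference, -0.5, 0.5)
    return int((deviation + 0.5) * (n_states - 1))

def reward(weight, reference, feed):
    """Penalize tracking error and feed consumption (placeholder weighting)."""
    return -abs(weight - reference) - 10.0 * feed

def grow(weight, feed):
    """Stand-in one-step growth model; the paper uses a bioenergetic tilapia model instead."""
    return weight + 0.6 * feed - 0.002 * weight

reference_traj = np.linspace(20.0, 600.0, 300)       # juvenile (g) to market weight (g), assumed horizon

for episode in range(500):
    weight = 20.0
    for t in range(len(reference_traj) - 1):
        s = discretize(weight, reference_traj[t])
        # Epsilon-greedy action selection over discrete feeding rates.
        a = np.random.randint(n_actions) if np.random.rand() < epsilon else int(np.argmax(Q[s]))
        feed = feeding_rates[a] * weight
        next_weight = grow(weight, feed)
        s_next = discretize(next_weight, reference_traj[t + 1])
        r = reward(next_weight, reference_traj[t + 1], feed)
        # Standard Q-learning temporal-difference update.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        weight = next_weight
```

After training, the greedy policy `np.argmax(Q[s])` gives a feeding rate for each tracking-error state; the second scheme in the paper additionally adapts a temperature profile, which this sketch omits.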
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 737838 |
| Journal | Aquaculture |
| Volume | 550 |
| DOIs | |
| State | Published - Dec 2021 |
Bibliographical note
KAUST Repository Item: Exported on 2022-01-27. Acknowledged KAUST grant number(s): BAS/1/1627-01-01
Acknowledgements: The authors would like to thank Professor Jeff Shamma for helpful discussions and guidance on the reinforcement learning framework. This work has been supported by the King Abdullah University of Science and Technology (KAUST), Baseline Research Fund (BAS/1/1627-01-01) to Taous Meriem Laleg, Baseline Research Fund (BAS/1/1010-01-01) to Michael L. Berumen, and the Baseline Research Fund of the KAUST-AI Initiative.
ASJC Scopus subject areas
- Aquatic Science