Accelerating flash calculation through deep learning methods

Yu Li, Tao Zhang, Shuyu Sun, Xin Gao

Research output: Contribution to journal › Article › peer-review

44 Scopus citations

Abstract

In the past two decades, researchers have made remarkable progress in accelerating flash calculation, which is very useful in a variety of engineering processes. In this paper, the general phase splitting problem statement and the flash calculation procedure based on the Successive Substitution Method are reviewed, and their main shortcomings are pointed out. Two acceleration methods, Newton's method and the Sparse Grids Method, are then presented as a comparison with the deep learning model proposed in this paper. A detailed introduction, from artificial neural networks to deep learning methods, is provided with the authors' own remarks. Factors in the deep learning model are investigated to show their effect on the final result. A model selected on this basis is then used in a flash calculation predictor and compared with the other methods mentioned above. Results from the optimized deep learning model are shown to agree well with experimental data while requiring the shortest CPU time. Further comparisons with experimental data demonstrate the robustness of our model.
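For context, the inner step of the Successive Substitution flash procedure that the abstract refers to is a Rachford-Rice solve for the vapor fraction at fixed equilibrium ratios. The sketch below illustrates that step only; the feed composition `z` and K-values `K` are illustrative assumptions, not data from the paper.

```python
# Hedged sketch of the Rachford-Rice solve used inside each
# successive-substitution iteration of a two-phase flash.
# Feed composition z and equilibrium ratios K are illustrative.

def rachford_rice(z, K, tol=1e-12, max_iter=200):
    """Solve sum_i z_i*(K_i - 1)/(1 + beta*(K_i - 1)) = 0 for the
    vapor fraction beta in (0, 1) by bisection."""
    def f(beta):
        return sum(zi * (Ki - 1.0) / (1.0 + beta * (Ki - 1.0))
                   for zi, Ki in zip(z, K))
    # f(beta) is monotonically decreasing, so bisection is safe
    # whenever f(0) > 0 > f(1) (two phases present).
    lo, hi = 0.0, 1.0
    for _ in range(max_iter):
        beta = 0.5 * (lo + hi)
        if f(beta) > 0.0:
            lo = beta
        else:
            hi = beta
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

z = [0.5, 0.3, 0.2]   # feed mole fractions (illustrative)
K = [2.5, 1.2, 0.3]   # equilibrium ratios (illustrative)
beta = rachford_rice(z, K)
# Phase compositions recovered from the material balance:
x = [zi / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K)]  # liquid
y = [Ki * xi for Ki, xi in zip(K, x)]                          # vapor
```

In the full procedure, the K-values would then be updated from fugacity coefficients of an equation of state and the solve repeated until convergence; that outer loop is the slowly converging part the paper's deep learning model is designed to bypass.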
Original language: English (US)
Pages (from-to): 153-165
Number of pages: 13
Journal: Journal of Computational Physics
Volume: 394
DOIs
State: Published - May 29 2019

Bibliographical note

KAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): BAS/1/1351-01-01, BAS/1/1624-01-01
Acknowledgements: The research reported in this publication was supported in part by funding from King Abdullah University of Science and Technology (KAUST) through the grant BAS/1/1351-01-01 and BAS/1/1624-01-01.

