Neural networks have recently seen tremendous advances in their applicability to many areas, one of which is the solution of physical problems governed by partial differential equations and the constraints imposed by these equations. Neural networks of this kind are called physics-informed neural networks. They differ from typical neural networks in that their loss functions include terms that represent the physics of the problem. These terms often involve partial derivatives of the network's outputs with respect to its inputs, and these derivatives are obtained through automatic differentiation.
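As a minimal illustration, the sketch below shows how such a derivative term can be computed with automatic differentiation in PyTorch; the network shape (one hidden layer of thirty nodes, matching the size used later in the thesis) and all variable names are illustrative assumptions, not the thesis code.

```python
import torch
import torch.nn as nn

# Surrogate network: inputs (x, y) -> scalar output u(x, y).
# One hidden layer with thirty nodes, matching the size used in the thesis.
net = nn.Sequential(nn.Linear(2, 30), nn.Tanh(), nn.Linear(30, 1))

# Collocation points inside the domain; gradients are taken w.r.t. them.
xy = torch.rand(100, 2, requires_grad=True)
u = net(xy)

# Automatic differentiation yields du/dx and du/dy at every point,
# which can then enter a physics loss term.
grads = torch.autograd.grad(u, xy, grad_outputs=torch.ones_like(u),
                            create_graph=True)[0]
u_x, u_y = grads[:, 0:1], grads[:, 1:2]
```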
The purpose of this thesis is to demonstrate the ability of physics-informed neural networks to solve basic fluid flow problems in homogeneous and heterogeneous porous media. This is done by solving the pressure equation under a set of simplifying assumptions, together with Dirichlet and Neumann boundary conditions. The goal is to build a surrogate model that yields the pressure and velocity profiles everywhere inside the domain of interest.
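One common form of the pressure equation under such assumptions (steady-state, incompressible, single-phase Darcy flow with no source term; the exact assumptions adopted in the thesis may differ) is

```latex
\nabla \cdot \left( K \nabla p \right) = 0 \quad \text{in } \Omega, \qquad
\mathbf{v} = -K \nabla p, \qquad
p = p_D \ \text{on } \Gamma_D, \qquad
-K \nabla p \cdot \mathbf{n} = q_N \ \text{on } \Gamma_N,
```

where K is the hydraulic conductivity, p the pressure, v the Darcy velocity, p_D the prescribed Dirichlet pressure, and q_N the prescribed Neumann flux.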
In the homogeneous case, minimizing a loss function composed of a boundary-condition term and a partial differential equation term produced results in good agreement with those of a numerical simulator.
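A hedged sketch of such a composite loss for the homogeneous case is shown below (PyTorch, with a constant conductivity and illustrative collocation and boundary points; the names, point sets, and hyperparameters are assumptions, not the thesis code):

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 30), nn.Tanh(), nn.Linear(30, 1))
K = 1.0  # constant hydraulic conductivity in the homogeneous case

def grad(p, xy):
    """First derivatives of p w.r.t. the input coordinates."""
    return torch.autograd.grad(p, xy, grad_outputs=torch.ones_like(p),
                               create_graph=True)[0]

def pde_residual(xy):
    """Residual of div(K grad p) = 0 at interior collocation points."""
    p = net(xy)
    g = grad(p, xy)                     # [dp/dx, dp/dy]
    p_xx = grad(g[:, 0:1], xy)[:, 0:1]  # second derivatives via
    p_yy = grad(g[:, 1:2], xy)[:, 1:2]  # repeated automatic differentiation
    return K * (p_xx + p_yy)

# Illustrative interior collocation points, Dirichlet boundary points,
# and prescribed boundary pressures.
xy_int = torch.rand(1000, 2, requires_grad=True)
xy_dir = torch.rand(100, 2)
p_dir = torch.ones(100, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5000):  # a large number of iterations is typically needed
    opt.zero_grad()
    loss_pde = pde_residual(xy_int).pow(2).mean()   # PDE term
    loss_bc = (net(xy_dir) - p_dir).pow(2).mean()   # boundary-condition term
    (loss_pde + loss_bc).backward()
    opt.step()
```

A Neumann term would be added in the same way, by penalizing the mismatch between the autodiff-computed normal flux and the prescribed flux at the corresponding boundary points.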
However, in heterogeneous media, where the hydraulic conductivity has sharp discontinuities inside the domain, the model failed to produce accurate results.
To resolve this issue, extended physics-informed neural networks were used. This method decomposes the domain into multiple homogeneous sub-domains, each with its own physics-informed neural network structure, equation parameters, and equation constraints. To allow the sub-domains to communicate, interface conditions are enforced on the interfaces that separate them. The results of this method matched those of the simulator well. In both the homogeneous and heterogeneous cases, neural networks with only one hidden layer of thirty nodes were used. Even with this simple network structure, the computations are expensive and a large number of training iterations is required to converge.
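A hedged sketch of the interface coupling, assuming two sub-domains separated by a vertical interface and using pressure continuity and normal-flux continuity as the interface conditions (an assumption about the specific conditions used; extended physics-informed neural network formulations also often match PDE residuals across the interface), might look like this:

```python
import torch
import torch.nn as nn

def make_net():
    # One hidden layer with thirty nodes, as in the thesis.
    return nn.Sequential(nn.Linear(2, 30), nn.Tanh(), nn.Linear(30, 1))

net1, net2 = make_net(), make_net()  # one network per homogeneous sub-domain
K1, K2 = 1.0, 10.0                   # illustrative conductivities on either side

def dp_dx(net, xy):
    """x-derivative of the predicted pressure at the given points."""
    p = net(xy)
    return torch.autograd.grad(p, xy, grad_outputs=torch.ones_like(p),
                               create_graph=True)[0][:, 0:1]

# Points on the vertical interface x = 0.5 shared by both sub-domains.
y = torch.rand(200, 1)
xy_if = torch.cat([0.5 * torch.ones_like(y), y], dim=1).requires_grad_(True)

# Pressure continuity: p1 = p2 on the interface.
loss_p = (net1(xy_if) - net2(xy_if)).pow(2).mean()
# Normal-flux continuity: K1 * dp1/dx = K2 * dp2/dx across the interface.
loss_q = (K1 * dp_dx(net1, xy_if) - K2 * dp_dx(net2, xy_if)).pow(2).mean()

# These terms are added to each sub-domain's own PDE and boundary losses.
loss_interface = loss_p + loss_q
```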
| | |
|---|---|
| Date of Award | Jul 2021 |
| Original language | English (US) |
| Awarding Institution | Physical Sciences and Engineering |
| Supervisor | Hussein Hoteit (Supervisor) |