Advances in convolutional and recurrent neural networks have led to significant improvements in learning on regular grid-structured data such as images and text. However, many real-world datasets, for instance social networks, citation networks, molecules, point clouds, and 3D meshes, do not lie on such simple grids. These data are irregular, or non-Euclidean, in structure and carry complex relational information. Graph machine learning, especially Graph Neural Networks (GNNs), offers a way to process such irregular data and to model the relations between entities, opening a new era for machine learning. However, previous state-of-the-art (SOTA) GNNs have been limited to shallow architectures by challenging problems such as vanishing gradients, over-fitting, and over-smoothing. Most SOTA GNNs are no deeper than 3 or 4 layers, which restricts their representational power and makes learning on large-scale graphs ineffective. To address this challenge, this dissertation discusses approaches to building large-scale and efficient graph machine learning models for learning structured representations, with applications to engineering and the sciences. It presents how to make GNNs deep through architectural designs and how to automatically search for GNN architectures with novel neural architecture search algorithms.
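One family of architectural designs for deepening GNNs is the residual (skip) connection, which lets each layer learn an update on top of its input rather than a full replacement. The sketch below is a minimal, hypothetical illustration of that idea in plain Python (mean-neighbor aggregation plus a residual add); it is not the dissertation's exact formulation, and the function and variable names are assumptions for illustration only.

```python
def residual_gcn_layer(features, adjacency, weight):
    """One sketched graph-conv step: mean-aggregate neighbor features,
    apply a linear transform with ReLU, then add the input back (residual).
    `features` is a list of per-node feature vectors, `adjacency` maps a
    node index to its neighbor indices, `weight` is a square matrix.
    All names here are hypothetical, not from the dissertation."""
    out = []
    for i, h in enumerate(features):
        nbrs = [features[j] for j in adjacency[i]] or [h]
        # Mean aggregation over neighbor features
        agg = [sum(col) / len(nbrs) for col in zip(*nbrs)]
        # ReLU(W @ agg)
        z = [max(0.0, sum(w * a for w, a in zip(row, agg))) for row in weight]
        # Residual connection: h' = h + ReLU(W @ agg)
        out.append([hi + zi for hi, zi in zip(h, z)])
    return out

# Tiny 3-node path graph with 2-dim features and identity weights
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adj = {0: [1], 1: [0, 2], 2: [1]}
W = [[1.0, 0.0], [0.0, 1.0]]

deep = feats
for _ in range(4):  # stacking several layers stays well-behaved with residuals
    deep = residual_gcn_layer(deep, adj, W)
```

Without the residual add, repeatedly averaging neighbor features drives all node representations toward the same value (over-smoothing); the skip connection preserves each node's own signal as depth grows.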
| Date of Award | Aug 2022 |
|---|---|
| Original language | English (US) |
| Awarding Institution | Computer, Electrical and Mathematical Sciences and Engineering |
| Supervisor | Bernard Ghanem (Supervisor) |
- Deep Learning
- Graph Machine Learning
- Computer Vision
- Graph Neural Networks
- Artificial Intelligence
- Structured Intelligence