Overview of graph neural networks from Philip S. Yu's team (2019)

Graphs represent complex relationships and dependencies between objects. However, the complexity of graph data is difficult for existing machine learning algorithms to handle, so deep learning methods are applied instead. The paper "A Comprehensive Survey on Graph Neural Networks" reviews the development of graph neural networks (GNNs) in the fields of data mining and machine learning, and divides GNNs into four categories: recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatio-temporal graph neural networks. In addition, it discusses applications of GNNs across various fields, and summarizes open-source code, datasets, and GNN evaluation metrics. Finally, possible research directions are given.

The authors note that deep learning can capture hidden patterns in data that lie in Euclidean space. However, in a growing number of applications, data are represented as graphs. For example, interactions between users and products can be used to improve recommendation accuracy; molecules are modeled as graphs and their biological activity identified in drug discovery; in a citation network, links between articles are established through citation relationships, and articles are classified into different categories. Unlike images, however, graphs have unordered nodes, vary in size, and have a varying number of neighbors per node, all of which increases the computational difficulty of working with graphs. In addition, machine learning algorithms are built on the assumption of sample independence, which contradicts the way graphs are constructed: nodes are linked to, and depend on, one another.
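The irregularity described above can be made concrete with a toy citation graph. This is an illustrative sketch with hypothetical papers A–E; the point is that, unlike pixels in an image grid, nodes have no canonical order and each node can have a different number of neighbors.

```python
# A tiny citation graph as an adjacency list (hypothetical papers A-E).
# Unlike an image, where every pixel has the same fixed neighborhood,
# each node here has a different number of neighbors.
citations = {
    "A": ["B", "C"],        # paper A cites B and C
    "B": ["C"],
    "C": [],                # cites nothing
    "D": ["A", "B", "C"],
    "E": ["D"],
}

# Node degrees vary: this is the irregularity that standard
# grid-based deep learning cannot handle directly.
degrees = {paper: len(cited) for paper, cited in citations.items()}
print(degrees)  # {'A': 2, 'B': 1, 'C': 0, 'D': 3, 'E': 1}
```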

♕Development history

1. Recurrent graph neural networks (RecGNNs). Dating back to 1997, these methods learn a target node's representation iteratively, passing neighbor information until a stable fixed point is reached. Such methods have high computational complexity, and later work studied how to reduce it, for example "Gated Graph Sequence Neural Networks, ICLR 2016" and "Learning Steady-States of Iterative Algorithms over Graphs, ICML 2018".

2. Convolutional graph neural networks (ConvGNNs) are divided into spectral-based methods (earliest work in 2013) and spatial-based methods (earliest work in 2009).

3. Graph autoencoders (GAEs)

4. Spatio-temporal graph neural networks (STGNNs)
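To make the ConvGNN category concrete, here is a minimal sketch of one spatial graph-convolution layer in the style popularized by GCN: each node aggregates its neighbors' features under symmetric normalization, then applies a learned linear map and a nonlinearity. The graph, feature sizes, and weights below are illustrative, not taken from the survey.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN-style layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])         # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(deg ** -0.5)      # symmetric degree normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0)   # aggregate neighbors, then ReLU

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)     # 3-node path graph
H = np.eye(3)                              # one-hot node features
W = np.random.randn(3, 2)                  # weights (random stand-in for learned)

out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2): one 2-dim representation per node
```

In a real model this layer would be stacked and `W` trained by backpropagation; the sketch only shows the neighborhood-aggregation step that defines the spatial-based family.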

♕Graph neural networks vs. network embedding

The main difference: GNNs are a family of neural network models designed for a range of graph tasks, while network embedding covers a variety of methods that all target the same task of learning node representations. GNNs can address the network embedding problem through the graph autoencoder framework.
Graph neural networks: process graph-related tasks in an end-to-end manner and extract high-level representations.

Network embedding: represent network nodes as low-dimensional vectors while preserving network topology and node content information, so that any subsequent graph analysis task, such as classification, clustering, or recommendation, can be performed with simple off-the-shelf machine learning algorithms. Network embedding also includes non-deep-learning methods such as matrix factorization and random walks.
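As an example of the non-deep-learning side of network embedding, here is a minimal matrix-factorization sketch: a truncated SVD of the adjacency matrix yields low-dimensional node vectors. Practical methods in this family factorize richer proximity matrices; the toy graph below is illustrative.

```python
import numpy as np

# Toy 4-node undirected graph as an adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Truncated SVD: keep the top-k singular directions as node embeddings.
U, S, _ = np.linalg.svd(A)
k = 2                                    # embedding dimension
embeddings = U[:, :k] * np.sqrt(S[:k])   # one k-dim vector per node
print(embeddings.shape)  # (4, 2)
```

These vectors can then be handed to any off-the-shelf classifier or clustering algorithm, which is exactly the decoupled workflow the paragraph above describes.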

♕Graph neural networks vs. graph kernel methods

Graph kernels: the dominant technique for graph classification. A kernel function measures the similarity between pairs of graphs, so that kernel-based learners such as SVMs can be applied. Graphs and nodes are mapped into a vector space through a fixed mapping function, and the pairwise similarity computation makes these methods computationally expensive.

Graph neural networks: perform graph classification directly on an extracted graph representation, which is more efficient than graph kernel methods. Graphs and nodes are likewise mapped into a vector space, but the mapping function is learned rather than fixed.
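The pairwise-similarity cost mentioned above can be shown with a deliberately simple kernel: comparing graphs by their degree histograms (illustrative only; practical kernels such as Weisfeiler-Lehman use much richer structure). The resulting kernel matrix K is what an SVM would consume, and filling it requires comparing every pair of graphs.

```python
import numpy as np

def degree_histogram(A, max_degree=4):
    """Feature map: count how many nodes have each degree."""
    degrees = A.sum(axis=1).astype(int)
    return np.bincount(degrees, minlength=max_degree + 1)

def kernel(A1, A2):
    """Similarity of two graphs = dot product of their histograms."""
    return float(degree_histogram(A1) @ degree_histogram(A2))

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
path     = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
graphs = [triangle, path]

# Pairwise kernel matrix: quadratic in the number of graphs,
# which is the computational cost discussed above.
K = np.array([[kernel(g, h) for h in graphs] for g in graphs])
print(K)  # [[9. 3.]
          #  [3. 5.]]
```

A GNN, by contrast, produces one vector per graph and classifies it directly, avoiding the pairwise comparison.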


Origin blog.51cto.com/12339636/2536317