Graph Neural Networks: A Review of Methods and Applications

(Submitted on 20 Dec 2018 (v1), last revised 2 Jan 2019 (this version, v2))

Abstract: Many learning tasks require dealing with graph data, which contains rich
relational information among its elements. Modeling physics systems, learning
molecular fingerprints, predicting protein interfaces, and classifying diseases
all require a model that learns from graph inputs. In other domains, such as
learning from non-structural data like text and images, reasoning over extracted
structures, such as the dependency trees of sentences and the scene graphs of
images, is an important research topic that also needs graph reasoning models.
Graph neural networks (GNNs) are connectionist models that capture the
dependencies within a graph via message passing between its nodes. Unlike
standard neural networks, GNNs retain a state that can represent information
from a node's neighborhood at arbitrary depth. Although the earliest GNNs were
found difficult to train to a fixed point, recent advances in network
architectures, optimization techniques, and parallel computation have enabled
successful learning with them. In recent years, systems based on graph
convolutional networks (GCNs) and gated graph neural networks (GGNNs) have
demonstrated ground-breaking performance on many of the tasks mentioned above.
In this survey, we provide a detailed review of existing graph neural network
models, systematically categorize their applications, and propose four open
problems for future research.
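
To make the message-passing idea in the abstract concrete, below is a minimal sketch of a single GCN-style propagation step in Python/NumPy. It is an illustration only, not the implementation from the survey or any cited paper; the function name gcn_layer, the toy path graph, and the layer sizes are assumptions made for the example.

import numpy as np

def gcn_layer(adjacency, features, weights):
    """One GCN-style message-passing step: aggregate degree-normalized
    neighbor features, then apply a shared linear transform and a ReLU."""
    # Add self-loops so each node also keeps its own state.
    a_hat = adjacency + np.eye(adjacency.shape[0])
    # Symmetric degree normalization: D^{-1/2} (A + I) D^{-1/2}.
    deg_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]
    # Aggregation over neighbors, then the node-wise update.
    return np.maximum(a_norm @ features @ weights, 0.0)

# Toy usage: 4 nodes on a path graph, 3 input features, 2 output features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
print(gcn_layer(A, X, W))  # node embeddings after one propagation step

Stacking several such layers lets each node's state incorporate information from increasingly distant neighbors, which is the "neighborhood with arbitrary depth" the abstract refers to.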

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
Cite as: arXiv:1812.08434 [cs.LG]
  (or arXiv:1812.08434v2 [cs.LG] for this version)

Submission history

From: Jie Zhou


[v1] Thu, 20 Dec 2018 09:30:12 UTC (2,764 KB)


[v2] Wed, 2 Jan 2019 02:01:05 UTC (2,767 KB)

