[Doc] Reorganize tutorial (dmlc#2678)

BarclayII authored Feb 19, 2021
1 parent dda103d commit 9e04a52
Showing 15 changed files with 42 additions and 43 deletions.
8 changes: 6 additions & 2 deletions docs/source/conf.py
@@ -197,8 +197,12 @@

 examples_dirs = ['../../tutorials/basics',
                  '../../tutorials/models',
-                 '../../new-tutorial'] # path to find sources
-gallery_dirs = ['tutorials/basics', 'tutorials/models', 'new-tutorial'] # path to generate docs
+                 '../../new-tutorial/blitz',
+                 '../../new-tutorial/large'] # path to find sources
+gallery_dirs = ['tutorials/basics',
+                'tutorials/models',
+                'new-tutorial/blitz',
+                'new-tutorial/large'] # path to generate docs
 reference_url = {
     'dgl' : None,
     'numpy': 'http://docs.scipy.org/doc/numpy/',
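For context on what this hunk configures: Sphinx-Gallery pairs each entry of `examples_dirs` with the `gallery_dirs` entry at the same index. A minimal sketch of the wiring, assuming the standard `sphinx_gallery.gen_gallery` extension setup (the rest of conf.py is omitted):

```python
# conf.py (sketch) -- only the Sphinx-Gallery wiring is shown.
extensions = ['sphinx_gallery.gen_gallery']

examples_dirs = ['../../tutorials/basics',
                 '../../tutorials/models',
                 '../../new-tutorial/blitz',
                 '../../new-tutorial/large']  # path to find sources
gallery_dirs = ['tutorials/basics',
                'tutorials/models',
                'new-tutorial/blitz',
                'new-tutorial/large']         # path to generate docs

sphinx_gallery_conf = {
    'examples_dirs': examples_dirs,  # source directory i renders into...
    'gallery_dirs': gallery_dirs,    # ...output directory i
}
```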
36 changes: 7 additions & 29 deletions docs/source/index.rst
@@ -84,27 +84,20 @@ Getting Started

 .. toctree::
    :maxdepth: 2
-   :caption: Basic Tutorials
+   :caption: Tutorials
    :hidden:
    :glob:

-   new-tutorial/1_introduction
-   new-tutorial/2_dglgraph
-   new-tutorial/3_message_passing
-   new-tutorial/4_link_predict
-   new-tutorial/5_graph_classification
-   new-tutorial/6_load_data
+   new-tutorial/blitz/index
+   new-tutorial/large/index

 .. toctree::
-   :maxdepth: 2
-   :caption: Stochastic GNN Training Tutorials
+   :maxdepth: 3
+   :caption: Model Examples
    :hidden:
    :glob:

-   new-tutorial/L0_neighbor_sampling_overview
-   new-tutorial/L1_large_node_classification
-   new-tutorial/L2_large_link_prediction
-   new-tutorial/L4_message_passing
+   tutorials/models/index

 .. toctree::
    :maxdepth: 2
@@ -113,14 +106,7 @@
    :titlesonly:
    :glob:

-   guide/graph
-   guide/message
-   guide/nn
-   guide/data
-   guide/training
-   guide/minibatch
-   guide/distributed
-   guide/mixed_precision
+   guide/index

 .. toctree::
    :maxdepth: 2
@@ -139,14 +125,6 @@ Getting Started
    api/python/dgl.sampling
    api/python/udf

-.. toctree::
-   :maxdepth: 3
-   :caption: Model Tutorials
-   :hidden:
-   :glob:
-
-   tutorials/models/index
-
 .. toctree::
    :maxdepth: 1
    :caption: Developer Notes
@@ -1,6 +1,6 @@
 """
-A Blitz Introduction to DGL - Node Classification
-=================================================
+Node Classification with DGL
+============================
 GNNs are powerful tools for many machine learning tasks on graphs. In
 this introductory tutorial, you will learn the basic workflow of using
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
@@ -2,7 +2,7 @@
 Introduction of Neighbor Sampling for GNN Training
 ==================================================
-In :doc:`previous tutorials <1_introduction>` you have learned how to
+In :doc:`previous tutorials <../blitz/1_introduction>` you have learned how to
 train GNNs by computing the representations of all nodes on a graph.
 However, sometimes your graph is too large to fit the computation of all
 nodes in a single GPU.
@@ -20,7 +20,7 @@
 # ----------------------
 #
 # Recall that in `Gilmer et al. <https://arxiv.org/abs/1704.01212>`__
-# (also in :doc:`message passing tutorial <3_message_passing>`), the
+# (also in :doc:`message passing tutorial <../blitz/3_message_passing>`), the
 # message passing formulation is as follows:
 #
 # .. math::
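The overview this file introduces boils down to sampling a small set of neighbors per layer instead of computing on the whole graph. A minimal sketch of that pipeline, assuming the `dgl.dataloading` API of this era (`MultiLayerNeighborSampler` and `NodeDataLoader`); the graph and node IDs are stand-ins:

```python
# Sketch: per-layer neighbor sampling for minibatch node classification.
import dgl
import torch

g = dgl.rand_graph(1000, 5000)      # stand-in graph
train_nids = torch.arange(100)      # stand-in training node IDs

# Sample at most 4 neighbors per node, for each of two GNN layers.
sampler = dgl.dataloading.MultiLayerNeighborSampler([4, 4])
dataloader = dgl.dataloading.NodeDataLoader(
    g, train_nids, sampler,
    batch_size=32, shuffle=True, drop_last=False)

# Each iteration yields the input node IDs, the output node IDs, and one
# bipartite graph (block) per GNN layer.
input_nodes, output_nodes, bipartites = next(iter(dataloader))
```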
@@ -192,7 +192,7 @@ def forward(self, bipartites, x):

 ######################################################################
 # If you compare against the code in the
-# :doc:`introduction <1_introduction>`, you will notice several
+# :doc:`introduction <../blitz/1_introduction>`, you will notice several
 # differences:
 #
 # - **DGL GNN layers on bipartite graphs**. Instead of computing on the
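The model referenced by this hunk computes on one bipartite graph (block) per layer. A sketch of that pattern, assuming `dgl.nn`'s `SAGEConv` rather than the commit's exact code:

```python
# Sketch: a two-layer model whose forward consumes one block per layer.
import torch.nn as nn
import torch.nn.functional as F
import dgl.nn as dglnn

class Model(nn.Module):
    def __init__(self, in_feats, hid_feats, num_classes):
        super().__init__()
        self.conv1 = dglnn.SAGEConv(in_feats, hid_feats, 'mean')
        self.conv2 = dglnn.SAGEConv(hid_feats, num_classes, 'mean')

    def forward(self, bipartites, x):
        # Layer l computes representations of the destination nodes of
        # block l from the features of its source nodes.
        x = F.relu(self.conv1(bipartites[0], x))
        x = self.conv2(bipartites[1], x)
        return x
```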
@@ -40,7 +40,7 @@
 # \mathcal{L} = -\sum_{u\sim v\in \mathcal{D}}\left( y_{u\sim v}\log(\hat{y}_{u\sim v}) + (1-y_{u\sim v})\log(1-\hat{y}_{u\sim v}) \right)
 #
 # This is identical to the link prediction formulation in :doc:`the previous
-# tutorial on link prediction <4_link_predict>`.
+# tutorial on link prediction <../blitz/4_link_predict>`.
 #
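As a worked instance of the formula above: with raw dot-product scores treated as logits, PyTorch's `binary_cross_entropy_with_logits` computes exactly this loss. A short sketch, assuming positive and negative score tensors produced elsewhere in the tutorial:

```python
# Sketch: the binary cross-entropy loss above, over positive and negative
# edge scores (raw dot products treated as logits).
import torch
import torch.nn.functional as F

def compute_loss(pos_score, neg_score):
    # Existing edges get label 1; sampled non-existent edges get label 0.
    scores = torch.cat([pos_score, neg_score])
    labels = torch.cat([torch.ones_like(pos_score),
                        torch.zeros_like(neg_score)])
    return F.binary_cross_entropy_with_logits(scores, labels)
```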


@@ -83,7 +83,7 @@
 # ------------------------------------------------
 #
 # Different from the :doc:`link prediction tutorial for full
-# graph <4_link_predict>`, a common practice to train GNN on large graphs is
+# graph <../blitz/4_link_predict>`, a common practice to train GNN on large graphs is
 # to iterate over the edges
 # in minibatches, since computing the probability of all edges is usually
 # impossible. For each minibatch of edges, you compute the output
@@ -147,7 +147,7 @@
 # The second element and the third element are the positive graph and the
 # negative graph for this minibatch.
 # The concept of positive and negative graphs have been introduced in the
-# :doc:`full-graph link prediction tutorial <4_link_predict>`. In minibatch
+# :doc:`full-graph link prediction tutorial <../blitz/4_link_predict>`. In minibatch
 # training, the positive graph and the negative graph only contain nodes
 # necessary for computing the pair-wise scores of positive and negative examples
 # in the current minibatch.
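A sketch of the minibatch edge loader this passage describes, assuming the `dgl.dataloading` API of this era (`EdgeDataLoader` with a uniform negative sampler); the graph and edge IDs are stand-ins:

```python
# Sketch: iterating over edges in minibatches with uniform negative sampling.
import dgl
import torch

g = dgl.rand_graph(1000, 5000)            # stand-in graph
train_eids = torch.arange(g.num_edges())  # stand-in training edge IDs

sampler = dgl.dataloading.MultiLayerNeighborSampler([4, 4])
dataloader = dgl.dataloading.EdgeDataLoader(
    g, train_eids, sampler,
    negative_sampler=dgl.dataloading.negative_sampler.Uniform(5),
    batch_size=256, shuffle=True)

# Each iteration yields input node IDs, the positive graph, the negative
# graph, and the blocks for message passing.
input_nodes, pos_graph, neg_graph, bipartites = next(iter(dataloader))
```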
@@ -200,7 +200,7 @@ def forward(self, bipartites, x):
 # edges in the sampled minibatch.
 #
 # The following score predictor, copied from the :doc:`link prediction
-# tutorial <4_link_predict>`, takes a dot product between the
+# tutorial <../blitz/4_link_predict>`, takes a dot product between the
 # incident nodes’ representations.
 #
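The referenced predictor itself is collapsed in this diff; a sketch of such a dot-product score predictor, following the pattern of the blitz link prediction tutorial:

```python
# Sketch: score each edge by the dot product of its incident nodes'
# representations.
import torch.nn as nn
import dgl.function as fn

class DotPredictor(nn.Module):
    def forward(self, g, h):
        with g.local_scope():
            g.ndata['h'] = h
            # u_dot_v computes, per edge, the dot product of the source
            # and destination node features.
            g.apply_edges(fn.u_dot_v('h', 'h', 'score'))
            return g.edata['score'][:, 0]
```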
@@ -7,7 +7,7 @@
 for stochastic GNN training. It assumes that
 1. You know :doc:`how to write GNN modules for full graph
-   training <3_message_passing>`.
+   training <../blitz/3_message_passing>`.
 2. You know :doc:`how stochastic GNN training pipeline
    works <L1_large_node_classification>`.
@@ -137,7 +137,7 @@
 ######################################################################
 # Putting them together, you can implement a GraphSAGE convolution for
 # training with neighbor sampling as follows (the differences to the :doc:`full graph
-# counterpart <3_message_passing>` are highlighted with arrows ``<---``)
+# counterpart <../blitz/3_message_passing>` are highlighted with arrows ``<---``)
 #

 import torch.nn as nn
@@ -223,7 +223,7 @@ def forward(self, bipartites, x):
 # ------------------------------------------------------------------------
 #
 # Here is a step-by-step tutorial for writing a GNN module for both
-# :doc:`full-graph training <1_introduction>` *and* :doc:`stochastic
+# :doc:`full-graph training <../blitz/1_introduction>` *and* :doc:`stochastic
 # training <L1_node_classification>`.
 #
 # Say you start with a GNN module that works for full-graph training only:
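A sketch of where that walkthrough ends up: one module whose `forward` handles both a full graph and a sampled block. The `is_block` check and destination-node slicing reflect DGL's block representation; the module itself is an illustrative stand-in, not the tutorial's exact code:

```python
# Sketch: a GraphSAGE-style convolution usable for both full-graph and
# stochastic training.
import torch
import torch.nn as nn
import dgl.function as fn

class SAGEConv(nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.linear = nn.Linear(in_feats * 2, out_feats)

    def forward(self, g, h):
        with g.local_scope():
            if g.is_block:
                h_src = h
                h_dst = h[:g.num_dst_nodes()]   # <--- dst nodes come first
            else:
                h_src = h_dst = h               # full graph: src == dst
            g.srcdata['h'] = h_src
            g.update_all(fn.copy_u('h', 'm'), fn.mean('m', 'h_neigh'))
            h_neigh = g.dstdata['h_neigh']
            return self.linear(torch.cat([h_dst, h_neigh], dim=1))
```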
6 changes: 6 additions & 0 deletions tutorials/models/1_gnn/1_gcn.py
@@ -55,6 +55,12 @@
 ###############################################################################
 # We then proceed to define the GCNLayer module. A GCNLayer essentially performs
 # message passing on all the nodes then applies a fully-connected layer.
+#
+# .. note::
+#
+#    This is showing how to implement a GCN from scratch. DGL provides a more
+#    efficient :class:`builtin GCN layer module <dgl.nn.pytorch.conv.GraphConv>`.
+#

 class GCNLayer(nn.Module):
     def __init__(self, in_feats, out_feats):
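A sketch of swapping in the builtin module the note points to; the toy graph and feature sizes are stand-ins:

```python
# Sketch: the builtin GraphConv in place of the hand-written GCNLayer.
import dgl
import torch
from dgl.nn.pytorch.conv import GraphConv

g = dgl.add_self_loop(dgl.rand_graph(34, 156))  # stand-in graph
feat = torch.randn(34, 5)

conv = GraphConv(5, 16)   # in_feats=5, out_feats=16
h = conv(g, feat)         # shape: (34, 16)
```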
5 changes: 5 additions & 0 deletions tutorials/models/1_gnn/4_rgcn.py
@@ -124,6 +124,11 @@
 # the full weight matrix has three dimensions: relation, input_feature,
 # output_feature.
 #
+# .. note::
+#
+#    This is showing how to implement an R-GCN from scratch. DGL provides a more
+#    efficient :class:`builtin R-GCN layer module <dgl.nn.pytorch.conv.RelGraphConv>`.
+#

 import torch
 import torch.nn as nn
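Likewise for the R-GCN note; a sketch using the builtin module, with a stand-in graph, per-edge relation types, and basis regularization (the regularizer arguments are assumptions about this DGL version):

```python
# Sketch: the builtin RelGraphConv in place of the hand-written R-GCN layer.
import dgl
import torch
from dgl.nn.pytorch.conv import RelGraphConv

g = dgl.rand_graph(100, 1000)          # stand-in graph
etypes = torch.randint(0, 3, (1000,))  # one relation type per edge
feat = torch.randn(100, 10)

conv = RelGraphConv(10, 16, num_rels=3, regularizer='basis', num_bases=2)
h = conv(g, feat, etypes)              # shape: (100, 16)
```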
6 changes: 6 additions & 0 deletions tutorials/models/1_gnn/9_gat.py
@@ -106,6 +106,12 @@
 # To begin, you can get an overall impression about how a ``GATLayer`` module is
 # implemented in DGL. In this section, the four equations above are broken down
 # one at a time.
+#
+# .. note::
+#
+#    This is showing how to implement a GAT from scratch. DGL provides a more
+#    efficient :class:`builtin GAT layer module <dgl.nn.pytorch.conv.GATConv>`.
+#

 import torch
 import torch.nn as nn
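And for the GAT note; a sketch using the builtin module, where the output carries one slice per attention head:

```python
# Sketch: the builtin GATConv in place of the hand-written GATLayer.
import dgl
import torch
from dgl.nn.pytorch.conv import GATConv

g = dgl.add_self_loop(dgl.rand_graph(34, 156))  # stand-in graph
feat = torch.randn(34, 5)

conv = GATConv(5, 8, num_heads=4)
h = conv(g, feat)   # shape: (34, 4, 8) -- one slice per attention head
```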
