\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Introduction\n",
+ "\n",
+ "### Graphs\n",
+ "\n",
+ "Who are we kidding! You may skip this section if you know what graphs are.\n",
+ "\n",
+ "If you are here and haven't skipped this section, then, we assume that you are a complete beginner, you may want to read everything very carefully. We can define a graph as a picture that represent the data in an organised manner. Let's go deep into applied graph theory. A graph (being directed or undirected) consists of set of vertices (or nodes) denoted by V and a set of edges denoted by E. Edges can be weighted or binary. Let's have a look of a graph. \n",
+ "\n",
+ "\n",
+ "\n",
+ "In the above graph we have:-\n",
+ "\n",
+ "$$V = \\{A, B, C, D, E, F, G\\}$$\n",
+ "\n",
+ "$$E = \\{(A,B), (B,C), (C,E), (B,D), (E,F), (D,E), (B,E), (G,E)\\}$$\n",
+ "\n",
+ "Above all these edges their corresponding weights have been specified. These weights can represent different quantities.For example if we consider these nodes as different cities, edges can be the distance between these cities.\n",
+ "\n",
+ "\n",
+ "\n",
+ "### Terminology\n",
+ "\n",
+ "You may skip this as well, if comfortable.\n",
+ "\n",
+ "\n",
+ "\n",
+ " - __Node__ : A node is an entity in the graph. Here, represented by circles in the graph.\n",
+ " - __Edge__ : It is the line joining two nodes in a graph. Presence of an edge between two nodes represent the relationship between the nodes. Here, represented by straight lines in the graph.\n",
+ " - __Degree of a vertex__ : The degree of a vertex V of a graph G (denoted by deg (V)) is the number of edges incident with the vertex V. As an instance consider node B, it has 3 outgoing edges and 1 incoming edge, so outdegree is 3 and indegree is 1.\n",
+ " - __Adjacency Matrix__ : It is a method of representing a graph using only a square Matrix. Suppose there are N nodes in a graph then there will be N rows and N columns in the corresponding adjacency matrix. The i'th row will contain a 1 in the j'th column if there is an edge between the i'th and the j'th node, otherwise, it will contain a 0."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Why GCNs?\n",
+ "\n",
+ "So let's get into the real deal. Looking around us, we can observe that most of the real-world datasets come in the form of graphs or networks: social networks, protein-interaction networks, the World Wide Web, etc. This makes learning on graphs a very interesting problem that can solve tonnes of domain specific tasks rendering us with insightful information.\n",
+ "\n",
+ "**But why can't these Graph Learning problems be solved by conventional Machine Learning/Deep Learning algorithms like CNNs? Why exactly was there a need for making a whole new class of networks?**\n",
+ "\n",
+ " A) To introduce a new XYZNet?\n",
+ " B) To publish the 'said' novelty in a top tier conference?\n",
+ " C) To you know what every other paper aims to achieve? \n",
+ "\n",
+ "No! No! No! Not because *Kipf and Welling* wanted to sound cool and publish yet another paper in a top tier conference. You see, not everything is an Alchemy :P. On that note I'd really suggest watching this super interesting, my favourite [talk](https://www.youtube.com/watch?v=x7psGHgatGM) by Ali Rahimi which is really relevant today in the ML world.\n",
+ "\n",
+ "So getting back to the topic, obviously I'm joking about these things, and surely this is a really nice contribution and GCNs are really powerful,Ok! Honestly, take the last part with a pinch of salt and **remember** to ask me at the end.\n",
+ "\n",
+ "\n",
+ "\n",
+ "**But I, still haven't answered the big elephant in the room. WHY?** \n",
+ "\n",
+ "To answer the why, we first need to understand how a class of models like Convolutional Neural Networks(CNNs) work. CNNs are really powerful, they have the capacity to learn very high dimensional data. Say you have a $512*512$ pixels image. The dimensionality here is approximately 1 million. For 10 samples \n",
+ "the space becomes $10^{1,000,000}$ and CNNs have proven to work really well on such tough task settings! \n",
+ "\n",
+ "But, there is a catch! These data samples like images, videos, audio etc., where CNN models are mostly used, all have a specific compositionality which is one of the strong assumptions we made before using CNNs. Never really knew what this assumption really meant ehh?\n",
+ "\n",
+ "So CNNs basically extract the compositional features and feed them to the classifier.\n",
+ "\n",
+ "\n",
+ "**What do I mean by compositionality?**\n",
+ "\n",
+ "The Key properties of the assumption of compositionality are\n",
+ "\n",
+ "* Locality\n",
+ "\n",
+ "* Stationarity or Translation Invariance \n",
+ "\n",
+ "* Multi Scale : Learning Hierarchies of representations\n",
+ "\n",
+ "\n",
+ "\n",
+ "**2D Convolution vs Graph Convolution**\n",
+ "\n",
+ "If you haven't figured it out, not all types of data lie on the Euclidean Space and such are the graphs data types, including manifolds, and 3D objects, thus rendering the previous 2D Convolution useless. Hence, the need for GCNs which have the ability to capture the inherent structure and topology of the given graph. Hence this blog :P.\n",
+ "\n",
+ "\n",
+ "\n",
+ "### Appllications of GCNs\n",
+ "One possible application of GCN is in the Facebook's friend prediction algorithm. Consider three people A, B and C. Given that A is a friend of B, B is a friend of C. You may also have some representative information in the form of features about each person, for example, A may like movies starring Liam Neeson and in general C is a fan of genre Thriller, now you have to predict whether A is friend of C.\n",
+ "\n",
+ "|  | \n",
+ "|:--:| \n",
+ "| __Facebook Link Prediction for Suggesting Friends using Social Networks__ |"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## What GCNs?\n",
+ "\n",
+ "As the name suggests, Graph Convolution Networks (GCNs), draw on the idea of Convolution Neural Networks re-defining them for the non-euclidean data domain. A regular Convolutional Neural Network used popularly for Image Recognition, captures the surrounding information of each pixel of an image. Similar to euclidean data like images, the convolution framework here aims to capture neighbourhood information for non euclidean spaces like graph nodes.\n",
+ "\n",
+ "A GCN is basically a neural network that operates on a graph. It will take a graph as an input and give some (we'll see what exactly) meaningful output.\n",
+ "\n",
+ "**GCNs come in two different styles**: \n",
+ "\n",
+ " - **Spectral GCNs**: Spectral-based approaches define graph convolutions by introducing filters from the perspective of graph signal processing based on graph spectral theory.\n",
+ " - **Spatial GCNs**: Spatial-based approaches formulate graph convolutions as aggregating feature information from neighbours.\n",
+ "\n",
+ "Note: Spectral approach has the limitation of the graph structure being same for all samples i.e. homogeneous structure. But it is a hard constraint, as most of the real-world graph data has different structures and size for different samples i.e. heterogeneous structure. Spatial approach is agnostic of graph structure."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## How GCNs?\n",
+ "\n",
+ "First, let's work this out for the Friend Prediction problem and then we will generalize the approach.\n",
+ "\n",
+ "**Problem Statement**: You are given N people and also a graph where there is an edge between two people if they are friends. You need to predict whether two people will become friends in the future or not.\n",
+ "\n",
+ "A simple graph corresponding to this problem is:\n",
+ "\n",
+ "Here person $(1,2)$ are friends, similarly $(2,3), (3,4), (4,1), (5,6), (6,8), (8,7), (7,6)$ are also friends.\n",
+ "\n",
+ "Now we are interested in finding out whether a given pair of people are likely to become friends in the future or not. Let's say that the pair we are interested in is $(1,3)$, now since they have 2 common friends we can softly imply they have a chance of becoming friends, whereas the nodes $(1,5)$ have no friend in common so they are less likely to become friends.\n",
+ "\n",
+ "Lets take another example:\n",
+ "\n",
+ "Here $(1,11)$ are much more likely to become friends than say $(3, 11)$.\n",
+ "\n",
+ "\n",
+ "Now the question that one can raise is 'How to implement and achieve this result?'. GCN's implement it in a way similar to CNNs. In a CNN we apply a filter on the original image to get the representation in next layer. Similarly in GCN we apply a filter which creates the next layer representation. \n",
+ "\n",
+ "Mathematically we can define as follows: $$H^{i} = f(H^{i-1}, A)$$\n",
+ "\n",
+ "\n",
+ "A very simple example of $f$ maybe:\n",
+ "\n",
+ "$$f(H^{i}, A) = σ(AH^{i}W^{i})$$\n",
+ "\n",
+ "\n",
+ "where\n",
+ " - $A$ is the $N × N$ adjacency matrix\n",
+ " - $X$ is the input feature matrix $N × F$, where $N$ is the number of nodes and $F$ is the number of input features for each node.\n",
+ " - $σ$ is the Relu activation function\n",
+ " - $H^{0} = X$ Each layer $H^{i}$ corresponds to an $N × F^{i}$ feature matrix where each row is a feature representation of a node.\n",
+ " - $f$ is the propagation rule\n",
+ " \n",
+ "At each layer, these features are aggregated to form the next layer’s features using the propagation rule $f$. In this way, features become increasingly more abstract at each consecutive layer.\n",
+ "\n",
+ "\n",
+ "Yes that is it, we already have some function to propagate information across the graphs which can be trained in a semi-supervised way. Using the GCN layer, the representation of each node (each row) is now a sum of its neighbors features! In other words, the layer represents each node as an aggregate of its neighborhood."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**But, Wait is it so simple?**\n",
+ "\n",
+ "I'll request you to stop for a moment here and think really hard about the function we just defined.\n",
+ "\n",
+ "Is that correct?\n",
+ "\n",
+ "**STOP**\n",
+ "\n",
+ "....\n",
+ "\n",
+ "....\n",
+ "\n",
+ "....\n",
+ "\n",
+ "\n",
+ "It is sort of! But it is not exactly what we want. If you were unable to arrive at the problem, fret not. Let's see what exactly are the **'problems'** (yes, more than one problem) this function might lead to:\n",
+ " - **The new node features $H^{i}$ are not a function of its previous representation**: As you might have noticed, the aggregated representation of a node is only a function of its neighbours and does not include its own features. If not handled, this may lead to the loss of the node identity and hence rendering the feature representations useless. We can easily fix this by adding self loops, that is an edge starting and ending on the same node, in this way a node will become a neighbour of itself. Mathematically, self loops are nothing but can be expressed by adding the node identity \n",
+ "\n",
+ "\n",
+ " - **Degree of the nodes lead to the values being scaled asymmetricaly across the graph**: In simple words, nodes that have large number of neighbours (higher degree) will get much more input in the form of neighborhood aggregation from the adjacent nodes and hence will have a larger value and vice versa may be true for nodes with smaller degrees having small values. This can lead to problems during training the network. To deal with the issue, we will be using normalisation i.e, reduce all values in such a way that the values are on the same scale. Normalizing $A$ such that all rows sum to one, i.e. $D^{−1}A$, where $D$ is the diagonal node degree matrix, gets rid of this problem. Multiplying with $D^{−1}A$ now corresponds to taking the average of neighboring node features. According to the authors, after observing emperical results, they suggest \"In practice, dynamics get more interesting when we use a symmetric normalization, i.e. $\\hat{D}^{-\\frac{1}{2}}\\hat{A}\\hat{D}^{-\\frac{1}{2}}$ (as this no longer amounts to mere averaging of neighboring nodes).\n",
+ " \n",
+ " \n",
+ "After addressing the two problems stated above, the new propagation function $f$ is:\n",
+ "\n",
+ "$$f(H^{(l)}, A) = \\sigma\\left( \\hat{D}^{-\\frac{1}{2}}\\hat{A}\\hat{D}^{-\\frac{1}{2}}H^{(l)}W^{(l)}\\right)$$\n",
+ "\n",
+ "\n",
+ "where\n",
+ " - $\\hat{A} = A + I$\n",
+ " - $I$ is the identity matrix\n",
+ " - $\\hat{D}$ is the diagonal node degree matrix of $\\hat{A}$."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Implementing GCNs from Scratch in PyTorch "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We are now ready to put all of the tools together to deploy our very first fully-functional Graph Convolutional Network. In this tutorial we will be training GCN on the 'Zachary Karate Club Network'. We will be using the **'Semi Supervised Graph Learning Model'** proposed in the paper by [Thomas Kipf and Max Welling](https://arxiv.org/abs/1609.02907)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Zachary Karate Club\n",
+ "\n",
+ "During the period from 1970-1972, Wayne W. Zachary, observed the people belonging to a local karate club. He represented these people as nodes in a graph. And added a edge between a pair of people if they interacted with each other. The result was a the graph shown below.\n",
+ "\n",
+ "\n",
+ "During the study an interesting event happened. A conflict arose between the administrator \"John A\" and instructor \"Mr. Hi\" (pseudonyms), which led to the split of the club into two. Half of the members formed a new club around Mr. Hi; members from the other part found a new instructor or gave up karate. \n",
+ "\n",
+ "Using the graph that he had found earlier, he tried to predict which member will go to which half. And surprisingly he was able to predict the decision of all the members except for node 9 who went with Mr. Hi instead of John A. Zachary used the maximum flow – minimum cut Ford–Fulkerson algorithm for this. We will be using a different algorithm today, hence it is not required to know about Ford-Fulkerson algorithm."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here we will be using the Semi Supervised Graph Learning Method. Semi Supervised means that we have labels for only some of the nodes and we have find the labels for other nodes. Like in this example we have the labels for only the nodes belonging to 'John A' and 'Mr. Hi', we have not been provided with labels for any other member and we have be predict that only on the basis of the graph given to us."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Loading Required Libraries\n",
+ "In this post we will be using PyTorch and Matplotlib."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import torch\n",
+ "import torch.nn as nn\n",
+ "import torch.optim as optim\n",
+ "import matplotlib.pyplot as plt\n",
+ "%matplotlib notebook\n",
+ "\n",
+ "import imageio\n",
+ "from celluloid import Camera\n",
+ "from IPython.display import HTML\n",
+ "\n",
+ "plt.rcParams['animation.ffmpeg_path'] = '/usr/local/bin/ffmpeg'"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### The Convolutional Layer\n",
+ "First we will be creating the GCNConv class, which will serve as the Layer creation class. Every instance of this class will be getting Adjacency Matrix as input and will be outputing 'RELU(A_hat * X * W)', which the Net class will use."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "class GCNConv(nn.Module):\n",
+ " def __init__(self, A, in_channels, out_channels):\n",
+ " super(GCNConv, self).__init__()\n",
+ " self.A_hat = A+torch.eye(A.size(0))\n",
+ " self.D = torch.diag(torch.sum(A,1))\n",
+ " self.D = self.D.inverse().sqrt()\n",
+ " self.A_hat = torch.mm(torch.mm(self.D, self.A_hat), self.D)\n",
+ " self.W = nn.Parameter(torch.rand(in_channels,out_channels, requires_grad=True))\n",
+ " \n",
+ " def forward(self, X):\n",
+ " out = torch.relu(torch.mm(torch.mm(self.A_hat, X), self.W))\n",
+ " return out"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The Net class will combine multiple Conv layer."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "class Net(torch.nn.Module):\n",
+ " def __init__(self,A, nfeat, nhid, nout):\n",
+ " super(Net, self).__init__()\n",
+ " self.conv1 = GCNConv(A,nfeat, nhid)\n",
+ " self.conv2 = GCNConv(A,nhid, nout)\n",
+ " \n",
+ " def forward(self,X):\n",
+ " H = self.conv1(X)\n",
+ " H2 = self.conv2(H)\n",
+ " return H2"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# 'A' is the adjacency matrix, it contains 1 at a position (i,j) if there is a edge between the node i and node j.\n",
+ "A=torch.Tensor([[0,1,1,1,1,1,1,1,1,0,1,1,1,1,0,0,0,1,0,1,0,1,0,0,0,0,0,0,0,0,0,1,0,0],\n",
+ " [1,0,1,1,0,0,0,1,0,0,0,0,0,1,0,0,0,1,0,1,0,1,0,0,0,0,0,0,0,0,1,0,0,0],\n",
+ " [1,1,0,1,0,0,0,1,1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,1,0],\n",
+ " [1,1,1,0,0,0,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n",
+ " [1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n",
+ " [1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n",
+ " [1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n",
+ " [1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n",
+ " [1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1],\n",
+ " [0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],\n",
+ " [1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n",
+ " [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n",
+ " [1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n",
+ " [1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],\n",
+ " [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],\n",
+ " [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],\n",
+ " [0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n",
+ " [1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n",
+ " [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],\n",
+ " [1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],\n",
+ " [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],\n",
+ " [1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n",
+ " [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],\n",
+ " [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,0,0,1,1],\n",
+ " [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,1,0,0],\n",
+ " [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,1,0,0],\n",
+ " [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1],\n",
+ " [0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,1],\n",
+ " [0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1],\n",
+ " [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,1,1],\n",
+ " [0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1],\n",
+ " [1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,0,0,0,1,1],\n",
+ " [0,0,1,0,0,0,0,0,1,0,0,0,0,0,1,1,0,0,1,0,1,0,1,1,0,0,0,0,0,1,1,1,0,1],\n",
+ " [0,0,0,0,0,0,0,0,1,1,0,0,0,1,1,1,0,0,1,1,1,0,1,1,0,0,1,1,1,1,1,1,1,0]\n",
+ " ])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this example we have the label for admin(node 1) and instructor(node 34) so only these two contain the class label(0 and 1) all other are set to -1, which means that the predicted value of these nodes will be ignores in the computation of loss function."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "target=torch.tensor([0,-1,-1,-1, -1, -1, -1, -1,-1,-1,-1,-1, -1, -1, -1, -1,-1,-1,-1,-1, -1, -1, -1, -1,-1,-1,-1,-1, -1, -1, -1, -1,-1,1])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "X is the feature matrix. Since we dont have any feature of each node, we will just be using the one-hot encoding corresponding to the index of the node."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "X=torch.eye(A.size(0))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here we are creating a Network with 10 features in the hidden layer and 2 in output layer."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "T=Net(A,X.size(0), 10, 2)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Training"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "criterion = torch.nn.CrossEntropyLoss(ignore_index=-1)\n",
+ "optimizer = optim.SGD(T.parameters(), lr=0.01, momentum=0.9)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "loss=criterion(T(X),target)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ ""
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "execution_count": 10,
+ "metadata": {},
+ "output_type": "execute_result"
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAWoAAAD4CAYAAADFAawfAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8li6FKAAAMD0lEQVR4nO3dT4ykBZ2H8efrNARmZMUsFYOM2eFEsiFZIB1WxZAsI0ZWgnvYAyaYrJfZg1HQTYzuxXg3xj2ZTCAuiYjBAS7EJZCIyXJwTM8wLn+GPYiAIDrFQfmTzSL420PX4OzQ2G8N/Xb9in4+SYfu6Xeqvk0yT1e//VYqVYUkqa/3LHqAJOnPM9SS1JyhlqTmDLUkNWeoJam5lTFu9IILLqh9+/aNcdOS9K505MiRF6tqstHnRgn1vn37WFtbG+OmJeldKckzb/c5T31IUnOGWpKaM9SS1JyhlqTmDLUkNWeoJak5Qy1JzRlqSWrOUEtSc4Zakpoz1JLUnKGWpOYGhTrJl5I8nuSxJHcmOWfsYZKkdZuGOslFwBeB1aq6FNgF3Dj2MEnSuqGnPlaAc5OsALuBX483SZJ0qk1DXVXPA98EngVeAH5fVQ+cflySA0nWkqxNp9OtXypJO9SQUx/vBz4NXAx8ENiT5KbTj6uqg1W1WlWrk8mGL1IgSToDQ059fBz4ZVVNq+oPwD3AR8edJUk6aUionwU+nGR3kgD7gePjzpIknTTkHPVh4BBwFHh09ncOjrxLkjQz6MVtq+rrwNdH3iJJ2oDPTJSk5gy1JDVnqCWpOUMtSc0ZaklqzlBLUnOGWpKaM9SS1JyhlqTmDLUkNWeoJak5Qy1JzRlqSWrOUEtSc4Zakpoz1JLUnKGWpOYMtSQ1Z6glqTlDLUnNGWpJam7TUCe5JMmxU95eSnLLdoyTJMHKZgdU1X8DlwEk2QU8D9w78i5J0sy8pz72A7+oqmfGGCNJeqt5Q30jcOdGn0hyIMlakrXpdPrOl0mSgDlCneRs4Abghxt9vqoOVtVqVa1OJpOt2idJO948j6ivA45W1W/HGiNJeqt5Qv0Z3ua0hyRpPINCnWQPcC1wz7hzJEmn2/TyPICqehX4y5G3SJI24DMTJak5Qy1JzRlqSWrOUEtSc4Zakpoz1JLUnKGWpOYMtSQ1Z6glqTlDLUnNGWpJas5QS1JzhlqSmjPUktScoZak5gy1JDVnqCWpOUMtSc0ZaklqzlBLUnOGWpKaGxTqJOcnOZTkySTHk3xk7GGSpHUrA4/7N+D+qvrHJGcDu0fcJEk6xaahTvI+4GrgnwCq6jXgtXFnSZJOGnLq42JgCnw3ySNJbk2y5/SDkhxIspZkbTqdbvlQSdqphoR6BbgC+E5VXQ68Cnz19IOq6mBVrVbV6mQy2eKZkrRzDQn1c8BzVXV49vEh1sMtSdoGm4a6qn4D/CrJJbM/2g88MeoqSdKbhl718QXgjtkVH08BnxtvkiTpVINCXVXHgNWRt0iSNuAzEyWpOUMtSc0ZaklqzlBLUnOGWpKaM9SS1JyhlqTmDLUkNWeoJak5Qy1JzRlqSWrOUEtSc4Zakpoz1JLUnKGWpOYMtSQ1Z6glqTlDLUnNGWpJas5QS1JzhlqSmhv0KuRJngZeBt4AXq8qX5FckrbJoFDP/F1VvTjaEknShjz1IUnNDQ11AQ8kOZLkwEYHJDmQZC3J2nQ63bqFkrTDDQ31x6rqCuA64PNJrj79gKo6WFWrVbU6mUy2dKQk7WSDQl1Vz8/+ewK4F7hyzFGSpD/ZNNRJ9iQ57+T7wCeAx8YeJklaN+Sqjw8A9yY5efz3q+r+UVdJkt60aair6ingb7ZhiyRpA16eJ0nNGWpJas5QS1JzhlqSmjPUktScoZak5gy1JDVnqCWpOUMtSc0ZaklqzlBLUnOGWpKaM9SS1JyhlqTmDLUkNWeoJak5Qy1JzRlqSWrOUEtSc4Zakpoz1JLU3OBQJ9mV5JEk9405SJL0/83ziPpm4PhYQyRJGxsU6iR7gU8Bt447R5J0uqGPqL8NfAX449sdkORAkrUka9PpdEvGSZIGhDrJ9cCJqjry546rqoNVtVpVq5PJZMsGStJON+QR9VXADUmeBn4AXJPke6OukiS9adNQV9XXqmpvVe0DbgR+XFU3jb5MkgR4HbUktbcyz8FV9RPgJ6MskSRtyEfUktScoZak5gy1JDVnqCWpOUMtSc0ZaklqzlBLUnOGWpKaM9SS1JyhlqTmDLUkNWeoJak5Qy1JzRlqSWrOUEtSc4Zakpoz1JLUnKGWpOYMtSQ1Z6glqTlDLUnNbRrqJOck+VmSnyd5PMk3tmOYJGndyoBj/he4pqpeSXIW8HCS/6iqn468TZLEgFBXVQGvzD48a/ZWY46SJP3JoHPUSXYlOQacAB6sqsMbHHMgyVqStel0utU7JWnHGhTqqnqjqi4D9gJXJrl0g2MOVtVqVa1OJpOt3ilJO9ZcV31U1e+Ah4BPjjNHknS6IVd9TJKcP3v/XOBa4Mmxh0mS1g256uNC4PYku1gP+11Vdd+4syRJJw256uO/gMu3YYskaQM+M1GSmjPUktScoZak5gy1JDVnqCWpOUMtSc0ZaklqzlBLUnOGWpKaM9SS1JyhlqTmDLUkNWeoJak5Qy1JzRlqSWrOUEtSc4Zakpoz1JLUnKGWpOYMtSQ1t2mok3woyUNJnkjyeJKbt2OYJGndpq9CDrwO/EtVHU1yHnAkyYNV9cTI2yRJDHhEXVUvVNXR2fsvA8eBi8YeJklaN9c56iT7gMuBw2OMkSS91eBQJ3kvcDdwS1W9tMHnDyRZS7I2nU63cqMk7WiDQp3kLNYjfUdV3bPRMVV1sKpWq2p1Mpls5UZJ2tGGXPUR4DbgeFV9a/xJkqRTDXlEfRXwWeCaJMdmb38/8i5J0syml+dV1cNAtmGLJGkDPjNRkpoz1JLUnKGWpOYMtSQ1Z6glqTlDLUnNGWpJas5QS1JzhlqSmjPUktScoZak5gy1JDVnqCWpOUMtSc0ZaklqzlBLUnOpqq2/0WQKPLPlN/xWFwAvbsP9jGXZ98Pyfw3uX6xl3w9b9zX8VVVt+IKzo4R6uyRZq6rVRe84U8u+H5b/a3D/Yi37ftier8FTH5LUnKGWpOaWPdQHFz3gHVr2/bD8X4P7F2vZ98M2fA1LfY5aknaCZX9ELUnveoZakppbylAn+VCSh5I8keTxJDcvetM8kpyT5GdJfj7b/41FbzoTSXYleSTJfYveciaSPJ3k0STHkqwtes+8kpyf5FCSJ5McT/KRRW8aKskls//vJ99eSnLLonfNI8mXZv9+H0tyZ5JzRruvZTxHneRC4MKqOprkPOAI8A9V9cSCpw2SJMCeqnolyVnAw8DNVfXTBU+bS5IvA6vAX1TV9YveM68kTwOrVbWUT7hIcjvwn1V1a5Kzgd1V9btF75pXkl3A88DfVtV2PFHuHUtyEev/bv+6qv4nyV3Aj6rq38e4v6V8RF1VL1TV0dn7LwPHgYsWu2q4Wvf
K7MOzZm9L9R0zyV7gU8Cti96yEyV5H3A1cBtAVb22jJGe2Q/8YlkifYoV4NwkK8Bu4Ndj3dFShvpUSfYBlwOHF7tkPrPTBseAE8CDVbVU+4FvA18B/rjoIe9AAQ8kOZLkwKLHzOliYAp8d3b66dYkexY96gzdCNy56BHzqKrngW8CzwIvAL+vqgfGur+lDnWS9wJ3A7dU1UuL3jOPqnqjqi4D9gJXJrl00ZuGSnI9cKKqjix6yzv0saq6ArgO+HySqxc9aA4rwBXAd6rqcuBV4KuLnTS/2SmbG4AfLnrLPJK8H/g0698wPwjsSXLTWPe3tKGendu9G7ijqu5Z9J4zNftx9SHgk4veMoergBtm53h/AFyT5HuLnTS/2aMiquoEcC9w5WIXzeU54LlTfhI7xHq4l811wNGq+u2ih8zp48Avq2paVX8A7gE+OtadLWWoZ7+Muw04XlXfWvSeeSWZJDl/9v65wLXAk4tdNVxVfa2q9lbVPtZ/bP1xVY32aGIMSfbMfhHN7JTBJ4DHFrtquKr6DfCrJJfM/mg/sBS/TD/NZ1iy0x4zzwIfTrJ71qP9rP+ubBQrY93wyK4CPgs8OjvPC/CvVfWjBW6ax4XA7bPfdr8HuKuqlvIStyX2AeDe9X9jrADfr6r7Fztpbl8A7pidPngK+NyC98xl9g3yWuCfF71lXlV1OMkh4CjwOvAIIz6VfCkvz5OknWQpT31I0k5iqCWpOUMtSc0ZaklqzlBLUnOGWpKaM9SS1Nz/Aec71zFZk0ZbAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {
+ "needs_background": "light"
+ },
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# Plot animation using celluloid\n",
+ "fig = plt.figure()\n",
+ "camera = Camera(fig)\n",
+ "\n",
+ "for i in range(200):\n",
+ " optimizer.zero_grad()\n",
+ " loss=criterion(T(X), target)\n",
+ " loss.backward()\n",
+ " optimizer.step()\n",
+ " l=(T(X));\n",
+ "\n",
+ " plt.scatter(l.detach().numpy()[:,0],l.detach().numpy()[:,1],c=[0, 0, 0, 0 ,0 ,0 ,0, 0, 1, 1, 0 ,0, 0, 0, 1 ,1 ,0 ,0 ,1, 0, 1, 0 ,1 ,1, 1, 1, 1 ,1 ,1, 1, 1, 1, 1, 1 ])\n",
+ " for i in range(l.shape[0]):\n",
+ " text_plot = plt.text(l[i,0], l[i,1], str(i+1))\n",
+ "\n",
+ " camera.snap()\n",
+ "\n",
+ " if i%20==0:\n",
+ " print(\"Cross Entropy Loss: =\", loss.item())\n",
+ "\n",
+ "animation = camera.animate(blit=False, interval=150)\n",
+ "animation.save('./train_karate_animation.mp4', writer='ffmpeg', fps=60)\n",
+ "HTML(animation.to_html5_video())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As you can see above it has divided the data in two categories , and its close to the actual predictions.\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## PyTorch Geometric Implementation\n",
+ "We also implemented GCNs using this great library [PyTorch Geometric](https://github.com/rusty1s/pytorch_geometric) (PyG) with a super active mantainer [Matthias Fey](https://github.com/rusty1s/). PyG is specifically built for PyTorch lovers who need an easy, fast and simple way out to implement and test their work on various Graph Representation Learning papers.\n",
+ "\n",
+ "You can find the PyG notebook [here](https://github.com/dsgiitr/graph_nets/blob/master/GCN/GCN_PyG.ipynb) with implementation of GCNs trained on a Citation Network, the Cora Dataset.\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We strongly reccomend reading up these references as well to make your understanding solid. \n",
+ "\n",
+ "**Also, remember I asked you to remember one thing? To answer that read up on this amazing blog which tries to understand if GCNs really are powerful as they claim to be. [How powerful are Graph Convolutions?](https://www.inference.vc/how-powerful-are-graph-convolutions-review-of-kipf-welling-2016-2/)**\n",
+ "\n",
+ "\n",
+ "## References\n",
+ "- [Blog GCNs by Thomas Kipf](https://tkipf.github.io/graph-convolutional-networks/)\n",
+ "- [Semi-Supervised Classification with Graph Convolutional Networks by Thomas Kipf and Max Welling](https://arxiv.org/abs/1609.02907)\n",
+ "- [How to do Deep Learning on Graphs with Graph Convolutional Networks by Tobias Skovgaard Jepsen\n",
+ "](https://towardsdatascience.com/how-to-do-deep-learning-on-graphs-with-graph-convolutional-networks-7d2250723780)\n",
+ "- [How powerful are Graph Convolutions?](https://www.inference.vc/how-powerful-are-graph-convolutions-review-of-kipf-welling-2016-2/)\n",
+ "- [PyTorch Geometric](https://github.com/rusty1s/pytorch_geometric)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}