{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ChebNet: CNN on Graphs with Fast Localized Spectral Filtering"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Motivation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a part of this blog series, this time we'll be looking at a spectral convolution technique introduced in the paper by M. Defferrard, X. Bresson, and P. Vandergheynst, on \"Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering\".\n",
"\n",
"
\n",
"\n",
"As mentioned in our previous blog on [A Review : Graph Convolutional Networks (GCN)](https://dsgiitr.com/blogs/gcn/), the spatial convolution and pooling operations are well-defined only for the Euclidean domain. Hence, we cannot apply the convolution directly on the irregular structured data such as graphs.\n",
"\n",
"The technique proposed in this paper provide us with a way to perform convolution on graph like data, for which they used convolution theorem. According to which, Convolution in spatial domain is equivalent to multiplication in Fourier domain. Hence, instead of performing convolution explicitly in the spatial domain, we will transform the graph data and the filter into Fourier domain. Do element-wise multiplication and the result is converted back to spatial domain by performing inverse Fourier transform. Following figure illustrates the proposed technique:\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## But How to Take This Fourier Transform?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned we have to take a fourier transform of graph signal. In spectral graph theory, the important operator used for Fourier analysis of graph is the Laplacian operator. For the graph $G=(V,E)$, with set of vertices $V$ of size $n$ and set of edges $E$. The Laplacian is given by
\n",
"$Δ=D−A$
\n",
"where $D$ denotes the diagonal degree matrix and $A$ denotes the adjacency matrix of the graph.
\n",
"When we do eigen-decomposition of the Laplacian, we get the orthonormal eigenvectors, as the Laplacian is real symmetric positive semi-definite matrix (side note: positive semidefinite matrices have orthogonal eigenvectors and symmetric matrix has real eigenvalues). These eigenvectors are denoted by $\\{ϕ_l\\}^n_{l=0}$ and also called as Fourier modes. The corresponding eigenvalues $\\{λ_l\\}^n_{l=0}$ acts as frequencies of the graph.
\n",
"\n",
"The Laplacian can be diagonalized by the Fourier basis.
\n",
"$Δ=ΦΛΦ^T$
\n",
"\n",
"where, $Φ=\\{ϕ_l\\}^n_{l=0}$ is a matrix with eigenvectors as columns and $Λ$ is a diagonal matrix of eigenvalues.
\n",
"\n",
"Now the graph can be transformed to Fourier domain just by multiplying by the Fourier basis. Hence, the Fourier transform of graph signal $x:V→R$ which is defined on nodes of the graph $x∈R^n$ is given by:
\n",
"$\\hat{x}=Φ^Tx$, where $\\hat{x}$ denotes the graph Fourier transform. Hence, the task of transforming the graph signal to Fourier domain is nothing but the matrix-vector multiplication.
\n",
"\n",
"Similarly, the inverse graph Fourier transform is given by:
\n",
"$x=Φ\\hat{x}$.
\n",
"This formulation of Fourier transform on graph gives us the required tools to perform convolution on graphs. "
]
},
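{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make this concrete, here is a minimal NumPy sketch (not from the paper's code) that builds the Laplacian of a toy 4-node path graph, eigen-decomposes it, and applies the graph Fourier transform and its inverse:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# toy 4-node path graph\n",
"A = np.array([[0, 1, 0, 0],\n",
"              [1, 0, 1, 0],\n",
"              [0, 1, 0, 1],\n",
"              [0, 0, 1, 0]], dtype=float)  # adjacency matrix\n",
"D = np.diag(A.sum(axis=1))                 # diagonal degree matrix\n",
"L = D - A                                  # combinatorial Laplacian\n",
"\n",
"lam, Phi = np.linalg.eigh(L)               # eigenvalues (frequencies), eigenvectors (Fourier modes)\n",
"\n",
"x = np.array([1.0, 0.0, 2.0, -1.0])        # a signal defined on the nodes\n",
"x_hat = Phi.T @ x                          # graph Fourier transform\n",
"x_rec = Phi @ x_hat                        # inverse graph Fourier transform\n",
"print(np.allclose(x, x_rec))               # True: the transform is invertible\n",
"```"
]
},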
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Filtering of signals on graph"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we now have the two necessary tools to define convolution on non-Euclidean domain:\n",
"\n",
"1) Way to transform graph to Fourier domain.\n",
"\n",
"2) Convolution in Fourier domain, the convolution operation between graph signal $x$ and filter $g$ is given by the graph convolution of the input signal $x$ with a filter $g∈R^n$ defined as:\n",
"\n",
"\n",
"$x∗_Gg=ℱ^{−1}(ℱ(x)⊙ℱ(g))=Φ(Φ^Tx⊙Φ^Tg)$,\n",
"\n",
"\n",
"where $⊙$ denotes the element-wise product. If we denote a filter as $g_θ=diag(Φ^Tg)$, then the spectral graph convolution is simplified as $x∗_Gg_θ=Φg_θΦ^Tx$"
]
},
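{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick numerical check (again a toy sketch, not part of the original implementation), the two equivalent forms of the spectral convolution above can be compared directly:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# same toy path graph as above\n",
"A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)\n",
"L = np.diag(A.sum(axis=1)) - A\n",
"lam, Phi = np.linalg.eigh(L)\n",
"\n",
"x = np.array([1.0, 0.0, 2.0, -1.0])   # graph signal\n",
"g = np.array([0.5, 0.2, 0.1, 0.05])   # an arbitrary spatial-domain filter on the same nodes\n",
"\n",
"# spectral convolution: transform, multiply element-wise, transform back\n",
"conv = Phi @ ((Phi.T @ x) * (Phi.T @ g))\n",
"\n",
"# equivalent form with g_theta = diag(Phi^T g)\n",
"g_theta = np.diag(Phi.T @ g)\n",
"conv2 = Phi @ g_theta @ Phi.T @ x\n",
"print(np.allclose(conv, conv2))       # True\n",
"```"
]
},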
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Why can't we go forward with this scheme only?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"All spectral-based ConvGNNs follow this definition. But, this method has three major problems:\n",
"\n",
"1. The number of filter parameters to learn depends on the dimensionality of the input which translates into O(n) complexity and filter is non-parametric.\n",
"\n",
"2. The filters are not localized i.e. filters learnt for graph considers the entire graph, unlike traditional CNN which takes only nearby local pixels to compute convolution.\n",
"\n",
"3. The algorithm needs to calculate the eigen-decomposition explicitly and multiply signal with Fourier basis as there is no Fast Fourier Transform algorithm defined for graphs, hence the computation is $O(n^2)$. (Fast Fourier Transform defined for Euclidean data has $O(nlogn)$ complexity)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Polynomial parametrization of filters"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To overcome these problems they used an polynomial approximation to parametrize the filter.
\n",
"Now, filter is of the form of:
\n",
"$g_θ(Λ) =\\sum_{k=0}^{K-1}θ_kΛ_k$, where the parameter $θ∈R^K$ is a vector of polynomial coefficients.
\n",
"These spectral filters represented by $Kth$-order polynomials of the Laplacian are exactly $K$-localized. Besides, their learning complexity is $O(K)$, the support size of the filter, and thus the same complexity as classical CNNs."
]
},
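{
"cell_type": "markdown",
"metadata": {},
"source": [
"A simple way to see the $K$-localization (a toy check, not from the paper): $(Δ^k)_{ij}$ is non-zero only if nodes $i$ and $j$ are at most $k$ hops apart, so a polynomial of order $k$ in the Laplacian mixes information only within a $k$-hop neighbourhood:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# toy 5-node path graph: node 0 and node 4 are 4 hops apart\n",
"A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)\n",
"L = np.diag(A.sum(axis=1)) - A\n",
"\n",
"for k in range(5):\n",
"    Lk = np.linalg.matrix_power(L, k)\n",
"    # (L^k)[0, 4] becomes non-zero only once k reaches the hop distance (k = 4)\n",
"    print(k, Lk[0, 4] != 0)\n",
"```"
]
},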
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Is everything fixed now?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"No, the cost to filter a signal is still high with $O(n^2)$ operations because of the multiplication with the Fourier basis U. (calculating the eigen-decomposition explicitly and multiply signal with Fourier basis)\n",
"\n",
"To bypass this problem, the authors parametrize $g_θ(Δ)$ as a polynomial function that can be computed recursively from $Δ$. One such polynomial, traditionally used in Graph Signal Processing to approximate kernels, is the Chebyshev expansion. The Chebyshev polynomial $T_k(x)$ of order $k$ may be computed by the stable recurrence relation $T_k(x) = 2xT_{k−1}(x)−T_{k−2}(x)$ with $T_0=1$ and $T_1=x$.\n",
"\n",
"The spectral filter is now given by a truncated Chebyshev polynomial:\n",
"\n",
"$$g_θ(\\barΔ)=Φg(\\barΛ)Φ^T=\\sum_{k=0}^{K-1}θ_kT_k(\\barΔ)$$\n",
"\n",
"where, $Θ∈R^K$ now represents a vector of the Chebyshev coefficients, the $\\barΔ$ denotes the rescaled $Δ$. (This rescaling is necessary as the Chebyshev polynomial form orthonormal basis in the interval [-1,1] and the eigenvalues of original Laplacian lies in the interval $[0,λ_{max}]$). Scaling is done as $\\barΔ= 2Δ/λ_{max}−I_n$.\n",
"\n",
"The filtering operation can now be written as $y=g_θ(Δ)x=\\sum_{k=0}^{K-1}θ_kT_k(\\barΔ)x$, where, $x_{i,k}$ are the input feature maps, $Θ_k$ are the trainable parameters."
]
},
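{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal NumPy sketch of this recurrence (dense matrices and hand-picked coefficients for clarity; the actual implementation further down uses sparse tensors and learns the coefficients):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# toy graph (dense here for clarity; the real code uses sparse matrices)\n",
"A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)\n",
"L = np.diag(A.sum(axis=1)) - A\n",
"lmax = np.linalg.eigvalsh(L).max()            # in practice lmax is estimated cheaply\n",
"L_bar = 2.0 * L / lmax - np.eye(L.shape[0])   # rescale the spectrum into [-1, 1]\n",
"\n",
"x = np.array([1.0, 0.0, 2.0, -1.0])           # input signal\n",
"K = 3\n",
"theta = np.array([0.5, -0.2, 0.1])            # Chebyshev coefficients (learned in the model)\n",
"\n",
"x0, x1 = x, L_bar @ x                          # T_0(L_bar) x and T_1(L_bar) x\n",
"Tx = [x0, x1]\n",
"for k in range(2, K):\n",
"    x2 = 2 * L_bar @ x1 - x0                   # T_k = 2 L_bar T_{k-1} - T_{k-2}\n",
"    Tx.append(x2)\n",
"    x0, x1 = x1, x2\n",
"\n",
"y = sum(theta[k] * Tx[k] for k in range(K))    # filtered signal, no eigen-decomposition needed\n",
"print(y)\n",
"```"
]
},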
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Pooling Operation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In case of images, the pooling operation consists of taking a fixed size patch of pixels, say 2x2, and keeping only the pixel with max value (assuming you apply max pooling) and discarding the other pixels from the patch. Similar concept of pooling can be applied to graphs.\n",
"\n",
"Defferrard et al. address this issue by using the coarsening phase of the Graclus multilevel clustering algorithm. Graclus’ greedy rule consists, at each coarsening level, in picking an unmarked vertex $i$ and matching it with one of its unmarked neighbors $j$ that maximizes the local normalized cut $Wij(1/di+ 1/dj)$. The two matched vertices are then marked and the coarsened weights are set as the sum of their weights. The matching is repeated until all nodes have been explored. This is an very fast coarsening scheme which divides the number of nodes by approximately two from one level to the next coarser level. After coarsening, the nodes of the input graph and its coarsened version are rearranged into a balanced binary tree. Arbitrarily aggregating the balanced binary tree from bottom to top will arrange similar nodes together. Pooling such a rearranged signal is much more efficient than pooling the original. The following figure shows the example of graph coarsening and pooling.\n",
"\n"
]
},
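{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch of why this rearrangement helps (with the simplifying assumption that fake nodes are padded with $-\\infty$ so they never win the max), pooling the reordered signal reduces to a standard strided 1D max pooling:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# toy rearranged signal on 8 nodes; fake nodes added during coarsening are\n",
"# padded with -inf here so they can never win the max (an assumption for this sketch)\n",
"x = np.array([0.3, 0.9, -np.inf, 0.1, 0.7, 0.2, 0.5, -np.inf])\n",
"\n",
"# after the binary-tree rearrangement, children of the same parent sit next to\n",
"# each other, so graph max-pooling by a factor of 2 is a plain 1D max over pairs\n",
"pooled = x.reshape(-1, 2).max(axis=1)\n",
"print(pooled)   # [0.9 0.1 0.7 0.5]\n",
"```"
]
},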
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Implementing ChebNET in PyTorch"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "H0A29S0FAJLI"
},
"outputs": [],
"source": [
"import torch\n",
"from torch.autograd import Variable\n",
"import torch.nn.functional as F\n",
"import torch.nn as nn\n",
"import collections\n",
"import time\n",
"import numpy as np\n",
"from tensorflow.examples.tutorials.mnist import input_data\n",
"\n",
"import sys\n",
"\n",
"import os"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"colab_type": "code",
"id": "2CMl-YsaAwgq",
"outputId": "aa1ae30a-7420-49f8-bf89-952ef6133232"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"cuda available\n"
]
}
],
"source": [
"if torch.cuda.is_available():\n",
" print('cuda available')\n",
" dtypeFloat = torch.cuda.FloatTensor\n",
" dtypeLong = torch.cuda.LongTensor\n",
" torch.cuda.manual_seed(1)\n",
"else:\n",
" print('cuda not available')\n",
" dtypeFloat = torch.FloatTensor\n",
" dtypeLong = torch.LongTensor\n",
" torch.manual_seed(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data Prepration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 535
},
"colab_type": "code",
"id": "SB56sMJSAzt4",
"outputId": "48f13a05-b8e6-4251-cd66-9841de43506e"
},
"outputs": [],
"source": [
"# load data in folder datasets\n",
"mnist = input_data.read_data_sets('datasets', one_hot=False)\n",
"\n",
"train_data = mnist.train.images.astype(np.float32)\n",
"val_data = mnist.validation.images.astype(np.float32)\n",
"test_data = mnist.test.images.astype(np.float32)\n",
"train_labels = mnist.train.labels\n",
"val_labels = mnist.validation.labels\n",
"test_labels = mnist.test.labels"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"colab_type": "code",
"id": "hyjuGFcVA_Xj",
"outputId": "f19604d0-84a0-471d-9ced-b9dc46aaeab1"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"nb edges: 6396\n"
]
}
],
"source": [
"from grid_graph import grid_graph\n",
"from coarsening import coarsen\n",
"from coarsening import lmax_L\n",
"from coarsening import perm_data\n",
"from coarsening import rescale_L\n",
"\n",
"# Construct graph\n",
"t_start = time.time()\n",
"grid_side = 28\n",
"number_edges = 8\n",
"metric = 'euclidean'\n",
"A = grid_graph(grid_side,number_edges,metric) # create graph of Euclidean grid"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 121
},
"colab_type": "code",
"id": "ocadJoz3BahS",
"outputId": "12a160e8-147c-4cfb-92d9-337f08cc2f7f"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Heavy Edge Matching coarsening with Xavier version\n",
"Layer 0: M_0 = |V| = 976 nodes (192 added), |E| = 3198 edges\n",
"Layer 1: M_1 = |V| = 488 nodes (83 added), |E| = 1619 edges\n",
"Layer 2: M_2 = |V| = 244 nodes (29 added), |E| = 794 edges\n",
"Layer 3: M_3 = |V| = 122 nodes (7 added), |E| = 396 edges\n",
"Layer 4: M_4 = |V| = 61 nodes (0 added), |E| = 194 edges\n"
]
}
],
"source": [
"# Compute coarsened graphs\n",
"coarsening_levels = 4\n",
"L, perm = coarsen(A, coarsening_levels)"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"colab_type": "code",
"id": "yJCEugS8CCFo",
"outputId": "c2eb3fe7-798a-4cde-e69f-d6b5a7f47daa"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"lmax: [1.3857538, 1.3440963, 1.1994357, 1.0239158]\n"
]
}
],
"source": [
"# Compute max eigenvalue of graph Laplacians\n",
"lmax = []\n",
"for i in range(coarsening_levels):\n",
" lmax.append(lmax_L(L[i]))\n",
"print('lmax: ' + str([lmax[i] for i in range(coarsening_levels)]))"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 86
},
"colab_type": "code",
"id": "-lxQNsyxHKvj",
"outputId": "cc4e9c8d-0d2c-4542-b52d-fea8533b8762"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(55000, 976)\n",
"(5000, 976)\n",
"(10000, 976)\n",
"Execution time: 4.18s\n"
]
}
],
"source": [
"# Reindex nodes to satisfy a binary tree structure\n",
"train_data = perm_data(train_data, perm)\n",
"val_data = perm_data(val_data, perm)\n",
"test_data = perm_data(test_data, perm)\n",
"\n",
"print(train_data.shape)\n",
"print(val_data.shape)\n",
"print(test_data.shape)\n",
"\n",
"print('Execution time: {:.2f}s'.format(time.time() - t_start))\n",
"del perm"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "pLD_gJMwHW6b"
},
"outputs": [],
"source": [
"# class definitions\n",
"\n",
"class my_sparse_mm(torch.autograd.Function):\n",
" \"\"\"\n",
" Implementation of a new autograd function for sparse variables, \n",
" called \"my_sparse_mm\", by subclassing torch.autograd.Function \n",
" and implementing the forward and backward passes.\n",
" \"\"\"\n",
" \n",
" def forward(self, W, x): # W is SPARSE\n",
" self.save_for_backward(W, x)\n",
" y = torch.mm(W, x)\n",
" return y\n",
" \n",
" def backward(self, grad_output):\n",
" W, x = self.saved_tensors \n",
" grad_input = grad_output.clone()\n",
" grad_input_dL_dW = torch.mm(grad_input, x.t()) \n",
" grad_input_dL_dx = torch.mm(W.t(), grad_input )\n",
" return grad_input_dL_dW, grad_input_dL_dx\n",
" \n",
" \n",
"class Graph_ConvNet_LeNet5(nn.Module):\n",
" \n",
" def __init__(self, net_parameters):\n",
" \n",
" print('Graph ConvNet: LeNet5')\n",
" \n",
" super(Graph_ConvNet_LeNet5, self).__init__()\n",
" \n",
" # parameters\n",
" D, CL1_F, CL1_K, CL2_F, CL2_K, FC1_F, FC2_F = net_parameters\n",
" FC1Fin = CL2_F*(D//16)\n",
" \n",
" # graph CL1\n",
" self.cl1 = nn.Linear(CL1_K, CL1_F) \n",
" Fin = CL1_K; Fout = CL1_F;\n",
" scale = np.sqrt( 2.0/ (Fin+Fout) )\n",
" self.cl1.weight.data.uniform_(-scale, scale)\n",
" self.cl1.bias.data.fill_(0.0)\n",
" self.CL1_K = CL1_K; self.CL1_F = CL1_F; \n",
" \n",
" # graph CL2\n",
" self.cl2 = nn.Linear(CL2_K*CL1_F, CL2_F) \n",
" Fin = CL2_K*CL1_F; Fout = CL2_F;\n",
" scale = np.sqrt( 2.0/ (Fin+Fout) )\n",
" self.cl2.weight.data.uniform_(-scale, scale)\n",
" self.cl2.bias.data.fill_(0.0)\n",
" self.CL2_K = CL2_K; self.CL2_F = CL2_F; \n",
"\n",
" # FC1\n",
" self.fc1 = nn.Linear(FC1Fin, FC1_F) \n",
" Fin = FC1Fin; Fout = FC1_F;\n",
" scale = np.sqrt( 2.0/ (Fin+Fout) )\n",
" self.fc1.weight.data.uniform_(-scale, scale)\n",
" self.fc1.bias.data.fill_(0.0)\n",
" self.FC1Fin = FC1Fin\n",
" \n",
" # FC2\n",
" self.fc2 = nn.Linear(FC1_F, FC2_F)\n",
" Fin = FC1_F; Fout = FC2_F;\n",
" scale = np.sqrt( 2.0/ (Fin+Fout) )\n",
" self.fc2.weight.data.uniform_(-scale, scale)\n",
" self.fc2.bias.data.fill_(0.0)\n",
"\n",
" # nb of parameters\n",
" nb_param = CL1_K* CL1_F + CL1_F # CL1\n",
" nb_param += CL2_K* CL1_F* CL2_F + CL2_F # CL2\n",
" nb_param += FC1Fin* FC1_F + FC1_F # FC1\n",
" nb_param += FC1_F* FC2_F + FC2_F # FC2\n",
" print('nb of parameters=',nb_param,'\\n')\n",
" \n",
" \n",
" def init_weights(self, W, Fin, Fout):\n",
"\n",
" scale = np.sqrt( 2.0/ (Fin+Fout) )\n",
" W.uniform_(-scale, scale)\n",
"\n",
" return W\n",
" \n",
" \n",
" def graph_conv_cheby(self, x, cl, L, lmax, Fout, K):\n",
"\n",
" # parameters\n",
" # B = batch size\n",
" # V = nb vertices\n",
" # Fin = nb input features\n",
" # Fout = nb output features\n",
" # K = Chebyshev order & support size\n",
" B, V, Fin = x.size(); B, V, Fin = int(B), int(V), int(Fin) \n",
"\n",
" # rescale Laplacian\n",
" lmax = lmax_L(L)\n",
" L = rescale_L(L, lmax) \n",
" \n",
" # convert scipy sparse matric L to pytorch\n",
" L = L.tocoo()\n",
" indices = np.column_stack((L.row, L.col)).T \n",
" indices = indices.astype(np.int64)\n",
" indices = torch.from_numpy(indices)\n",
" indices = indices.type(torch.LongTensor)\n",
" L_data = L.data.astype(np.float32)\n",
" L_data = torch.from_numpy(L_data) \n",
" L_data = L_data.type(torch.FloatTensor)\n",
" L = torch.sparse.FloatTensor(indices, L_data, torch.Size(L.shape))\n",
" L = Variable( L , requires_grad=False)\n",
" if torch.cuda.is_available():\n",
" L = L.cuda()\n",
" \n",
" # transform to Chebyshev basis\n",
" x0 = x.permute(1,2,0).contiguous() # V x Fin x B\n",
" x0 = x0.view([V, Fin*B]) # V x Fin*B\n",
" x = x0.unsqueeze(0) # 1 x V x Fin*B\n",
" \n",
" def concat(x, x_):\n",
" x_ = x_.unsqueeze(0) # 1 x V x Fin*B\n",
" return torch.cat((x, x_), 0) # K x V x Fin*B \n",
" \n",
" if K > 1: \n",
" x1 = my_sparse_mm()(L,x0) # V x Fin*B\n",
" x = torch.cat((x, x1.unsqueeze(0)),0) # 2 x V x Fin*B\n",
" for k in range(2, K):\n",
" x2 = 2 * my_sparse_mm()(L,x1) - x0 \n",
" x = torch.cat((x, x2.unsqueeze(0)),0) # M x Fin*B\n",
" x0, x1 = x1, x2 \n",
" \n",
" x = x.view([K, V, Fin, B]) # K x V x Fin x B \n",
" x = x.permute(3,1,2,0).contiguous() # B x V x Fin x K \n",
" x = x.view([B*V, Fin*K]) # B*V x Fin*K\n",
" \n",
" # Compose linearly Fin features to get Fout features\n",
" x = cl(x) # B*V x Fout \n",
" x = x.view([B, V, Fout]) # B x V x Fout\n",
" \n",
" return x\n",
" \n",
" \n",
" # Max pooling of size p. Must be a power of 2.\n",
" def graph_max_pool(self, x, p): \n",
" if p > 1: \n",
" x = x.permute(0,2,1).contiguous() # x = B x F x V\n",
" x = nn.MaxPool1d(p)(x) # B x F x V/p \n",
" x = x.permute(0,2,1).contiguous() # x = B x V/p x F\n",
" return x \n",
" else:\n",
" return x \n",
" \n",
" \n",
" def forward(self, x, d, L, lmax):\n",
" \n",
" # graph CL1\n",
" x = x.unsqueeze(2) # B x V x Fin=1 \n",
" x = self.graph_conv_cheby(x, self.cl1, L[0], lmax[0], self.CL1_F, self.CL1_K)\n",
" x = F.relu(x)\n",
" x = self.graph_max_pool(x, 4)\n",
" \n",
" # graph CL2\n",
" x = self.graph_conv_cheby(x, self.cl2, L[2], lmax[2], self.CL2_F, self.CL2_K)\n",
" x = F.relu(x)\n",
" x = self.graph_max_pool(x, 4)\n",
" \n",
" # FC1\n",
" x = x.view(-1, self.FC1Fin)\n",
" x = self.fc1(x)\n",
" x = F.relu(x)\n",
" x = nn.Dropout(d)(x)\n",
" \n",
" # FC2\n",
" x = self.fc2(x)\n",
" \n",
" return x\n",
" \n",
" \n",
" def loss(self, y, y_target, l2_regularization):\n",
" \n",
" loss = nn.CrossEntropyLoss()(y,y_target)\n",
"\n",
" l2_loss = 0.0\n",
" for param in self.parameters():\n",
" data = param* param\n",
" l2_loss += data.sum()\n",
" \n",
" loss += 0.5* l2_regularization* l2_loss\n",
" \n",
" return loss\n",
" \n",
" \n",
" def update(self, lr):\n",
" \n",
" update = torch.optim.SGD( self.parameters(), lr=lr, momentum=0.9 )\n",
" \n",
" return update\n",
" \n",
" \n",
" def update_learning_rate(self, optimizer, lr):\n",
" \n",
" for param_group in optimizer.param_groups:\n",
" param_group['lr'] = lr\n",
"\n",
" return optimizer\n",
"\n",
" \n",
" def evaluation(self, y_predicted, test_l):\n",
" \n",
" _, class_predicted = torch.max(y_predicted.data, 1)\n",
" return 100.0* (class_predicted == test_l).sum()/ y_predicted.size(0)\n"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "Prw5O_pzHpSi"
},
"outputs": [],
"source": [
"# network parameters\n",
"D = train_data.shape[1]\n",
"CL1_F = 32\n",
"CL1_K = 25\n",
"CL2_F = 64\n",
"CL2_K = 25\n",
"FC1_F = 512\n",
"FC2_F = 10\n",
"net_parameters = [D, CL1_F, CL1_K, CL2_F, CL2_K, FC1_F, FC2_F]"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 173
},
"colab_type": "code",
"id": "YwatnOugHvCe",
"outputId": "2ec5a709-2001-4e31-b5c7-04195ffe4f8b"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Graph ConvNet: LeNet5\n",
"nb of parameters= 2056586 \n",
"\n",
"Graph_ConvNet_LeNet5(\n",
" (cl1): Linear(in_features=25, out_features=32, bias=True)\n",
" (cl2): Linear(in_features=800, out_features=64, bias=True)\n",
" (fc1): Linear(in_features=3904, out_features=512, bias=True)\n",
" (fc2): Linear(in_features=512, out_features=10, bias=True)\n",
")\n"
]
}
],
"source": [
"# instantiate the object net of the class \n",
"net = Graph_ConvNet_LeNet5(net_parameters)\n",
"if torch.cuda.is_available():\n",
" net.cuda()\n",
"print(net)"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "H2XxYUFaHxJr"
},
"outputs": [],
"source": [
"# Weights\n",
"L_net = list(net.parameters())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Hyper parameters setting"
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"colab_type": "code",
"id": "HNTmNQaIH4UI",
"outputId": "a44ccf61-746e-451f-c0b6-1728c2d98bdd"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"num_epochs= 20 , train_size= 55000 , nb_iter= 11000\n"
]
}
],
"source": [
"# learning parameters\n",
"learning_rate = 0.05\n",
"dropout_value = 0.5\n",
"l2_regularization = 5e-4 \n",
"batch_size = 100\n",
"num_epochs = 20\n",
"train_size = train_data.shape[0]\n",
"nb_iter = int(num_epochs * train_size) // batch_size\n",
"print('num_epochs=',num_epochs,', train_size=',train_size,', nb_iter=',nb_iter)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training & Evaluation "
]
},
{
"cell_type": "code",
"execution_count": 0,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"colab_type": "code",
"id": "JmePIZCLH-eN",
"outputId": "fff9afd8-bab3-4fb1-82d8-5fcfb2164c97"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch= 1, i= 100, loss(batch)= 0.4181, accuray(batch)= 90.00\n",
"epoch= 1, i= 200, loss(batch)= 0.3011, accuray(batch)= 89.00\n",
"epoch= 1, i= 300, loss(batch)= 0.2579, accuray(batch)= 95.00\n",
"epoch= 1, i= 400, loss(batch)= 0.2399, accuray(batch)= 96.00\n",
"epoch= 1, i= 500, loss(batch)= 0.2154, accuray(batch)= 96.00\n",
"epoch= 1, loss(train)= 0.387, accuracy(train)= 90.976, time= 89.638, lr= 0.05000\n",
" accuracy(test) = 97.560 %, time= 9.941\n",
"epoch= 2, i= 100, loss(batch)= 0.2784, accuray(batch)= 95.00\n",
"epoch= 2, i= 200, loss(batch)= 0.2130, accuray(batch)= 94.00\n",
"epoch= 2, i= 300, loss(batch)= 0.1589, accuray(batch)= 98.00\n",
"epoch= 2, i= 400, loss(batch)= 0.1755, accuray(batch)= 98.00\n",
"epoch= 2, i= 500, loss(batch)= 0.2534, accuray(batch)= 95.00\n",
"epoch= 2, loss(train)= 0.186, accuracy(train)= 97.556, time= 89.675, lr= 0.04750\n",
" accuracy(test) = 98.530 %, time= 9.967\n",
"epoch= 3, i= 100, loss(batch)= 0.2390, accuray(batch)= 95.00\n",
"epoch= 3, i= 200, loss(batch)= 0.1573, accuray(batch)= 96.00\n",
"epoch= 3, i= 300, loss(batch)= 0.1216, accuray(batch)= 99.00\n",
"epoch= 3, i= 400, loss(batch)= 0.2020, accuray(batch)= 98.00\n",
"epoch= 3, i= 500, loss(batch)= 0.1684, accuray(batch)= 98.00\n",
"epoch= 3, loss(train)= 0.155, accuracy(train)= 98.264, time= 89.724, lr= 0.04512\n",
" accuracy(test) = 98.570 %, time= 9.961\n",
"epoch= 4, i= 100, loss(batch)= 0.1359, accuray(batch)= 99.00\n",
"epoch= 4, i= 200, loss(batch)= 0.1273, accuray(batch)= 98.00\n",
"epoch= 4, i= 300, loss(batch)= 0.1234, accuray(batch)= 99.00\n",
"epoch= 4, i= 400, loss(batch)= 0.1352, accuray(batch)= 99.00\n",
"epoch= 4, i= 500, loss(batch)= 0.1034, accuray(batch)= 100.00\n",
"epoch= 4, loss(train)= 0.136, accuracy(train)= 98.540, time= 89.717, lr= 0.04287\n",
" accuracy(test) = 98.840 %, time= 9.976\n",
"epoch= 5, i= 100, loss(batch)= 0.1229, accuray(batch)= 99.00\n",
"epoch= 5, i= 200, loss(batch)= 0.0896, accuray(batch)= 100.00\n",
"epoch= 5, i= 300, loss(batch)= 0.0974, accuray(batch)= 100.00\n",
"epoch= 5, i= 400, loss(batch)= 0.1413, accuray(batch)= 98.00\n",
"epoch= 5, i= 500, loss(batch)= 0.0997, accuray(batch)= 99.00\n",
"epoch= 5, loss(train)= 0.123, accuracy(train)= 98.824, time= 89.633, lr= 0.04073\n",
" accuracy(test) = 98.770 %, time= 9.939\n",
"epoch= 6, i= 100, loss(batch)= 0.1051, accuray(batch)= 99.00\n",
"epoch= 6, i= 200, loss(batch)= 0.1060, accuray(batch)= 98.00\n",
"epoch= 6, i= 300, loss(batch)= 0.0966, accuray(batch)= 99.00\n",
"epoch= 6, i= 400, loss(batch)= 0.0942, accuray(batch)= 100.00\n",
"epoch= 6, i= 500, loss(batch)= 0.1439, accuray(batch)= 98.00\n",
"epoch= 6, loss(train)= 0.110, accuracy(train)= 98.998, time= 89.748, lr= 0.03869\n",
" accuracy(test) = 98.860 %, time= 9.885\n",
"epoch= 7, i= 100, loss(batch)= 0.2120, accuray(batch)= 96.00\n",
"epoch= 7, i= 200, loss(batch)= 0.1200, accuray(batch)= 98.00\n",
"epoch= 7, i= 300, loss(batch)= 0.1138, accuray(batch)= 99.00\n",
"epoch= 7, i= 400, loss(batch)= 0.0879, accuray(batch)= 100.00\n",
"epoch= 7, i= 500, loss(batch)= 0.1056, accuray(batch)= 99.00\n",
"epoch= 7, loss(train)= 0.101, accuracy(train)= 99.138, time= 89.662, lr= 0.03675\n",
" accuracy(test) = 98.950 %, time= 9.961\n",
"epoch= 8, i= 100, loss(batch)= 0.1075, accuray(batch)= 99.00\n",
"epoch= 8, i= 200, loss(batch)= 0.0909, accuray(batch)= 99.00\n",
"epoch= 8, i= 300, loss(batch)= 0.0770, accuray(batch)= 100.00\n",
"epoch= 8, i= 400, loss(batch)= 0.0718, accuray(batch)= 100.00\n",
"epoch= 8, i= 500, loss(batch)= 0.0801, accuray(batch)= 99.00\n",
"epoch= 8, loss(train)= 0.094, accuracy(train)= 99.145, time= 89.611, lr= 0.03492\n",
" accuracy(test) = 99.080 %, time= 9.954\n",
"epoch= 9, i= 100, loss(batch)= 0.1403, accuray(batch)= 96.00\n",
"epoch= 9, i= 200, loss(batch)= 0.0803, accuray(batch)= 100.00\n",
"epoch= 9, i= 300, loss(batch)= 0.0778, accuray(batch)= 100.00\n",
"epoch= 9, i= 400, loss(batch)= 0.0727, accuray(batch)= 100.00\n",
"epoch= 9, i= 500, loss(batch)= 0.0680, accuray(batch)= 100.00\n",
"epoch= 9, loss(train)= 0.090, accuracy(train)= 99.169, time= 89.386, lr= 0.03317\n",
" accuracy(test) = 99.020 %, time= 9.926\n",
"epoch= 10, i= 100, loss(batch)= 0.1055, accuray(batch)= 99.00\n",
"epoch= 10, i= 200, loss(batch)= 0.0800, accuray(batch)= 100.00\n",
"epoch= 10, i= 300, loss(batch)= 0.0802, accuray(batch)= 99.00\n",
"epoch= 10, i= 400, loss(batch)= 0.0751, accuray(batch)= 100.00\n",
"epoch= 10, i= 500, loss(batch)= 0.1007, accuray(batch)= 99.00\n",
"epoch= 10, loss(train)= 0.083, accuracy(train)= 99.333, time= 89.463, lr= 0.03151\n",
" accuracy(test) = 99.190 %, time= 9.909\n",
"epoch= 11, i= 100, loss(batch)= 0.0904, accuray(batch)= 98.00\n",
"epoch= 11, i= 200, loss(batch)= 0.0698, accuray(batch)= 100.00\n",
"epoch= 11, i= 300, loss(batch)= 0.0759, accuray(batch)= 99.00\n",
"epoch= 11, i= 400, loss(batch)= 0.0873, accuray(batch)= 99.00\n",
"epoch= 11, i= 500, loss(batch)= 0.1021, accuray(batch)= 98.00\n",
"epoch= 11, loss(train)= 0.080, accuracy(train)= 99.340, time= 88.944, lr= 0.02994\n",
" accuracy(test) = 98.910 %, time= 9.756\n",
"epoch= 12, i= 100, loss(batch)= 0.0617, accuray(batch)= 100.00\n",
"epoch= 12, i= 200, loss(batch)= 0.0923, accuray(batch)= 99.00\n",
"epoch= 12, i= 300, loss(batch)= 0.0951, accuray(batch)= 98.00\n",
"epoch= 12, i= 400, loss(batch)= 0.0960, accuray(batch)= 99.00\n",
"epoch= 12, i= 500, loss(batch)= 0.0774, accuray(batch)= 99.00\n",
"epoch= 12, loss(train)= 0.076, accuracy(train)= 99.431, time= 88.541, lr= 0.02844\n",
" accuracy(test) = 99.110 %, time= 9.737\n",
"epoch= 13, i= 100, loss(batch)= 0.0574, accuray(batch)= 100.00\n",
"epoch= 13, i= 200, loss(batch)= 0.0579, accuray(batch)= 100.00\n",
"epoch= 13, i= 300, loss(batch)= 0.0695, accuray(batch)= 100.00\n",
"epoch= 13, i= 400, loss(batch)= 0.0741, accuray(batch)= 100.00\n",
"epoch= 13, i= 500, loss(batch)= 0.0762, accuray(batch)= 99.00\n",
"epoch= 13, loss(train)= 0.072, accuracy(train)= 99.455, time= 88.890, lr= 0.02702\n",
" accuracy(test) = 99.070 %, time= 9.904\n",
"epoch= 14, i= 100, loss(batch)= 0.0727, accuray(batch)= 99.00\n",
"epoch= 14, i= 200, loss(batch)= 0.0621, accuray(batch)= 100.00\n",
"epoch= 14, i= 300, loss(batch)= 0.0973, accuray(batch)= 99.00\n",
"epoch= 14, i= 400, loss(batch)= 0.0736, accuray(batch)= 100.00\n",
"epoch= 14, i= 500, loss(batch)= 0.0742, accuray(batch)= 99.00\n",
"epoch= 14, loss(train)= 0.069, accuracy(train)= 99.482, time= 89.169, lr= 0.02567\n",
" accuracy(test) = 99.090 %, time= 9.814\n",
"epoch= 15, i= 100, loss(batch)= 0.0727, accuray(batch)= 99.00\n",
"epoch= 15, i= 200, loss(batch)= 0.0880, accuray(batch)= 98.00\n",
"epoch= 15, i= 300, loss(batch)= 0.0589, accuray(batch)= 100.00\n",
"epoch= 15, i= 400, loss(batch)= 0.0529, accuray(batch)= 100.00\n",
"epoch= 15, i= 500, loss(batch)= 0.0529, accuray(batch)= 100.00\n",
"epoch= 15, loss(train)= 0.066, accuracy(train)= 99.567, time= 88.707, lr= 0.02438\n",
" accuracy(test) = 99.120 %, time= 9.723\n",
"epoch= 16, i= 100, loss(batch)= 0.0523, accuray(batch)= 100.00\n",
"epoch= 16, i= 200, loss(batch)= 0.0550, accuray(batch)= 100.00\n",
"epoch= 16, i= 300, loss(batch)= 0.0558, accuray(batch)= 100.00\n",
"epoch= 16, i= 400, loss(batch)= 0.0682, accuray(batch)= 99.00\n",
"epoch= 16, i= 500, loss(batch)= 0.0549, accuray(batch)= 100.00\n",
"epoch= 16, loss(train)= 0.065, accuracy(train)= 99.573, time= 88.703, lr= 0.02316\n",
" accuracy(test) = 99.040 %, time= 9.811\n",
"epoch= 17, i= 100, loss(batch)= 0.0621, accuray(batch)= 100.00\n",
"epoch= 17, i= 200, loss(batch)= 0.0651, accuray(batch)= 99.00\n",
"epoch= 17, i= 300, loss(batch)= 0.0539, accuray(batch)= 100.00\n",
"epoch= 17, i= 400, loss(batch)= 0.0705, accuray(batch)= 99.00\n",
"epoch= 17, i= 500, loss(batch)= 0.0695, accuray(batch)= 99.00\n",
"epoch= 17, loss(train)= 0.062, accuracy(train)= 99.604, time= 88.800, lr= 0.02201\n",
" accuracy(test) = 99.130 %, time= 9.832\n",
"epoch= 18, i= 100, loss(batch)= 0.0560, accuray(batch)= 100.00\n",
"epoch= 18, i= 200, loss(batch)= 0.0697, accuray(batch)= 99.00\n",
"epoch= 18, i= 300, loss(batch)= 0.0637, accuray(batch)= 100.00\n",
"epoch= 18, i= 400, loss(batch)= 0.0576, accuray(batch)= 99.00\n",
"epoch= 18, i= 500, loss(batch)= 0.0584, accuray(batch)= 100.00\n",
"epoch= 18, loss(train)= 0.061, accuracy(train)= 99.653, time= 88.983, lr= 0.02091\n",
" accuracy(test) = 99.150 %, time= 9.936\n",
"epoch= 19, i= 100, loss(batch)= 0.0710, accuray(batch)= 100.00\n",
"epoch= 19, i= 200, loss(batch)= 0.0473, accuray(batch)= 100.00\n",
"epoch= 19, i= 300, loss(batch)= 0.0551, accuray(batch)= 100.00\n",
"epoch= 19, i= 400, loss(batch)= 0.0506, accuray(batch)= 100.00\n",
"epoch= 19, i= 500, loss(batch)= 0.0491, accuray(batch)= 100.00\n",
"epoch= 19, loss(train)= 0.059, accuracy(train)= 99.669, time= 89.217, lr= 0.01986\n",
" accuracy(test) = 99.160 %, time= 9.873\n",
"epoch= 20, i= 100, loss(batch)= 0.0525, accuray(batch)= 100.00\n",
"epoch= 20, i= 200, loss(batch)= 0.0482, accuray(batch)= 100.00\n",
"epoch= 20, i= 300, loss(batch)= 0.0642, accuray(batch)= 100.00\n",
"epoch= 20, i= 400, loss(batch)= 0.0515, accuray(batch)= 100.00\n",
"epoch= 20, i= 500, loss(batch)= 0.0504, accuray(batch)= 100.00\n",
"epoch= 20, loss(train)= 0.057, accuracy(train)= 99.698, time= 89.425, lr= 0.01887\n",
" accuracy(test) = 99.030 %, time= 9.874\n"
]
}
],
"source": [
"# Optimizer\n",
"global_lr = learning_rate\n",
"global_step = 0\n",
"decay = 0.95\n",
"decay_steps = train_size\n",
"lr = learning_rate\n",
"optimizer = net.update(lr) \n",
"\n",
"\n",
"# loop over epochs\n",
"indices = collections.deque()\n",
"for epoch in range(num_epochs): # loop over the dataset multiple times\n",
"\n",
" # reshuffle \n",
" indices.extend(np.random.permutation(train_size)) # rand permutation\n",
" \n",
" # reset time\n",
" t_start = time.time()\n",
" \n",
" # extract batches\n",
" running_loss = 0.0\n",
" running_accuray = 0\n",
" running_total = 0\n",
" while len(indices) >= batch_size:\n",
" \n",
" # extract batches\n",
" batch_idx = [indices.popleft() for i in range(batch_size)]\n",
" train_x, train_y = train_data[batch_idx,:], train_labels[batch_idx]\n",
" train_x = Variable( torch.FloatTensor(train_x).type(dtypeFloat) , requires_grad=False) \n",
" train_y = train_y.astype(np.int64)\n",
" train_y = torch.LongTensor(train_y).type(dtypeLong)\n",
" train_y = Variable( train_y , requires_grad=False) \n",
" \n",
" # Forward \n",
" y = net.forward(train_x, dropout_value, L, lmax)\n",
" loss = net.loss(y,train_y,l2_regularization) \n",
" loss_train = loss.data\n",
" \n",
" # Accuracy\n",
" acc_train = net.evaluation(y,train_y.data)\n",
" \n",
" # backward\n",
" loss.backward()\n",
" \n",
" # Update \n",
" global_step += batch_size # to update learning rate\n",
" optimizer.step()\n",
" optimizer.zero_grad()\n",
" \n",
" # loss, accuracy\n",
" running_loss += loss_train\n",
" running_accuray += acc_train\n",
" running_total += 1\n",
" \n",
" # print \n",
" if not running_total%100: # print every x mini-batches\n",
" print('epoch= %d, i= %4d, loss(batch)= %.4f, accuray(batch)= %.2f' % (epoch+1, running_total, loss_train, acc_train))\n",
" \n",
" \n",
" # print \n",
" t_stop = time.time() - t_start\n",
" print('epoch= %d, loss(train)= %.3f, accuracy(train)= %.3f, time= %.3f, lr= %.5f' % \n",
" (epoch+1, running_loss/running_total, running_accuray/running_total, t_stop, lr))\n",
" \n",
"\n",
" # update learning rate \n",
" lr = global_lr * pow( decay , float(global_step// decay_steps) )\n",
" optimizer = net.update_learning_rate(optimizer, lr)\n",
" \n",
" \n",
" # Test set\n",
" running_accuray_test = 0\n",
" running_total_test = 0\n",
" indices_test = collections.deque()\n",
" indices_test.extend(range(test_data.shape[0]))\n",
" t_start_test = time.time()\n",
" while len(indices_test) >= batch_size:\n",
" batch_idx_test = [indices_test.popleft() for i in range(batch_size)]\n",
" test_x, test_y = test_data[batch_idx_test,:], test_labels[batch_idx_test]\n",
" test_x = Variable( torch.FloatTensor(test_x).type(dtypeFloat) , requires_grad=False) \n",
" y = net.forward(test_x, 0.0, L, lmax) \n",
" test_y = test_y.astype(np.int64)\n",
" test_y = torch.LongTensor(test_y).type(dtypeLong)\n",
" test_y = Variable( test_y , requires_grad=False) \n",
" acc_test = net.evaluation(y,test_y.data)\n",
" running_accuray_test += acc_test\n",
" running_total_test += 1\n",
" t_stop_test = time.time() - t_start_test\n",
" print(' accuracy(test) = %.3f %%, time= %.3f' % (running_accuray_test / running_total_test, t_stop_test))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"\n",
"- [Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering](https://arxiv.org/abs/1606.09375)\n",
"- [Xavier Bresson: \"Convolutional Neural Networks on Graphs\"](https://www.youtube.com/watch?v=v3jZRkvIOIM)"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"name": "Untitled14.ipynb",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.5"
}
},
"nbformat": 4,
"nbformat_minor": 1
}