{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## GraphSAGE (SAmple and aggreGatE) : Inductive Learning on Graphs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Introduction\n",
"\n",
"In the previous blogs, we covered GCN and DeepWalk, which are methods to generate node embeddings. The basic\n",
"idea behind node embedding approaches is to use dimensionality reduction techniques to distill the\n",
"high-dimensional information about a node’s neighborhood into a dense vector embedding. These\n",
"node embeddings can then be fed to downstream machine learning systems and aid in tasks such as\n",
"node classification, clustering, and link prediction. Let us move on to a slightly different problem. Now, we need the embeddings for each node of a graph where new nodes are continuously being added. A possible way to do this would be to rerun the entire model (GCN or DeepWalk) on the new graph, but it is computationally expensive. Today we will be covering GraphSAGE, a method that will allow us to get embeddings for such graphs is a much easier way. Unlike embedding approaches that are based on matrix factorization, GraphSAGE leverage node features (e.g., text attributes, node profile information, node degrees) in order to learn an embedding function that generalizes to unseen nodes.\n",
"\n",
"\n",
"GraphSAGE is capable of learning\n",
"structural information about a node’s role in a graph, despite the fact that it is inherently based on\n",
"features"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### The start\n",
"In the (GCN or DeepWalk) model, the graph was fixed beforehand, let's say the 'Zachary karate club', some model was trained on it, and then we could make predictions about a person X, if he/she went to a particular part of the club after separation.\n",
"\n",
"\n",
"\n",
"\n",
"In this problem, the nodes in this graph were fixed from the beginning, and all the predictions were also to be made on these fixed nodes. In contrast to this, take an example where 'Youtube' videos are the nodes and assume there is an edge between related videos, and say we need to classify these videos depending on the content. If we take the same model as in the previous dataset, we can classify all these videos, but whenever a new video is added to 'YouTube', we will have to retrain the model on the entire new dataset again to classify it. This is not feasible as there will be too many videos or nodes being added everytime for us to retrain.\n",
"\n",
"To solve this issue, what we can do is not to learn embeddings for each node but to learn a function which, given the features and edges joining this node, will give the embeddings for the node. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Aggregating Neighbours\n",
"\n",
"The idea is to generate embeddings, based on the neighbourhood of a given node. In other words, the embedding of a node will depend upon the embedding of the nodes it is connected to. Like in the graph below, the node 1 and 2 are likely to be more similar than node 1 and 5.\n",
"\n",
"\n",
"How can this idea be formulated?\n",
"\n",
"First, we assign random values to the embeddings, and on each step, we will set the value of the embedding as the average of embeddings for all the nodes it is directly connected. The following example shows the working on a simple linear graph.\n",
"\n",
"\n",
"\n",
"This is a straightforward idea, which can be generalized by representing it in the following way,\n",
"\n",
"\n",
"Here The Black Box joining A with B, C, D represents some function of the A, B, C, D. ( In the above animation, it was the mean function). We can replace this box by any function like say sum or max. This function is known as the aggregator function.\n",
"\n",
"Now let's try to make it more general by using not only the neighbours of a node but also the neighbours of the neighbours. The first question is how to make use of neighbours of neighbours. The way which we will be using here is to first generate each node's embedding in the first step by using only its neighbours just like we did above, and then in the second step, we will use these embeddings to generate the new embeddings. Take a look at the following \n",
"\n",
"\n",
"\n",
"The numbers written along with the nodes are the value of embedding at the time, T=0.\n",
"\n",
"Values of embedding after one step are as follows:\n",
"\n",
"\n",
"\n",
"So after one iteration, the values are as follows:\n",
"\n",
"\n",
"\n",
"Repeating the same procedure on this new graph, we get (try verifying yourself)\n",
"\n",
"\n",
"\n",
"Lets try to do some analysis of the aggregation. Represent by $A^{(0)}$ the initial value of embedding of A(i.e. 0.1), by $A^{(1)}$ the value after one layer(i.e. 0.25) similarly $A^{(2)}$, $B^{(0)}$, $B^{(1)}$ and all other values.\n",
"\n",
"Clearly \n",
"\n",
"$$A^{(1)} = \\frac{(A^{(0)} + B^{(0)} + C^{(0)} + D^{(0)})}{4}$$\n",
"\n",
"Similarly\n",
"\n",
"$$A^{(2)} = \\frac{(A^{(1)} + B^{(1)} + C^{(1)} + D^{(1)})}{4}$$\n",
"\n",
"Writing all the value in the RHS in terms of initial values of embeddings we get\n",
"\n",
"$$A^{(2)} = \\frac{\\frac{(A^{(0)} + B^{(0)} + C^{(0)} + D^{(0)})}{4} + \\frac{A^{(0)}+B^{(0)}+C^{(0)}}{3} + \\frac{A^{(0)}+B^{(0)}+C^{(0)}+E^{(0)} +F^{(0)}}{5} + \\frac{A^{(0)}+D^{(0)}}{2}}{4}$$\n",
"\n",
"If you look closely, you will see that all the nodes that were either neighbour of A or neighbour of some neighbour of A are present in this term. It is equivalent to saying that all nodes that have a distance of less than or equal to 2 edges from A are influencing this term. Had there been a node G connected only to node F. then it is clearly at a distance of 3 from A and hence won't be influencing this term.\n",
"\n",
"Generalizing this we can say that if we repeat this produce N times, then all the nodes ( and only those nodes) that are at a within a distance N from the node will be influencing the value of the terms.\n",
"\n",
"If we replace the mean function, with some other function, lets say $F$, then, in this case, we can write,\n",
"\n",
"$$A^{(1)} = F(A^{(0)} , B^{(0)} , C^{(0)} , D^{(0)})$$\n",
"\n",
"Or more generally\n",
"\n",
"$$A^{(k)} = F(A^{(k-1)} , B^{(k-1)} , C^{(k-1)} , D^{(k-1)})$$\n",
"\n",
"If we denote by $N(v)$ the set of neighbours of $v$, so $N(A)=\\{B, C, D\\}$ and $N(A)^{(k)}=\\{B^{(k)}, C^{(k)}, D^{(k)}\\}$, the above equation can be simplified as\n",
"\n",
"$$A^{(k)} = F(A^{(k-1)}, N(A)^{(k-1)} )$$\n",
"\n",
"This process can be visualized as:\n",
"\n",
"\n",
"\n",
"This method is quite effective in generating node embeddings. But there is an issue if a new node is added to the graph how can get its embeddings? This is an issue that cannot be tackled with this type of model. Clearly, something new is needed, but what? "
]
},
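{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make this concrete, here is a minimal sketch of the mean update in plain Python. The edge list and the initial values other than $A^{(0)} = 0.1$ are assumptions chosen to be consistent with the numbers quoted above (they reproduce $A^{(1)} = 0.25$):\n",
"\n",
"```python\n",
"# assumed toy graph: edges A-B, A-C, A-D, B-C, C-E, C-F\n",
"neighbours = {\"A\": [\"B\", \"C\", \"D\"], \"B\": [\"A\", \"C\"],\n",
"              \"C\": [\"A\", \"B\", \"E\", \"F\"], \"D\": [\"A\"],\n",
"              \"E\": [\"C\"], \"F\": [\"C\"]}\n",
"h = {\"A\": 0.1, \"B\": 0.2, \"C\": 0.3, \"D\": 0.4, \"E\": 0.5, \"F\": 0.6}  # assumed T=0 values\n",
"\n",
"def step(h):\n",
"    # new value of a node = mean over the node itself and its neighbours\n",
"    return {v: (h[v] + sum(h[u] for u in neighbours[v])) / (1 + len(neighbours[v]))\n",
"            for v in h}\n",
"\n",
"h1 = step(h)   # h1[\"A\"] -> 0.25, matching A^(1) above\n",
"h2 = step(h1)  # after two steps, every node within distance 2 has influenced A\n",
"```"
]
},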
{
"cell_type": "markdown",
"metadata": {},
"source": [
"One alternative that we can try is to replace the function F by multiple functions such that in the first layer it is \n",
"F1, in second layer F2 and so on, and then fixing the number of layers that we want, let's say k.\n",
"\n",
"So our embedding generator would be like this,\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's formalize our notation a bit now so that it is easy to understand things.\n",
"\n",
"1. Instead of writing $A^{(k)}$ we will be writing $h_{A}^{k}$\n",
"2. Rename the functions $F1$, $F2$ and so on as, $AGGREGATE_{1}$, $AGGREGATE_{2}$ and so on. i.e, $Fk$ becomes $AGGREGATE_{k}$.\n",
"3. There are a total of $K$ aggregation functions.\n",
"3. Let our graph be represented by $G(V, E)$ where $V$ is the set of vertices and $E$ is the set of edges."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## What GraphSAGE proposes?\n",
"\n",
"What we have been doing by now can be written as \n",
"\n",
"Initialise($h_{v}^{0}$) $\\forall v \\in V$ <br>\n",
"for $k=1..K$ do <br>\n",
" for $v\\in V$ do<br>\n",
" $h_{v}^{k}=AGGREGATE_{k}(h_{v}^{k-1}, \\{h_{u}^{k-1} \\forall u \\in N(v)\\})$\n",
"\n",
"$h_{v}^{k}$ will now be containing the embeddings\n",
"\n",
"### Some issues with this\n",
"\n",
"Please take a look at the sample graph that we discussed above, in this graph even though the initial embeddings for $E$ and $F$ were different, but because their neighbours were same they ended with the same embedding, this is not a good thing as there must be at least some difference between their embeddings. \n",
"\n",
"GraphSAGE proposes an interesting idea to deal with it. Rather than passing both of them into the same aggregating function, what we will do is to pass into aggregating function only the neighbours and then concatenating this vector with the vector of that node. This can be written as:\n",
"\n",
"$h_{v}^{k}=CONCAT(h_{v}^{k-1},AGGREGATE_{k}( \\{h_{u}^{k-1} \\forall u \\in N(v)\\}))$\n",
"\n",
"In this way, we can prevent two vectors from attaining exactly the same embedding.\n",
"\n",
"Lets now add some non-linearity to make it more expressive. So it becomes\n",
"\n",
"$h_{v}^{k}=\\sigma[W^{(k)}.CONCAT(h_{v}^{k-1},AGGREGATE_{k}( \\{h_{u}^{k-1} \\forall u \\in N(v)\\}))]$\n",
"\n",
"Where \\sigma is some non-linear function (e.g. RELU, sigmoid, etc.) and $W^{(k)}$ is the weight matrix, each layer will have one such matrix. If you looked closely, you would have seen that there no trainable parameters till now in our model. The $W$ matrix has been added to have something that the model can learn.\n",
"\n",
"One more thing we will add is to normalize the value of h after each iteration, i.e., divide them by their L2 norm, and hence our complete algorithm becomes.\n",
"\n",
"\n",
"\n"
]
},
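{
"cell_type": "markdown",
"metadata": {},
"source": [
"Putting the pieces together, here is a minimal PyTorch sketch of one such layer with the mean aggregator (the dictionary-based graph representation and the names `h`, `neighbours`, `W` are assumptions made for illustration):\n",
"\n",
"```python\n",
"import torch\n",
"\n",
"def graphsage_layer(h, neighbours, W):\n",
"    # h: dict node -> embedding tensor of size d_in\n",
"    # W: weight matrix of shape (d_out, 2 * d_in), one per layer\n",
"    out = {}\n",
"    for v, h_v in h.items():\n",
"        # aggregate only the neighbours (mean aggregator here) ...\n",
"        agg = torch.stack([h[u] for u in neighbours[v]]).mean(dim=0)\n",
"        # ... concatenate with the node's own vector, transform, add non-linearity\n",
"        z = torch.relu(W @ torch.cat([h_v, agg]))\n",
"        out[v] = z / z.norm(p=2)  # L2-normalise after each layer\n",
"    return out\n",
"```"
]
},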
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To get the model learning, we need the loss function. For the general unsupervised learning problem, the following loss problem serves pretty well.\n",
"\n",
"\n",
"\n",
"This graph-based loss function encourages nearby nodes to have similar representations, while enforcing\n",
"that the representations of disparate nodes are highly distinct.\n",
"\n",
"\n",
"For supervised learning, either we can learn the embeddings first and then use those embeddings for the downstream task or combine both the part of learning embeddings and the part of applying these embeddings in the task into a single end to end models and then use the loss for the final part, and backpropagate to learn the embeddings while solving the task simultaneously."
]
},
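{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the notation of the GraphSAGE paper, this unsupervised loss is\n",
"\n",
"$$J_{\\mathcal{G}}(z_{u}) = -\\log\\big(\\sigma(z_{u}^{\\top} z_{v})\\big) - Q \\cdot \\mathbb{E}_{v_{n} \\sim P_{n}(v)}\\log\\big(\\sigma(-z_{u}^{\\top} z_{v_{n}})\\big)$$\n",
"\n",
"where $v$ is a node that co-occurs with $u$ on a fixed-length random walk, $P_{n}$ is a negative-sampling distribution and $Q$ is the number of negative samples. A minimal sketch (the tensor shapes are assumptions):\n",
"\n",
"```python\n",
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"def unsupervised_loss(z_u, z_v, z_neg, Q=10):\n",
"    # z_u, z_v: embeddings of a node and a co-occurring (positive) node, shape (d,)\n",
"    # z_neg:    embeddings of Q negative samples drawn from P_n, shape (Q, d)\n",
"    pos = F.logsigmoid(torch.dot(z_u, z_v))    # pull co-occurring nodes together\n",
"    neg = F.logsigmoid(-(z_neg @ z_u)).mean()  # push random nodes apart\n",
"    return -(pos + Q * neg)\n",
"```"
]
},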
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Aggregator Architectures\n",
"One of the critical difference between GCN and Graphsage is the generalisation of the aggregation function, which was the mean aggregator in GCN. So rather than only taking the average, we use generalised aggregation function in GraphSAGE. GraphSAGE owes its inductivity to its aggregator functions.\n",
"\n",
"## Mean aggregator \n",
"Mean aggregator is as simple as you thought it would be. In mean aggregator we simply\n",
"take the elementwise mean of the vectors in **{h<sub>u</sub><sup>k-1</sup> ∀u ∈ N (v)}**.\n",
"In other words, we can average embeddings of all nodes in the neighbourhood to construct the neighbourhood embedding.\n",
"\n",
"\n",
"## Pool aggregator\n",
"Until now, we were using a weighted average type of approach. But we could also use pooling type of approach; for example, we can do elementwise min or max pooling. So this would be another option where we are taking the messages from our neighbours, transforming them and applying some pooling technique(max-pooling or min pooling).\n",
"\n",
"\n",
"In the above equation, max denotes the elementwise max operator, and σ is a nonlinear activation function (yes you are right it can be ReLU). Please note that the function applied before the max-pooling can be an arbitrarily deep multi-layer perceptron, but in the original paper, simple single-layer architectures are preferred.\n",
"\n",
"## LSTM aggregator\n",
"We could also use a deep neural network like LSTM to learn how to aggregate the neighbours. Order invariance is important in the aggregator function, but since LSTM is not order invariant, we would have to train the LSTM over several random orderings or permutation of neighbours to make sure that this will learn that order is not essential."
]
},
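{
"cell_type": "markdown",
"metadata": {},
"source": [
"Hedged PyTorch sketches of the three aggregators (the class names and the convention of one `(num_neighbours, d)` tensor per node are assumptions for illustration):\n",
"\n",
"```python\n",
"import torch\n",
"import torch.nn as nn\n",
"\n",
"def mean_aggregate(neigh):        # neigh: (n, d) tensor of neighbour embeddings\n",
"    return neigh.mean(dim=0)\n",
"\n",
"class PoolAggregator(nn.Module):  # transform each neighbour, then elementwise max\n",
"    def __init__(self, d):\n",
"        super().__init__()\n",
"        self.fc = nn.Linear(d, d) # single-layer MLP, as preferred in the paper\n",
"    def forward(self, neigh):\n",
"        return torch.relu(self.fc(neigh)).max(dim=0).values\n",
"\n",
"class LSTMAggregator(nn.Module):  # LSTM over a random permutation of the neighbours\n",
"    def __init__(self, d):\n",
"        super().__init__()\n",
"        self.lstm = nn.LSTM(d, d, batch_first=True)\n",
"    def forward(self, neigh):\n",
"        perm = torch.randperm(neigh.size(0))\n",
"        out, _ = self.lstm(neigh[perm].unsqueeze(0))\n",
"        return out[0, -1]         # last hidden state as the aggregate\n",
"```"
]
},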
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Inductive capability\n",
"One interesting property of GraphSAGE is that we can train our model on one subset of the graph and apply this model on another subset of this graph. The reason we can do this is that we can do parameter sharing, i.e. those processing boxes are the same everywhere (W and B are shared across all the computational graphs or architectures). So when a new architecture comes into play, we can borrow the parameters (W and B), do a forward pass, and we get our prediction. \n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This property of GraphSAGE is advantageous in the prediction of protein interaction. For example, we can train our model on protein interaction graph from model organism A (left-hand side in the figure below) and generate embedding on newly collected data from other model organism say B (right-hand side in the figure).\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We know that our old methods like DeepWalk were not able to generalise to a new unseen graph. So if any new node gets added to the graph, we had to train our model from scratch, but since our new method is generalised to the unseen graphs, so to predict the embeddings of the new node we have to make the computational graph of the new node, transfer the parameters to the unseen part of the graph and we can make predictions.\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use this property in social-network (like Facebook). Consider the first graph in the above figure, users in a social-network are represented by the nodes of the graph. Initially, we would train our model on this graph. After some time suppose another user is added in the network, now we don't have to train our model from scratch on the second graph, we will create the computational graph of the new node, borrow the parameters from the already trained model and then we can find the embeddings of the newly added user."
]
},
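{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hypothetical sketch of that workflow, reusing the model trained in the code below (the new node's id, its edges, and the extended feature lookup are made up; the point is that only a forward pass is needed):\n",
"\n",
"```python\n",
"# hypothetical: a new user joins after training has finished\n",
"new_node = 2708                # made-up id for the new node\n",
"adj_lists[new_node] = {0, 42}  # made-up edges to existing nodes\n",
"# assuming `features` has been extended with the new node's feature vector,\n",
"# the trained encoder embeds it with the shared weights -- no retraining\n",
"embedding = graphsage.enc([new_node])\n",
"```"
]
},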
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Implementation in PyTorch"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"from torch.nn import init\n",
"from torch.autograd import Variable\n",
"import torch.nn.functional as F\n",
"import numpy as np\n",
"import time\n",
"import random\n",
"from sklearn.metrics import f1_score\n",
"from collections import defaultdict"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## GraphSAGE class"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
"Simple supervised GraphSAGE model as well as examples running the model\n",
"on the Cora and Pubmed datasets.\n",
"\"\"\"\n",
"\n",
"class MeanAggregator(nn.Module):\n",
" \"\"\"\n",
" Aggregates a node's embeddings using mean of neighbors' embeddings\n",
" \"\"\"\n",
" def __init__(self, features, cuda=False, gcn=False): \n",
" \"\"\"\n",
" Initializes the aggregator for a specific graph.\n",
" features -- function mapping LongTensor of node ids to FloatTensor of feature values.\n",
" cuda -- whether to use GPU\n",
" gcn --- whether to perform concatenation GraphSAGE-style, or add self-loops GCN-style\n",
" \"\"\"\n",
"\n",
" super(MeanAggregator, self).__init__()\n",
"\n",
" self.features = features\n",
" self.cuda = cuda\n",
" self.gcn = gcn\n",
" \n",
" def forward(self, nodes, to_neighs, num_sample=10):\n",
" \"\"\"\n",
" nodes --- list of nodes in a batch\n",
" to_neighs --- list of sets, each set is the set of neighbors for node in batch\n",
" num_sample --- number of neighbors to sample. No sampling if None.\n",
" \"\"\"\n",
" # Local pointers to functions (speed hack)\n",
" _set = set\n",
" if not num_sample is None:\n",
" _sample = random.sample\n",
" samp_neighs = [_set(_sample(to_neigh, \n",
" num_sample,\n",
" )) if len(to_neigh) >= num_sample else to_neigh for to_neigh in to_neighs]\n",
" else:\n",
" samp_neighs = to_neighs\n",
"\n",
" if self.gcn:\n",
" samp_neighs = [samp_neigh + set([nodes[i]]) for i, samp_neigh in enumerate(samp_neighs)]\n",
" unique_nodes_list = list(set.union(*samp_neighs))\n",
" unique_nodes = {n:i for i,n in enumerate(unique_nodes_list)}\n",
" mask = Variable(torch.zeros(len(samp_neighs), len(unique_nodes)))\n",
" column_indices = [unique_nodes[n] for samp_neigh in samp_neighs for n in samp_neigh] \n",
" row_indices = [i for i in range(len(samp_neighs)) for j in range(len(samp_neighs[i]))]\n",
" mask[row_indices, column_indices] = 1\n",
" if self.cuda:\n",
" mask = mask.cuda()\n",
" num_neigh = mask.sum(1, keepdim=True)\n",
" mask = mask.div(num_neigh)\n",
" if self.cuda:\n",
" embed_matrix = self.features(torch.LongTensor(unique_nodes_list).cuda())\n",
" else:\n",
" embed_matrix = self.features(torch.LongTensor(unique_nodes_list))\n",
" to_feats = mask.mm(embed_matrix)\n",
" return to_feats\n",
"\n",
"class Encoder(nn.Module):\n",
" \"\"\"\n",
" Encodes a node's using 'convolutional' GraphSage approach\n",
" \"\"\"\n",
" def __init__(self, features, feature_dim, \n",
" embed_dim, adj_lists, aggregator,\n",
" num_sample=10,\n",
" base_model=None, gcn=False, cuda=False, \n",
" feature_transform=False): \n",
" super(Encoder, self).__init__()\n",
"\n",
" self.features = features\n",
" self.feat_dim = feature_dim\n",
" self.adj_lists = adj_lists\n",
" self.aggregator = aggregator\n",
" self.num_sample = num_sample\n",
" if base_model != None:\n",
" self.base_model = base_model\n",
"\n",
" self.gcn = gcn\n",
" self.embed_dim = embed_dim\n",
" self.cuda = cuda\n",
" self.aggregator.cuda = cuda\n",
" self.weight = nn.Parameter(\n",
" torch.FloatTensor(embed_dim, self.feat_dim if self.gcn else 2 * self.feat_dim))\n",
" init.xavier_uniform(self.weight)\n",
"\n",
" def forward(self, nodes):\n",
" \"\"\"\n",
" Generates embeddings for a batch of nodes.\n",
" nodes -- list of nodes\n",
" \"\"\"\n",
" neigh_feats = self.aggregator.forward(nodes,\n",
" [self.adj_lists[int(node)] for node in nodes], self.num_sample)\n",
" if not self.gcn:\n",
" if self.cuda:\n",
" self_feats = self.features(torch.LongTensor(nodes).cuda())\n",
" else:\n",
" self_feats = self.features(torch.LongTensor(nodes))\n",
" combined = torch.cat([self_feats, neigh_feats], dim=1)\n",
" else:\n",
" combined = neigh_feats\n",
" combined = F.relu(self.weight.mm(combined.t()))\n",
" return combined\n",
"\n",
"\n",
"class SupervisedGraphSage(nn.Module):\n",
"\n",
" def __init__(self, num_classes, enc):\n",
" super(SupervisedGraphSage, self).__init__()\n",
" self.enc = enc\n",
" self.xent = nn.CrossEntropyLoss()\n",
"\n",
" self.weight = nn.Parameter(torch.FloatTensor(num_classes, enc.embed_dim))\n",
" init.xavier_uniform(self.weight)\n",
"\n",
" def forward(self, nodes):\n",
" embeds = self.enc(nodes)\n",
" scores = self.weight.mm(embeds)\n",
" return scores.t()\n",
"\n",
" def loss(self, nodes, labels):\n",
" scores = self.forward(nodes)\n",
" return self.xent(scores, labels.squeeze())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load and Run"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0 1.942228078842163\n",
"1 1.921658992767334\n",
"2 1.9006750583648682\n",
"3 1.873147964477539\n",
"4 1.833079218864441\n",
"5 1.793070912361145\n",
"6 1.7698112726211548\n",
"7 1.7396035194396973\n",
"8 1.6929861307144165\n",
"9 1.6441305875778198\n",
"10 1.5536351203918457\n",
"11 1.5488044023513794\n",
"12 1.4822677373886108\n",
"13 1.468451738357544\n",
"14 1.3974864482879639\n",
"15 1.3166505098342896\n",
"16 1.2732900381088257\n",
"17 1.195784330368042\n",
"18 1.0451050996780396\n",
"19 0.9867343306541443\n",
"20 0.9533907175064087\n",
"21 0.9308909177780151\n",
"22 0.8159271478652954\n",
"23 0.7914730906486511\n",
"24 0.7673667669296265\n",
"25 0.7801153063774109\n",
"26 0.677147626876831\n",
"27 0.6584917902946472\n",
"28 0.6916540861129761\n",
"29 0.7556794881820679\n",
"30 0.7246103882789612\n",
"31 1.0994600057601929\n",
"32 0.8346526622772217\n",
"33 1.0626455545425415\n",
"34 0.5540371537208557\n",
"35 0.4707820415496826\n",
"36 0.47333627939224243\n",
"37 0.4838956296443939\n",
"38 0.4711683988571167\n",
"39 0.4963235855102539\n",
"40 0.48719295859336853\n",
"41 0.4026302695274353\n",
"42 0.35586124658584595\n",
"43 0.4207482933998108\n",
"44 0.41222259402275085\n",
"45 0.3622773289680481\n",
"46 0.33898842334747314\n",
"47 0.3108625113964081\n",
"48 0.34005632996559143\n",
"49 0.38214144110679626\n",
"50 0.314105749130249\n",
"51 0.3763721287250519\n",
"52 0.33562469482421875\n",
"53 0.40695565938949585\n",
"54 0.29900142550468445\n",
"55 0.36123421788215637\n",
"56 0.3518748879432678\n",
"57 0.3004622459411621\n",
"58 0.31813153624534607\n",
"59 0.25553494691848755\n",
"60 0.30214229226112366\n",
"61 0.30288413166999817\n",
"62 0.35318124294281006\n",
"63 0.2550695240497589\n",
"64 0.24285988509655\n",
"65 0.2586570382118225\n",
"66 0.27572184801101685\n",
"67 0.30874624848365784\n",
"68 0.25411731004714966\n",
"69 0.24063177406787872\n",
"70 0.2535572648048401\n",
"71 0.19541779160499573\n",
"72 0.20859725773334503\n",
"73 0.1995910108089447\n",
"74 0.20250269770622253\n",
"75 0.2077709287405014\n",
"76 0.20552675426006317\n",
"77 0.19936150312423706\n",
"78 0.24609258770942688\n",
"79 0.1969422698020935\n",
"80 0.19751787185668945\n",
"81 0.20629757642745972\n",
"82 0.19819925725460052\n",
"83 0.20762889087200165\n",
"84 0.17974525690078735\n",
"85 0.16918545961380005\n",
"86 0.2033073604106903\n",
"87 0.11312698572874069\n",
"88 0.19385862350463867\n",
"89 0.19625785946846008\n",
"90 0.20826341211795807\n",
"91 0.18184316158294678\n",
"92 0.17827709019184113\n",
"93 0.19169804453849792\n",
"94 0.1731080412864685\n",
"95 0.18547363579273224\n",
"96 0.13688258826732635\n",
"97 0.1454528272151947\n",
"98 0.18186761438846588\n",
"99 0.1714990884065628\n",
"Validation F1: 0.842\n",
"Average batch time: 0.04003107070922852\n"
]
}
],
"source": [
"def load_cora():\n",
" num_nodes = 2708\n",
" num_feats = 1433\n",
" feat_data = np.zeros((num_nodes, num_feats))\n",
" labels = np.empty((num_nodes,1), dtype=np.int64)\n",
" node_map = {}\n",
" label_map = {}\n",
" with open(\"./cora/cora.content\") as fp:\n",
" for i,line in enumerate(fp):\n",
" info = line.strip().split()\n",
" feat_data[i,:] = [float(x) for x in info[1:-1]]\n",
" node_map[info[0]] = i\n",
" if not info[-1] in label_map:\n",
" label_map[info[-1]] = len(label_map)\n",
" labels[i] = label_map[info[-1]]\n",
"\n",
" adj_lists = defaultdict(set)\n",
" with open(\"./cora/cora.cites\") as fp:\n",
" for i,line in enumerate(fp):\n",
" info = line.strip().split()\n",
" paper1 = node_map[info[0]]\n",
" paper2 = node_map[info[1]]\n",
" adj_lists[paper1].add(paper2)\n",
" adj_lists[paper2].add(paper1)\n",
" return feat_data, labels, adj_lists\n",
"\n",
"def run_cora():\n",
" np.random.seed(1)\n",
" random.seed(1)\n",
" \n",
" num_nodes = 2708\n",
" feat_data, labels, adj_lists = load_cora()\n",
" \n",
" features = nn.Embedding(2708, 1433)\n",
" features.weight = nn.Parameter(torch.FloatTensor(feat_data),\n",
" requires_grad=False)\n",
"\n",
" agg1 = MeanAggregator(features, cuda=True)\n",
" \n",
" enc1 = Encoder(features, 1433, 128, adj_lists, agg1, gcn=True,\n",
" cuda=False)\n",
" agg2 = MeanAggregator(lambda nodes : enc1(nodes).t(),\n",
" cuda=False)\n",
" enc2 = Encoder(lambda nodes : enc1(nodes).t(),\n",
" enc1.embed_dim, 128, adj_lists, agg2, \n",
" base_model=enc1, gcn=True, cuda=False)\n",
" \n",
" enc1.num_samples = 5\n",
" enc2.num_samples = 5\n",
"\n",
" graphsage = SupervisedGraphSage(7, enc2)\n",
" rand_indices = np.random.permutation(num_nodes)\n",
" test = rand_indices[:1000]\n",
" val = rand_indices[1000:1500]\n",
" train = list(rand_indices[1500:])\n",
"\n",
" optimizer = torch.optim.SGD(filter(lambda p : p.requires_grad,\n",
" graphsage.parameters()), lr=0.7)\n",
" times = []\n",
" \n",
" for batch in range(100):\n",
" batch_nodes = train[:256]\n",
" random.shuffle(train)\n",
" start_time = time.time()\n",
" optimizer.zero_grad()\n",
" loss = graphsage.loss(batch_nodes, \n",
" Variable(torch.LongTensor(labels[np.array(batch_nodes)])))\n",
" loss.backward()\n",
" optimizer.step()\n",
" end_time = time.time()\n",
" times.append(end_time-start_time)\n",
" print (batch, loss.item())\n",
"\n",
" val_output = graphsage.forward(val)\n",
" \n",
" print (\"Validation F1:\", f1_score(labels[val],\n",
" val_output.data.numpy().argmax(axis=1),\n",
" average=\"micro\"))\n",
" \n",
" print (\"Average batch time:\", np.mean(times))\n",
"\n",
"\n",
"if __name__ == \"__main__\":\n",
" run_cora()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"- https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf\n",
"- [Graph Node Embedding Algorithms (Stanford - Fall 2019) by Jure Leskovec](https://www.youtube.com/watch?v=7JELX6DiUxQ)\n",
"- [Jure Leskovec: \"Large-scale Graph Representation Learning\"](https://www.youtube.com/watch?v=oQL4E1gK3VU)\n",
"- [Jure Leskovec \"Deep Learning on Graphs\"\n",
"](https://www.youtube.com/watch?v=MIAbDNAxChI)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}