{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "-Jv7Y4hXwt0j"
},
"source": [
"# Question duplicates\n",
"\n",
"We will explore Siamese networks applied to natural language processing. We will further explore the fundamentals of TensorFlow and we will be able to implement a more complicated structure using it. By completing this project, we will learn how to implement models with different architectures. \n",
"\n",
"\n",
"## Outline\n",
"\n",
"- [Overview](#0)\n",
"- [Part 1: Importing the Data](#1)\n",
" - [1.1 Loading in the data](#1.1)\n",
" - [1.2 Learn question encoding](#1.2)\n",
"- [Part 2: Defining the Siamese model](#2)\n",
" - [2.1 Understanding the Siamese Network](#2.1)\n",
" - [Exercise 01](#ex01)\n",
" - [2.2 Hard Negative Mining](#2.2)\n",
" - [Exercise 02](#ex02)\n",
"- [Part 3: Training](#3)\n",
" - [3.1 Training the model](#3.1)\n",
" - [Exercise 03](#ex03)\n",
"- [Part 4: Evaluation](#4)\n",
" - [4.1 Evaluating your siamese network](#4.1)\n",
" - [4.2 Classify](#4.2)\n",
" - [Exercise 04](#ex04)\n",
"- [Part 5: Testing with your own questions](#5)\n",
" - [Exercise 05](#ex05)\n",
"- [On Siamese networks](#6)\n",
"\n",
"<a name='0'></a>\n",
"### Overview\n",
"In particular, in this assignment you will: \n",
"\n",
"- Learn about Siamese networks\n",
"- Understand how the triplet loss works\n",
"- Understand how to evaluate accuracy\n",
"- Use cosine similarity between the model's outputted vectors\n",
"- Use the data generator to get batches of questions\n",
"- Predict using your own model\n",
"\n",
"By now, you should be familiar with Tensorflow and know how to make use of it to define your model. We will start this homework by asking you to create a vocabulary in a similar way as you did in the previous assignments. After this, you will build a classifier that will allow you to identify whether two questions are the same or not. \n",
"\n",
"<img src = \"./img/meme.png\" style=\"width:550px;height:300px;\"/>\n",
"\n",
"\n",
"Your model will take in the two questions, which will be transformed into tensors, each tensor will then go through embeddings, and after that an LSTM. Finally you will compare the outputs of the two subnetworks using cosine similarity. \n",
"\n",
"Before taking a deep dive into the model, you will start by importing the data set, and exploring it a bit.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "4sF9Hqzgwt0l"
},
"source": [
"###### <a name='1'></a>\n",
"# Part 1: Importing the Data\n",
"<a name='1.1'></a>\n",
"### 1.1 Loading in the data\n",
"\n",
"You will be using the 'Quora question answer' dataset to build a model that can identify similar questions. This is a useful task because you don't want to have several versions of the same question posted. Several times when teaching I end up responding to similar questions on piazza, or on other community forums. This data set has already been labeled for you. Run the cell below to import some of the packages you will be using. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"colab_type": "code",
"deletable": false,
"editable": false,
"id": "zdACgs491cs2",
"outputId": "b31042ef-845b-46b8-c783-185e96b135f7"
},
"outputs": [],
"source": [
"import os\n",
"import numpy as np\n",
"import pandas as pd\n",
"import random as rnd\n",
"import tensorflow as tf\n"
]
},
{
"cell_type": "code",
"execution_count": 85,
"metadata": {
"deletable": false,
"editable": false
},
"outputs": [],
"source": [
"import w3_unittest"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "3GYhQRMspitx"
},
"source": [
"You will now load the data set. We have done some preprocessing for you. If you have taken the deeplearning specialization, this is a slightly different training method than the one you have seen there. If you have not, then don't worry about it, we will explain everything. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 528
},
"colab_type": "code",
"deletable": false,
"editable": false,
"id": "sXWBVGWnpity",
"outputId": "afa90d4d-fed7-43b8-bcba-48c95d600ad5",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of question pairs: 404351\n"
]
},
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>id</th>\n",
" <th>qid1</th>\n",
" <th>qid2</th>\n",
" <th>question1</th>\n",
" <th>question2</th>\n",
" <th>is_duplicate</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>0</td>\n",
" <td>1</td>\n",
" <td>2</td>\n",
" <td>What is the step by step guide to invest in sh...</td>\n",
" <td>What is the step by step guide to invest in sh...</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>1</td>\n",
" <td>3</td>\n",
" <td>4</td>\n",
" <td>What is the story of Kohinoor (Koh-i-Noor) Dia...</td>\n",
" <td>What would happen if the Indian government sto...</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>2</td>\n",
" <td>5</td>\n",
" <td>6</td>\n",
" <td>How can I increase the speed of my internet co...</td>\n",
" <td>How can Internet speed be increased by hacking...</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>3</td>\n",
" <td>7</td>\n",
" <td>8</td>\n",
" <td>Why am I mentally very lonely? How can I solve...</td>\n",
" <td>Find the remainder when [math]23^{24}[/math] i...</td>\n",
" <td>0</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>4</td>\n",
" <td>9</td>\n",
" <td>10</td>\n",
" <td>Which one dissolve in water quikly sugar, salt...</td>\n",
" <td>Which fish would survive in salt water?</td>\n",
" <td>0</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" id qid1 qid2 question1 \\\n",
"0 0 1 2 What is the step by step guide to invest in sh... \n",
"1 1 3 4 What is the story of Kohinoor (Koh-i-Noor) Dia... \n",
"2 2 5 6 How can I increase the speed of my internet co... \n",
"3 3 7 8 Why am I mentally very lonely? How can I solve... \n",
"4 4 9 10 Which one dissolve in water quikly sugar, salt... \n",
"\n",
" question2 is_duplicate \n",
"0 What is the step by step guide to invest in sh... 0 \n",
"1 What would happen if the Indian government sto... 0 \n",
"2 How can Internet speed be increased by hacking... 0 \n",
"3 Find the remainder when [math]23^{24}[/math] i... 0 \n",
"4 Which fish would survive in salt water? 0 "
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data = pd.read_csv(\"./data/questions.csv\")\n",
"N = len(data)\n",
"print('Number of question pairs: ', N)\n",
"data.head()"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "gkSQTu7Ypit0"
},
"source": [
"First, you will need to split the data into a training and test set. The test set will be used later to evaluate your model."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"colab_type": "code",
"deletable": false,
"editable": false,
"id": "z00A7vEMpit1",
"outputId": "c12ae7e8-a959-4f56-aa29-6ad34abc1c81",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Train set: 300000 Test set: 10240\n"
]
}
],
"source": [
"N_train = 300000\n",
"N_test = 10240\n",
"data_train = data[:N_train]\n",
"data_test = data[N_train:N_train + N_test]\n",
"print(\"Train set:\", len(data_train), \"Test set:\", len(data_test))\n",
"del (data) # remove to free memory"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "FbqIRRyEpit4"
},
"source": [
"As explained in the lectures, you will select only the question pairs that are duplicate to train the model. <br>\n",
"You need to build two sets of questions as input for the Siamese network, assuming that question $q1_i$ (question $i$ in the first set) is a duplicate of $q2_i$ (question $i$ in the second set), but all other questions in the second set are not duplicates of $q1_i$. \n",
"The test set uses the original pairs of questions and the status describing if the questions are duplicates.\n",
"\n",
"The following cells are in charge of selecting only duplicate questions from the training set, which will give you a smaller dataset. First find the indexes with duplicate questions.\n",
"\n",
"You will start by identifying the indexes in the training set which correspond to duplicate questions. For this you will define a boolean variable `td_index`, which has value `True` if the index corresponds to duplicate questions and `False` otherwise."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 51
},
"colab_type": "code",
"deletable": false,
"editable": false,
"id": "Xi_TwXxxpit4",
"outputId": "f146046f-9c0d-4d8a-ecf8-8d6a4a5371f7",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of duplicate questions: 111486\n",
"Indexes of first ten duplicate questions: [5, 7, 11, 12, 13, 15, 16, 18, 20, 29]\n"
]
}
],
"source": [
"td_index = data_train['is_duplicate'] == 1\n",
"td_index = [i for i, x in enumerate(td_index) if x]\n",
"print('Number of duplicate questions: ', len(td_index))\n",
"print('Indexes of first ten duplicate questions:', td_index[:10])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You will first need to split the data into a training and test set. The test set will be used later to evaluate your model."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 68
},
"colab_type": "code",
"deletable": false,
"editable": false,
"id": "3I9oXSsKpit7",
"outputId": "6f6bd3a1-219f-4fb3-a524-450c38bf44ba",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me?\n",
"I'm a triple Capricorn (Sun, Moon and ascendant in Capricorn) What does this say about me?\n",
"is_duplicate: 1\n"
]
}
],
"source": [
"print(data_train['question1'][5])\n",
"print(data_train['question2'][5])\n",
"print('is_duplicate: ', data_train['is_duplicate'][5])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, keep only the rows in the original training set that correspond to the rows where `td_index` is `True`"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"colab": {},
"colab_type": "code",
"deletable": false,
"editable": false,
"id": "XHpZO58Dss_v",
"tags": []
},
"outputs": [],
"source": [
"Q1_train = np.array(data_train['question1'][td_index])\n",
"Q2_train = np.array(data_train['question2'][td_index])\n",
"\n",
"Q1_test = np.array(data_test['question1'])\n",
"Q2_test = np.array(data_test['question2'])\n",
"y_test = np.array(data_test['is_duplicate'])"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "P5vBkxunpiuB"
},
"source": [
"<br>Let's print to see what your data looks like."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 170
},
"colab_type": "code",
"deletable": false,
"editable": false,
"id": "joyrS1XEpLWn",
"outputId": "3257cde7-3164-40d9-910e-fa91eae917a0",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"TRAINING QUESTIONS:\n",
"\n",
"Question 1: Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me?\n",
"Question 2: I'm a triple Capricorn (Sun, Moon and ascendant in Capricorn) What does this say about me? \n",
"\n",
"Question 1: What would a Trump presidency mean for current international master’s students on an F1 visa?\n",
"Question 2: How will a Trump presidency affect the students presently in US or planning to study in US? \n",
"\n",
"TESTING QUESTIONS:\n",
"\n",
"Question 1: How do I prepare for interviews for cse?\n",
"Question 2: What is the best way to prepare for cse? \n",
"\n",
"is_duplicate = 0 \n",
"\n"
]
}
],
"source": [
"print('TRAINING QUESTIONS:\\n')\n",
"print('Question 1: ', Q1_train[0])\n",
"print('Question 2: ', Q2_train[0], '\\n')\n",
"print('Question 1: ', Q1_train[5])\n",
"print('Question 2: ', Q2_train[5], '\\n')\n",
"\n",
"print('TESTING QUESTIONS:\\n')\n",
"print('Question 1: ', Q1_test[0])\n",
"print('Question 2: ', Q2_test[0], '\\n')\n",
"print('is_duplicate =', y_test[0], '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "SuggGPaQpiuY"
},
"source": [
"Finally, split your training set into training/validation sets so that you can use them at training time."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"deletable": false,
"editable": false,
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of duplicate questions: 111486\n",
"The length of the training set is: 89188\n",
"The length of the validation set is: 22298\n"
]
}
],
"source": [
"# Splitting the data\n",
"cut_off = int(len(Q1_train) * 0.8)\n",
"train_Q1, train_Q2 = Q1_train[:cut_off], Q2_train[:cut_off]\n",
"val_Q1, val_Q2 = Q1_train[cut_off:], Q2_train[cut_off:]\n",
"print('Number of duplicate questions: ', len(Q1_train))\n",
"print(\"The length of the training set is: \", len(train_Q1))\n",
"print(\"The length of the validation set is: \", len(val_Q1))"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "BDcxEmX31y3d"
},
"source": [
"<a name='1.2'></a>\n",
"### 1.2 Learning question encoding\n",
"\n",
"The next step is to learn how to encode each of the questions as a list of numbers (integers). You will be learning how to encode each word of the selected duplicate pairs with an index. \n",
"\n",
"You will start by learning a word dictionary, or vocabulary, containing all the words in your training dataset, which you will use to encode each word of the selected duplicate pairs with an index. \n",
"\n",
"For this task you will be using the [`TextVectorization`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/TextVectorization) layer from Keras. which will take care of everything for you. Begin by setting a seed, so we all get the same encoding.\n",
"\n",
"The vocabulary is learned using the `.adapt()`. This will analyze the dataset, determine the frequency of individual string values, and create a vocabulary from them. If you need, you can later access the vocabulary by using `.get_vocabulary()`."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"deletable": false,
"editable": false,
"tags": []
},
"outputs": [],
"source": [
"tf.random.set_seed(0)\n",
"text_vectorization = tf.keras.layers.TextVectorization(output_mode='int',split='whitespace', standardize='strip_punctuation')\n",
"text_vectorization.adapt(np.concatenate((Q1_train,Q2_train)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, it is set to split text on whitespaces and it's stripping the punctuation from text. You can check how big your vocabulary is."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"deletable": false,
"editable": false,
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Vocabulary size: 36224\n"
]
}
],
"source": [
"print(f'Vocabulary size: {text_vectorization.vocabulary_size()}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also call `text_vectorization` to see what the encoding looks like for the first questions of the training and test datasets"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"deletable": false,
"editable": false,
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"first question in the train set:\n",
"\n",
"Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me? \n",
"\n",
"encoded version:\n",
"tf.Tensor(\n",
"[ 6984 6 178 10 8988 2442 35393 761 13 6636 28205 31\n",
" 28 483 45 98], shape=(16,), dtype=int64) \n",
"\n",
"first question in the test set:\n",
"\n",
"How do I prepare for interviews for cse? \n",
"\n",
"encoded version:\n",
"tf.Tensor([ 4 8 6 160 17 2079 17 11775], shape=(8,), dtype=int64)\n"
]
}
],
"source": [
"print('first question in the train set:\\n')\n",
"print(Q1_train[0], '\\n') \n",
"print('encoded version:')\n",
"print(text_vectorization(Q1_train[0]),'\\n')\n",
"\n",
"print('first question in the test set:\\n')\n",
"print(Q1_test[0], '\\n')\n",
"print('encoded version:')\n",
"print(text_vectorization(Q1_test[0]) )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Expected output:\n",
"```\n",
"first question in the train set:\n",
"\n",
"Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me? \n",
"\n",
"encoded version:\n",
"tf.Tensor(\n",
"[ 6984 6 178 10 8988 2442 35393 761 13 6636 28205 31\n",
" 28 483 45 98], shape=(16,), dtype=int64) \n",
"\n",
"first question in the test set:\n",
"\n",
"How do I prepare for interviews for cse? \n",
"\n",
"encoded version:\n",
"tf.Tensor([ 4 8 6 160 17 2079 17 11775], shape=(8,), dtype=int64)\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "KmZRBoaMwt0w"
},
"source": [
"<a name='2'></a>\n",
"# Part 2: Defining the Siamese model\n",
"\n",
"<a name='2.1'></a>\n",
"\n",
"### 2.1 Understanding the Siamese Network \n",
"A Siamese network is a neural network which uses the same weights while working in tandem on two different input vectors to compute comparable output vectors. The Siamese network you are about to implement looks something like this:\n",
"\n",
"<img src = \"./img/Siamese.png\" style=\"width:790px;height:300px;\"/>\n",
"\n",
"You get the question, get it vectorized and embedded, run it through an LSTM layer, normalize $v_1$ and $v_2$, and finally get the corresponding cosine similarity for each pair of questions (remember that each question is a single string). Because of the implementation of the loss function you will see in the next section, you are not going to have the cosine similarity as output of your Siamese network, but rather $v_1$ and $v_2$. You will add the cosine distance step once you reach the classification step. \n",
"\n",
"To train the model, you will use the triplet loss (explained below). This loss makes use of a baseline (anchor) input that is compared to a positive (truthy) input and a negative (falsy) input. The (cosine) distance from the baseline input to the positive input is minimized, and the distance from the baseline input to the negative input is maximized. Mathematically, you are trying to maximize the following.\n",
"\n",
"$$\\mathcal{L}(A, P, N)=\\max \\left(\\|\\mathrm{f}(A)-\\mathrm{f}(P)\\|^{2}-\\|\\mathrm{f}(A)-\\mathrm{f}(N)\\|^{2}+\\alpha, 0\\right),$$\n",
"\n",
"where $A$ is the anchor input, for example $q1_1$, $P$ is the duplicate input, for example, $q2_1$, and $N$ is the negative input (the non duplicate question), for example $q2_2$.<br>\n",
"$\\alpha$ is a margin; you can think about it as a safety net, or by how much you want to push the duplicates from the non duplicates. This is the essence of the triplet loss. However, as you will see in the next section, you will be using a pretty smart trick to improve your training, known as hard negative mining. \n",
"<br>\n",
"\n",
"<a name='ex02'></a>\n",
"### Exercise 01\n",
"\n",
"**Instructions:** Implement the `Siamese` function below. You should be using all the functions explained below. \n",
"\n",
"To implement this model, you will be using `TensorFlow`. Concretely, you will be using the following functions.\n",
"\n",
"\n",
"- [`tf.keras.models.Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential): groups a linear stack of layers into a tf.keras.Model.\n",
" - You can pass in the layers as arguments to `Serial`, separated by commas, or simply instantiate the `Sequential`model and use the `add` method to add layers.\n",
" - For example: `Sequential(Embeddings(...), AveragePooling1D(...), Dense(...), Softmax(...))` or \n",
" \n",
" `model = Sequential()\n",
" model.add(Embeddings(...))\n",
" model.add(AveragePooling1D(...))\n",
" model.add(Dense(...))\n",
" model.add(Softmax(...))`\n",
"\n",
"- [`tf.keras.layers.Embedding`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) : Maps positive integers into vectors of fixed size. It will have shape (vocabulary length X dimension of output vectors). The dimension of output vectors (called `d_feature`in the model) is the number of elements in the word embedding. \n",
" - `Embedding(input_dim, output_dim)`.\n",
" - `input_dim` is the number of unique words in the given vocabulary.\n",
" - `output_dim` is the number of elements in the word embedding (some choices for a word embedding size range from 150 to 300, for example).\n",
" \n",
"\n",
"\n",
"- [`tf.keras.layers.LSTM`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM) : The LSTM layer. The number of units should be specified and should match the number of elements in the word embedding. \n",
" - `LSTM(units)` Builds an LSTM layer of n_units.\n",
" \n",
" \n",
" \n",
"- [`tf.keras.layers.GlobalAveragePooling1D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GlobalAveragePooling1D) : Computes global average pooling, which essentially takes the mean across a desired axis. GlobalAveragePooling1D uses one tensor axis to form groups of values and replaces each group with the mean value of that group. \n",
" - `GlobalAveragePooling1D()` takes the mean.\n",
"\n",
"\n",
"\n",
"- [`tf.keras.layers.Lambda`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.base.Fn): Layer with no weights that applies the function f, which should be specified using a lambda syntax. You will use this layer to apply normalization with the function\n",
" - `tfmath.l2_normalize(x)`\n",
"\n",
"\n",
"\n",
"- [`tf.keras.layers.Input`](https://www.tensorflow.org/api_docs/python/tf/keras/Input): it is used to instantiate a Keras tensor. Remember to set correctly the dimension and type of the input, which are batches of questions. For this, keep in mind that each question is a single string. \n",
" - `Input(input_shape,dtype=None,...)`\n",
" - `input_shape`: Shape tuple (not including the batch axis)\n",
" - `dtype`: (optional) data type of the input\n",
"\n",
"\n",
"\n",
"- [`tf.keras.layers.Concatenate`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Concatenate): Layer that concatenates a list of inputs. This layer will concatenate the normalized outputs of each LSTM into a single output for the model. \n",
" - `Concatenate()`"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"deletable": false,
"tags": [
"graded"
]
},
"outputs": [],
"source": [
"# GRADED FUNCTION: Siamese\n",
"def Siamese(text_vectorizer, vocab_size=36224, d_feature=128):\n",
" \"\"\"Returns a Siamese model.\n",
"\n",
" Args:\n",
" text_vectorizer (TextVectorization): TextVectorization instance, already adapted to your training data.\n",
" vocab_size (int, optional): Length of the vocabulary. Defaults to 36224, which is the vocabulary size for your case.\n",
" d_model (int, optional): Depth of the model. Defaults to 128.\n",
" \n",
" Returns:\n",
" tf.model.Model: A Siamese model. \n",
" \n",
" \"\"\"\n",
" ### START CODE HERE ###\n",
"\n",
" branch = tf.keras.models.Sequential(name='sequential') \n",
" # Add the text_vectorizer layer. This is the text_vectorizer you instantiated and trained before \n",
" branch.add(text_vectorizer)\n",
" # Add the Embedding layer. Remember to call it 'embedding' using the parameter `name`\n",
" branch.add(tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=d_feature, name='embedding'))\n",
" # Add the LSTM layer, recall from W2 that you want to the LSTM layer to return sequences, ot just one value. \n",
" # Remember to call it 'LSTM' using the parameter `name`\n",
" branch.add(tf.keras.layers.LSTM(units=d_feature, return_sequences=True, name='LSTM'))\n",
" # Add the GlobalAveragePooling1D layer. Remember to call it 'mean' using the parameter `name`\n",
" branch.add(tf.keras.layers.GlobalAveragePooling1D(name='mean'))\n",
" \n",
" # Add the normalization layer using Lambda\n",
" branch.add(tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1), name='out'))\n",
" \n",
" # Define both inputs. Remember to call then 'input_1' and 'input_2' using the `name` parameter. \n",
" # Be mindful of the data type and size\n",
" input1 = tf.keras.layers.Input(shape=(1,), dtype=tf.string, name='input_1')\n",
" input2 = tf.keras.layers.Input(shape=(1,), dtype=tf.string, name='input_2')\n",
" # Define the output of each branch of your Siamese network. Remember that both branches have the same coefficients, \n",
" # but they each receive different inputs.\n",
" branch1 = branch(input1)\n",
" branch2 = branch(input2)\n",
" # Define the Concatenate layer. You should concatenate columns, you can fix this using the `axis`parameter. \n",
" # This layer is applied over the outputs of each branch of the Siamese network\n",
" conc = tf.keras.layers.Concatenate(axis=-1, name='conc_1_2')([branch1, branch2])\n",
" \n",
" ### END CODE HERE ###\n",
" \n",
" return tf.keras.models.Model(inputs=[input1, input2], outputs=conc, name=\"SiameseModel\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "es2gfwZypiul"
},
"source": [
"Setup the Siamese network model"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 255
},
"colab_type": "code",
"deletable": false,
"editable": false,
"id": "kvQ_jf52-JAn",
"outputId": "d409460d-2ffb-4ae6-8745-ddcfa1d892ad",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"WARNING:tensorflow:From c:\\Users\\Pankaj rawat\\IdeaProjects\\Avoiding-duplicate-question-in-Quora\\seasme\\Lib\\site-packages\\keras\\src\\backend\\tensorflow\\core.py:204: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n",
"\n"
]
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">Model: \"SiameseModel\"</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1mModel: \"SiameseModel\"\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┓\n",
"┃<span style=\"font-weight: bold\"> Layer (type) </span>┃<span style=\"font-weight: bold\"> Output Shape </span>┃<span style=\"font-weight: bold\"> Param # </span>┃<span style=\"font-weight: bold\"> Connected to </span>┃\n",
"┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━┩\n",
"│ input_1 │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">1</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │ - │\n",
"│ (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">InputLayer</span>) │ │ │ │\n",
"├─────────────────────┼───────────────────┼────────────┼───────────────────┤\n",
"│ input_2 │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">1</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │ - │\n",
"│ (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">InputLayer</span>) │ │ │ │\n",
"├─────────────────────┼───────────────────┼────────────┼───────────────────┤\n",
"│ sequential │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">128</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">4,768,256</span> │ input_1[<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>][<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>], │\n",
"│ (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">Sequential</span>) │ │ │ input_2[<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>][<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>] │\n",
"├─────────────────────┼───────────────────┼────────────┼───────────────────┤\n",
"│ conc_1_2 │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">256</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │ sequential[<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>][<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>], │\n",
"│ (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">Concatenate</span>) │ │ │ sequential[<span style=\"color: #00af00; text-decoration-color: #00af00\">1</span>][<span style=\"color: #00af00; text-decoration-color: #00af00\">0</span>] │\n",
"└─────────────────────┴───────────────────┴────────────┴───────────────────┘\n",
"</pre>\n"
],
"text/plain": [
"┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┓\n",
"┃\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m Param #\u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mConnected to \u001b[0m\u001b[1m \u001b[0m┃\n",
"┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━┩\n",
"│ input_1 │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m1\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │ - │\n",
"│ (\u001b[38;5;33mInputLayer\u001b[0m) │ │ │ │\n",
"├─────────────────────┼───────────────────┼────────────┼───────────────────┤\n",
"│ input_2 │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m1\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │ - │\n",
"│ (\u001b[38;5;33mInputLayer\u001b[0m) │ │ │ │\n",
"├─────────────────────┼───────────────────┼────────────┼───────────────────┤\n",
"│ sequential │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m4,768,256\u001b[0m │ input_1[\u001b[38;5;34m0\u001b[0m][\u001b[38;5;34m0\u001b[0m], │\n",
"│ (\u001b[38;5;33mSequential\u001b[0m) │ │ │ input_2[\u001b[38;5;34m0\u001b[0m][\u001b[38;5;34m0\u001b[0m] │\n",
"├─────────────────────┼───────────────────┼────────────┼───────────────────┤\n",
"│ conc_1_2 │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m256\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │ sequential[\u001b[38;5;34m0\u001b[0m][\u001b[38;5;34m0\u001b[0m], │\n",
"│ (\u001b[38;5;33mConcatenate\u001b[0m) │ │ │ sequential[\u001b[38;5;34m1\u001b[0m][\u001b[38;5;34m0\u001b[0m] │\n",
"└─────────────────────┴───────────────────┴────────────┴───────────────────┘\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Total params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">4,768,256</span> (18.19 MB)\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m Total params: \u001b[0m\u001b[38;5;34m4,768,256\u001b[0m (18.19 MB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Trainable params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">4,768,256</span> (18.19 MB)\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m4,768,256\u001b[0m (18.19 MB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Non-trainable params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> (0.00 B)\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m0\u001b[0m (0.00 B)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">Model: \"sequential\"</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1mModel: \"sequential\"\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
"┃<span style=\"font-weight: bold\"> Layer (type) </span>┃<span style=\"font-weight: bold\"> Output Shape </span>┃<span style=\"font-weight: bold\"> Param # </span>┃\n",
"┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
"│ text_vectorization │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │\n",
"│ (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">TextVectorization</span>) │ │ │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ embedding (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">Embedding</span>) │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">128</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">4,636,672</span> │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ LSTM (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">LSTM</span>) │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">128</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">131,584</span> │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ mean (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">GlobalAveragePooling1D</span>) │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">128</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ out (<span style=\"color: #0087ff; text-decoration-color: #0087ff\">Lambda</span>) │ (<span style=\"color: #00d7ff; text-decoration-color: #00d7ff\">None</span>, <span style=\"color: #00af00; text-decoration-color: #00af00\">128</span>) │ <span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> │\n",
"└─────────────────────────────────┴────────────────────────┴───────────────┘\n",
"</pre>\n"
],
"text/plain": [
"┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n",
"┃\u001b[1m \u001b[0m\u001b[1mLayer (type) \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1mOutput Shape \u001b[0m\u001b[1m \u001b[0m┃\u001b[1m \u001b[0m\u001b[1m Param #\u001b[0m\u001b[1m \u001b[0m┃\n",
"┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n",
"│ text_vectorization │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;45mNone\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"│ (\u001b[38;5;33mTextVectorization\u001b[0m) │ │ │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ embedding (\u001b[38;5;33mEmbedding\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m4,636,672\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ LSTM (\u001b[38;5;33mLSTM\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m131,584\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ mean (\u001b[38;5;33mGlobalAveragePooling1D\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"├─────────────────────────────────┼────────────────────────┼───────────────┤\n",
"│ out (\u001b[38;5;33mLambda\u001b[0m) │ (\u001b[38;5;45mNone\u001b[0m, \u001b[38;5;34m128\u001b[0m) │ \u001b[38;5;34m0\u001b[0m │\n",
"└─────────────────────────────────┴────────────────────────┴───────────────┘\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Total params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">4,768,256</span> (18.19 MB)\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m Total params: \u001b[0m\u001b[38;5;34m4,768,256\u001b[0m (18.19 MB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Trainable params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">4,768,256</span> (18.19 MB)\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m Trainable params: \u001b[0m\u001b[38;5;34m4,768,256\u001b[0m (18.19 MB)\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\"> Non-trainable params: </span><span style=\"color: #00af00; text-decoration-color: #00af00\">0</span> (0.00 B)\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1m Non-trainable params: \u001b[0m\u001b[38;5;34m0\u001b[0m (0.00 B)\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# check your model\n",
"model = Siamese(text_vectorization, vocab_size=text_vectorization.vocabulary_size())\n",
"model.build(input_shape=None)\n",
"model.summary()\n",
"model.get_layer(name='sequential').summary()"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "LMK9zqhHpiuo"
},
"source": [
"**Expected output:** \n",
"\n",
"<font size=2>\n",
"\n",
"```Model: \"SiameseModel\"\n",
"__________________________________________________________________________________________________\n",
" Layer (type) Output Shape Param # Connected to \n",
"==================================================================================================\n",
" input_1 (InputLayer) [(None, 1)] 0 [] \n",
" \n",
" input_2 (InputLayer) [(None, 1)] 0 [] \n",
" \n",
" sequential (Sequential) (None, 128) 4768256 ['input_1[0][0]', \n",
" 'input_2[0][0]'] \n",
" \n",
" conc_1_2 (Concatenate) (None, 256) 0 ['sequential[0][0]', \n",
" 'sequential[1][0]'] \n",
" \n",
"==================================================================================================\n",
"Total params: 4768256 (18.19 MB)\n",
"Trainable params: 4768256 (18.19 MB)\n",
"Non-trainable params: 0 (0.00 Byte)\n",
"__________________________________________________________________________________________________\n",
"Model: \"sequential\"\n",
"_________________________________________________________________\n",
" Layer (type) Output Shape Param # \n",
"=================================================================\n",
" text_vectorization (TextVe (None, None) 0 \n",
" ctorization) \n",
" \n",
" embedding (Embedding) (None, None, 128) 4636672 \n",
" \n",
" LSTM (LSTM) (None, None, 128) 131584 \n",
" \n",
" mean (GlobalAveragePooling (None, 128) 0 \n",
" 1D) \n",
" \n",
" out (Lambda) (None, 128) 0 \n",
" \n",
"=================================================================\n",
"Total params: 4768256 (18.19 MB)\n",
"Trainable params: 4768256 (18.19 MB)\n",
"Non-trainable params: 0 (0.00 Byte)\n",
"_________________________________________________________________\n",
"```\n",
"</font>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also draw the model for a clearer view of your Siamese network"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"deletable": false,
"editable": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You must install pydot (`pip install pydot`) for `plot_model` to work.\n"
]
}
],
"source": [
"tf.keras.utils.plot_model(\n",
" model,\n",
" to_file=\"model.png\",\n",
" show_shapes=True,\n",
" show_dtype=True,\n",
" show_layer_names=True,\n",
" rankdir=\"TB\",\n",
" expand_nested=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "KVo1Gvripiuo"
},
"source": [
"<a name='2.2'></a>\n",
"\n",
"### 2.2 Hard Negative Mining\n",
"\n",
"\n",
"You will now implement the `TripletLoss` with hard negative mining.<br>\n",
"As explained in the lecture, you will be using all the questions from each batch to compute this loss. Positive examples are questions $q1_i$, and $q2_i$, while all the other combinations $q1_i$, $q2_j$ ($i\\neq j$), are considered negative examples. The loss will be composed of two terms. One term utilizes the mean of all the non duplicates, the second utilizes the *closest negative*. Our loss expression is then:\n",
" \n",
"\\begin{align}\n",
" \\mathcal{Loss_1(A,P,N)} &=\\max \\left( -cos(A,P) + mean_{neg} +\\alpha, 0\\right) \\\\\n",
" \\mathcal{Loss_2(A,P,N)} &=\\max \\left( -cos(A,P) + closest_{neg} +\\alpha, 0\\right) \\\\\n",
"\\mathcal{Loss(A,P,N)} &= mean(Loss_1 + Loss_2) \\\\\n",
"\\end{align}\n",
"\n",
"\n",
"Further, two sets of instructions are provided. The first set, found just below, provides a brief description of the task. If that set proves insufficient, a more detailed set can be displayed. \n",
"\n",
"<a name='ex03'></a>\n",
"### Exercise 02\n",
"\n",
"**Instructions (Brief):** Here is a list of things you should do: <br>\n",
"\n",
"- As this will be run inside Tensorflow, use all operation supplied by `tf.math` or `tf.linalg`, instead of `numpy` functions. You will also need to explicitly use `tf.shape` to get the batch size from the inputs. This is to make it compatible with the Tensor inputs it will receive when doing actual training and testing. \n",
"- Use [`tf.linalg.matmul`](https://www.tensorflow.org/api_docs/python/tf/linalg/matmul) to calculate the similarity matrix $v_2v_1^T$ of dimension `batch_size` x `batch_size`. \n",
"- Take the score of the duplicates on the diagonal with [`tf.linalg.diag_part`](https://www.tensorflow.org/api_docs/python/tf/linalg/diag_part). \n",
"- Use the `TensorFlow` functions [`tf.eye`](https://www.tensorflow.org/api_docs/python/tf/eye) and [`tf.math.reduce_max`](https://www.tensorflow.org/api_docs/python/tf/math/reduce_max) for the identity matrix and the maximum respectively. "
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "GWsX-Wz3piup"
},
"source": [
"<details> \n",
"<summary>\n",
" <font size=\"3\" color=\"darkgreen\"><b>More Detailed Instructions </b></font>\n",
"</summary>\n",
"\n",
"We'll describe the algorithm using a detailed example. Below, $V_1$, $V_2$ are the output of the normalization blocks in our model. Here you will use a `batch_size` of 4 and a `d_model of 3`. As explained in lecture, the input questions, Q1, Q2 are arranged so that corresponding inputs are duplicates while non-corresponding entries are not. The outputs will have the same pattern.\n",
"\n",
"<img src = \"./img/tripletLossexample.png\" style=\"width:817px;\"/>\n",
"\n",
"This testcase arranges the outputs, $V_1$,$V_2$, to highlight different scenarios. Here, the first two outputs $V_1[0]$, $V_2[0]$ match exactly, so the model is generating the same vector for Q1[0] and Q2[0] inputs. The second pair of outputs, circled in orange, differ greatly on one of the values, so the transformation is not quite the same for these questions. Next, you have examples $V_1[3]$ and $V_2[3]$, which match almost exactly. Finally, $V_1[4]$ and $V_2[4]$, circled in purple, are set to be exactly opposite, being 180 degrees from each other. \n",
"\n",
"The first step is to compute the cosine similarity matrix or `score` in the code. As explained in the lectures, this is $$V_2 V_1^T.$$This is generated with `tf.linalg.matmul`. Since matrix multiplication is not commutative, the order in which you pass the arguments is important. If you want columns to represent different questions in Q1 and rows to represent different questions in Q2, as seen in the video, then you need to compute $V_2 V_1^T$. \n",
"\n",
"<img src = \"./img/tripletLoss2.png\" style=\"width:900px;\"/>\n",
"\n",
"The clever arrangement of inputs creates the data needed for positive *and* negative examples without having to run all pair-wise combinations. Because Q1[n] is a duplicate of only Q2[n], other combinations are explicitly created negative examples or *Hard Negative* examples. The matrix multiplication efficiently produces the cosine similarity of all positive/negative combinations as shown above on the left side of the diagram. 'Positive' are the results of duplicate examples (cells shaded in green) and 'negative' are the results of explicitly created negative examples (cells shaded in blue). The results for our test case are as expected, $V_1[0]\\cdot V_2[0]$ and $V_1[3]\\cdot V_2[3]$ match producing '1', and '0.99' respectively, while the other 'positive' cases don't match quite right. Note also that the $V_2[2]$ example was set to match $V_1[3]$, producing a not so good match at `score[2,2]` and an undesired 'negative' case of a '1', shown in grey. \n",
"\n",
"With the similarity matrix (`score`) you can begin to implement the loss equations. First, you can extract $cos(A,P)$ by utilizing `tf.linalg.diag_part`. The goal is to grab all the green entries in the diagram above. This is `positive` in the code.\n",
"\n",
"Next, you will create the *closest_negative*. This is the nonduplicate entry in $V_2$ that is closest to (has largest cosine similarity) to an entry in $V_1$, but still has smaller cosine similarity than the positive example. For example, consider row 2 in the score matrix. This row has the cosine similarity between $V_2[2]$ and all four vectors in $V_1$. In this case, the largest value in the off-diagonal is`score[2,3]`$=V_2[3]\\cdot V1[2]$, which has a score of 1. However, since 1 is grater than the similarity for the positive example, this is *not* the *closest_negative*. For this particular row, the *closes_negative* will have to be `score[2,1]=0.36`. This is the maximum value of the 'negative' entries, which are smaller than the 'positive' example.\n",
"\n",
"To implement this, you need to pick the maximum entry on a row of `score`, ignoring the 'positive'/green entries, and 'negative/blue entry greater that the 'positive' one. To avoid selecting these entries, you can make them larger negative numbers. For this, you can create a mask to identify these two scenarios, multiply it by 2.0 and subtract it out of `scores`. To create the mask, you need to check if the cell is diagonal by computing `tf.eye(batch_size) ==1`, or if the non-diagonal cell is greater than the diagonal with `(negative_zero_on_duplicate > tf.expand_dims(positive, 1)`. Remember that `positive` already has the diagonal values. Now you can use `tf.math.reduce_max`, row by row (axis=1), to select the maximum which is `closest_negative`.\n",
"\n",
"Next, we'll create *mean_negative*. As the name suggests, this is the mean of all the 'negative'/blue values in `score` on a row by row basis. You can use `tf.linalg.diag` to create a diagonal matrix, where the diagonal matches `positive`, and just subtract it from `score` to get just the 'negative' values. This is `negative_zero_on_duplicate` in the code. Compute the mean by using `tf.math.reduce_sum` on `negative_zero_on_duplicate` for `axis=1` and divide it by `(batch_size - 1)`. This is `mean_negative`.\n",
"\n",
"Now, you can compute loss using the two equations above and `tf.maximum`. This will form `triplet_loss1` and `triplet_loss2`. \n",
"\n",
"`triplet_loss` is the `tf.math.reduce_sum` of the sum of the two individual losses.\n"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {
"deletable": false,
"tags": [
"graded"
]
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"\n",
"def TripletLossFn(v1, v2, margin=0.25):\n",
" \"\"\"Custom Loss function.\n",
"\n",
" Args:\n",
" v1 (numpy.ndarray or Tensor): Array with dimension (batch_size, model_dimension) associated to Q1.\n",
" v2 (numpy.ndarray or Tensor): Array with dimension (batch_size, model_dimension) associated to Q2.\n",
" margin (float, optional): Desired margin. Defaults to 0.25.\n",
"\n",
" Returns:\n",
" triplet_loss (numpy.ndarray or Tensor)\n",
" \"\"\"\n",
"\n",
" ### START CODE HERE ###\n",
"\n",
" # use `tf.linalg.matmul` to take the dot product of the two batches. \n",
" # Don't forget to transpose the second argument using `transpose_b=True`\n",
" scores = tf.linalg.matmul(v1, v2, transpose_b=True)\n",
" # calculate new batch size and cast it as the same datatype as scores.\n",
" batch_size = tf.cast(tf.shape(v1)[0], scores.dtype) \n",
" \n",
" # use `tf.linalg.diag_part` to grab the cosine similarity of all positive examples\n",
" positive = tf.linalg.diag_part(scores)\n",
" \n",
" # subtract the diagonal from scores. You can do this by creating a diagonal matrix with the values \n",
" # of all positive examples using `tf.linalg.diag`\n",
" negative_zero_on_duplicate = scores - tf.linalg.diag(positive)\n",
" \n",
" # use `tf.math.reduce_sum` on `negative_zero_on_duplicate` for `axis=1` and divide it by `(batch_size - 1)`\n",
" mean_negative = tf.math.reduce_sum(negative_zero_on_duplicate, axis=1) / (batch_size - 1)\n",
" \n",
" # create a composition of two masks: \n",
" # the first mask to extract the diagonal elements (make sure you use the variable batch_size here), \n",
" # the second mask to extract elements in the negative_zero_on_duplicate matrix that are larger than the elements in the diagonal \n",
" mask_exclude_positives = tf.cast((tf.eye(batch_size) == 1)|(negative_zero_on_duplicate > tf.reshape(positive, (batch_size, 1))),\n",
" scores.dtype)\n",
" \n",
" # multiply `mask_exclude_positives` with 2.0 and subtract it out of `negative_zero_on_duplicate`\n",
" negative_without_positive = negative_zero_on_duplicate - (mask_exclude_positives*2.0)\n",
" \n",
" # take the row by row `max` of `negative_without_positive`. \n",
" # Hint: `tf.math.reduce_max(negative_without_positive, axis = None)`\n",
" closest_negative = tf.math.reduce_max(negative_without_positive, axis = 0)\n",
" \n",
" # compute `tf.maximum` among 0.0 and `A`\n",
" # A = subtract `positive` from `margin` and add `closest_negative` \n",
" triplet_loss1 = tf.maximum(0.0, margin - positive + closest_negative)\n",
" \n",
" # compute `tf.maximum` among 0.0 and `B`\n",
" # B = subtract `positive` from `margin` and add `mean_negative` \n",
" triplet_loss2 = tf.maximum(0.0, margin - positive + mean_negative)\n",
" \n",
" # add the two losses together and take the `tf.math.reduce_sum` of it\n",
" triplet_loss = tf.math.reduce_sum(triplet_loss1+triplet_loss2)\n",
"\n",
" ### END CODE HERE ###\n",
"\n",
" return triplet_loss\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can check the triplet loss between two sets. The following example emulates the triplet loss between two groups of questions with `batch_size=2`"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"deletable": false,
"editable": false,
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Triplet Loss: 2.499999993789265\n"
]
}
],
"source": [
"v1 = np.array([[0.26726124, 0.53452248, 0.80178373],[0.5178918 , 0.57543534, 0.63297887]])\n",
"v2 = np.array([[ 0.26726124, 0.53452248, 0.80178373],[-0.5178918 , -0.57543534, -0.63297887]])\n",
"print(\"Triplet Loss:\", TripletLossFn(v1,v2).numpy())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Output:**\n",
"```CPP\n",
"Triplet Loss: ~ 0.70\n",
"``` "
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "r974ozuHYAom"
},
"source": [
"To recognize it as a loss function, keras needs it to have two inputs: true labels, and output labels. You will not be using the true labels, but you still need to pass some dummy variable with size `(batch_size,)` for TensorFlow to accept it as a valid loss.\n",
"\n",
"Additionally, the `out` parameter must coincide with the output of your Siamese network, which is the concatenation of the processing of each of the inputs, so you need to extract $v_1$ and $v_2$ from there."
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {
"deletable": false,
"editable": false,
"tags": [
"graded"
]
},
"outputs": [],
"source": [
"def TripletLoss(labels, out, margin=0.25):\n",
" _, out_size = out.shape # get embedding size\n",
" v1 = out[:,:int(out_size/2)] # Extract v1 from out\n",
" v2 = out[:,int(out_size/2):] # Extract v2 from out\n",
" return TripletLossFn(v1, v2, margin=margin)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "lsvjaCQ6wt02"
},
"source": [
"<a name='3'></a>\n",
"\n",
"# Part 3: Training\n",
"\n",
"Now it's time to finally train your model. As usual, you have to define the cost function and the optimizer. You also have to build the actual model you will be training. \n",
"\n",
"To pass the input questions for training and validation you will use the iterator produced by [`tensorflow.data.Dataset`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset). Run the next cell to create your train and validation datasets. "
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {
"deletable": false,
"editable": false,
"tags": []
},
"outputs": [],
"source": [
"train_dataset = tf.data.Dataset.from_tensor_slices(((train_Q1, train_Q2),tf.constant([1]*len(train_Q1))))\n",
"val_dataset = tf.data.Dataset.from_tensor_slices(((val_Q1, val_Q2),tf.constant([1]*len(val_Q1))))"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "IgFMfH5awt07"
},
"source": [
"<a name='3.1'></a>\n",
"\n",
"### 3.1 Training the model\n",
"\n",
"You will now write a function that takes in your model to train it. To train your model you have to decide how many times you want to iterate over the entire data set; each iteration is defined as an `epoch`. For each epoch, you have to go over all the data, using your `Dataset` iterator.\n",
"\n",
"<a name='ex04'></a>\n",
"### Exercise 03\n",
"\n",
"**Instructions:** Implement the `train_model` below to train the neural network above. Here is a list of things you should do: \n",
"\n",
"- Compile the model. Here you will need to pass in:\n",
" - `loss=TripletLoss`\n",
" - `optimizer=Adam()` with learning rate `lr`\n",
"- Call the `fit` method. You should pass:\n",
" - `train_dataset`\n",
" - `epochs`\n",
" - `validation_data` \n",
"\n",
"\n",
"\n",
"You will be using your triplet loss function with Adam optimizer. Also, note that you are not explicitly defining the batch size, because it will be already determined by the `Dataset`.\n",
"\n",
"This function will return the trained model"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 391
},
"colab_type": "code",
"deletable": false,
"id": "-3KXjmBo_6Xa",
"outputId": "9d57f731-1534-4218-e744-783359d5cd19",
"scrolled": true,
"tags": [
"graded"
]
},
"outputs": [],
"source": [
"# GRADED FUNCTION: train_model\n",
"def train_model(Siamese, TripletLoss, text_vectorizer, train_dataset, val_dataset, d_feature=128, lr=0.01, train_steps=5):\n",
" \"\"\"Training the Siamese Model\n",
"\n",
" Args:\n",
" Siamese (function): Function that returns the Siamese model.\n",
" TripletLoss (function): Function that defines the TripletLoss loss function.\n",
" text_vectorizer: trained instance of `TextVecotrization` \n",
" train_dataset (tf.data.Dataset): Training dataset\n",
" val_dataset (tf.data.Dataset): Validation dataset\n",
" d_feature (int, optional) = size of the encoding. Defaults to 128.\n",
" lr (float, optional): learning rate for optimizer. Defaults to 0.01\n",
" train_steps (int): number of epochs\n",
" \n",
" Returns:\n",
" tf.keras.Model\n",
" \"\"\"\n",
" ## START CODE HERE ###\n",
"\n",
" # Instantiate your Siamese model\n",
" model = Siamese(text_vectorizer,\n",
" vocab_size = text_vectorizer.vocabulary_size(), #set vocab_size accordingly to the size of your vocabulary\n",
" d_feature = d_feature)\n",
" # Compile the model\n",
" model.compile(loss=TripletLoss,\n",
" optimizer = tf.optimizers.Adam(learning_rate=lr)\n",
" )\n",
" # Train the model \n",
" model.fit(train_dataset,\n",
" epochs = train_steps,\n",
" validation_data = val_dataset,\n",
" )\n",
" \n",
" ### END CODE HERE ###\n",
"\n",
" return model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now call the `train_model` function. You will be using a batch size of 256. \n",
"\n",
"To create the data generators you will be using the method `batch` for `Dataset` object. You will also call the `shuffle` method, to shuffle the dataset on each iteration."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"deletable": false,
"editable": false,
"scrolled": false,
"tags": []
},
"outputs": [
{
"ename": "NameError",
"evalue": "name 'train_dataset' is not defined",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mNameError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[1;32mIn[2], line 3\u001b[0m\n\u001b[0;32m 1\u001b[0m train_steps \u001b[38;5;241m=\u001b[39m \u001b[38;5;241m2\u001b[39m\n\u001b[0;32m 2\u001b[0m batch_size \u001b[38;5;241m=\u001b[39m \u001b[38;5;241m256\u001b[39m\n\u001b[1;32m----> 3\u001b[0m train_generator \u001b[38;5;241m=\u001b[39m \u001b[43mtrain_dataset\u001b[49m\u001b[38;5;241m.\u001b[39mshuffle(\u001b[38;5;28mlen\u001b[39m(train_Q1),\n\u001b[0;32m 4\u001b[0m seed\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m7\u001b[39m, \n\u001b[0;32m 5\u001b[0m reshuffle_each_iteration\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m)\u001b[38;5;241m.\u001b[39mbatch(batch_size\u001b[38;5;241m=\u001b[39mbatch_size)\n\u001b[0;32m 6\u001b[0m val_generator \u001b[38;5;241m=\u001b[39m val_dataset\u001b[38;5;241m.\u001b[39mshuffle(\u001b[38;5;28mlen\u001b[39m(val_Q1), \n\u001b[0;32m 7\u001b[0m seed\u001b[38;5;241m=\u001b[39m\u001b[38;5;241m7\u001b[39m,\n\u001b[0;32m 8\u001b[0m reshuffle_each_iteration\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m)\u001b[38;5;241m.\u001b[39mbatch(batch_size\u001b[38;5;241m=\u001b[39mbatch_size)\n\u001b[0;32m 9\u001b[0m model \u001b[38;5;241m=\u001b[39m train_model(Siamese, TripletLoss,text_vectorization, \n\u001b[0;32m 10\u001b[0m train_generator, \n\u001b[0;32m 11\u001b[0m val_generator, \n\u001b[0;32m 12\u001b[0m train_steps\u001b[38;5;241m=\u001b[39mtrain_steps,)\n",
"\u001b[1;31mNameError\u001b[0m: name 'train_dataset' is not defined"
]
}
],
"source": [
"train_steps = 2\n",
"batch_size = 256\n",
"train_generator = train_dataset.shuffle(len(train_Q1),\n",
" seed=7, \n",
" reshuffle_each_iteration=True).batch(batch_size=batch_size)\n",
"val_generator = val_dataset.shuffle(len(val_Q1), \n",
" seed=7,\n",
" reshuffle_each_iteration=True).batch(batch_size=batch_size)\n",
"model = train_model(Siamese, TripletLoss,text_vectorization, \n",
" train_generator, \n",
" val_generator, \n",
" train_steps=train_steps,)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model was only trained for 2 steps because training the whole Siamese network takes too long, and produces slightly different results for each run. For the rest of the assignment you will be using a pretrained model, but this small example should help you understand how the training can be done."
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "abKPe7d4wt1C"
},
"source": [
"<a name='4'></a>\n",
"\n",
"# Part 4: Evaluation \n",
"\n",
"<a name='4.1'></a>\n",
"\n",
"### 4.1 Evaluating your siamese network\n",
"\n",
"In this section you will learn how to evaluate a Siamese network. You will start by loading a pretrained model, and then you will use it to predict. For the prediction you will need to take the output of your model and compute the cosine loss between each pair of questions."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"deletable": false,
"editable": false,
"scrolled": false,
"tags": []
},
"outputs": [
{
"ename": "RecursionError",
"evalue": "maximum recursion depth exceeded in comparison",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mRecursionError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[1;32mIn[3], line 2\u001b[0m\n\u001b[0;32m 1\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mtensorflow\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m \u001b[38;5;21;01mtf\u001b[39;00m\n\u001b[1;32m----> 2\u001b[0m model \u001b[38;5;241m=\u001b[39m \u001b[43mtf\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mkeras\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmodels\u001b[49m\u001b[38;5;241m.\u001b[39mload_model(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mmodel/trained_model.keras\u001b[39m\u001b[38;5;124m'\u001b[39m, safe_mode\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m, \u001b[38;5;28mcompile\u001b[39m\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m)\n\u001b[0;32m 4\u001b[0m \u001b[38;5;66;03m# Show the model architecture\u001b[39;00m\n\u001b[0;32m 5\u001b[0m model\u001b[38;5;241m.\u001b[39msummary()\n",
"File \u001b[1;32mc:\\Users\\Pankaj rawat\\IdeaProjects\\Avoiding-duplicate-question-in-Quora\\seasme\\Lib\\site-packages\\tensorflow\\python\\util\\lazy_loader.py:182\u001b[0m, in \u001b[0;36mKerasLazyLoader.__getattr__\u001b[1;34m(self, item)\u001b[0m\n\u001b[0;32m 180\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_initialized:\n\u001b[0;32m 181\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_initialize()\n\u001b[1;32m--> 182\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_tfll_keras_version\u001b[49m \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mkeras_3\u001b[39m\u001b[38;5;124m\"\u001b[39m:\n\u001b[0;32m 183\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\n\u001b[0;32m 184\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_mode \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mv1\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 185\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_submodule\n\u001b[0;32m 186\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m item\u001b[38;5;241m.\u001b[39mstartswith(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcompat.v1.\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m 187\u001b[0m ):\n\u001b[0;32m 188\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m(\n\u001b[0;32m 189\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`tf.compat.v1.keras` is not available with Keras 3. Keras 3 has \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 190\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mno support for TF 1 APIs. You can install the `tf_keras` package \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 193\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`tf.compat.v1.keras` to `tf_keras`.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 194\u001b[0m )\n",
"File \u001b[1;32mc:\\Users\\Pankaj rawat\\IdeaProjects\\Avoiding-duplicate-question-in-Quora\\seasme\\Lib\\site-packages\\tensorflow\\python\\util\\lazy_loader.py:182\u001b[0m, in \u001b[0;36mKerasLazyLoader.__getattr__\u001b[1;34m(self, item)\u001b[0m\n\u001b[0;32m 180\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_initialized:\n\u001b[0;32m 181\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_initialize()\n\u001b[1;32m--> 182\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_tfll_keras_version\u001b[49m \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mkeras_3\u001b[39m\u001b[38;5;124m\"\u001b[39m:\n\u001b[0;32m 183\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\n\u001b[0;32m 184\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_mode \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mv1\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 185\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_submodule\n\u001b[0;32m 186\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m item\u001b[38;5;241m.\u001b[39mstartswith(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcompat.v1.\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m 187\u001b[0m ):\n\u001b[0;32m 188\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m(\n\u001b[0;32m 189\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`tf.compat.v1.keras` is not available with Keras 3. Keras 3 has \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 190\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mno support for TF 1 APIs. You can install the `tf_keras` package \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 193\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`tf.compat.v1.keras` to `tf_keras`.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 194\u001b[0m )\n",
" \u001b[1;31m[... skipping similar frames: KerasLazyLoader.__getattr__ at line 182 (1488 times)]\u001b[0m\n",
"File \u001b[1;32mc:\\Users\\Pankaj rawat\\IdeaProjects\\Avoiding-duplicate-question-in-Quora\\seasme\\Lib\\site-packages\\tensorflow\\python\\util\\lazy_loader.py:182\u001b[0m, in \u001b[0;36mKerasLazyLoader.__getattr__\u001b[1;34m(self, item)\u001b[0m\n\u001b[0;32m 180\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_initialized:\n\u001b[0;32m 181\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_initialize()\n\u001b[1;32m--> 182\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_tfll_keras_version\u001b[49m \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mkeras_3\u001b[39m\u001b[38;5;124m\"\u001b[39m:\n\u001b[0;32m 183\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\n\u001b[0;32m 184\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_mode \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mv1\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 185\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_submodule\n\u001b[0;32m 186\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m item\u001b[38;5;241m.\u001b[39mstartswith(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcompat.v1.\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m 187\u001b[0m ):\n\u001b[0;32m 188\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mAttributeError\u001b[39;00m(\n\u001b[0;32m 189\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`tf.compat.v1.keras` is not available with Keras 3. Keras 3 has \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 190\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mno support for TF 1 APIs. You can install the `tf_keras` package \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 193\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`tf.compat.v1.keras` to `tf_keras`.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 194\u001b[0m )\n",
"File \u001b[1;32mc:\\Users\\Pankaj rawat\\IdeaProjects\\Avoiding-duplicate-question-in-Quora\\seasme\\Lib\\site-packages\\tensorflow\\python\\util\\lazy_loader.py:178\u001b[0m, in \u001b[0;36mKerasLazyLoader.__getattr__\u001b[1;34m(self, item)\u001b[0m\n\u001b[0;32m 177\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__getattr__\u001b[39m(\u001b[38;5;28mself\u001b[39m, item):\n\u001b[1;32m--> 178\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[43mitem\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01min\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m_tfll_mode\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m_tfll_initialized\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m_tfll_name\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m:\n\u001b[0;32m 179\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28msuper\u001b[39m(types\u001b[38;5;241m.\u001b[39mModuleType, \u001b[38;5;28mself\u001b[39m)\u001b[38;5;241m.\u001b[39m\u001b[38;5;21m__getattribute__\u001b[39m(item)\n\u001b[0;32m 180\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_tfll_initialized:\n",
"\u001b[1;31mRecursionError\u001b[0m: maximum recursion depth exceeded in comparison"
]
}
],
"source": [
"import tensorflow as tf\n",
"model = tf.keras.models.load_model('model/trained_model.keras', safe_mode=False, compile=False)\n",
"\n",
"# Show the model architecture\n",
"model.summary()"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "QDi4MBiKpivF"
},
"source": [
"<a name='4.2'></a>\n",
"### 4.2 Classify\n",
"To determine the accuracy of the model, you will use the test set that was configured earlier. While in training you used only positive examples, the test data, `Q1_test`, `Q2_test` and `y_test`, is set up as pairs of questions, some of which are duplicates and some are not. \n",
"This routine will run all the test question pairs through the model, compute the cosine similarity of each pair, threshold it and compare the result to `y_test` - the correct response from the data set. The results are accumulated to produce an accuracy; the confusion matrix is also computed to have a better understanding of the errors.\n",
"\n",
"\n",
"<a name='ex05'></a>\n",
"### Exercise 04\n",
"\n",
"**Instructions** \n",
" - Use a `tensorflow.data.Dataset` to go through the data in chunks with size batch_size. This time you don't need the labels, so you can just replace them by `None`,\n",
" - use `predict` on the chunks of data.\n",
" - compute `v1`, `v2` using the model output,\n",
" - for each element of the batch\n",
" - compute the cosine similarity of each pair of entries, `v1[j]`,`v2[j]`\n",
" - determine if `d > threshold`\n",
" - increment accuracy if that result matches the expected results (`y_test[j]`)\n",
" \n",
" Instead of running a for loop, you will vectorize all these operations to make things more efficient,\n",
" - compute the final accuracy and confusion matrix and return. For the confusion matrix you can use the [`tf.math.confusion_matrix`](https://www.tensorflow.org/api_docs/python/tf/math/confusion_matrix) function. "
]
},
{
"cell_type": "code",
"execution_count": 62,
"metadata": {
"colab": {},
"colab_type": "code",
"deletable": false,
"id": "K-h6ZH507fUm",
"tags": [
"graded"
]
},
"outputs": [],
"source": [
"# GRADED FUNCTION: classify\n",
"def classify(test_Q1, test_Q2, y_test, threshold, model, batch_size=64, verbose=True):\n",
" \"\"\"Function to test the accuracy of the model.\n",
"\n",
" Args:\n",
" test_Q1 (numpy.ndarray): Array of Q1 questions. Each element of the array would be a string.\n",
" test_Q2 (numpy.ndarray): Array of Q2 questions. Each element of the array would be a string.\n",
" y_test (numpy.ndarray): Array of actual target.\n",
" threshold (float): Desired threshold\n",
" model (tensorflow.Keras.Model): The Siamese model.\n",
" batch_size (int, optional): Size of the batches. Defaults to 64.\n",
"\n",
" Returns:\n",
" float: Accuracy of the model\n",
" numpy.array: confusion matrix\n",
" \"\"\"\n",
" y_pred = []\n",
" test_gen = tf.data.Dataset.from_tensor_slices(((test_Q1, test_Q2),None)).batch(batch_size=batch_size)\n",
" \n",
" ### START CODE HERE ###\n",
" \n",
" for (batch_x1, batch_x2), _ in test_gen:\n",
" # Get the outputs of the two branches of the Siamese network\n",
" v1 = model.get_layer('sequential')(batch_x1)\n",
" v2 = model.get_layer('sequential')(batch_x2)\n",
" \n",
" # Compute the cosine similarity\n",
" d = tf.reduce_sum(v1 * v2, axis=1)\n",
" \n",
" # Make predictions based on the threshold\n",
" batch_y_pred = tf.cast(d > threshold, tf.float64)\n",
" y_pred.extend(batch_y_pred.numpy())\n",
" \n",
" # Calculate the accuracy\n",
" y_pred = tf.convert_to_tensor(y_pred, dtype=tf.float64)\n",
" accuracy = tf.reduce_mean(tf.cast(tf.equal(y_pred, y_test), tf.float64))\n",
" \n",
" # Compute the confusion matrix\n",
" cm = tf.math.confusion_matrix(y_test, y_pred, num_classes=2)\n",
" \n",
"# pred = None\n",
"# _, n_feat = None\n",
"# v1 = model(q1)\n",
"# v2 = None\n",
" \n",
"# # Compute the cosine similarity. Using `tf.math.reduce_sum`. \n",
"# # Don't forget to use the appropriate axis argument.\n",
"# d = None\n",
"# # Check if d>threshold to make predictions\n",
"# y_pred = tf.cast(d>threshold, tf.float64)\n",
"# # take the average of correct predictions to get the accuracy\n",
"# accuracy = None\n",
"# # compute the confusion matrix using `tf.math.confusion_matrix`\n",
"# cm = tf.math.confusion_matrix\n",
" \n",
" ### END CODE HERE ###\n",
" \n",
" return accuracy, cm"
]
},
{
"cell_type": "code",
"execution_count": 63,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"colab_type": "code",
"deletable": false,
"editable": false,
"id": "yeQjHxkfpivH",
"outputId": "103b8449-896f-403d-f011-583df70afdae",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy 0.7259765625\n",
"Confusion matrix:\n",
"[[4876 1506]\n",
" [1300 2558]]\n"
]
}
],
"source": [
"# this takes around 1 minute\n",
"accuracy, cm = classify(Q1_test,Q2_test, y_test, 0.7, model, batch_size = 512) \n",
"print(\"Accuracy\", accuracy.numpy())\n",
"print(f\"Confusion matrix:\\n{cm.numpy()}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "CsokYZwhpivJ"
},
"source": [
"### **Expected Result** \n",
"Accuracy ~0.725\n",
"\n",
"Confusion matrix:\n",
"```\n",
"[[4876 1506]\n",
" [1300 2558]]\n",
" ```"
]
},
{
"cell_type": "code",
"execution_count": 64,
"metadata": {
"deletable": false,
"editable": false,
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[92mAll tests passed!\n"
]
}
],
"source": [
"# Test your function!\n",
"w3_unittest.test_classify(classify, model)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "4-STC44Ywt1I"
},
"source": [
"<a name='5'></a>\n",
"\n",
"# Part 5: Testing with your own questions\n",
"\n",
"In this final section you will test the model with your own questions. You will write a function `predict` which takes two questions as input and returns `True` or `False` depending on whether the question pair is a duplicate or not. "
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "21h3Y0FNpivK"
},
"source": [
"Write a function `predict` that takes in two questions, the threshold and the model, and returns whether the questions are duplicates (`True`) or not duplicates (`False`) given a similarity threshold. \n",
"\n",
"<a name='ex06'></a>\n",
"### Exercise 05\n",
"\n",
"\n",
"**Instructions:** \n",
"- Create a tensorflow.data.Dataset from your two questions. Again, labels are not important, so you simply write `None`\n",
"- use the trained model output to create `v1`, `v2`\n",
"- compute the cosine similarity (dot product) of `v1`, `v2`\n",
"- compute `res` by comparing d to the threshold\n"
]
},
{
"cell_type": "code",
"execution_count": 77,
"metadata": {
"colab": {},
"colab_type": "code",
"deletable": false,
"id": "kg0wQ8qhpivL",
"tags": [
"graded"
]
},
"outputs": [],
"source": [
"# GRADED FUNCTION: predict\n",
"def predict(question1, question2, threshold, model, verbose=False):\n",
" \"\"\"Function for predicting if two questions are duplicates.\n",
"\n",
" Args:\n",
" question1 (str): First question.\n",
" question2 (str): Second question.\n",
" threshold (float): Desired threshold.\n",
" model (tensorflow.keras.Model): The Siamese model.\n",
" verbose (bool, optional): If the results should be printed out. Defaults to False.\n",
"\n",
" Returns:\n",
" bool: True if the questions are duplicates, False otherwise.\n",
" \"\"\"\n",
" generator = tf.data.Dataset.from_tensor_slices((([question1], [question2]),None)).batch(batch_size=1)\n",
" \n",
" ### START CODE HERE ###\n",
" \n",
" # Call the predict method of your model and save the output into v1v2\n",
" v1v2 = model.predict(generator)\n",
" out_size = v1v2.shape[1]\n",
" # Extract v1 and v2 from the model output\n",
" v1 = v1v2[:,:int(out_size/2)]\n",
" v2 = v1v2[:,int(out_size/2):]\n",
" print(v1.shape)\n",
" # Take the dot product to compute cos similarity of each pair of entries, v1, v2\n",
" # Since v1 and v2 are both vectors, use the function tf.math.reduce_sum instead of tf.linalg.matmul\n",
" d = tf.reduce_sum(v1 * v2)\n",
" # Is d greater than the threshold?\n",
" res = d > threshold\n",
"\n",
" ### END CODE HERE ###\n",
" \n",
" if(verbose):\n",
" print(\"Q1 = \", question1, \"\\nQ2 = \", question2)\n",
" print(\"d = \", d.numpy())\n",
" print(\"res = \", res.numpy())\n",
"\n",
" return res.numpy()"
]
},
{
"cell_type": "code",
"execution_count": 78,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 102
},
"colab_type": "code",
"deletable": false,
"editable": false,
"id": "Raojyhw3z7HE",
"outputId": "b0907aaf-63c0-448d-99b0-012359381a97",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1/1 [==============================] - 0s 16ms/step\n",
"(1, 128)\n",
"Q1 = When will I see you? \n",
"Q2 = When can I see you again?\n",
"d = 0.8422112\n",
"res = True\n"
]
},
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 78,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Feel free to try with your own questions\n",
"question1 = \"When will I see you?\"\n",
"question2 = \"When can I see you again?\"\n",
"# 1 means it is duplicated, 0 otherwise\n",
"predict(question1 , question2, 0.7, model, verbose = True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "7OEKCa_hpivP"
},
"source": [
"##### Expected Output\n",
"If input is:\n",
"```\n",
"question1 = \"When will I see you?\"\n",
"question2 = \"When can I see you again?\"\n",
"```\n",
"\n",
"Output is (d may vary a bit):\n",
"```\n",
"1/1 [==============================] - 0s 13ms/step\n",
"Q1 = When will I see you? \n",
"Q2 = When can I see you again?\n",
"d = 0.8422112\n",
"res = True\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 79,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 102
},
"colab_type": "code",
"deletable": false,
"editable": false,
"id": "DZccIQ_lpivQ",
"outputId": "3ed0af7e-5d44-4eb3-cebe-d6f74abe3e41",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1/1 [==============================] - 0s 24ms/step\n",
"(1, 128)\n",
"Q1 = Do they enjoy eating the dessert? \n",
"Q2 = Do they like hiking in the desert?\n",
"d = 0.12625802\n",
"res = False\n"
]
},
{
"data": {
"text/plain": [
"False"
]
},
"execution_count": 79,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Feel free to try with your own questions\n",
"question1 = \"Do they enjoy eating the dessert?\"\n",
"question2 = \"Do they like hiking in the desert?\"\n",
"# 1 means it is duplicated, 0 otherwise\n",
"predict(question1 , question2, 0.7, model, verbose=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "lWrt-yCMpivS"
},
"source": [
"##### Expected output\n",
"\n",
"If input is:\n",
"```\n",
"question1 = \"Do they enjoy eating the dessert?\"\n",
"question2 = \"Do they like hiking in the desert?\"\n",
"```\n",
"\n",
"Output (d may vary a bit):\n",
"\n",
"```\n",
"1/1 [==============================] - 0s 12ms/step\n",
"Q1 = Do they enjoy eating the dessert? \n",
"Q2 = Do they like hiking in the desert?\n",
"d = 0.12625802\n",
"res = False\n",
"\n",
"False\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "NAfV3l5Zwt1L"
},
"source": [
"You can see that the Siamese network is capable of catching complicated structures. Concretely it can identify question duplicates although the questions do not have many words in common. \n",
" "
]
},
{
"cell_type": "code",
"execution_count": 80,
"metadata": {
"deletable": false,
"editable": false,
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1/1 [==============================] - 1s 556ms/step\n",
"(1, 128)\n",
"1/1 [==============================] - 0s 16ms/step\n",
"(1, 128)\n",
"1/1 [==============================] - 0s 23ms/step\n",
"(1, 128)\n",
"1/1 [==============================] - 0s 16ms/step\n",
"(1, 128)\n",
"1/1 [==============================] - 0s 16ms/step\n",
"(1, 128)\n",
"\u001b[92mAll tests passed!\n"
]
}
],
"source": [
"# Test your function!\n",
"w3_unittest.test_predict(predict, model)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "FsE8tdTLwt1M"
},
"source": [
"<a name='6'></a>\n",
"\n",
"### On Siamese networks\n",
"\n",
"Siamese networks are important and useful. Many times there are several questions that are already asked in quora, or other platforms and you can use Siamese networks to avoid question duplicates. \n",
"\n",
"Congratulations, you have now built a powerful system that can recognize question duplicates. In the next course we will use transformers for machine translation, summarization, question answering, and chatbots. \n"
]
  }
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"machine_shape": "hm",
"name": "C3_W4_Assignment_Solution.ipynb",
"provenance": [],
"toc_visible": true
},
"coursera": {
"schema_names": [
"NLPC3-4A"
]
},
"grader_version": "1",
"kernelspec": {
"display_name": "seasme",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
|