339. sweeping floor (604)
340. swimming backstroke (1077)
341. swimming breast stroke (833)
342. swimming butterfly stroke (678)
343. swing dancing (512)
344. swinging legs (409)
345. swinging on something (482)
346. sword fighting (473)
347. tai chi (1070)
348. taking a shower (378)
349. tango dancing (1114)
350. tap dancing (947)
351. tapping guitar (815)
352. tapping pen (703)
353. tasting beer (588)
354. tasting food (613)
355. testifying (497)
356. texting (704)
357. throwing axe (816)
358. throwing ball (634)
359. throwing discus (1104)
360. tickling (610)
361. tobogganing (1147)
362. tossing coin (461)
363. tossing salad (463)
364. training dog (481)
365. trapezing (786)
366. trimming or shaving beard (981)
367. trimming trees (665)
368. triple jump (784)
369. tying bow tie (387)
370. tying knot (not on a tie) (844)
371. tying tie (673)
372. unboxing (858)
373. unloading truck (406)
374. using computer (937)

# The Kinetics Human Action Video Dataset

Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, Andrew Zisserman. arXiv:1705.06950 [cs.CV], 19 May 2017. http://arxiv.org/pdf/1705.06950

Abstract: We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.
375. using remote controller (not gaming) (549)
376. using segway (387)
377. vault (562)
378. waiting in line (430)
379. walking the dog (1145)
380. washing dishes (1048)
381. washing feet (862)
382. washing hair (423)
383. washing hands (916)
384. water skiing (763)
385. water sliding (420)
386. watering plants (680)
387. waxing back (537)
388. waxing chest (760)
389. waxing eyebrows (720)
390. waxing legs (948)
391. weaving basket (743)
392. welding (759)
393. whistling (416)
394. windsurfing (1114)
395. wrapping present (861)
396. wrestling (488)
397. writing (735)
398. yawning (398)
399. yoga (1140)
400. zumba (1093)
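The class list above pairs each action with its clip count. As a minimal sketch (assuming the `NNN. name (count)` line format shown in this appendix), the lines can be parsed into (name, count) pairs to check the paper's claim of at least 400 clips per class:

```python
import re

def parse_classes(lines):
    """Parse 'NNN. class name (count)' lines into (name, count) pairs."""
    pairs = []
    for line in lines:
        # Name may itself contain parentheses, so anchor the count at the end.
        m = re.match(r"\s*\d+\.\s+(.*?)\s+\((\d+)\)\s*$", line)
        if m:
            pairs.append((m.group(1), int(m.group(2))))
    return pairs

sample = [
    "347. tai chi (1070)",
    "370. tying knot (not on a tie) (844)",
    "400. zumba (1093)",
]
classes = parse_classes(sample)
print(min(count for _, count in classes))  # smallest clip count in this sample
```

On the full 400-class list, the same minimum would verify the "at least 400 clips per class" statement from the abstract.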
# B. List of Parent-Child Groupings

These lists are not exclusive and are not intended to be comprehensive. Rather, they are a guide for related human action classes.
arts and crafts (12) arranging flowers blowing glass brush painting carving pumpkin clay pottery making decorating the christmas tree drawing getting a tattoo knitting making jewelry spray painting weaving basket
athletics – jumping (6) high jump hurdling long jump parkour pole vault triple jump
athletics – throwing + launching (9) archery catching or throwing frisbee disc golfing hammer throw javelin throw shot put throwing axe throwing ball throwing discus
auto maintenance (4) changing oil changing wheel checking tires pumping gas
ball sports (25) bowling catching or throwing baseball
catching or throwing softball dodgeball dribbling basketball dunking basketball golf chipping golf driving golf putting hitting baseball hurling (sport) juggling soccer ball kicking field goal kicking soccer ball passing American football (in game) passing American football (not in game) playing basketball playing cricket playing kickball playing squash or racquetball playing tennis playing volleyball shooting basketball shooting goal (soccer) shot put
body motions (16) air drumming applauding baby waking up bending back clapping cracking neck drumming fingers finger snapping headbanging headbutting pumping fist shaking head stretching arm stretching leg swinging legs
cleaning (13) cleaning floor cleaning gutters cleaning pool cleaning shoes cleaning toilet cleaning windows doing laundry making bed mopping floor setting table shining shoes sweeping floor washing dishes
cloths (8) bandaging doing laundry folding clothes folding napkins ironing making bed tying bow tie tying knot (not on a tie) tying tie
communication (11) answering questions auctioning bartending celebrating crying giving or receiving award laughing news anchoring presenting weather forecast sign language interpreting testifying
cooking (22) baking cookies barbequing breading or breadcrumbing cooking chicken cooking egg cooking on campfire cooking sausages cutting pineapple cutting watermelon flipping pancake frying vegetables grinding meat making a cake making a sandwich making pizza making sushi making tea peeling apples peeling potatoes picking fruit scrambling eggs tossing salad
dancing (18) belly dancing
breakdancing capoeira cheerleading country line dancing dancing ballet dancing charleston dancing gangnam style dancing macarena jumpstyle dancing krumping marching robot dancing salsa dancing swing dancing tango dancing tap dancing zumba
eating + drinking (17) bartending dining drinking drinking beer drinking shots eating burger eating cake eating carrots eating chips eating doughnuts eating hotdog eating ice cream eating spaghetti eating watermelon opening bottle tasting beer tasting food
electronics (5) assembling computer playing controller texting using computer using remote controller (not gaming)
garden + plants (10) blowing leaves carving pumpkin chopping wood climbing tree decorating the christmas tree egg hunting mowing lawn planting trees
trimming trees watering plants
golf (3) golf chipping golf driving golf putting
gymnastics (5) bouncing on trampoline cartwheeling gymnastics tumbling somersaulting vault
hair (14) braiding hair brushing hair curling hair dying hair fixing hair getting a haircut shaving head shaving legs trimming or shaving beard washing hair waxing back waxing chest waxing eyebrows waxing legs
hands (9) air drumming applauding clapping cutting nails doing nails drumming fingers finger snapping pumping fist washing hands
head + mouth (17) balloon blowing beatboxing blowing nose blowing out candles brushing teeth gargling headbanging headbutting shaking head singing
smoking smoking hookah sneezing sniffing sticking tongue out whistling yawning
heights (15) abseiling bungee jumping climbing a rope climbing ladder climbing tree diving cliff ice climbing jumping into pool paragliding rock climbing skydiving slacklining springboard diving swinging on something trapezing
interacting with animals (19) bee keeping catching fish feeding birds feeding fish feeding goats grooming dog grooming horse holding snake ice fishing milking cow petting animal (not cat) petting cat riding camel riding elephant riding mule riding or walking with horse shearing sheep training dog walking the dog
juggling (6) contact juggling hula hooping juggling balls juggling fire juggling soccer ball spinning poi
makeup (5) applying cream doing nails dying hair filling eyebrows getting a tattoo
martial arts (10) arm wrestling capoeira drop kicking high kick punching bag punching person side kick sword fighting tai chi wrestling
miscellaneous (9) digging extinguishing fire garbage collecting laying bricks moving furniture spraying stomping grapes tapping pen unloading truck
mobility – land (20) crawling baby driving car driving tractor faceplanting hoverboarding jogging motorcycling parkour pushing car pushing cart pushing wheelchair riding a bike riding mountain bike riding scooter riding unicycle roller skating running on treadmill skateboarding surfing crowd using segway waiting in line
mobility – water (10) crossing river diving cliff jumping into pool scuba diving snorkeling springboard diving swimming backstroke swimming breast stroke swimming butterfly stroke water sliding
music (29) beatboxing busking playing accordion playing bagpipes playing bass guitar playing cello playing clarinet playing cymbals playing didgeridoo playing drums playing flute playing guitar playing harmonica playing harp playing keyboard playing organ playing piano playing recorder playing saxophone playing trombone playing trumpet playing ukulele playing violin playing xylophone recording music singing strumming guitar tapping guitar whistling
paper (12) bookbinding counting money folding napkins folding paper opening present reading book reading newspaper ripping paper
shredding paper unboxing wrapping present writing
personal hygiene (6) brushing teeth taking a shower trimming or shaving beard washing feet washing hair washing hands
playing games (13) egg hunting flying kite hopscotch playing cards playing chess playing monopoly playing paintball playing poker riding mechanical bull rock scissors paper shuffling cards skipping rope tossing coin
racquet + bat sports (8) catching or throwing baseball catching or throwing softball hitting baseball hurling (sport) playing badminton playing cricket playing squash or racquetball playing tennis
snow + ice (18) biking through snow bobsledding hockey stop ice climbing ice fishing ice skating making snowman playing ice hockey shoveling snow ski jumping skiing (not slalom or crosscountry) skiing crosscountry skiing slalom sled dog racing
snowboarding snowkiting snowmobiling tobogganing
swimming (3) swimming backstroke swimming breast stroke swimming butterfly stroke
touching person (11) carrying baby hugging kissing massaging back massaging feet massaging legs massaging person's head shaking hands slapping tickling
using tools (13) bending metal blasting sand building cabinet building shed changing oil changing wheel checking tires plastering pumping gas sanding floor sharpening knives sharpening pencil welding
water sports (8) canoeing or kayaking jetskiing kitesurfing parasailing sailing surfing water water skiing windsurfing
waxing (4) waxing back waxing chest waxing eyebrows waxing legs
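Since the groupings above are explicitly non-exclusive, they are naturally represented as a parent → children mapping that can be inverted to ask which parent groups contain a given class. A minimal sketch (group names taken from this appendix, child lists abbreviated for illustration):

```python
# Parent group -> child classes, abbreviated from the groupings above.
GROUPS = {
    "swimming": ["swimming backstroke", "swimming breast stroke",
                 "swimming butterfly stroke"],
    "mobility - water": ["crossing river", "diving cliff", "jumping into pool",
                         "swimming backstroke", "swimming breast stroke",
                         "swimming butterfly stroke", "water sliding"],
    "waxing": ["waxing back", "waxing chest", "waxing eyebrows", "waxing legs"],
}

def parents_of(action):
    """Groups are not exclusive, so an action may belong to several parents."""
    return sorted(g for g, children in GROUPS.items() if action in children)

print(parents_of("swimming backstroke"))  # ['mobility - water', 'swimming']
```

This kind of lookup is how the groupings would typically be used in practice, e.g. to pool related classes when analysing confusions.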
arXiv:1705.06476v4 [cs.CL] 8 Mar 2018
# ParlAI: A Dialog Research Software Platform
# Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh and Jason Weston Facebook AI Research
# Abstract
We introduce ParlAI (pronounced "par-lay"), an open-source software platform for dialog research implemented in Python, available at http://parl.ai. Its goal is to provide a unified framework for sharing, training and testing dialog models; integration of Amazon Mechanical Turk for data collection, human evaluation, and online/reinforcement learning; and a repository of machine learning models for comparing with others' models, and improving upon existing architectures. Over 20 tasks are supported in the first release, including popular datasets such as SQuAD, bAbI tasks, MCTest, WikiQA, QACNN, QADailyMail, CBT, bAbI Dialog, Ubuntu, OpenSubtitles and VQA. Several models are integrated, including neural models such as memory networks, seq2seq and attentive LSTMs.
# Introduction
Figure 1: The tasks in the first release of ParlAI (QA datasets, sentence completion, goal-oriented dialog, chit-chat, and visual QA/dialog).
The purpose of language is to accomplish communication goals, which typically involve a dialog between two or more communicators (Crystal, 2004). Hence, trying to solve dialog is a fundamental goal for researchers in the NLP community. From a machine learning perspective, building a learning agent capable of dialog is also fundamental for various reasons, chiefly that the solution involves achieving most of the subgoals of the field, and in many cases those subtasks are directly impactful to the task.
Figure 2: MTurk Live Chat for collecting QA datasets in ParlAI.
On the one hand dialog can be seen as a single task (learning how to talk) and on the other hand as thousands of related tasks that require different skills, all using the same input and output format. The task of booking a restaurant, chatting about sports or the news, or answering factual or perceptually-grounded questions all fall under dialog. Hence, methods that perform task transfer appear useful for the end-goal. Memory, logical and commonsense reasoning, planning, learning from interaction, learning compositionality and other AI subgoals also have clear roles in dialog.

However, to pursue these research goals, software tools should unify the different dialog subtasks and the agents that can learn from them. Working on individual datasets can lead to siloed
research, where the overfitting to specific qualities of a dataset do not generalize to solving other tasks. For example, methods that do not generalize beyond WebQuestions (Berant et al., 2013) because they specialize on knowledge bases only, SQuAD (Rajpurkar et al., 2016) because they predict start and end context indices (see Sec. 7), or bAbI (Weston et al., 2015) because they use supporting facts or make use of its simulated nature.
In this paper we present a software platform, ParlAI (pronounced "par-lay"), that provides researchers a unified framework for training and testing dialog models, especially multitask training or evaluation over many tasks at once, as well as seamless integration with Amazon Mechanical Turk. Over 20 tasks are supported in the first release, including many popular datasets, see Fig. 1. Included are examples of training neural models with PyTorch and Lua Torch1. Using Theano2 or Tensorflow3 instead is also straightforward.

The overarching goal of ParlAI is to build a community-based platform for easy access to both tasks and learning algorithms that perform well on them, in order to push the field forward. This paper describes our goals in detail, and gives a technical overview of the platform.
# 2 Goals
The goals of ParlAI are as follows:
A unified framework for development of dialog models. ParlAI aims to unify dialog dataset input formats fed to machine learning agents to a single format, and to standardize evaluation frameworks and metrics as much as possible. Researchers can submit their new tasks and their agent training code to the repository to share with others in order to aid reproducibility, and to better enable follow-on research.
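A single-format exchange can be sketched as follows. The field names (`text`, `labels`, `episode_done`) follow ParlAI's released message format, but the toy teacher and agent here are purely illustrative, not ParlAI's actual API:

```python
# Toy sketch of a unified dialog format: every task emits dict "messages"
# with the same fields, so any agent can consume any task unchanged.
def qa_teacher():
    yield {"text": "What is 2 + 2?", "labels": ["4"], "episode_done": True}
    yield {"text": "Capital of France?", "labels": ["Paris"], "episode_done": True}

class EchoLabelAgent:
    """Trivial 'agent': observes a message, replies with its first label."""
    def observe(self, msg):
        self.last = msg

    def act(self):
        return {"text": self.last["labels"][0], "episode_done": True}

agent = EchoLabelAgent()
replies = []
for message in qa_teacher():
    agent.observe(message)
    replies.append(agent.act()["text"])
print(replies)  # ['4', 'Paris']
```

Because every task speaks the same message dialect, swapping in a different teacher requires no change to the agent.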
General dialog involving many different skills. ParlAI contains a seamless combination of real and simulated language datasets, and encourages multitask model development & evaluation by making multitask models as easy to build as single task ones. This should reduce overfitting of model design to specific datasets and encourage models that perform task transfer, an important prerequisite for a general dialog agent.
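In the simplest case, multitask training over such example streams reduces to interleaving them. A hedged sketch follows; round-robin scheduling is an assumption for illustration, not necessarily ParlAI's actual multitask sampler:

```python
def round_robin(*task_streams):
    """Interleave examples from several task iterators until all are exhausted."""
    from itertools import cycle
    iterators = [iter(s) for s in task_streams]
    order = cycle(range(len(iterators)))
    exhausted = set()
    while len(exhausted) < len(iterators):
        i = next(order)
        if i in exhausted:
            continue
        try:
            yield next(iterators[i])
        except StopIteration:
            exhausted.add(i)

# Hypothetical example identifiers, for illustration only.
babi = ["babi-ex1", "babi-ex2"]
squad = ["squad-ex1", "squad-ex2", "squad-ex3"]
print(list(round_robin(babi, squad)))
# ['babi-ex1', 'squad-ex1', 'babi-ex2', 'squad-ex2', 'squad-ex3']
```

Since tasks share one message format, the downstream training loop is identical whether it consumes one stream or an interleaved mixture.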
1 http://pytorch.org/ and http://torch.ch/
2 http://deeplearning.net/software/theano/
3 https://www.tensorflow.org/
Real dialog with people. ParlAI allows collecting, training and evaluating on live dialog with humans via Amazon Mechanical Turk by making it easy to connect Turkers with a dialog agent, see Fig. 2. This also enables comparison of Turk experiments across different research groups, which has been historically difficult.
Towards a common general dialog model. Our aim is to motivate the building of new tasks and agents that move the field towards a working dialog model. Hence, each new task that goes into the repository should build towards that common goal, rather than being seen solely as a piece of independent research.
# 3 General Properties of ParlAI
ParlAI consists of a number of tasks and agents that can be used to solve them. All the tasks in ParlAI have a single format (API) which makes applying any agent to any task, or multiple tasks at once, simple. The tasks include both fixed supervised/imitation learning datasets (i.e. conversation logs) and interactive (online or reinforcement learning) tasks, as well as both real language and simulated tasks, which can all be seamlessly trained on. ParlAI also supports other media, e.g. images as well as text for visual question answering (Antol et al., 2015) or visually grounded dialog (Das et al., 2017). ParlAI automatically downloads tasks and datasets the first time they are used. One or more Mechanical Turkers can be embedded inside an environment (task) to collect data, train or evaluate learning agents.
Examples are included in the first release of training with PyTorch and Lua Torch. ParlAI uses ZeroMQ to talk to languages other than Python (such as Lua Torch). Both batch training and hogwild training of models are supported and built into the code. An example main for training an agent is given in Fig. 3.
# 4 Worlds, Agents and Teachers
The main concepts (classes) in ParlAI are worlds, agents and teachers:
• world — the environment. This can vary from being very simple, e.g. just two agents conversing, to much more complex, e.g. multiple agents in an interactive environment.
• agent — an agent that can act (especially, speak) in the world. An agent is either a learner (i.e. a machine learned system), a
teacher = SquadTeacher(opt)
agent = MyAgent(opt)
world = World(opt, [teacher, agent])
for i in range(num_exs):
    world.parley()
    print(world.display())

def parley(self):
    for agent in self.agents:
        act = agent.act()
        for other_agent in self.agents:
            if other_agent != agent:
                other_agent.observe(act)
Figure 3: ParlAI main for displaying data (top) and the code for the world.parley call (bottom).
hard-coded bot such as one designed to interact with learners, or a human (e.g. a Turker).
• teacher — a type of agent that talks to the learner in order to teach it, e.g. implements one of the tasks in Fig. 1.
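To make the observe()/act() cycle above concrete, here is a standalone sketch of a trivial agent; EchoAgent is invented for illustration and is not a ParlAI built-in (the real base class lives in core agents.py):

```python
# Standalone sketch of the observe()/act() agent API described above.
# "EchoAgent" is a hypothetical example, not part of ParlAI.

class EchoAgent:
    def __init__(self, opt):
        self.opt = opt
        self.last_obs = None

    def observe(self, observation):
        # Receive an observation/action dict from another agent.
        self.last_obs = observation

    def act(self):
        # Speak by returning a dict of the same form, here simply
        # echoing the last utterance heard.
        text = self.last_obs.get('text', '') if self.last_obs else ''
        return {'id': 'EchoAgent', 'text': text, 'episode_done': False}

agent = EchoAgent(opt={})
agent.observe({'id': 'teacher', 'text': 'Where is the milk?'})
print(agent.act()['text'])  # Where is the milk?
```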
After defining a world and the agents in it, a main loop can be run for training, testing or displaying, which calls the function world.parley() to run one time step of the world. Example code to display data is given in Fig. 3, and the output of running it is in Fig. 4.
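The parley() logic of Fig. 3 can be exercised end-to-end with a minimal standalone world; the FixedAgent stand-ins below are hypothetical and exist only to make the loop runnable:

```python
# Minimal standalone world mirroring the world.parley() code of Fig. 3:
# each agent acts in turn and every other agent observes the action.
# FixedAgent is a hypothetical stand-in for a ParlAI teacher or learner.

class FixedAgent:
    def __init__(self, name, line):
        self.name, self.line, self.heard = name, line, []

    def act(self):
        return {'id': self.name, 'text': self.line}

    def observe(self, msg):
        self.heard.append(msg['text'])

class World:
    def __init__(self, agents):
        self.agents = agents

    def parley(self):
        for agent in self.agents:
            act = agent.act()
            for other_agent in self.agents:
                if other_agent is not agent:
                    other_agent.observe(act)

teacher = FixedAgent('teacher', 'Where is the milk?')
student = FixedAgent('student', 'kitchen')
World([teacher, student]).parley()
print(teacher.heard, student.heard)  # ['kitchen'] ['Where is the milk?']
```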
# 5 Actions and Observations
All agents (including teachers) speak to each other in a single common format — the observation/action object (a python dict), see Fig. 5. It is used to pass text, labels and rewards between agents. The same object type is used for both talking (acting) and listening (observing), but with different values in the fields. Hence, the object is returned from agent.act() and passed in to agent.observe(), see Fig. 3.
The fields of the message are as follows:
• text: a speech act.
• id: the speaker's identity.
• reward: a real-valued reward assigned to the receiver of the message.
• episode_done: indicating the end of a dialog.
For supervised datasets, there are some additional fields that can be used:
• label: a set of answers the speaker is expecting to receive in reply, e.g. for QA datasets the right answers to a question.
• label_candidates: a set of possible ways to respond supplied by a teacher, e.g. for multiple choice datasets or ranking tasks.
• text_candidates: ranked candidate predictions from a learner. Used to evaluate ranking
python examples/display_data.py -t babi
[babi:Task1k:4]: The office is north of the kitchen.
The bathroom is north of the office.
What is north of the kitchen?
[cands: office|garden|hallway|bedroom|kitchen|bathroom]
[RepeatLabelAgent]: office
- - - - - - - - - - - - - - - - - - - - -
[babi:Task1k:2]: Daniel went to the kitchen.
Daniel grabbed the football there.
Mary took the milk there.
Mary journeyed to the office.
Where is the milk?
[cands: office|garden|hallway|bedroom|kitchen|bathroom]
[RepeatLabelAgent]: office
Figure 4: Example output to display data of a given task (see Fig. 3 for corresponding code).
metrics, rather than just evaluate the single response in the text field.
• metrics: a teacher can communicate to a learning agent metrics on its performance.
Finally, other media can also be supported with additional fields:
• image: an image, e.g. for Visual Question Answering or Visual Dialog datasets.
As the dict is extensible, we can add more fields over time, e.g. for audio and other sensory data, as well as actions other than speech acts.
Each of these fields is technically optional, depending on the dataset, though the text field will most likely be used in nearly all exchanges. A typical exchange from a ParlAI training set is shown in Fig. 6.
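Putting the fields together, a supervised teacher message and a learner reply might look as follows; the story and answers are invented, in the style of the bAbI exchange of Fig. 6:

```python
# Illustrative observation/action dicts using the fields listed above;
# the particular story and answers are made up for this example.
teacher_act = {
    'id': 'babi',
    'text': 'Sam went to the kitchen.\nPat gave Sam the milk.\nWhere is the milk?',
    'labels': ['kitchen'],
    'label_candidates': ['hallway', 'kitchen', 'bathroom'],
    'episode_done': False,
}

# The learner replies with the same dict type, filling in different fields.
learner_act = {'id': 'learner', 'text': 'kitchen'}

print(learner_act['text'] in teacher_act['label_candidates'])  # True
```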
# 6 Code Structure
The ParlAI codebase has five main directories:
• core: the primary code for the platform.
• agents: contains agents which can interact with the worlds/tasks (e.g. learning models).
• examples: contains examples of different mains (display data, training and evaluation).
• tasks: contains code for the different tasks available from within ParlAI.
• mturk: contains code for setting up Mechanical Turk and sample MTurk tasks.
# 6.1 Core
The core library contains the following files:
• agents.py: defines the Agent base class for all agents, which implements the observe() and act() methods, the Teacher class which also reports metrics, and MultiTaskTeacher for multitask training.
Observation/action dict
Passed back and forth between agents & environment.
Contains:
.text              text of speaker(s)
.id                id of speaker(s)
.reward            for reinforcement learning
.episode_done      signals end of episode
For supervised dialog datasets:
.label
.label_candidates  multiple choice options
.text_candidates   ranked candidate responses
.metrics           evaluation metrics
Other media:
.image             for VQA or Visual Dialog
Figure 5: The observation/action dict is the central message passing object in ParlAI: agents send this message to speak, and receive a message of this form to observe other speakers and the environment.
• dialog_teacher.py: the base teacher class for doing dialog with fixed chat logs.
• worlds.py: defines the base World class, DialogPartnerWorld for two speakers, MultiAgentDialogWorld for more than two, and two containers that can wrap a chosen environment: BatchWorld for batch training, and HogwildWorld for training across multiple threads.
• dict.py: code for building language dictionaries.
• metrics.py: computes exact match, F1 and ranking metrics for evaluation.
• params.py: uses argparse to interpret command line arguments for ParlAI.
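For intuition, the exact match and F1 metrics can be sketched as below; this is a simplified stand-in rather than ParlAI's actual metrics.py:

```python
# Simplified sketch of exact-match and F1 evaluation metrics: F1 is the
# harmonic mean of token precision and recall between a prediction and
# a reference answer. Not ParlAI's actual implementation.
from collections import Counter

def exact_match(prediction, answer):
    return prediction.strip().lower() == answer.strip().lower()

def f1_score(prediction, answer):
    pred_tokens = prediction.lower().split()
    ans_tokens = answer.lower().split()
    # Multiset intersection counts tokens shared by prediction and answer.
    overlap = sum((Counter(pred_tokens) & Counter(ans_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ans_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match('Kitchen', 'kitchen'))                 # True
print(round(f1_score('the red house', 'red house'), 2))  # 0.8
```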
# 6.2 Agents
The agents directory contains machine learning agents. Currently available within this directory:
• drqa: an attentive LSTM model DrQA (Chen et al., 2017) implemented in PyTorch that has competitive results on SQuAD (Rajpurkar et al., 2016) amongst other datasets.
• memnn: code for an end-to-end memory network (Sukhbaatar et al., 2015) in Lua Torch.
• remote_agent: basic class for any agent connecting over ZeroMQ.
• seq2seq: basic GRU sequence to sequence model (Sutskever et al., 2014).
• ir_baseline: information retrieval (IR) baseline that scores responses with TFIDF
Teacher: {
    'text': 'Sam went to the kitchen.\nPat gave Sam the milk.\nWhere is the milk?',
    'labels': ['kitchen'],
    'label_candidates': ['hallway', 'kitchen', 'bathroom'],
    'episode_done': False
}
Student: {
    'text': 'hallway'
}
Teacher: {
    'text': 'Sam went to the hallway\nPat went to the bathroom\nWhere is the milk?',
    'labels': ['hallway'],
    'label_candidates': ['hallway', 'kitchen', 'bathroom'],
    'episode_done': True
}
Student: {
    'text': 'hallway'
}
...
Figure 6: A typical exchange from a ParlAI training set involves messages passed using the observation/action dict (the test set would not include labels). Shown here is the bAbI dataset.
weighted matching (Ritter et al., 2011).
• repeat_label: basic class for merely repeating all data sent to it (e.g. for debugging).
# 6.3 Examples
This directory contains examples of different mains:
• display_data: display data from a particular task provided on the command-line.
• display_model: show the predictions of a provided model.
• eval_model: compute evaluation metrics for a given model on a given task.
• train_model: execute a standard training procedure with a given task and model, including logging and possibly alternating between training and validation.
For example, one can display 10 random examples from the bAbI tasks (Weston et al., 2015):
python display_data.py -t babi -n 10
Display multitasking bAbI and SQuAD (Rajpurkar et al., 2016) at the same time:
python display_data.py -t babi,squad
Evaluate an IR baseline model on the Movies Subreddit:
python eval_model.py -m ir_baseline -t '#moviedd-reddit' -dt valid
Train an attentive LSTM model on the SQuAD dataset with a batch size of 32 examples:
# 6.4 Tasks
Over 20 tasks are supported in the ï¬rst release, including popular datasets such as SQuAD (Ra- jpurkar et al., 2016), bAbI tasks (Weston et al., (Hermann 2015), QACNN and QADailyMail et al., 2015), CBT (Hill et al., 2015), bAbI Dialog tasks (Bordes and Weston, 2016), Ubuntu (Lowe et al., 2015) and VQA (Antol et al., 2015). All the datasets in the ï¬rst release are shown in Fig. 14. The tasks are separated into ï¬ve categories: ⢠Question answering (QA): one of the sim- plest forms of dialog, with only 1 turn per speaker. Any intelligent dialog agent should be capable of answering questions, and there are many kinds of questions (and hence datasets) that one can build, providing a set of very important tests. Question answering is particularly useful in that the evaluation is simpler than other forms of dialog if the dataset is labeled with QA pairs and the ques- tions are mostly unambiguous. | 1705.06476#16 | ParlAI: A Dialog Research Software Platform | We introduce ParlAI (pronounced "par-lay"), an open-source software platform
for dialog research implemented in Python, available at http://parl.ai. Its
goal is to provide a unified framework for sharing, training and testing of
dialog models, integration of Amazon Mechanical Turk for data collection, human
evaluation, and online/reinforcement learning; and a repository of machine
learning models for comparing with others' models, and improving upon existing
architectures. Over 20 tasks are supported in the first release, including
popular datasets such as SQuAD, bAbI tasks, MCTest, WikiQA, QACNN, QADailyMail,
CBT, bAbI Dialog, Ubuntu, OpenSubtitles and VQA. Several models are integrated,
including neural models such as memory networks, seq2seq and attentive LSTMs. | http://arxiv.org/pdf/1705.06476 | Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, Jason Weston | cs.CL | null | null | cs.CL | 20170518 | 20180308 | [
{
"id": "1612.03969"
},
{
"id": "1511.02301"
},
{
"id": "1606.05250"
},
{
"id": "1511.08130"
},
{
"id": "1502.05698"
},
{
"id": "1704.00051"
},
{
"id": "1606.04582"
},
{
"id": "1703.06585"
},
{
"id": "1506.08909"
},
{
"id": "1605.07683"
}
] |
1705.06476 | 17 | the agent has to ï¬ll in a missing word in the next utterance in a dialog. Again, this is special- ized dialog task, but it has the advantage that the datasets are cheap to make and evaluation is simple, which is why the community has built several such datasets.
⢠Goal-Oriented Dialog: a more realistic class of tasks is where there is a goal to be achieved by the end of the dialog. For example, a cus- tomer and a travel agent discussing a ï¬ight, one speaker recommending another a movie to watch, and so on.
• Chit-Chat: dialog tasks where there may not be an explicit goal, but more of a discussion — for example two speakers discussing sports, movies or a mutual interest.
• Visual Dialog: dialog is often grounded in physical objects in the world, so we also include dialog tasks with images as well as text.
Choosing a task in ParlAI is as easy as specifying it on the command line, as shown in the dataset display utility, Fig. 4. If the dataset has not been used before, ParlAI will automatically download it. As all datasets are treated in the same way in ParlAI (with a single dialog API, see Sec. 5), a dialog agent can switch training and testing between any of them. Importantly, one can specify many
4 All dataset descriptions and references are at http://parl.ai in the README.md and task_list.py.
tasks at once (multitasking) by simply providing a comma-separated list, e.g. the command line arguments -t babi,squad, to use those two datasets, or even all the QA datasets at once (-t #qa) or indeed every task in ParlAI at once (-t #all). The aim is to make it easy to build and evaluate very rich dialog models.
Each task is contained in a folder with the following standardized files:
• build.py: file for setting up data for the task, including downloading the data the first time it is requested.
• agents.py: contains agents that live in the world of the task.
• worlds.py: optionally added for tasks that need to define new/complex environments.
To add a new task, one must implement build.py to download any required data, and agents.py for the teacher. If the data consist of fixed logs/dialog scripts such as in many supervised datasets (SQuAD, Ubuntu, etc.) there is very little code to write. For more complex setups where an environment with interaction has to be defined, new worlds and/or teachers can be implemented.
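For a fixed-log dataset, the task's teacher typically just feeds examples to the base dialog teacher; the generator below is a standalone sketch of that data-loading pattern, with the task data and file name invented for illustration:

```python
# Standalone sketch of the data-loading pattern a fixed-log teacher uses:
# ParlAI's dialog teacher base class drives a generator like this one.
# The example questions, answers and file path are hypothetical.

def setup_data(path):
    # Each yield pairs (text, labels, reward, label_candidates) with a
    # flag marking whether the entry starts a new episode.
    examples = [
        ('Where is the milk?', ['kitchen']),
        ('Where is Sam?', ['hallway']),
    ]
    for text, labels in examples:
        yield (text, labels, None, None), True

entries = list(setup_data('my_task_train.txt'))
print(len(entries))  # 2
```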
# 6.5 Mechanical Turk
An important part of ParlAI is seamless integration with Mechanical Turk for data collection, training or evaluation. Human Turkers are also viewed as agents in ParlAI and hence human-human, human-bot, or multiple humans and bots in group chat can all converse within the standard framework, switching out the roles as desired with no code changes to the agents. This is because Turkers also receive and send via the same interface: using the fields of the observation/action dict. We provide two examples in the first release:
(i) qa_collector: an agent that talks to Turkers to collect question-answer pairs given a context paragraph to build a QA dataset, see Fig. 2.
(ii) model_evaluator: an agent which collects ratings from Turkers on the performance of a bot on a given task.
Running a new MTurk task involves implementing and running a main file (like run.py) and defining several task-specific parameters for the world and agent(s) you wish humans to talk to. For data collection tasks the agent should pose the problem and ask the Turker for e.g. the answers to questions, see Fig. 2. Other parameters include the task description, the role of the Turker in the
1705.06476 | 20 | task, keywords to describe the task, the number of hits and the rewards for the Turkers. One can run in a sandbox mode before launching the real task where Turkers are paid.
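The task-specific parameters described above might be gathered into a configuration like the following sketch; all key names and values here are illustrative assumptions, not ParlAI's actual option names:

```python
# Hypothetical parameter set for launching an MTurk data-collection task;
# the key names below are illustrative, not ParlAI's real option keys.
task_config = {
    "task_description": "Ask questions about a paragraph and record answers.",
    "turker_role": "question asker",
    "keywords": ["dialog", "question answering"],
    "num_hits": 100,
    "reward_per_hit": 0.25,   # dollars
    "is_sandbox": True,       # test the task without paying Turkers first
}

def launch(config):
    mode = "sandbox" if config["is_sandbox"] else "live"
    budget = config["num_hits"] * config["reward_per_hit"]
    return f"{mode} run: {config['num_hits']} HITs, ${budget:.2f} budget"

print(launch(task_config))  # sandbox run: 100 HITs, $25.00 budget
```

Flipping `is_sandbox` to `False` would correspond to moving from the free test mode to the real, paid launch.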
For online training or evaluation, the Turker can talk to your machine learning agent, e.g. LSTM, memory network or other implemented technique. New tasks can be checked into the repository so researchers can share data collection and data evaluation procedures and reproduce experiments.
# 7 Demonstrative Experiment
To demonstrate ParlAI in action, we give results in Table 1 of DrQA, an attentive LSTM architecture with single task and multitask training on the SQuAD and bAbI tasks, a combination not shown before with any method, to our knowledge.
This experiment simultaneously shows the power of ParlAI (how easy it is to set up this experiment) and the limitations of current methods. Almost all methods working well on SQuAD have been designed to predict a phrase from the given context (they are given labeled start and end indices in training). Hence, those models cannot be applied to all dialog datasets, e.g. some of the bAbI tasks include yes/no questions, where yes and no do not appear in the context. This highlights that researchers should not focus models on a single dataset. ParlAI does not provide start and end label indices as its API is dialog only, see Fig. 5. This is a deliberate choice that discourages such dataset overfitting/specialization. However, this also results in a slight drop in performance because less information is given [5] (66.4 EM vs. 69.5 EM, see (Chen et al., 2017)), which is still in the range of many existing well-performing methods, see https://stanford-qa.com.
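Footnote 5 describes recovering training start/end indices by string-matching the answer in the context. A minimal sketch of that procedure (whitespace tokenization and the fixed random seed are assumptions for illustration):

```python
import random

def candidate_spans(context_tokens, answer_tokens):
    """All (start, end) token index pairs where the answer occurs in the context."""
    n, m = len(context_tokens), len(answer_tokens)
    return [(i, i + m - 1) for i in range(n - m + 1)
            if context_tokens[i:i + m] == answer_tokens]

def pick_span(context, answer, rng=random.Random(0)):
    # Randomly choose among matching spans; often there is exactly one.
    spans = candidate_spans(context.split(), answer.split())
    return rng.choice(spans) if spans else None

ctx = "the cat sat on the mat near the cat"
print(candidate_spans(ctx.split(), "the cat".split()))  # [(0, 1), (7, 8)]
```

When the answer string appears only once, the recovered span is unique; otherwise a span is picked at random, which is exactly the source of the small EM drop discussed above.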
Overall, while DrQA can solve some of the bAbI tasks and performs well on SQuAD, it does not match the best performing methods on bAbI (Seo et al., 2016; Henaff et al., 2016), and multitasking does not help. Hence, ParlAI lays out the challenge to the community to find learning algorithms that are generally applicable and that benefit from training over many dialog datasets.
[5] As we now do not know the location of the true answer, we randomly pick the start and end indices of any context phrase matching the given training set answer; in some cases this is unique.
| bAbI 10k Task              | Single | Multitask |
|----------------------------|--------|-----------|
| 1: Single Supporting Fact  | 100    | 100       |
| 2: Two Supporting Facts    | 98.1   | 54.3      |
| 3: Three Supporting Facts  | 45.4   | 58.1      |
| 4: Two Arg. Relations      | 100    | 100       |
| 5: Three Arg. Relations    | 98.9   | 98.2      |
| 11: Basic Coreference      | 100    | 100       |
| 12: Conjunction            | 100    | 100       |
| 13: Compound Coref.        | 100    | 100       |
| 14: Time Reasoning         | 99.8   | 99.9      |
| 16: Basic Induction        | 47.7   | 48.2      |
| SQuAD (Dev. Set)           | 66.4   | 63.4      |

Table 1: Test Accuracy of DrQA on bAbI 10k and SQuAD (Exact Match metric) using ParlAI. The subset of bAbI tasks for which the answer is exactly contained in the text is used.
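The Exact Match (EM) metric used in Table 1 compares a normalized prediction string to the normalized gold answer. A simplified sketch; the normalization steps here (lowercasing, punctuation and article removal, whitespace collapsing) follow the common SQuAD-style convention and are an assumption rather than taken from this paper:

```python
import re
import string

def normalize(s):
    # Simplified SQuAD-style answer normalization (assumed convention):
    # lowercase, strip punctuation, drop articles, collapse whitespace.
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

print(exact_match("The Eiffel Tower!", "eiffel tower"))  # 1.0
```

A dataset-level EM score is then just the mean of this per-example 0/1 value.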
# 8 Related Software
There are many existing independent dialog datasets, and training code for individual models that work on some of them. Many are framed in slightly different ways (different formats, with different types of supervision), and ParlAI attempts to unify this fragmented landscape.
There are some existing software platforms that are related in their scope, but not in their specialization. OpenAI's Gym and Universe [6] are toolkits for developing and comparing reinforcement learning (RL) algorithms. Gym is for games like Pong or Go, and Universe is for online games and websites. Neither focuses on dialog or covers the case of supervised datasets as we do.
CommAI [7] is a framework that uses textual communication for the goal of developing artificial general intelligence through incremental tasks that test increasingly more complex skills, as described in (Mikolov et al., 2015). CommAI is in an RL setting, and contains only synthetic datasets, rather than real natural language datasets as we do here. In that regard it has a different focus to ParlAI, which emphasizes the more immediate task of real dialog, rather than directly on evaluation of machine intelligence.

[6] https://gym.openai.com/ and https://universe.openai.com/
[7] https://github.com/facebookresearch/CommAI-env

# 9 Conclusion and Outlook

ParlAI is a framework allowing the research community to share existing and new tasks for dialog as well as agents that learn on them, and to collect and evaluate conversations between agents and humans via Mechanical Turk. We hope this tool enables systematic development and evaluation of dialog agents, helps push the state of the art in dialog further, and benefits the field as a whole.

# Acknowledgments
We thank Mike Lewis, Denis Yarats, Douwe Kiela, Michael Auli, Y-Lan Boureau, Arthur Szlam, Marc'Aurelio Ranzato, Yuandong Tian, Maximilian Nickel, Martin Raison, Myle Ott, Marco Baroni, Leon Bottou and other members of the FAIR team for discussions helpful to building ParlAI.
# References
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425–2433.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In EMNLP, volume 2, page 6.

Antoine Bordes and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051.
David Crystal. 2004. The Cambridge Encyclopedia of the English Language. Ernst Klett Sprachen.

Abhishek Das, Satwik Kottur, José MF Moura, Stefan Lee, and Dhruv Batra. 2017. Learning cooperative visual dialog agents with deep reinforcement learning. arXiv preprint arXiv:1703.06585.

Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693–1701.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The Goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301.
Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu Dialogue Corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909.

Tomas Mikolov, Armand Joulin, and Marco Baroni. 2015. A roadmap towards machine intelligence. arXiv preprint arXiv:1511.08130.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.

Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In EMNLP, pages 583–593. Association for Computational Linguistics.
Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Query-reduction networks for question answering. arXiv preprint arXiv:1606.04582.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2440–2448.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.

arXiv:1705.04146v3 [cs.AI] 23 Oct 2017
# Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom (DeepMind; University of Oxford)
{lingwang,dyogatama,cdyer,pblunsom}@google.com
# Abstract
Solving algebraic word problems requires executing a series of arithmetic operations (a program) to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.

used natural language specifications of algebraic word problems, and solved these by either learning to fill in templates that can be solved with equation solvers (Hosseini et al., 2014; Kushman et al., 2014) or inferring and modeling operation sequences (programs) that lead to the final answer (Roy and Roth, 2015).
In this paper, we learn to solve algebraic word problems by inducing and modeling programs that generate not only the answer, but an answer rationale, a natural language explanation interspersed with algebraic expressions justifying the overall solution. Such rationales are what examiners require from students in order to demonstrate understanding of the problem solution; they play the very same role in our task. Not only do natural language rationales enhance model interpretability, but they provide a coarse guide to the structure of the arithmetic programs that must be executed. In fact the learner we propose (which relies on a heuristic search; §4) fails to solve this task without modeling the rationales: the search space is too unconstrained.
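To make concrete how a rationale can scaffold a program, here is an invented toy example (not taken from the paper's dataset): each rationale step pairs a natural-language explanation with a small arithmetic operation, and executing the operations in sequence derives the final answer:

```python
# An invented example: each rationale step pairs an explanation with
# an arithmetic operation; executing the steps derives the answer.
steps = [
    ("total cost of 3 pens at $2 each", ("mul", 3, 2)),
    ("money left from $10",             ("sub", 10, "prev")),
]

def run(steps):
    ops = {"add": lambda a, b: a + b, "sub": lambda a, b: a - b,
           "mul": lambda a, b: a * b, "div": lambda a, b: a / b}
    prev = None
    for text, (op, a, b) in steps:
        # "prev" lets a step reuse the previous intermediate result.
        a = prev if a == "prev" else a
        b = prev if b == "prev" else b
        prev = ops[op](a, b)
        print(f"{text}: {prev}")
    return prev

print(run(steps))  # 4
```

The intermediate results (6, then 4) are the "milestones" that the rationale makes explicit, which is what constrains the program search space.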
# A Deep Reinforced Model for Abstractive Summarization (arXiv:1705.04304)

Romain Paulus, Caiming Xiong, Richard Socher

# ABSTRACT
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias": they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
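The combination of supervised word prediction and RL mentioned in the abstract is commonly realized as a weighted sum of a maximum-likelihood loss and a policy-gradient loss with a baseline. The sketch below shows the shape of such a mixed objective; the function names, the self-critical-style baseline, and the mixing weight are illustrative assumptions, not the paper's exact formulation:

```python
def rl_loss(sample_log_prob, sample_reward, baseline_reward):
    # Policy-gradient-style term with a baseline (e.g. the reward of a
    # greedily decoded summary); rewards would come from e.g. ROUGE.
    # Minimizing this term raises the log-probability of samples that
    # score better than the baseline.
    return (baseline_reward - sample_reward) * sample_log_prob

def mixed_loss(ml_loss, sample_log_prob, sample_reward, baseline_reward,
               gamma=0.95):
    # gamma interpolates between pure RL (1.0) and pure supervised (0.0).
    rl = rl_loss(sample_log_prob, sample_reward, baseline_reward)
    return gamma * rl + (1 - gamma) * ml_loss

# A sampled summary scored 0.4 vs. a baseline of 0.3, with total
# log-probability -1.5 and a supervised cross-entropy loss of 2.0.
print(round(mixed_loss(2.0, -1.5, 0.4, 0.3), 4))  # 0.2425
```

In a real model these scalars would be per-sequence tensors and the losses would be backpropagated jointly; the sketch only shows how the two objectives are blended.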
# INTRODUCTION
Text summarization is the process of automatically generating natural language summaries from an input document while retaining the important points. By condensing large quantities of information into short, informative summaries, summarization can aid many downstream applications such as creating news digests, search, and report generation.

# Introduction
Behaving intelligently often requires mathematical reasoning. Shopkeepers calculate change, tax, and sale prices; agriculturists calculate the proper amounts of fertilizers, pesticides, and water for their crops; and managers analyze productivity. Even determining whether you have enough money to pay for a list of items requires applying addition, multiplication, and comparison. Solving these tasks is challenging as it involves recognizing how goals, entities, and quantities in the real-world map onto a mathematical formalization, computing the solution, and mapping the solution back onto the world. As a proxy for the richness of the real world, a series of papers have
This work is thus related to models that can explain or rationalize their decisions (Hendricks et al., 2016; Harrison et al., 2017). However, the use of rationales in this work is quite different from the role they play in most prior work, where interpretation models are trained to generate plausible sounding (but not necessarily accurate) post-hoc descriptions of the decision making process they used. In this work, the rationale is generated as a latent variable that gives rise to the answer; it is thus a more faithful representation of the steps used in computing the answer.

There are two prominent types of summarization algorithms. First, extractive summarization systems form summaries by copying parts of the input (Dorr et al., 2003; Nallapati et al., 2017). Second, abstractive summarization systems generate new phrases, possibly rephrasing or using words that were not in the original text (Chopra et al., 2016; Nallapati et al., 2016).
Neural network models (Nallapati et al., 2016) based on the attentional encoder-decoder model for machine translation (Bahdanau et al., 2014) were able to generate abstractive summaries with high ROUGE scores. However, these systems have typically been used for summarizing short input sequences (one or two sentences) to generate even shorter summaries. For example, the summaries on the DUC-2004 dataset generated by the state-of-the-art system by Zeng et al. (2016) are limited to 75 characters.
Nallapati et al. (2016) also applied their abstractive summarization model on the CNN/Daily Mail dataset (Hermann et al., 2015), which contains input sequences of up to 800 tokens and multi- sentence summaries of up to 100 tokens. But their analysis illustrates a key problem with attentional encoder-decoder models: they often generate unnatural summaries consisting of repeated phrases. | 1705.04304#2 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04304 | 3 | We present a new abstractive summarization model that achieves state-of-the-art results on the CNN/Daily Mail and similarly good results on the New York Times dataset (NYT) (Sandhaus, 2008). To our knowledge, this is the first end-to-end model for abstractive summarization on the NYT dataset. We introduce a key attention mechanism and a new learning objective to address the repeating phrase problem: (i) we use an intra-temporal attention in the encoder that records previous attention weights for each of the input tokens while a sequential intra-attention model in the decoder
Figure 1: Illustration of the encoder and decoder attention functions combined. The two context vectors (marked "C") are computed from attending over the encoder hidden states and decoder hidden states. Using these two contexts and the current decoder hidden state ("H"), a new word is generated and added to the output sequence.
takes into account which words have already been generated by the decoder. (ii) we propose a new objective function by combining the maximum-likelihood cross-entropy loss used in prior work with rewards from policy gradient reinforcement learning to reduce exposure bias. | 1705.04304#3 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 4 | Problem 1: Question: Two trains running in opposite directions cross a man standing on the platform in 27 seconds and 17 seconds respectively and they cross each other in 23 seconds. The ratio of their speeds is: Options: A) 3/7 B) 3/2 C) 3/88 D) 3/8 E) 2/2 Rationale: Let the speeds of the two trains be x m/sec and y m/sec respectively. Then, length of the first train = 27x meters, and length of the second train = 17y meters. (27x + 17y) / (x + y) = 23 ⇒ 27x + 17y = 23x + 23y ⇒ 4x = 6y ⇒ x/y = 3/2. Correct Option: B Problem 2: Question: From a pack of 52 cards, two cards are drawn together at random. What is the probability of both the cards being kings? Options: A) 2/1223 B) 1/122 C) 1/221 D) 3/1253 E) 2/153 Rationale: Let s be the sample space. Then n(s) = 52C2 = 1326 E = event of getting 2 kings out of 4 n(E) = 4C2 = 6 P(E) = 6/1326 = 1/221 Answer | 1705.04146#4 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
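The arithmetic in Problems 1 and 2 above can be checked mechanically. A minimal Python sketch (my own illustration, not part of the paper's pipeline):

```python
from fractions import Fraction
from math import comb

# Problem 1: with the claimed speed ratio x/y = 3/2 (e.g. x = 3, y = 2 m/sec),
# the combined crossing time (27x + 17y) / (x + y) should equal 23 seconds.
x, y = 3, 2
assert (27 * x + 17 * y) / (x + y) == 23

# Problem 2: probability that 2 cards drawn together from 52 are both kings.
p_kings = Fraction(comb(4, 2), comb(52, 2))  # 6 / 1326
print(p_kings)  # 1/221, matching option C
```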
1705.04304 | 4 | Our model achieves 41.16 ROUGE-1 on the CNN/Daily Mail dataset. Moreover, we show, through human evaluation of generated outputs, that our model generates more readable summaries compared to other abstractive approaches.
# 2 NEURAL INTRA-ATTENTION MODEL
In this section, we present our intra-attention model based on the encoder-decoder network (Sutskever et al., 2014). In all our equations, x = {x_1, x_2, ..., x_n} represents the sequence of input (article) tokens, y = {y_1, y_2, ..., y_n'} the sequence of output (summary) tokens, and || denotes the vector concatenation operator.
Our model reads the input sequence with a bi-directional LSTM encoder {RNN^e_fwd, RNN^e_bwd}, computing hidden states h^e_i = [h^e_fwd,i || h^e_bwd,i] from the embedding vectors of x_i. We use a single LSTM decoder RNN_d, computing hidden states h^d_t from the embedding vectors of y_t. Both input and output embeddings are taken from the same matrix W_emb. We initialize the decoder hidden state with h^e_n.
# 2.1 INTRA-TEMPORAL ATTENTION ON INPUT SEQUENCE | 1705.04304#4 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 5 | = 1326 E = event of getting 2 kings out of 4 n(E) = 4C2 = 6 P(E) = 6/1326 = 1/221 Answer is C Correct Option: C Problem 3: Question: For which of the following does p(a) - p(b) = p(a - b) for all values of a and b? Options: A) p(x) = x^2, B) p(x) = x/2, C) p(x) = x + 5, D) p(x) = 2x1, E) p(x) = |x| Rationale: To solve this, the easiest way is just to put in values and see whether the two sides are equal or not. With option A, p(a) = a^2 and p(b) = b^2, so L.H.S = a^2 - b^2 and R.H.S = (a - b)^2 = a^2 + b^2 - 2ab, so L.H.S is not equal to R.H.S. With option B, p(a) = a/2 and p(b) = b/2, so L.H.S = a/2 - b/2 = 1/2(a - b) and R.H.S = (a - b)/2, so L.H.S = R.H.S, which is the | 1705.04146#5 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
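Problem 3 above is solved by testing each option in turn. That option-testing strategy can be sketched directly in Python (my own illustration; option D is omitted because its definition, "p(x) = 2x1", is garbled in the extracted text):

```python
# Check p(a) - p(b) == p(a - b) on a few sample points for each candidate p.
candidates = {
    "A": lambda x: x**2,
    "B": lambda x: x / 2,
    "C": lambda x: x + 5,
    "E": lambda x: abs(x),
}
samples = [(1, 2), (3, -4), (0.5, 7)]
holds = {k: all(abs(p(a) - p(b) - p(a - b)) < 1e-9 for a, b in samples)
         for k, p in candidates.items()}
print(holds)  # only option B satisfies the identity on all samples
```

A few sample points cannot prove the identity in general, but they suffice to eliminate the wrong options, which is exactly how the rationale proceeds.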
1705.04304 | 5 | # 2.1 INTRA-TEMPORAL ATTENTION ON INPUT SEQUENCE
At each decoding step t, we use an intra-temporal attention function to attend over specific parts of the encoded input sequence in addition to the decoder's own hidden state and the previously-generated word (Sankaran et al., 2016). This kind of attention prevents the model from attending over the same parts of the input on different decoding steps. Nallapati et al. (2016) have shown that such an intra-temporal attention can reduce the amount of repetitions when attending over long documents. We define e_ti as the attention score of the hidden input state h^e_i at decoding step t:
e_ti = f(h^d_t, h^e_i)   (1)
where f can be any function returning a scalar e_ti from the h^d_t and h^e_i vectors. While some attention models use functions as simple as the dot-product between the two vectors, we choose to use a bilinear function:
f(h^d_t, h^e_i) = (h^d_t)^T W^e_attn h^e_i   (2)
We normalize the attention weights with the following temporal attention function, penalizing input tokens that have obtained high attention scores in past decoding steps. We define new temporal scores e'_ti:
e'_ti = exp(e_ti) if t = 1, and e'_ti = exp(e_ti) / Σ_{j=1}^{t-1} exp(e_ji) otherwise.   (3) | 1705.04304#5 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
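The intra-temporal attention of Eqs. (1)-(3), together with the normalization and context vector that follow, can be sketched with toy NumPy tensors. This is my own illustrative sketch with random weights and made-up sizes, not the authors' implementation:

```python
import numpy as np

# Toy sizes: n encoder states of dimension d.
rng = np.random.default_rng(0)
n, d = 6, 4
h_enc = rng.normal(size=(n, d))         # h^e_1 .. h^e_n
W_e_attn = rng.normal(size=(d, d))      # bilinear matrix of Eq. (2)

def intra_temporal_step(h_dec_t, past_exp_sum):
    """One decoding step; past_exp_sum accumulates sum_j exp(e_ji) over past steps."""
    e_t = h_dec_t @ W_e_attn @ h_enc.T                      # Eqs. (1)-(2)
    exp_e = np.exp(e_t)
    # Eq. (3): at t = 1 use exp(e_ti) directly, else penalize by past exponentials
    e_prime = exp_e if past_exp_sum is None else exp_e / past_exp_sum
    alpha = e_prime / e_prime.sum()                         # normalized weights
    c_e = alpha @ h_enc                                     # input context vector
    new_sum = exp_e if past_exp_sum is None else past_exp_sum + exp_e
    return c_e, new_sum

c1, s = intra_temporal_step(rng.normal(size=d), None)       # decoding step 1
c2, s = intra_temporal_step(rng.normal(size=d), s)          # decoding step 2
```

The running sum is what distinguishes this from plain softmax attention: input positions that received large scores at earlier steps are divided down at later steps.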
1705.04304 | 6 | e'_ti = exp(e_ti) if t = 1, and e'_ti = exp(e_ti) / Σ_{j=1}^{t-1} exp(e_ji) otherwise.   (3)
Finally, we compute the normalized attention scores α^e_ti across the inputs and use these weights to obtain the input context vector c^e_t:
α^e_ti = e'_ti / Σ_{j=1}^{n} e'_tj   (4)        c^e_t = Σ_{i=1}^{n} α^e_ti h^e_i   (5)
# 2.2 INTRA-DECODER ATTENTION
While this intra-temporal attention function ensures that different parts of the encoded input sequence are used, our decoder can still generate repeated phrases based on its own hidden states, especially when generating long sequences. To prevent that, we can incorporate more information about the previously decoded sequence into the decoder. Looking back at previous decoding steps will allow our model to make more structured predictions and avoid repeating the same information, even if that information was generated many steps away. To achieve this, we introduce an intra-decoder attention mechanism. This mechanism is not present in existing encoder-decoder models for abstractive summarization. For each decoding step t, our model computes a new decoder context vector c^d_t. We set c^d_1 to a vector of zeros since the generated sequence is empty on the first decoding step. For t > 1, we use the following equations:
e^d_tt' = (h^d_t)^T W^d_attn h^d_t'   (6)        α^d_tt' = exp(e^d_tt') / Σ_{j=1}^{t-1} exp(e^d_tj)   (7)        c^d_t = Σ_{j=1}^{t-1} α^d_tj h^d_j   (8) | 1705.04304#6 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
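The intra-decoder attention of Eqs. (6)-(8) attends over the decoder's own past hidden states. A toy NumPy sketch under the same illustrative assumptions as before (random weights, made-up sizes; not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
W_d_attn = rng.normal(size=(d, d))                  # bilinear matrix of Eq. (6)
h_dec = [rng.normal(size=d) for _ in range(5)]      # h^d_1 .. h^d_5

def intra_decoder_context(t):
    """c^d_t for 1-indexed step t; a zero vector at t = 1 (empty history)."""
    if t == 1:
        return np.zeros(d)
    prev = np.stack(h_dec[: t - 1])                 # h^d_1 .. h^d_{t-1}
    e = h_dec[t - 1] @ W_d_attn @ prev.T            # Eq. (6)
    alpha = np.exp(e) / np.exp(e).sum()             # Eq. (7)
    return alpha @ prev                             # Eq. (8)

c3 = intra_decoder_context(3)
```

Note that, unlike the encoder attention, no temporal penalty is applied here; the history itself grows with t.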
1705.04146 | 7 | from the dataset. Second, we propose a sequence to sequence model that generates a sequence of instructions that, when executed, generates the rationale; only after this is the answer chosen (§3). Since the target program is not given in the training data (most obviously, its specific form will depend on the operations that are supported by the program interpreter), the third contribution is thus a technique for inferring programs that generate a rationale and, ultimately, the answer. Even constrained by a text rationale, the search space of possible programs is quite large, and we employ a heuristic search to find plausible next steps to guide the search for programs (§4). Empirically, we are able to show that state-of-the-art sequence to sequence models are unable to perform above chance on this task, but that our model doubles the accuracy of the baseline (§6).
# 2 Dataset | 1705.04146#7 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 7 | e^d_tt' = (h^d_t)^T W^d_attn h^d_t'   (6)        α^d_tt' = exp(e^d_tt') / Σ_{j=1}^{t-1} exp(e^d_tj)   (7)        c^d_t = Σ_{j=1}^{t-1} α^d_tj h^d_j   (8)
Figure 1 illustrates the intra-attention context vector computation c^d_t, in addition to the encoder temporal attention, and their use in the decoder.
A closely-related intra-RNN attention function has been introduced by Cheng et al. (2016), but their implementation works by modifying the underlying LSTM function, and they do not apply it to long sequence generation problems. This is a major difference with our method, which makes no assumptions about the type of decoder RNN, and is thus simpler and more widely applicable to other types of recurrent networks.
# 2.3 TOKEN GENERATION AND POINTER
To generate a token, our decoder uses either a token-generation softmax layer or a pointer mechanism to copy rare or unseen tokens from the input sequence. We use a switch function that decides at each decoding step whether to use the token generation or the pointer (Gulcehre et al., 2016; Nallapati et al., 2016). We define u_t as a binary value, equal to 1 if the pointer mechanism is used to output y_t, and 0 otherwise. In the following equations, all probabilities are conditioned on y_1, ..., y_{t-1}, x, even when not explicitly stated. | 1705.04304#7 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 8 | # 2 Dataset
We built a dataset1 with 100,000 problems with the annotations shown in Figure 1. Each question is decomposed into four parts, two inputs and two outputs: the description of the problem, which we will denote as the question, and the possible (multiple choice) answer options, denoted as options. Our goal is to generate the description of the rationale used to reach the correct answer, denoted as rationale, and the correct option label. Problem 1 illustrates an example of an algebra problem, which must be translated into an expression (i.e., (27x + 17y)/(x + y) = 23) and then the desired quantity (x/y) solved for. Problem 2 is an example that could be solved by multi-step arithmetic operations proposed in (Roy and Roth, 2015). Finally, Problem 3 describes a problem that is solved by testing each of the options, which has not been addressed in the past.
# 2.1 Construction
We first collect a set of 34,202 seed problems that consist of multiple option math questions covering a broad range of topics and difficulty levels. Examples of exams with such problems include the GMAT (Graduate Management Admission Test) and GRE (General Test). Many websites contain example math questions from such exams, where the answer is supported by a rationale. | 1705.04146#8 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 8 | Our token-generation layer generates the following probability distribution:
p(y_t | u_t = 0) = softmax(W_out [h^d_t || c^e_t || c^d_t] + b_out)   (9)
On the other hand, the pointer mechanism uses the temporal attention weights α^e_ti as the probability distribution to copy the input token x_i.
p(y_t = x_i | u_t = 1) = α^e_ti   (10)
We also compute the probability of using the copy mechanism for the decoding step t:
p(u_t = 1) = σ(W_u [h^d_t || c^e_t || c^d_t] + b_u)   (11)
where σ is the sigmoid activation function.
Putting Equations 9, 10 and 11 together, we obtain our final probability distribution for the output token y_t:
p(y_t) = p(u_t = 1) p(y_t | u_t = 1) + p(u_t = 0) p(y_t | u_t = 0)   (12)
The ground-truth value for u_t and the corresponding i index of the target input token when u_t = 1 are provided at every decoding step during training. We set u_t = 1 either when y_t is an out-of-vocabulary token or when it is a pre-defined named entity (see Section 5).
# 2.4 SHARING DECODER WEIGHTS | 1705.04304#8 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
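The generator/pointer mixture of Eqs. (9)-(11) and the final combined distribution can be sketched with toy tensors. This is my own illustration with random weights and invented sizes (vocab of 10, 4 input positions), not the authors' implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(2)
vocab, n, d = 10, 4, 6
state = rng.normal(size=3 * d)                    # stand-in for [h^d_t || c^e_t || c^d_t]
W_out, b_out = rng.normal(size=(vocab, 3 * d)), np.zeros(vocab)
W_u, b_u = rng.normal(size=3 * d), 0.0

p_gen = softmax(W_out @ state + b_out)            # Eq. (9): p(y_t | u_t = 0)
alpha_e = softmax(rng.normal(size=n))             # temporal attention weights, Eq. (10)
input_ids = np.array([1, 3, 3, 7])                # input position i -> token id x_i
p_copy = np.zeros(vocab)
np.add.at(p_copy, input_ids, alpha_e)             # p(y_t = x_i | u_t = 1) = alpha^e_ti
p_switch = 1 / (1 + np.exp(-(W_u @ state + b_u))) # Eq. (11): p(u_t = 1)
p_final = p_switch * p_copy + (1 - p_switch) * p_gen  # final mixture over the vocab
```

Because both component distributions sum to one, the mixture is itself a valid distribution for any switch probability.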
1705.04146 | 9 | Next, we turned to crowdsourcing to generate new questions. We create a task where users are presented with a set of 5 questions from our seed dataset. Then, we ask the Turker to choose one of the questions and write a similar question. We also force the answers and rationale to differ from the original question in order to avoid paraphrases of the original question. Once again, we manually check a subset of the jobs for each Turker for quality control. The types of questions generated using this method vary. Some Turkers propose small changes in the values of the questions (e.g., changing the equality p(a) - p(b) = p(a - b) in Problem 3 to a different equality is a valid question, as long as the rationale and options are rewritten to reflect the change). We designate these as replica problems, as the natural language used in the question and rationales tends to be only minimally unaltered. Others propose new problems in the same topic, where the generated questions tend to dif- (1: Available at https://github.com/deepmind/AQuA) | 1705.04146#9 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 9 | # 2.4 SHARING DECODER WEIGHTS
In addition to using the same embedding matrix W_emb for the encoder and the decoder sequences, we introduce some weight-sharing between this embedding matrix and the W_out matrix of the token-generation layer, similarly to Inan et al. (2017) and Press & Wolf (2016). This allows the token-generation function to use syntactic and semantic information contained in the embedding matrix.
W_out = tanh(W_emb W_proj)   (13)
# 2.5 REPETITION AVOIDANCE AT TEST TIME
Another way to avoid repetitions comes from our observation that in both the CNN/Daily Mail and NYT datasets, ground-truth summaries almost never contain the same trigram twice. Based on this observation, we force our decoder to never output the same trigram more than once during testing. We do this by setting p(yt) = 0 during beam search, when outputting yt would create a trigram that already exists in the previously decoded sequence of the current beam.
# 3 HYBRID LEARNING OBJECTIVE
In this section, we explore different ways of training our encoder-decoder model. In particular, we propose reinforcement learning-based algorithms and their application to our summarization task.
# 3.1 SUPERVISED LEARNING WITH TEACHER FORCING | 1705.04304#9 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
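The test-time rule of Section 2.5 (never emit a trigram that already appears in the decoded prefix) reduces to a simple set-membership check during beam search. A minimal sketch of that check, under my own function and variable names:

```python
# A candidate token is disallowed (its probability would be set to 0 in beam
# search) if appending it would repeat a trigram from the decoded prefix.
def violates_trigram_rule(prefix, candidate):
    if len(prefix) < 2:
        return False
    trigrams = {tuple(prefix[i:i + 3]) for i in range(len(prefix) - 2)}
    return tuple(prefix[-2:] + [candidate]) in trigrams

decoded = "the cat sat on the cat".split()
print(violates_trigram_rule(decoded, "sat"))   # True: "the cat sat" already occurred
print(violates_trigram_rule(decoded, "ran"))   # False: "the cat ran" is new
```

In a real beam-search decoder this check would run once per candidate token per beam, setting p(y_t) = 0 for violators before renormalizing the beam scores.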
1705.04146 | 10 | Training Examples: 100,949; Dev Examples: 250; Test Examples: 250. Question (Numeric: average length 9.6, vocab size 21,009; Non-Numeric: average length 67.8, vocab size 17,849; All: average length 77.4, vocab size 38,858). Rationale (Numeric: average length 16.6, vocab size 14,745; Non-Numeric: average length 89.1, vocab size 25,034; All: average length 105.7, vocab size 39,779).
Table 1: Descriptive statistics of our dataset.
fer more radically from existing ones. Some Turkers also copy math problems available on the web, and we specify in the instructions that this is not allowed, as it will generate multiple copies of the same problem in the dataset if two or more Turkers copy from the same resource. These Turkers can be detected by checking the nearest neighbours within the collected datasets, as problems obtained from online resources are frequently submitted by more than one Turker. Using this method, we obtained 70,318 additional questions.
# 2.2 Statistics | 1705.04146#10 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 10 | # 3.1 SUPERVISED LEARNING WITH TEACHER FORCING
The most widely used method to train a decoder RNN for sequence generation, called the "teacher forcing" algorithm (Williams & Zipser, 1989), minimizes a maximum-likelihood loss at each decoding step. We define y* = {y^*_1, y^*_2, . . . , y^*_{n'}} as the ground-truth output sequence for a given input sequence x. The maximum-likelihood training objective is the minimization of the following loss:

L_{ml} = -\sum_{t=1}^{n'} \log p(y^*_t | y^*_1, \ldots, y^*_{t-1}, x)    (14)
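As an illustrative sketch (not the paper's code), the teacher-forcing loss in Equation 14 can be computed from per-step model distributions; the toy vocabulary and probabilities below are invented:

```python
import math

def teacher_forcing_loss(step_distributions, target_ids):
    # L_ml = -sum_t log p(y*_t | y*_1..y*_{t-1}, x).
    # step_distributions[t] is the model's distribution over the vocabulary
    # at step t, computed with the ground-truth prefix fed in (teacher forcing).
    return -sum(math.log(dist[y]) for dist, y in zip(step_distributions, target_ids))

# Toy example: a 3-token vocabulary and a 2-step target sequence [0, 2].
dists = [
    {0: 0.7, 1: 0.2, 2: 0.1},  # p(y_1 | x)
    {0: 0.1, 1: 0.1, 2: 0.8},  # p(y_2 | y*_1, x)
]
loss = teacher_forcing_loss(dists, [0, 2])  # -(log 0.7 + log 0.8)
```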
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 11 | # 2.2 Statistics
Descriptive statistics of the dataset are shown in Table 1. In total, we collected 104,519 problems (34,202 seed problems and 70,318 crowdsourced problems). We removed 500 problems as a heldout set (250 for development and 250 for testing). As replicas of the heldout problems may be present in the training set, these were removed manually by listing, for each heldout instance, the closest problems in the training set in terms of character-based Levenshtein distance. After filtering, 100,949 problems remained in the training set.
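The nearest-neighbour filtering step can be sketched with a plain character-level Levenshtein distance; the distance threshold below is an invented illustration (the paper's filtering was done manually by inspecting nearest neighbours):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance over characters.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def filter_near_duplicates(train, heldout, max_dist=5):
    # Drop training problems within max_dist edits of any heldout problem.
    return [q for q in train if all(levenshtein(q, h) > max_dist for h in heldout)]

train = ["what is 2 + 2 ?", "what is 2 + 3 ?", "a train travels 60 km in 1 hour"]
kept = filter_near_duplicates(train, ["what is 2 + 2 ?"])
# Only the unrelated third problem survives the filter.
```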
We also show the average number of tokens (total number of tokens in the question, options and rationale) and the vocabulary size of the questions and rationales. Finally, we provide the same statistics exclusively for tokens that are numeric values and tokens that are not.
Figure 2 shows the distribution of examples based on the total number of tokens. We can see that most examples consist of 30 to 500 tokens, but there are also extremely long examples with more than 1000 tokens in our dataset.
# 3 Model
Generating rationales for math problems is challenging as it requires models that learn to perform math operations at a finer granularity as
Figure 2: Distribution of examples per length. | 1705.04146#11 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 11 | n Lint = â Yo log p(y li. --+9f-1.2) (14) t=1
However, minimizing Lml does not always produce the best results on discrete evaluation metrics such as ROUGE (Lin, 2004). This phenomenon has been observed with similar sequence generation tasks like image captioning with CIDEr (Rennie et al., 2016) and machine translation with BLEU (Wu et al., 2016; Norouzi et al., 2016). There are two main reasons for this discrepancy. The first one, called exposure bias (Ranzato et al., 2015), comes from the fact that the network has knowledge of the ground truth sequence up to the next token during training but does not have such supervision when testing, hence accumulating errors as it predicts the sequence. The second reason is due to the large number of potentially valid summaries, since there are more ways to arrange tokens to produce paraphrases or different sentence orders. The ROUGE metrics take some of this flexibility into account, but the maximum-likelihood objective does not.
3.2 POLICY LEARNING | 1705.04304#11 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
Figure 2: Distribution of examples per length.
each step within the solution must be explained. For instance, in Problem 1, the equation (27x + 17y)/(x + y) = 23 must be solved to obtain the answer. In previous work (Kushman et al., 2014), this could be done by feeding the equation into an expression solver to obtain x/y = 3/2. However, this would skip the intermediate steps 27x + 17y = 23x + 23y and 4x = 6y, which must also be generated in our problem. We propose a model that jointly learns to generate the text in the rationale, and to perform the math operations required to solve the problem. This is done by generating a program, containing both instructions that generate output and instructions that simply generate intermediate values used by following instructions.
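The intermediate algebra steps quoted above can be verified mechanically; a small sketch using exact rational arithmetic:

```python
from fractions import Fraction

# From (27x + 17y) / (x + y) = 23:
#   27x + 17y = 23x + 23y  =>  4x = 6y  =>  x / y = 3 / 2
x, y = Fraction(3), Fraction(2)
assert 27 * x + 17 * y == 23 * (x + y)   # 81 + 34 = 115 = 23 * 5
assert 4 * x == 6 * y                    # 12 == 12
ratio = x / y                            # Fraction(3, 2)
```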
# 3.1 Problem Deï¬nition
In traditional sequence to sequence models (Sutskever et al., 2014; Bahdanau et al., 2014), the goal is to predict the output sequence y = y1, . . . , y|y| from the input sequence x = x1, . . . , x|x|, with lengths |y| and |x|.
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 12 | 3.2 POLICY LEARNING
One way to remedy this is to learn a policy that maximizes a specific discrete metric instead of minimizing the maximum-likelihood loss, which is made possible with reinforcement learning. In our model, we use the self-critical policy gradient training algorithm (Rennie et al., 2016).
For this training algorithm, we produce two separate output sequences at each training iteration: y^s, which is obtained by sampling from the p(y^s_t | y^s_1, . . . , y^s_{t-1}, x) probability distribution at each decoding time step, and ŷ, the baseline output, obtained by maximizing the output probability distribution at each time step, essentially performing a greedy search. We define r(y) as the reward function for an output sequence y, comparing it with the ground truth sequence y* with the evaluation metric of our choice.

L_{rl} = (r(\hat{y}) - r(y^s)) \sum_{t=1}^{n'} \log p(y^s_t | y^s_1, \ldots, y^s_{t-1}, x)    (15)

We can see that minimizing L_rl is equivalent to maximizing the conditional likelihood of the sampled sequence y^s if it obtains a higher reward than the baseline ŷ, thus increasing the reward expectation of our model.
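As a sketch (not the authors' implementation), the self-critical update in Equation 15 can be illustrated with a toy reward; the unigram-overlap reward below is an invented stand-in for ROUGE:

```python
import math

def unigram_overlap(candidate, reference):
    # Toy stand-in for ROUGE: fraction of reference types present in candidate.
    ref = set(reference)
    return len(ref & set(candidate)) / len(ref)

def self_critical_loss(sample_log_probs, sampled, greedy, reference):
    # L_rl = (r(y_hat) - r(y_s)) * sum_t log p(y_s_t | y_s_<t, x).
    # sample_log_probs are log-probabilities of the *sampled* tokens;
    # when the greedy baseline beats the sample, minimizing L_rl pushes
    # the sampled tokens' probabilities down, and vice versa.
    advantage = unigram_overlap(greedy, reference) - unigram_overlap(sampled, reference)
    return advantage * sum(sample_log_probs)

reference = "the cat sat on the mat".split()
sampled = "a cat sat down".split()           # reward 2/5
greedy = "the cat on mat".split()            # baseline reward 4/5
log_probs = [math.log(0.5)] * len(sampled)   # invented sampled-token log-probs
loss = self_critical_loss(log_probs, sampled, greedy, reference)
```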
# 3.3 MIXED TRAINING OBJECTIVE FUNCTION | 1705.04304#12 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
In our particular problem, we are given the problem and the set of options, and wish to predict the rationale and the correct option. We set x as the sequence of words in the problem, concatenated with the words in each of the options separated by a special tag. Note that knowledge about the possible options is required, as some problems are solved by the process of elimination or by testing each of the options (e.g. Problem 3). We wish to generate y, which is the sequence of words in the rationale. We also append the correct option as the last word in y, which is interpreted as the chosen option. For example, y in Problem 1 is "Let the ... = 3/2. (EOR) B (EOS)", whereas in Problem 2 it is "Let s be ... Answer is C (EOR) C (EOS)", where "(EOS)" is the end-of-sentence symbol and
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 13 | # 3.3 MIXED TRAINING OBJECTIVE FUNCTION
One potential issue of this reinforcement training objective is that optimizing for a specific discrete metric like ROUGE does not guarantee an increase in quality and readability of the output. It is possible to game such discrete metrics and increase their score without an actual increase in readability or relevance (Liu et al., 2016). While ROUGE measures the n-gram overlap between our generated summary and a reference sequence, human readability is better captured by a language model, which is usually measured by perplexity.

Since our maximum-likelihood training objective (Equation 14) is essentially a conditional language model, calculating the probability of a token yt based on the previously predicted sequence {y1, . . . , yt-1} and the input sequence x, we hypothesize that it can assist our policy learning algorithm to generate more natural summaries. This motivates us to define a mixed learning objective function that combines equations 14 and 15:

L_{mixed} = \gamma L_{rl} + (1 - \gamma) L_{ml},    (16)
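A minimal sketch of the mixed objective in Equation 16; the loss values and γ below are illustrative, not the paper's settings:

```python
def mixed_loss(l_rl, l_ml, gamma):
    # L_mixed = gamma * L_rl + (1 - gamma) * L_ml.
    # gamma trades the metric-driven RL term against the
    # maximum-likelihood (language-model) term.
    return gamma * l_rl + (1 - gamma) * l_ml

# Invented example values: a small RL loss and a larger NLL loss.
loss = mixed_loss(l_rl=-1.2, l_ml=15.0, gamma=0.99)
```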
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
| i | x | z | v | r |
|---|---|---|---|---|
| 1 | From | Id("Let") | *Let* | y1 |
| 2 | a | Id("s") | *s* | y2 |
| 3 | pack | Id("be") | *be* | y3 |
| 4 | of | Id("the") | *the* | y4 |
| 5 | 52 | Id("sample") | *sample* | y5 |
| 6 | cards | Id("space") | *space* | y6 |
| 7 | , | Id(".") | *.* | y7 |
| 8 | two | Id("\n") | *\n* | y8 |
| 9 | cards | Id("Then") | *Then* | y9 |
| 10 | are | Id("n") | *n* | y10 |
| 11 | drawn | Id("(") | *(* | y11 |
| 12 | together | Id("s") | *s* | y12 |
| 13 | at | Id(")") | *)* | y13 |
| 14 | random | Id("=") | *=* | y14 |
| 15 | . | Str_to_Float(x5) | **52** | m1 |
| 16 | What | Float_to_Str(m1) | *52* | y15 |
| 17 | is | Id("C") | *C* | y16 |
| 18 | the | Id("2") | *2* | y17 |
| 19 | probability | Id("=") | *=* | y18 |
| 20 | of | Str_to_Float(y17) | **2** | m2 |
| 21 | both | Choose(m1, m2) | **1326** | m3 |
| 22 | cards | Float_to_Str(m3) | *1326* | y19 |
| 23 | being | Id("E") | *E* | y20 |
| 24 | kings | Id("=") | *=* | y21 |
| 25 | ? | Id("event") | *event* | y22 |
| 26 | (O) | Id("of") | *of* | y23 |
| 27 | A) | Id("getting") | *getting* | y24 |
| 28 | 2/1223 | Id("2") | *2* | y25 |
| 29 | (O) | Id("kings") | *kings* | y26 |
| 30 | B) | Id("out") | *out* | y27 |
| 31 | 1/122 | Id("of") | *of* | y28 |
| ... | ... | ... | ... | ... |
| \|z\| | | Id("(EOS)") | *(EOS)* | y\|y\| |
1705.04304 | 14 | Lmixed = γLrl + (1 â γ)Lml, (16)
where γ is a scaling factor accounting for the difference in magnitude between Lrl and Lml. A similar mixed-objective learning function has been used by Wu et al. (2016) for machine translation on short sequences, but this is its first use in combination with self-critical policy learning for long summarization to explicitly improve readability in addition to evaluation metrics.
# 4 RELATED WORK
# 4.1 NEURAL ENCODER-DECODER SEQUENCE MODELS | 1705.04304#14 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 15 | Yi6 18 | the Ta(ââ2â) 2 yi7 19 | probability | Id(â=â") = yis 20 | of Str_to_-Float(yiz) | 2 m2 21 | both Choose(mi,ma2) 1326 m3 22 | cards Float_to_Str(ms3) | 1326 yo 23 | being Tda(âEâ) E Yy20 24 | kings Id(â=") = yi 25 |? Id(âeventâ) event Yy22 26 | <O> Id(âofâ) of Y23 27 | A) Id(âgettingâ) getting | yoa 28 | 2/1223 Ta(ââ2â) 2 Yo 29 | <O> Id(âkingsâ) kings yo6 30 | B) Id(âoutâ) out yor 31 | 1/122 Td(âofâ) of Yy28 iz| Ta(â(EOS)") (E03) | iy | 1705.04146#15 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
Neural encoder-decoder models are widely used in NLP applications such as machine translation (Sutskever et al., 2014), summarization (Chopra et al., 2016; Nallapati et al., 2016), and question answering (Hermann et al., 2015). These models use recurrent neural networks (RNNs), such as the long short-term memory network (LSTM) (Hochreiter & Schmidhuber, 1997), to encode an input sentence into a fixed vector, and create a new output sequence from that vector using another RNN. To apply this sequence-to-sequence approach to natural language, word embeddings (Mikolov et al., 2013; Pennington et al., 2014) are used to convert language tokens to vectors that can be used as inputs for these networks. Attention mechanisms (Bahdanau et al., 2014) make these models more performant and scalable, allowing them to look back at parts of the encoded input sequence while the output is generated. These models often use a fixed input and output vocabulary, which prevents them from learning representations for new words. One way to fix this is to allow the decoder network to point back to some specific words or sub-sequences of the input and copy them onto the output sequence (Vinyals et al., 2015). Gulcehre et al. (2016) and Merity et al. (2017) combine this pointer mechanism with the original word generation layer in the decoder to allow the model to use either method at each decoding step.
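The generate-or-copy mixture described here can be sketched as follows; the switch formulation and all numbers are an illustrative assumption, not any specific paper's equations:

```python
def pointer_generator_mixture(p_gen, p_vocab, copy_attention, source_tokens):
    # Final distribution mixes vocabulary generation and copying:
    #   p(w) = p_gen * p_vocab(w) + (1 - p_gen) * sum_{i : x_i = w} alpha_i
    # p_vocab maps words to generation probabilities; copy_attention gives
    # one attention weight per source position. Out-of-vocabulary source
    # words get probability mass only through the copy term.
    out = {w: p_gen * p for w, p in p_vocab.items()}
    for alpha, w in zip(copy_attention, source_tokens):
        out[w] = out.get(w, 0.0) + (1 - p_gen) * alpha
    return out

# Invented example: "kinetics" is not in the output vocabulary,
# but can still be produced by copying it from the source.
p = pointer_generator_mixture(
    p_gen=0.6,
    p_vocab={"the": 0.5, "cat": 0.5},
    copy_attention=[0.9, 0.1],
    source_tokens=["kinetics", "cat"],
)
```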
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
Table 2: Example of a program z that would generate the output y. In v, italics indicates string types; bold indicates float types. Refer to §3.3 for a description of variable names.
"(EOR)" is the end-of-rationale symbol.
# 3.2 Generating Programs to Generate Rationales
We wish to generate a latent sequence of program instructions, z = z1, . . . , z|z|, with length |z|, that will generate y when executed.
We express z as a program that can access x, y, and the memory buffer m. Upon finishing execution we expect the sequence of output tokens to be placed in the output vector y.
Table 2 illustrates an example of a sequence of instructions that would generate an excerpt from Problem 2, where columns x, z, v, and r denote the input sequence, the instruction sequence (program), the values of executing the instructions, and where each value vi is written (i.e., either to the output or to the memory). In this example, instructions from indexes 1 to 14 simply fill each output position y1, . . . , y14 with a string,
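A toy interpreter for this kind of instruction sequence might look as follows; the (op, args, target) encoding and helper names are our own illustration, not the paper's implementation:

```python
from math import comb

def run_program(program, x):
    # Executes a list of (op, args, target) instructions.
    # Targets "y" append to the output; "m..." slots hold intermediate
    # values in the memory buffer, mirroring the r column of Table 2.
    y, m = [], {}

    def val(ref):
        # An argument is a literal, an input index "xi", an output
        # index "yi", or a memory slot "mi".
        if isinstance(ref, str) and ref[:1] == "x" and ref[1:].isdigit():
            return x[int(ref[1:]) - 1]
        if isinstance(ref, str) and ref[:1] == "y" and ref[1:].isdigit():
            return y[int(ref[1:]) - 1]
        if isinstance(ref, str) and ref[:1] == "m" and ref[1:].isdigit():
            return m[ref]
        return ref

    for op, args, target in program:
        a = [val(r) for r in args]
        if op == "Id":
            v = a[0]
        elif op == "Str_to_Float":
            v = float(a[0])
        elif op == "Float_to_Str":
            v = format(a[0], "g")
        elif op == "Choose":
            v = comb(int(a[0]), int(a[1]))   # binomial coefficient
        if target == "y":
            y.append(v)
        else:
            m[target] = v
    return y, m

# Fragment of the program in Table 2: n(s) = 52 C 2 = 1326.
x = "From a pack of 52 cards".split()
prog = [
    ("Str_to_Float", ["x5"], "m1"),   # reads "52" from the question
    ("Float_to_Str", ["m1"], "y"),
    ("Id", ["C"], "y"),
    ("Id", ["2"], "y"),
    ("Str_to_Float", ["y3"], "m2"),
    ("Choose", ["m1", "m2"], "m3"),
    ("Float_to_Str", ["m3"], "y"),
]
out, mem = run_program(prog, x)
```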
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 16 | which prevents them from learning representations for new words. One way to ï¬x this is to allow the decoder network to point back to some speciï¬c words or sub-sequences of the input and copy them onto the output sequence (Vinyals et al., 2015). Gulcehre et al. (2016) and Merity et al. (2017) combine this pointer mechanism with the original word generation layer in the decoder to allow the model to use either method at each decoding step. | 1705.04304#16 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
where the Id operation simply returns its parameter without applying any operation. As such, running this operation is analogous to generating a word by sampling from a softmax over a vocabulary. However, instruction z15 reads the input word x5, 52, and applies the operation Str_to_Float, which converts the word 52 into a floating point number, and the same is done for instruction z20, which reads a previously generated output word y17. Unlike instructions z1, . . . , z14, these operations write to the external memory m, which stores intermediate values. A more sophisticated instruction, which shows some of the power of our model, is z21 = Choose(m1, m2) -> m3, which evaluates (m1 choose m2) and stores the result in m3. This process repeats until the model generates the end-of-sentence symbol. The last token of the program, as said previously, must generate the correct option value, from "A" to "E".
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 17 | # 4.2 REINFORCEMENT LEARNING FOR SEQUENCE GENERATION
Reinforcement learning (RL) is a way of training an agent to interact with a given environment in order to maximize a reward. RL has been used to solve a wide variety of problems, usually when
an agent has to perform discrete actions before obtaining a reward, or when the metric to optimize is not differentiable and traditional supervised learning methods cannot be used. This is applicable to sequence generation tasks, because many of the metrics used to evaluate these tasks (like BLEU, ROUGE or METEOR) are not differentiable.
In order to optimize that metric directly, Ranzato et al. (2015) have applied the REINFORCE algorithm (Williams, 1992) to train various RNN-based models for sequence generation tasks, leading to significant improvements compared to previous supervised learning methods. While their method requires an additional neural network, called a critic model, to predict the expected reward and stabilize the objective function gradients, Rennie et al. (2016) designed a self-critical sequence training method that does not require this critic model and leads to further improvements on image captioning tasks.
4.3 TEXT SUMMARIZATION | 1705.04304#17 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
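The self-critical sequence training mentioned above (Rennie et al., 2016) replaces the learned critic with the reward of the model's own greedy decode as the baseline. A minimal scalar sketch, with invented reward numbers standing in for a metric such as ROUGE:

```python
import math

def self_critical_loss(logp_sampled_tokens, reward_sampled, reward_greedy):
    """REINFORCE with the greedy decode's reward as a baseline, so no
    separate critic network is needed (the self-critical idea)."""
    advantage = reward_sampled - reward_greedy
    return -advantage * sum(logp_sampled_tokens)

# Toy numbers: the sampled sequence scores a higher reward than the greedy
# sequence, so its token log-probabilities receive a positive push.
logps = [math.log(0.5), math.log(0.25)]
loss = self_critical_loss(logps, reward_sampled=0.4, reward_greedy=0.3)
print(round(loss, 4))  # 0.2079
```

When the sampled sequence scores below the greedy baseline, the advantage is negative and the loss flips sign, discouraging that sample.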
1705.04146 | 18 | By training a model to generate instructions that can manipulate existing tokens, the model benefits from the additional expressiveness needed to solve math problems within the generation process. In total we define 22 different operations, 13 of which are frequently used operations when solving math problems. These are: Id, Add, Subtract, Multiply, Divide, Power, Log, Sqrt, Sine, Cosine, Tangent, Factorial, and Choose (number of combinations). We also provide 2 operations to convert between Radians and Degrees, as these are needed for the sine, cosine and tangent operations. There are 6 operations that convert floating point numbers into strings and vice-versa. These include the Str_to_Float and Float_to_Str operations described previously, as well as operations which convert between floating point numbers and fractions, since in many math problems the answers are in the form "3/4". For the same reason, an operation to convert between a float- | 1705.04146#18 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
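The 13 frequently used math operations listed above can be sketched as a registry keyed by name, with each operation's arity alongside it, mirroring the argc interface described later in the paper. The names follow the text; the implementations below simply delegate to Python's math module and are an assumption, not the authors' code:

```python
import math

# Hypothetical registry: name -> (arity, implementation)
OPS = {
    "Id":        (1, lambda x: x),
    "Add":       (2, lambda a, b: a + b),
    "Subtract":  (2, lambda a, b: a - b),
    "Multiply":  (2, lambda a, b: a * b),
    "Divide":    (2, lambda a, b: a / b),
    "Power":     (2, lambda a, b: a ** b),
    "Log":       (1, math.log),
    "Sqrt":      (1, math.sqrt),
    "Sine":      (1, math.sin),
    "Cosine":    (1, math.cos),
    "Tangent":   (1, math.tan),
    "Factorial": (1, lambda n: math.factorial(int(n))),
    "Choose":    (2, lambda n, k: math.comb(int(n), int(k))),
}

def argc(name):
    """Number of arguments the named operation requires."""
    return OPS[name][0]

def apply_op(name, *args):
    """Run the named operation after checking its arity."""
    arity, fn = OPS[name]
    assert len(args) == arity, f"{name} expects {arity} argument(s)"
    return fn(*args)

print(argc("Add"), apply_op("Choose", 52, 2))  # 2 1326
```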
1705.04304 | 18 | 4.3 TEXT SUMMARIZATION
Most summarization models studied in the past are extractive in nature (Dorr et al., 2003; Nallapati et al., 2017; Durrett et al., 2016), which usually work by identifying the most important phrases of an input document and re-arranging them into a new summary sequence. The more recent abstractive summarization models have more degrees of freedom and can create more novel sequences. Many abstractive models such as Rush et al. (2015), Chopra et al. (2016) and Nallapati et al. (2016) are all based on the neural encoder-decoder architecture (Section 4.1). | 1705.04304#18 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 19 | the same reason, an operation to convert between a floating point number and a number grouped in thousands is also used (e.g. 1000000 to "1,000,000" or "1.000.000"). Finally, we define an operation (Check) that, given the input string, searches through the list of options and returns a string with the option index in {"A", "B", "C", "D", "E"}. If the input value does not match any of the options, or more than one option contains that value, it cannot be applied. For instance, in Problem 2, once the correct probability "1/221" is generated, by applying the check operation to this number we can | 1705.04146#19 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
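The Check operation described above can be sketched as a substring match against the option list that must succeed exactly once. The option strings below are made up in the style of Problem 2; only the matching rule follows the text:

```python
def check(value, options):
    """Return the single option letter whose text contains `value`;
    the operation is inapplicable if zero or several options match."""
    letters = "ABCDE"
    matches = [letters[i] for i, text in enumerate(options) if value in text]
    if len(matches) != 1:
        raise ValueError("Check cannot be applied")
    return matches[0]

# Hypothetical multiple-choice options for a probability question:
options = ["A) 2/1223", "B) 1/122", "C) 1/221", "D) 3/1253", "E) 2/153"]
print(check("1/221", options))  # C
```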
1705.04304 | 19 | A well-studied set of summarization tasks is the Document Understanding Conference (DUC)1. These summarization tasks are varied, including short summaries of a single document and long summaries of multiple documents categorized by subject. Most abstractive summarization models have been evaluated on the DUC-2004 dataset, and outperform extractive models on that task (Dorr et al., 2003). However, models trained on the DUC-2004 task can only generate very short summaries up to 75 characters, and are usually used with one or two input sentences. Chen et al. (2016) applied different kinds of attention mechanisms for summarization on the CNN dataset, and Nallapati et al. (2016) used different attention and pointer functions on the CNN and Daily Mail datasets combined. In parallel with our work, See et al. (2017) also developed an abstractive summarization model on this dataset with an extra loss term to increase temporal coverage of the encoder attention function.
# 5 DATASETS
# 5.1 CNN/DAILY MAIL | 1705.04304#19 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 20 | [Figure 3 graphic omitted: network states feeding softmax layers and copy/execute components that produce the output]
Figure 3: Illustration of the generation process of a single instruction tuple at timestamp i.
obtain the correct option "C".
# 3.3 Generating and Executing Instructions
In our model, programs consist of sequences of instructions, z. We turn now to how we model each zi, conditional on the text program specification and the program's history. The instruction zi is a tuple consisting of an operation (oi), an ordered sequence of its arguments (ai), a decision about where its result will be placed (ri) (is it appended to the output y or to a memory buffer m?), and the result of applying the operation to its arguments (vi). That is, zi = (oi, ri, ai, vi). | 1705.04146#20 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
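The instruction tuple zi = (oi, ri, ai, vi) described above can be sketched as a small container in which the value vi is computed deterministically from the operation and its arguments, never predicted. The class and field names are hypothetical:

```python
from dataclasses import dataclass
from typing import Any, Callable, Tuple

@dataclass
class Instruction:
    """Hypothetical container for one instruction tuple z_i = (o_i, r_i, a_i, v_i)."""
    op: Callable[..., Any]      # o_i: the operation
    route: str                  # r_i: "output" (goes to y) or "memory" (goes to m)
    args: Tuple[Any, ...]       # a_i: the operation's arguments

    def execute(self) -> Any:
        # v_i = apply(o_i, a_i): computed by a fixed apply, not learned
        return self.op(*self.args)

z = Instruction(op=lambda a, b: a + b, route="memory", args=(2.0, 3.0))
print(z.execute())  # 5.0
```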
1705.04304 | 20 | # 5 DATASETS
# 5.1 CNN/DAILY MAIL
We evaluate our model on a modified version of the CNN/Daily Mail dataset (Hermann et al., 2015), following the same pre-processing steps described in Nallapati et al. (2016). We refer the reader to that paper for a detailed description. Our final dataset contains 287,113 training examples, 13,368 validation examples and 11,490 testing examples. After limiting the input length to 800 tokens and the output length to 100 tokens, the average input and output lengths are respectively 632 and 53 tokens.
# 5.2 NEW YORK TIMES | 1705.04304#20 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
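The length limits quoted above (800 input tokens, 100 output tokens) amount to simple truncation during preprocessing. A sketch with made-up token lists:

```python
def preprocess(article_tokens, summary_tokens, max_in=800, max_out=100):
    """Length caps as described for the CNN/Daily Mail setup above:
    inputs truncated to 800 tokens, outputs to 100 tokens (sketch only)."""
    return article_tokens[:max_in], summary_tokens[:max_out]

article = ["w"] * 1000   # a made-up 1000-token article
summary = ["w"] * 120    # a made-up 120-token summary
x, y = preprocess(article, summary)
print(len(x), len(y))  # 800 100
```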
1705.04146 | 21 | Formally, oi is an element of the pre-specified set of operations O, which contains, for example, add, div, Str_to_Float, etc. The number of arguments required by oi is given by argc(oi), e.g., argc(add) = 2 and argc(log) = 1. The arguments are ai = ai,1, . . . , ai,argc(oi). An instruction will generate a return value vi upon execution, which will either be placed in the output y or hidden. This decision is controlled by ri. We define the instruction probability as:

p(oi, ri, ai, vi | z<i, x, y, m) = p(oi | z<i, x) × p(ri | z<i, x, oi) × ∏_{j=1}^{argc(oi)} p(ai,j | z<i, x, oi, m, y) × [vi = apply(oi, ai)],

where [p] evaluates to 1 if p is true and 0 otherwise, and apply(f, x) evaluates the operation f with arguments x. Note that the apply function is not learned, but pre-defined. | 1705.04146#21 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
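The factorization of the instruction probability into operation, routing, and argument terms, gated by the indicator [vi = apply(oi, ai)], can be checked numerically. The probabilities below are invented for illustration:

```python
def instruction_prob(p_op, p_route, p_args, v, op, args):
    """Joint probability of one instruction under the factorization above:
    p(o_i) * p(r_i) * prod_j p(a_ij), times the 0/1 indicator that v_i
    really equals apply(o_i, a_i)."""
    indicator = 1.0 if v == op(*args) else 0.0
    prob = p_op * p_route
    for p in p_args:
        prob *= p
    return prob * indicator

p = instruction_prob(p_op=0.5, p_route=0.8, p_args=[0.9, 0.5],
                     v=5.0, op=lambda a, b: a + b, args=(2.0, 3.0))
print(round(p, 4))  # 0.18
```

With a value that does not match apply(oi, ai), the indicator zeroes out the whole product, as the deterministic apply in the text requires.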
1705.04304 | 21 | 5.2 NEW YORK TIMES
The New York Times (NYT) dataset (Sandhaus, 2008) is a large collection of articles published between 1996 and 2007. Even though this dataset has been used to train extractive summarization systems (Durrett et al., 2016; Hong & Nenkova, 2014; Li et al., 2016) or closely-related models for predicting the importance of a phrase in an article (Yang & Nenkova, 2014; Nye & Nenkova, 2015; Hong et al., 2015), we are the first group to run an end-to-end abstractive summarization model on the article-abstract pairs of this dataset. While CNN/Daily Mail summaries have a similar wording to their corresponding articles, NYT abstracts are more varied, are shorter and can use a higher level of abstraction and paraphrase. Because of these differences, these two formats are a good complement to each other for abstractive summarization models. We describe the dataset preprocessing and pointer supervision in Section A of the Appendix.
1http://duc.nist.gov/
| 1705.04304#21 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 22 | The network used to generate an instruction at a given timestamp i is illustrated in Figure 3. We
first use the recurrent state hi to generate p(oi | z<i, x) = softmax_{oi ∈ O}(hi), a softmax over the set of available operations O.

In order to predict ri, we generate a new hidden state ri, which is a function of the current program context hi, and an embedding of the current predicted operation, oi. As the output can either be placed in the memory m or the output y, we compute the probability p(ri = OUTPUT | z<i, x, oi) = σ(ri · wr + br), where σ is the logistic sigmoid function. If ri = OUTPUT, vi is appended to the output y; otherwise it is appended to the memory m. | 1705.04146#22 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
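The two predictions described above, a softmax over the operation set O and a sigmoid for the output-versus-memory routing, can be sketched with plain arithmetic. The logits below are hypothetical:

```python
import math

def softmax(scores):
    """Normalize raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical logits: the operation distribution p(o_i | z_<i, x) comes from
# a softmax over O, and the routing probability p(r_i = OUTPUT | ...) from a
# sigmoid over the scalar r_i . w_r + b_r.
op_probs = softmax([2.0, 1.0, 0.1])   # three candidate operations
p_output = sigmoid(0.0)               # a zero logit gives probability 0.5
print(round(sum(op_probs), 6), p_output)  # 1.0 0.5
```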
1705.04304 | 22 | Model                                          ROUGE-1  ROUGE-2  ROUGE-L
Lead-3 (Nallapati et al., 2017)                39.2     15.7     35.5
SummaRuNNer (Nallapati et al., 2017)           39.6     16.2     35.3
words-lvt2k-temp-att (Nallapati et al., 2016)  35.46    13.30    32.65
ML, no intra-attention                         37.86    14.69    34.99
ML, with intra-attention                       38.30    14.81    35.49
RL, with intra-attention                       41.16    15.75    39.08
ML+RL, with intra-attention                    39.87    15.82    36.90
Table 1: Quantitative results for various models on the CNN/Daily Mail test dataset
Model                      ROUGE-1  ROUGE-2  ROUGE-L
ML, no intra-attention     44.26    27.43    40.41
ML, with intra-attention   43.86    27.10    40.11
RL, no intra-attention     47.22    30.51    43.27
ML+RL, no intra-attention  47.03    30.72    43.10
Table 2: Quantitative results for various models on the New York Times test dataset | 1705.04304#22 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
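The ROUGE-1 scores reported in the tables above are F-measures over unigram overlap. A bare-bones sketch of the metric, without stemming, stopword handling, or the official ROUGE toolkit, applied to a fragment of the paper's running example:

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Minimal unigram ROUGE-1 F1: clipped unigram overlap, then the
    harmonic mean of precision and recall."""
    c, r = Counter(candidate), Counter(reference)
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

cand = "button was denied his 100th race".split()
ref = "button denied 100th race start for mclaren".split()
print(round(rouge1_f(cand, ref), 3))  # 0.615
```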
1705.04146 | 23 | Once we generate ri, we must predict ai, the argc(oi)-length sequence of arguments that operation oi requires. The jth argument ai,j can be either generated from a softmax over the vocabulary, copied from the input vector x, or copied from previously generated values in the output y or memory m. This decision is modeled using a latent predictor network (Ling et al., 2016), where the control over which method is used to generate ai,j is governed by a latent variable qi,j ∈ {SOFTMAX, COPY-INPUT, COPY-OUTPUT}. Similar to when predicting ri, in order to make this choice we also generate a new hidden state for each argument slot j, denoted by qi,j, with an LSTM, feeding the previous argument in at each time step, and initializing it with ri and by reading the predicted value of the output ri.
• If qi,j = SOFTMAX, ai,j is generated by sampling from a softmax over the vocabulary Y,

p(ai,j | qi,j = SOFTMAX) = softmax_{ai,j ∈ Y}(qi,j). | 1705.04146#23 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
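The three-way choice that qi,j makes among the latent predictors can be sketched as a dispatcher. The argument sources below (a vocabulary word, the input x, prior generated values) are toy stand-ins for illustration:

```python
def generate_argument(q, vocab_word, input_tokens, prior_values, index):
    """Route argument generation by the latent variable q_ij."""
    if q == "SOFTMAX":
        return vocab_word               # word drawn from the vocabulary softmax
    if q == "COPY-INPUT":
        return input_tokens[index]      # pointer into the input x
    if q == "COPY-OUTPUT":
        return prior_values[index]      # pointer into the output/memory history
    raise ValueError(f"unknown predictor {q!r}")

x = ["how", "many", "ways", "choose"]   # made-up input tokens
history = [52.0, 2.0, 1326.0]           # made-up previously generated values
print(generate_argument("COPY-INPUT", None, x, history, 1))   # many
print(generate_argument("COPY-OUTPUT", None, x, history, 2))  # 1326.0
```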
1705.04304 | 23 | Source document Jenson Button was denied his 100th race for McLaren after an ERS prevented him from making it to the start-line. It capped a miserable weekend for the Briton; his time in Bahrain plagued by reliability issues. Button spent much of the race on Twitter delivering his verdict as the action unfolded. "Kimi is the man to watch," and "loving the sparks", were among his pearls of wisdom, but the tweet which courted the most attention was a rather mischievous one: "Ooh is Lewis backing his team mate into Vettel?" he quizzed after Rosberg accused Hamilton of pulling off such a manoeuvre in China. Jenson Button waves to the crowd ahead of the Bahrain Grand Prix which he failed to start Perhaps a career in the media beckons... Lewis Hamilton has out-qualified and finished ahead of Nico Rosberg at every race this season. Indeed Rosberg has now beaten his Mercedes team-mate only once in the 11 races since the pair infamously collided in Belgium last year. Hamilton secured the 36th win of his career in Bahrain and his 21st from pole position. Only Michael Schumacher (40), Ayrton Senna (29) and Sebastian Vettel (27) have more. (...) Ground truth summary Button denied 100th race start for McLaren after ERS failure. Button then spent much of the Bahrain Grand Prix on Twitter delivering his verdict on the action as it unfolded. Lewis Hamilton has out-qualified and finished ahead of Mercedes team-mate Nico Rosberg at every race this season. Bernie Ecclestone confirms F1 will make its bow in Azerbaijan next season. ML, with intra-attention (ROUGE-1 41.58) Button was denied his 100th race for McLaren. ERS prevented him from making it to the start-line. The Briton. He quizzed after Nico Rosberg accused Lewis Hamilton of pulling off such a manoeuvre in China. Button has been in Azerbaijan for the first time since 2013. RL, with intra-attention (ROUGE-1 50.00) Button was denied his 100th race for McLaren after an ERS prevented him from | 1705.04304#23 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 24 | p(ai,j | qi,j = SOFTMAX) = softmax_{ai,j ∈ Y}(qi,j).

This corresponds to a case where a string is used as argument (e.g. y1 = "Let").

• If qi,j = COPY-INPUT, ai,j is obtained by copying an element from the input vector with a pointer network (Vinyals et al., 2015) over input words x1, . . . , x|x|, represented by their encoder LSTM states u1, . . . , u|x|. As such, we compute the probability distribution over input words as:

p(ai,j | qi,j = COPY-INPUT) = softmax_{ai,j ∈ x1,...,x|x|}(f(uai,j, qi,j))   (1)

Function f computes the affinity of each token xai,j and the current output context qi,j. A common implementation of f, which we follow, is to apply a linear projection from [uai,j; qi,j]

into a fixed size vector (where [u; v] is vector concatenation), followed by a tanh and a linear projection into a single value. | 1705.04146#24 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
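The affinity function f described above (a linear projection of the concatenation [u; q], a tanh, then a linear projection to a scalar) can be sketched with made-up weights; a softmax over the resulting scores yields the copy distribution. Everything below, including the dimensions, is an illustrative assumption:

```python
import math

def affinity(u, q, W1, w2):
    """f([u; q]): linear projection, tanh, then linear projection to a scalar."""
    concat = u + q
    hidden = [math.tanh(sum(w * v for w, v in zip(row, concat))) for row in W1]
    return sum(w * h for w, h in zip(w2, hidden))

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

W1 = [[0.5, -0.2, 0.1, 0.3],   # projects the 4-d [u; q] down to 2-d
      [0.0, 0.4, -0.1, 0.2]]
w2 = [1.0, -1.0]               # final projection to a single score
q = [0.2, 0.1]                 # current argument state q_ij
encoder_states = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # u for three input tokens
probs = softmax([affinity(u, q, W1, w2) for u in encoder_states])
print(len(probs), round(sum(probs), 6))  # 3 1.0
```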
1705.04304 | 24 | in Belgium last year. Hamilton secured the 36th win of his career in Bahrain and his 21st from pole position. Only Michael Schumacher (40), Ayrton Senna (29) and Sebastian Vettel (27) have more. (...) Ground truth summary Button denied 100th race start for McLaren after ERS failure. Button then spent much of the Bahrain Grand Prix on Twitter delivering his verdict on the action as it unfolded. Lewis Hamilton has out-qualiï¬ed and ï¬nished ahead of Mercedes team-mate Nico Rosberg at every race this season. Bernie Ecclestone conï¬rms F1 will make its bow in Azerbaijan next season. ML, with intra-attention (ROUGE-1 41.58) Button was denied his 100th race for McLaren. ERS prevented him from making it to the start-line. The Briton. He quizzed after Nico Rosberg accused Lewis Hamilton of pulling off such a manoeuvre in China. Button has been in Azerbaijan for the ï¬rst time since 2013. RL, with intra-attention (ROUGE-1 50.00) Button was denied his 100th race for McLaren after an ERS prevented him from | 1705.04304#24 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 25 | into a fixed size vector (where [u; v] is vector concatenation), followed by a tanh and a linear projection into a single value.
• If qi,j = COPY-OUTPUT, the model copies from either the output y or the memory m. This is equivalent to finding the instruction zi, where the value was generated. Once again, we define a pointer network that points to the output instructions and define the distribution over previously generated instructions as:
p(ai,j | qi,j = COPY-OUTPUT) = softmax(f(hai,j, qi,j)), ai,j ∈ z1:i−1
Here, the affinity is computed using the decoder state hai,j and the current state qi,j.
2 and the state qi,j to generate the next state qi,j+1. Once all arguments for oi are generated, the operation is executed to obtain vi. Then, the embedding of vi, the final state of the instruction qi,|ai| and the previous state hi are used to generate the state at the next timestamp hi+1.
# Inducing Programs while Learning
The set of instructions z that will generate y is unobserved. Thus, given x we optimize the marginal probability function: | 1705.04146#25 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
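The copy distribution in the preceding chunk (1705.04146#25) is a pointer-network-style softmax over affinities f(hai,j, qi,j) between the decoder state of each previously generated instruction and the current state. A minimal sketch in plain Python; the 2-dimensional states, the weight vector, and the tanh-of-dot-product affinity are illustrative assumptions, not the paper's exact parameterization.

```python
import math

def affinity(h_k, q, w):
    # Toy affinity f(h_k, q): project the concatenation [h_k; q] onto a
    # weight vector and apply tanh. (Illustrative stand-in for the
    # paper's learned projection followed by a non-linearity.)
    concat = h_k + q  # list concatenation acts as [h_k; q]
    return math.tanh(sum(wi * xi for wi, xi in zip(w, concat)))

def copy_output_distribution(prev_states, q, w):
    """Softmax over the previously generated instructions z_1..z_{i-1}."""
    scores = [affinity(h_k, q, w) for h_k in prev_states]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three previously generated instructions, 2-dim states for brevity.
prev = [[0.1, 0.2], [0.5, -0.3], [0.0, 0.9]]
q = [0.4, 0.1]
w = [0.3, -0.2, 0.7, 0.5]  # weights for the 4-dim concatenation
p = copy_output_distribution(prev, q, w)
print(p)
```

The output is a proper distribution over the i−1 candidate instructions, from which the copied argument ai,j is chosen.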
1705.04304 | 25 | 2013. RL, with intra-attention (ROUGE-1 50.00) Button was denied his 100th race for McLaren after an ERS prevented him from making it to the start-line. It capped a miserable weekend for the Briton. Button has out-qualified. Finished ahead of Nico Rosberg at Bahrain. Lewis Hamilton has. In 11 races. . The race. To lead 2,000 laps. . In. . . And. . ML+RL, with intra-attention (ROUGE-1 44.00) Button was denied his 100th race for McLaren. The ERS prevented him from making it to the start-line. Button was his team mate in the 11 races in Bahrain. He quizzed after Nico Rosberg accused Lewis Hamilton of pulling off such a manoeuvre in China. | 1705.04304#25 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 26 | # Inducing Programs while Learning
The set of instructions z that will generate y is unobserved. Thus, given x we optimize the marginal probability function:

p(y | x) = \sum_{z \in Z} p(y | z)\, p(z | x) = \sum_{z \in Z(y)} p(z | x)

where p(y | z) is the Kronecker delta function δ_{e(z),y}, which is 1 if the execution of z, denoted as e(z), generates y and 0 otherwise. Thus, we can redefine p(y | x), the marginal over all programs Z, as a marginal over programs that would generate y, defined as Z(y). As marginalizing over z ∈ Z(y) is intractable, we approximate the marginal by generating samples from our model. Denote the set of samples that are generated as Ẑ(y). We maximize \sum_{z \in Ẑ(y)} p(z | x).
However, generating programs that generate y is not trivial, as randomly sampling from the RNN distribution over instructions at each timestamp is unlikely to generate a sequence z ∈ Z(y). | 1705.04146#26 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
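The marginal in the preceding chunk (1705.04146#26), p(y | x) = Σ_{z ∈ Z(y)} p(z | x), only sums over programs whose execution e(z) yields y. A tiny sketch with an invented program space of arithmetic expression strings standing in for instruction sequences, and Python `eval` standing in for the executor:

```python
# Hypothetical, tiny program space: each "program" z is an expression
# string with an assumed model probability p(z | x).
programs = {
    "2 + 2": 0.4,
    "2 * 2": 0.3,
    "2 + 3": 0.2,
    "10 - 6": 0.1,
}

def execute(z):
    # e(z): run the program and return its value.
    return eval(z)

def marginal(y, programs):
    # p(y | x) = sum of p(z | x) over programs with e(z) = y,
    # i.e. p(y | z) is the Kronecker delta on e(z) = y.
    return sum(p for z, p in programs.items() if execute(z) == y)

print(marginal(4, programs))  # 2+2, 2*2 and 10-6 all execute to 4
```

In the paper the sum is further restricted to sampled programs Ẑ(y), since the full space cannot be enumerated.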
1705.04304 | 26 | Table 3: Example from the CNN/Daily Mail test dataset showing the outputs of our three best models after de-tokenization, re-capitalization, replacing anonymized entities, and replacing numbers. The ROUGE score corresponds to the specific example.
# 6 RESULTS
6.1 EXPERIMENTS
Setup: We evaluate the intra-decoder attention mechanism and the mixed-objective learning by running the following experiments on both datasets. We first run maximum-likelihood (ML) training with and without intra-decoder attention (removing c^d_t from Equations 9 and 11 to disable intra-
Model | R-1 | R-2
First sentences | 28.6 | 17.3
First k words | 35.7 | 21.6
Full (Durrett et al., 2016) | 42.2 | 24.9
ML+RL, with intra-attn | 42.94 | 26.02
Table 4: Comparison of ROUGE recall scores for lead baselines, the extractive model of Durrett et al. (2016) and our model on their NYT dataset splits.
attention) and select the best performing architecture. Next, we initialize our model with the best ML parameters and we compare reinforcement learning (RL) with our mixed-objective learning (ML+RL), following our objective functions in Equations 15 and 16. The hyperparameters and other implementation details are described in the Appendix. | 1705.04304#26 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
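The mixed-objective learning referenced in the preceding chunk (1705.04304#26, Equations 15 and 16 of the paper) combines a maximum-likelihood loss with a self-critical policy-gradient loss, weighted by a scaling factor γ. A scalar sketch; the variable names and all numeric values are invented for illustration:

```python
def rl_loss(reward_sample, reward_baseline, sum_log_prob_sample):
    # Self-critical policy gradient: minimizing this increases the
    # probability of sampled sequences whose reward (e.g. ROUGE-L)
    # beats the greedy-decoded baseline reward.
    return (reward_baseline - reward_sample) * sum_log_prob_sample

def mixed_loss(nll, reward_sample, reward_baseline,
               sum_log_prob_sample, gamma=0.99):
    # L_mixed = gamma * L_rl + (1 - gamma) * L_ml, with L_ml the
    # usual negative log-likelihood of the ground-truth summary.
    l_rl = rl_loss(reward_sample, reward_baseline, sum_log_prob_sample)
    return gamma * l_rl + (1 - gamma) * nll

loss = mixed_loss(nll=12.0, reward_sample=0.42, reward_baseline=0.38,
                  sum_log_prob_sample=-20.0, gamma=0.99)
print(loss)
```

Here the sample out-scores the baseline (0.42 > 0.38), so the RL term rewards the sampled sequence's log-probability; γ close to 1 makes the reward dominate while the ML term keeps outputs well-formed.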
1705.04146 | 27 | However, generating programs that generate y is not trivial, as randomly sampling from the RNN distribution over instructions at each timestamp is unlikely to generate a sequence z ∈ Z(y).
2 The embeddings of a given argument ai,j and the return value vi are obtained with a lookup table embedding and two flags indicating whether it is a string and whether it is a float. Furthermore, if the value is a float we also add its numeric value as a feature.
This is analogous to the question answering work in Liang et al. (2016), where the query that generates the correct answer must be found during inference, and training proved to be difficult without supervision. In Roy and Roth (2015) this problem is also addressed by adding prior knowledge to constrain the exponential space. | 1705.04146#27 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 27 | ROUGE metrics and options: We report the full-length F-1 score of the ROUGE-1, ROUGE-2 and ROUGE-L metrics with the Porter stemmer option. For RL and ML+RL training, we use the ROUGE-L score as a reinforcement reward. We also tried ROUGE-2 but we found that it created summaries that almost always reached the maximum length, often ending sentences abruptly.
# 6.2 QUANTITATIVE ANALYSIS
Our results for the CNN/Daily Mail dataset are shown in Table 1, and for the NYT dataset in Table 2. We observe that the intra-decoder attention function helps our model achieve better ROUGE scores on the CNN/Daily Mail but not on the NYT dataset.
Further analysis on the CNN/Daily Mail test set shows that intra-attention increases the ROUGE-1 score of examples with a long ground truth summary, while decreasing the score of shorter summaries, as illustrated in Figure 2. This confirms our assumption that intra-attention improves performance on longer output sequences, and explains why intra-attention doesn't improve performance on the NYT dataset, which has shorter summaries on average. | 1705.04304#27 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
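ROUGE-L, used in the preceding chunk (1705.04304#27) both as an evaluation metric and as the RL reward, is based on the longest common subsequence (LCS) between candidate and reference. A minimal word-level F1 sketch; the official ROUGE toolkit additionally applies Porter stemming and other options, so this is an approximation, not the paper's exact scorer:

```python
def lcs_length(a, b):
    # Dynamic-programming longest common subsequence over token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

print(rouge_l_f1("the cat sat", "the cat was sat"))
```

Because LCS respects word order without requiring contiguity, ROUGE-L rewards fluent in-order overlap rather than bag-of-words matches.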
1705.04146 | 28 | In our work, we leverage the fact that we are generating rationales, where there is a sense of progression within the rationale. That is, we assume that the rationale solves the problem step by step. For instance, in Problem 2, the rationale first describes the number of combinations of two cards in a deck of 52 cards, then describes the number of combinations of two kings, and finally computes the probability of drawing two kings. Thus, while generating the final answer without the rationale requires a long sequence of latent instructions, generating each of the tokens of the rationale requires far fewer operations.
More formally, given the sequence z1, . . . , zi−1 generated so far, and the possible values for zi given by the network, denoted Zi, we wish to filter Zi to Zi(yk), which denotes a set of possible options that contain at least one path capable of generating the next token at index k. Finding the set Zi(yk) is achieved by testing all combinations of instructions that are possible with at most one level of indirection, and keeping those that can generate yk. This means that the model can only generate one intermediate value in memory (not including the operations that convert strings into floating point values and vice-versa). | 1705.04146#28 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 28 | [Figure 2 plot: bar chart of cumulated ROUGE-1 relative improvement on CNN/Daily Mail for the model with intra-attention; only the panel label "CNN/Daily Mail with intra-attn" and the y-axis label "Cumulated ROUGE" are legible in the extraction.]
Figure 2: Cumulated ROUGE-1 relative improvement obtained by adding intra-attention to the ML model on the CNN/Daily Mail dataset.
In addition, we can see that on all datasets, both the RL and ML+RL models obtain much higher scores than the ML model. In particular, these methods clearly surpass the state-of-the-art model from Nallapati et al. (2016) on the CNN/Daily Mail dataset, as well as the lead-3 extractive baseline (taking the first 3 sentences of the article as the summary) and the SummaRuNNer extractive model (Nallapati et al., 2017).
See et al. (2017) also reported results of a closely related abstractive model on the CNN/Daily Mail but used a different dataset preprocessing pipeline, which makes direct comparison with our numbers difficult. However, their best model has lower ROUGE scores than their lead-3 baseline, while our ML+RL model beats the lead-3 baseline as shown in Table 1. Thus, we conclude that our mixed-objective model obtains a higher ROUGE performance than theirs. | 1705.04304#28 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 29 | Decoding. During decoding we find the most likely sequence of instructions z given x, which can be performed with a stack-based decoder. However, it is important to note that each generated instruction zi = (oi, ri, ai,1, . . . , ai,|ai|, vi) must be executed to obtain vi. To avoid generating unexecutable code, e.g. log(0), each hypothesis instruction is executed and removed if an error occurs. Finally, once the '(EOR)' tag is generated, we only allow instructions that would generate one of the options 'A' to 'E' to be generated, which guarantees that one of the options is chosen.
# 5 Staged Back-propagation
As shown in Figure 2, math rationales with more than 200 tokens are not uncommon, and with additional intermediate instructions, the size of z can easily exceed 400. This poses a practical challenge
for training the model. | 1705.04146#29 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
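The decoding safeguard described in the preceding chunk (1705.04146#29) executes each hypothesis instruction and drops it if execution raises an error. A sketch with Python exceptions and `eval` standing in for the instruction interpreter; the expression strings are invented examples:

```python
import math

def survives_execution(instruction):
    # Execute the hypothesis instruction; discard it if execution
    # raises, e.g. division by zero or log(0).
    try:
        eval(instruction, {"log": math.log})
        return True
    except (ArithmeticError, ValueError):
        return False

hypotheses = ["1 / 0", "log(0)", "log(10) / 2", "3 + 4"]
viable = [h for h in hypotheses if survives_execution(h)]
print(viable)  # the two unexecutable hypotheses are pruned
```

Pruning at hypothesis time keeps the beam free of instructions whose return value vi cannot be computed.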
1705.04304 | 29 | We also compare our model against extractive baselines (either lead sentences or lead words) and the extractive summarization model built by Durrett et al. (2016), which was trained on a version of the NYT dataset that is 6 times smaller than ours but contains longer summaries. We trained our ML+RL model on their dataset and show the results in Table 4. Similarly to Durrett et al. (2016), we report the limited-length ROUGE recall scores instead of full-length F-scores. For each example, we limit the generated summary length or the baseline length to the ground truth summary length. Our results show that our mixed-objective model has higher ROUGE scores than their extractive model and the extractive baselines.
Model | Readability
ML | 6.76
RL | 4.18
ML+RL | 7.04
Table 5: Comparison of human readability scores on a random subset of the CNN/Daily Mail test dataset. All models are with intra-decoder attention.
6.3 QUALITATIVE ANALYSIS
We perform human evaluation to ensure that our increase in ROUGE scores is also followed by an increase in human readability and quality. In particular, we want to know whether the ML+RL training objective did improve readability compared to RL. | 1705.04304#29 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
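The limited-length ROUGE recall reported in the preceding chunk (1705.04304#29) truncates each generated summary to the ground-truth length before scoring. A unigram-recall sketch of that protocol; the official evaluation uses the ROUGE toolkit's length-limit options, and the example sentences are invented:

```python
from collections import Counter

def limited_length_rouge1_recall(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    cand = cand[:len(ref)]  # limit candidate to the ground-truth length
    overlap = Counter(cand) & Counter(ref)  # clipped unigram matches
    return sum(overlap.values()) / len(ref)

score = limited_length_rouge1_recall(
    "the senate passed the bill on tuesday with broad support",
    "senate passes bill tuesday")
print(score)
```

Truncating the candidate first prevents long outputs from inflating recall, which makes the comparison with the shorter extractive summaries of Durrett et al. (2016) fair.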
1705.04146 | 30 | for training the model.
For both the attention and copy mechanisms, for each instruction zi, the model needs to compute the probability distribution over all the attendable units c conditioned on the previous state hi−1. For the attention model and input copy mechanisms, c = x0,i−1 and for the output copy mechanism c = z. These operations generally involve an exponential number of matrix multiplications as the size of c and z grows. For instance, during the computation of the probabilities for the input copy mechanism in Equation 1, the affinity function f between the current context q and a given input uk is generally implemented by projecting u and q into a single vector followed by a non-linearity, which is projected into a single affinity value. Thus, for each possible input u, 3 matrix multiplications must be performed. Furthermore, for RNN unrolling, parameters and intermediate outputs for these operations must be replicated for each timestamp. Thus, as z becomes larger the attention and copy mechanisms quickly become a memory bottleneck as the computation graph becomes too large to fit on the GPU. In contrast, the sequence-to-sequence model proposed in Sutskever et al. (2014) does not suffer from these issues, as each timestamp is dependent only on the previous state hi−1. | 1705.04146#30 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 30 | Evaluation setup: To perform this evaluation, we randomly select 100 test examples from the CNN/Daily Mail dataset. For each example, we show the original article, the ground truth summary as well as summaries generated by different models side by side to a human evaluator. The human evaluator does not know which summaries come from which model or which one is the ground truth. Two scores from 1 to 10 are then assigned to each summary, one for relevance (how well does the summary capture the important parts of the article) and one for readability (how well-written the summary is). Each summary is rated by 5 different human evaluators on Amazon Mechanical Turk and the results are averaged across all examples and evaluators.
Results: Our human evaluation results are shown in Table 5. We can see that even though RL has the highest ROUGE-1 and ROUGE-L scores, it produces the least readable summaries among our experiments. The most common readability issue observed in our RL results, as shown in the example of Table 3, is the presence of short and truncated sentences towards the end of sequences. This confirms that optimizing for a single discrete evaluation metric such as ROUGE with RL can be detrimental to the model quality. | 1705.04304#30 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 31 | To deal with this, we use a training method we call staged back-propagation, which saves memory by considering slices of K tokens in z, rather than the full sequence. That is, to train on a mini-batch where |z| = 300 with K = 100, we would actually train on 3 mini-batches, where the first batch would optimize for the first z1:100, the second for z101:200 and the third for z201:300. The advantage of this method is that memory intensive operations, such as attention and the copy mechanism, only need to be unrolled for K steps, and K can be adjusted so that the computation graph fits in memory.
However, unlike truncated back-propagation for language modeling, where context outside the scope of K is ignored, sequence-to-sequence models require global context. Thus, the sequence of states h is still built for the whole sequence z. Afterwards, we obtain a slice hj:j+K, and compute the attention vector.3 Finally, the prediction of the instruction is conditioned on the LSTM state
3 This modeling strategy is sometimes known as late fusion, as the attention vector is not used for state propagation; it is incorporated 'later'.
and the attention vector.
# 6 Experiments | 1705.04146#31 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
1705.04304 | 31 | On the other hand, our RL+ML summaries obtain the highest readability and relevance scores among our models, hence solving the readability issues of the RL model while also having a higher ROUGE score than ML. This demonstrates the usefulness and value of our RL+ML training method for abstractive summarization.
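The RL+ML training referred to here blends the two losses linearly, with a self-critical RL term that compares a sampled summary's reward against a greedy-decoded baseline. A minimal numeric sketch (our own formulation; the default γ is the mixing weight the paper reports, the rest is illustrative):

```python
def mixed_loss(l_ml, r_sample, r_baseline, logp_sample, gamma=0.9984):
    """Blend maximum-likelihood and self-critical RL losses.

    l_ml:         teacher-forced cross-entropy loss
    r_sample:     reward (e.g. ROUGE) of a sampled summary
    r_baseline:   reward of the greedy-decoded baseline summary
    logp_sample:  total log-probability of the sampled summary
    gamma:        mixing weight between the RL and ML terms
    """
    # Minimizing this term raises the probability of samples that beat
    # the greedy baseline, and lowers it otherwise.
    l_rl = (r_baseline - r_sample) * logp_sample
    return gamma * l_rl + (1.0 - gamma) * l_ml
```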
# 7 CONCLUSION
We presented a new model and training procedure that obtains state-of-the-art results in text summarization for the CNN/Daily Mail, improves the readability of the generated summaries, and is better suited to long output sequences. We also ran our abstractive model on the NYT dataset for the first time. We saw that despite their common use for evaluation, ROUGE scores have their shortcomings and should not be the only metric to optimize a summarization model on for long sequences. Our intra-attention decoder and combined training objective could be applied to other sequence-to-sequence tasks with long inputs and outputs, which is an interesting direction for further research.
# REFERENCES
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. | 1705.04304#31 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 32 | and the attention vector.
# 6 Experiments
We apply our model to the task of generating rationales for solutions to math problems, evaluating it on both the quality of the rationale and the ability of the model to obtain correct answers.
# 6.1 Baselines
As the baseline we use the attention-based sequence-to-sequence model proposed by Bahdanau et al. (2014), along with proposed augmentations allowing it to copy from the input (Ling et al., 2016) and from the output (Merity et al., 2016).
# 6.2 Hyperparameters
We used a two-layer LSTM with a hidden size of H = 200, and word embeddings with size 200. The number of levels D that the graph G is expanded to during sampling is set to 5. Decoding is performed with a beam of 200. As for the vocabulary of the softmax and embeddings, we keep the most frequent 20,000 word types, and replace the rest of the words with an unknown token. During training, the model only learns to predict a word as an unknown token when there is no other alternative to generate the word.
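The vocabulary truncation described above (keep the 20,000 most frequent word types, map everything else to an unknown token) can be sketched as follows (a minimal illustration with hypothetical helper names):

```python
from collections import Counter

def build_vocab(corpus_tokens, max_types=20000, unk="<unk>"):
    """Keep the most frequent word types; all other words map to unk."""
    counts = Counter(corpus_tokens)
    vocab = {unk: 0}
    for word, _ in counts.most_common(max_types):
        vocab[word] = len(vocab)
    return vocab

def encode(tokens, vocab, unk="<unk>"):
    """Replace out-of-vocabulary tokens with the unknown token's id."""
    return [vocab.get(t, vocab[unk]) for t in tokens]

# Tiny example with max_types=2: only "a" and "b" survive truncation.
vocab = build_vocab("a b a c a b".split(), max_types=2)
print(encode("a b c d".split(), vocab))  # [1, 2, 0, 0]
```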
# 6.3 Evaluation Metrics | 1705.04146#32 | Program Induction by Rationale Generation : Learning to Solve and Explain Algebraic Word Problems | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. Distraction-based neural networks for modeling documents. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), pp. 2754–2760, 2016. | 1705.04304#32 | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization
Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.
Sumit Chopra, Michael Auli, Alexander M Rush, and SEAS Harvard. Abstractive sentence summarization with attentive recurrent neural networks. Proceedings of NAACL-HLT16, pp. 93–98, 2016.
Bonnie Dorr, David Zajic, and Richard Schwartz. Hedge trimmer: A parse-and-trim approach to headline generation. In Proceedings of the HLT-NAACL 03 Text Summarization Workshop - Volume 5, pp. 1–8. Association for Computational Linguistics, 2003.
Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. Learning-based single-document summarization with compression and anaphoricity constraints. arXiv preprint arXiv:1603.08887, 2016.
have achieved good performance on short input and output sequences. For longer
documents and summaries however these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | http://arxiv.org/pdf/1705.04304 | Romain Paulus, Caiming Xiong, Richard Socher | cs.CL | null | null | cs.CL | 20170511 | 20171113 | [
{
"id": "1603.08148"
},
{
"id": "1612.00563"
},
{
"id": "1608.02927"
},
{
"id": "1603.08887"
},
{
"id": "1511.06732"
},
{
"id": "1611.03382"
},
{
"id": "1603.08023"
},
{
"id": "1609.08144"
},
{
"id": "1601.06733"
},
{
"id": "1602.06023"
},
{
"id": "1608.05859"
},
{
"id": "1509.00685"
}
] |
1705.04146 | 33 | # 6.3 Evaluation Metrics
The evaluation of the rationales is performed with average sentence-level perplexity and BLEU-4 (Papineni et al., 2002). When a model cannot generate a token for perplexity computation, we predict the unknown token. This benefits the baselines, as they are less expressive. As the perplexity of our model is dependent on the latent program that is generated, we force-decode our model to generate the rationale while maximizing the probability of the program. This is analogous to the method used to obtain sample programs described in Section 4, but we choose the most likely instructions at each timestamp instead of sampling. Finally, the correctness of the answer is evaluated by computing the percentage of the questions where the chosen option matches the correct one.
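Two of the metrics described here are simple to state directly; a minimal sketch with our own helper names (BLEU-4 is omitted, since standard implementations exist):

```python
import math

def sentence_perplexity(token_logprobs):
    """Perplexity of one sentence from its per-token log-probabilities (base e)."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def average_perplexity(per_sentence_logprobs):
    """Average sentence-level perplexity over a corpus."""
    return sum(sentence_perplexity(s) for s in per_sentence_logprobs) / len(per_sentence_logprobs)

def option_accuracy(chosen, correct):
    """Percentage of questions whose chosen option matches the correct one."""
    hits = sum(c == g for c, g in zip(chosen, correct))
    return 100.0 * hits / len(correct)

print(option_accuracy(list("ABCAB"), list("ABCDE")))  # 60.0
```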
# 6.4 Results
The test set results, evaluated on perplexity, BLEU, and accuracy, are presented in Table 3.
Model         | Perplexity | BLEU  | Accuracy
Seq2Seq       | 524.7      | 8.57  | 20.8
+Copy Input   | 46.8       | 21.3  | 20.4
+Copy Output  | 45.9       | 20.6  | 20.2
Our Model     | 28.5       | 27.2  | 36.4
Table 3: Results over the test set measured in Perplexity, BLEU and Accuracy.
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs. | http://arxiv.org/pdf/1705.04146 | Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20170511 | 20171023 | [] |