Dataset fields (with value ranges reported by the dataset viewer):
- doi: string (length 10)
- chunk-id: int64 (0 to 936)
- chunk: string (401 to 2.02k characters)
- id: string (12 to 14)
- title: string (8 to 162)
- summary: string (228 to 1.92k)
- source: string (31)
- authors: string (7 to 6.97k)
- categories: string (5 to 107)
- comment: string (4 to 398)
- journal_ref: string (8 to 194)
- primary_category: string (5 to 17)
- published: string (8)
- updated: string (8)
- references: list
2307.08072
72
keep you fresh and focused... Answer: Here are some ways you can increase your productivity while working from home: 1. Create a separate workspace: One of the biggest challenges of working from home is the tendency to mix work life with home life. You can overcome this by creating a separate workspace in your home that you can set aside just for work... Answer: The answer is to have a work-life-home type lifestyle. [Score-table fragment: 58.4 57.3 26.1 34.65 35.41 0.15 1142 1120 630]
2307.08072#72
Do Emergent Abilities Exist in Quantized Large Language Models: An Empirical Study
Despite the superior performance, Large Language Models~(LLMs) require significant computational resources for deployment and use. To overcome this issue, quantization methods have been widely applied to reduce the memory footprint of LLMs as well as to increase the inference rate. However, a major challenge is that low-bit quantization methods often lead to performance degradation. It is important to understand how quantization impacts the capacity of LLMs. Different from previous studies focused on overall performance, this work aims to investigate the impact of quantization on \emph{emergent abilities}, which are important characteristics that distinguish LLMs from small language models. Specifically, we examine the abilities of in-context learning, chain-of-thought reasoning, and instruction-following in quantized LLMs. Our empirical experiments show that these emergent abilities still exist in 4-bit quantization models, while 2-bit models encounter severe performance degradation on the test of these abilities. To improve the performance of low-bit models, we conduct two special experiments: (1) a fine-grained impact analysis that studies which components (or substructures) are more sensitive to quantization, and (2) performance compensation through model fine-tuning. Our work derives a series of important findings to understand the impact of quantization on emergent abilities, and sheds light on the possibilities of extremely low-bit quantization for LLMs.
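The 4-bit setting the abstract refers to can be approximated with off-the-shelf tooling; a minimal sketch, assuming the Hugging Face transformers, accelerate, and bitsandbytes packages and a CUDA GPU, with an illustrative placeholder model name rather than one of the checkpoints studied in the paper:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint, not from the paper

# 4-bit weight quantization to shrink the memory footprint; compute stays in fp16.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant_config, device_map="auto"
)

# A tiny in-context-learning probe, the kind of emergent ability the study tests.
prompt = "Translate English to French:\nsea otter => loutre de mer\ncheese =>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=5)[0]))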
http://arxiv.org/pdf/2307.08072
Peiyu Liu, Zikang Liu, Ze-Feng Gao, Dawei Gao, Wayne Xin Zhao, Yaliang Li, Bolin Ding, Ji-Rong Wen
cs.CL, cs.AI
15 pages, 4 figures
null
cs.CL
20230716
20230726
[ { "id": "2305.14314" }, { "id": "2206.07682" }, { "id": "2210.17323" }, { "id": "2303.08302" } ]
2307.08074
72
# 1. Appendix

# 1.1 Details of ZPR (Zero Pronoun Recovery) Task
Chinese pronouns correspond to the personal pronouns in English, and the Chinese pronominal system is relatively simple as there is no inflection, conjugation, or case markers (Li and Thompson 1989). Thus, there is no difference between subjective and objective pronouns (we call them "basic pronouns"). Besides, possessive and reflexive pronouns can be generated by adding some particle or modifier based on the basic pronouns.

Table 12: Chinese-English pronouns with corresponding forms. The pronoun types are short for: person = 1st, 2nd, 3rd; singular = SG, plural = PL; male = M, female = F and neutral = N.
2307.08074#72
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.07924
73
: To design a basic Gomoku game in Python with a GUI, we will need the following core classes, functions, and methods: • Board: This class represents the game board and handles the logic for placing and checking moves. • Game: This class manages the game flow and user interactions. • GUI: This class handles the graphical user interface using a GUI framework (e.g., tkinter). Now, let's proceed with the implementation of each file. # game.py
2307.07924#73
Communicative Agents for Software Development
Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev.
http://arxiv.org/pdf/2307.07924
Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.SE, cs.CL, cs.MA
https://github.com/OpenBMB/ChatDev
null
cs.SE
20230716
20231219
[ { "id": "2204.06125" }, { "id": "2107.03374" }, { "id": "2305.13281" }, { "id": "2304.03442" }, { "id": "2304.05128" }, { "id": "2303.17760" } ]
2307.08074
73
Rows of Table 12 recoverable from this chunk (columns: Subject / Object / Possessive adjective / Reflexive; the chunk is cut off mid-table):
1st SG: 我 (I) / 我 (me) / 我的 (my) / 我自己的 (myself)
2nd SG: 你 (you) / 你 (you) / 你的 (your) / 你自己的 (yourself)
3rd SGM: 他 (he) / 他 (him) / 他的 (his) / 他自己的 (himself)
3rd SGF: 她 (she) / 她 (her) / 她的 (her) / 她自己的 (herself)
3rd SGN: 它 (it) / 它 (it) / 它的 (its) / 它自己的 (itself)
1st PL: 我们 (we) / 我们 (us)
2nd PL: 你们 (you) / 你们 (you) / 你们的 (your) / 你们的 (yours) / 你们自己的 (yourselves)
3rd PLM: 他们 (they) / 他们 (them)
3rd PLF: 她们 (they) / 她们 (them)
3rd PLN: 它们 (they) / 它们 (them)
我们的
2307.08074#73
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.07924
74
framework (e.g., tkinter). Now, let's proceed with the implementation of each file.

# game.py
class Game:
    def __init__(self):
        self.board = [[0] * 15 for _ in range(15)]
        self.current_player = 1
        self.game_over = False

    def make_move(self, row, col):
        if self.board[row][col] != 0 or self.game_over:
            return
        self.board[row][col] = self.current_player
        if self.check_win(row, col):
            self.game_over = True
        elif self.is_board_full():
            self.game_over = True
        self.current_player = 3 - self.current_player
· · ·

# gui.py
class GUI:
    def create_board(self):
        self.board_frame = tk.Frame(self.window)
        self.board_frame.pack()
        for row in range(15):
            row_buttons = []
            for col in range(15):
                button = tk.Button(width=50, height=50)
                button.grid(row=row, column=col)
                row_buttons.append(button)
            self.buttons.append(row_buttons)
        self.update_board_gui()

    def is_game_over(self, result):
        pass
· · ·
2307.07924#74
Communicative Agents for Software Development
Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev.
http://arxiv.org/pdf/2307.07924
Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.SE, cs.CL, cs.MA
https://github.com/OpenBMB/ChatDev
null
cs.SE
20230716
20231219
[ { "id": "2204.06125" }, { "id": "2107.03374" }, { "id": "2305.13281" }, { "id": "2304.03442" }, { "id": "2304.05128" }, { "id": "2303.17760" } ]
2307.08074
74
Continuation of Table 12 (possessive adjective / possessive / reflexive forms):
2nd PL: 你们的 (your) / 你们的 (yours) / 你们自己的 (yourselves)
1st PL: 我们的 (our) / 我们的 (ours) / 我们自己的 (ourselves)
3rd PLM: 他们的 (their) / 他们的 (theirs) / 他们自己的 (themselves)
3rd PLF: 她们的 (their) / 她们的 (theirs) / 她们自己的 (themselves)
3rd PLN: 它们的 (their) / 它们的 (theirs) / 它们自己的 (themselves)
Possessive (singular): 我的 (mine) / 你的 (yours) / 他的 (his) / 她的 (hers) / 它的 (its)
2307.08074#74
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
75
# 1.2 Details of TE (Text Expansion) Task
This task encourages the addition of different content types to detail and deepen the context. Table 13 classifies the five primary types of expanded content utilized in TE tasks, ranging from simple items like adjectives and adverbs to more intricate ones like prepositional phrases and attributive clauses. Table 14 illustrates how TE operates in practice. The content additions enhance the original text with more explicit details and amplify the conveyed sentiment, thereby enhancing the context's richness and complexity. Therefore, the TE task serves as a valuable measure of a language model's capacity for context understanding and enrichment.

Table 13: The expansion types in the TE task are summarized. All the exemplar spans are highlighted in the texts in Table 14.
(1) adjective (phrase): innocent little
(2) adverb (phrase): firmly, even now still, unanimously
(3) noun (phrase): President, weak country, weakness of democratic
(4) prepositional phrase: To this effect, In the past, at this time, in a barrel pierced full of holes
(5) attributive clause: that still bears his name, who was to say a mass for his soul
2307.08074#75
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.07924
76
Roles: Programmer, Designer

: To equip the software with a beautiful graphical user interface, we will discuss and design many decorative images for GUI decoration. Now, according to the codes, we keep discussing the GUI beautification by listing some functionally independent elements in the GUI that are being considered to be decorated with different pictures.

# gui.py
class GUI:
    def __init__(self, game):
        player_o = str("O")
        player_x = str("X")

    def make_move(self, row, col):
        self.game.play_move(row, col)
        self.buttons[row][col].config("X" if self.game.current_player == 'X' else "O")

    def is_game_over(self, result):
        pass
· · ·

: • board.png: The game board where players make their moves.
• button.png: The button used to make a move on the game board.
• player_X.png: The image representing player X.
• player_O.png: The image representing player O.
• win.png: The message displayed when a player wins.
• game_over.png: The message displayed when the game is over.
2307.07924#76
Communicative Agents for Software Development
Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev.
http://arxiv.org/pdf/2307.07924
Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.SE, cs.CL, cs.MA
https://github.com/OpenBMB/ChatDev
null
cs.SE
20230716
20231219
[ { "id": "2204.06125" }, { "id": "2107.03374" }, { "id": "2305.13281" }, { "id": "2304.03442" }, { "id": "2304.05128" }, { "id": "2303.17760" } ]
2307.08074
76
Table 14: Three examples to illustrate the task of TE, where the blue spans in Target are expanded content generated based on the source input as context.

# Source 1
In 1823 James Monroe proclaimed the doctrine. The United States was an infant, threatened by European actions.
# Target 1
To this effect, in 1823 President James Monroe proclaimed the doctrine that still bears his name. The United States at this time was an infant, weak country, threatened by European actions.

# Source 2
First was the rule. Political democracies have not been institutionalized in parts of Latin America. No democratic regime had lasted half a century.
# Target 2
First was the weakness of democratic rule. Political democracies even now still have not been firmly institutionalized in parts of Latin America. In the past no democratic regime had lasted half a century.

# Source 3
The peasant was sentenced to death, and was to be rolled into the water. He was led forth, and a priest was brought.
# Target 3
The innocent little peasant was unanimously sentenced to death, and was to be rolled into the water, in a barrel pierced full of holes. He was led forth, and a priest was brought who was to say a mass for his soul.

# 1.3 Hyper-parameter Configuration for Pretrained Models
2307.08074#76
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.07924
77
I have created all the pictures in the same directory as the Python files. Now, use some or all of them in the GUI to make it more beautiful and creative.

: To incorporate the designed images into the GUI, I have modified the 'GUI' class as follows:

# gui.py
class GUI:
    def __init__(self, game):
        player_o = Image.open("player_O.png")
        player_x = Image.open("player_X.png")

    def make_move(self, row, col):
        self.game.play_move(row, col)
        self.buttons[row][col].config(image=self.player_x if self.game.current_player == 'X' else self.player_o)

    def is_game_over(self, result):
        pass
· · ·

The modified 'GUI' class will load and resize the images, convert them to a Tkinter-compatible format, and use them to create the GUI elements.

Testing: Code Review (via Thought Instruction)
Roles: Programmer, Reviewer

: Review the codes in detail, propose one comment with the highest priority about the codes, and give me instructions on how to fix it.
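As the chunk notes, the images have to be converted to a Tkinter-compatible format before a button can display them; Image.open alone returns a PIL image that tkinter widgets cannot use. A minimal sketch of that conversion, assuming Pillow is installed; the helper name load_tk_image is hypothetical, not from the paper:

import tkinter as tk
from PIL import Image, ImageTk

def load_tk_image(path, size=(50, 50)):
    # Hypothetical helper: open the PNG, resize it to the cell size,
    # and wrap it in an ImageTk.PhotoImage that tkinter widgets accept.
    img = Image.open(path).resize(size)
    return ImageTk.PhotoImage(img)

root = tk.Tk()
player_x = load_tk_image("player_X.png")  # keep a reference, or tkinter drops the image
button = tk.Button(root, image=player_x)
button.grid(row=0, column=0)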
2307.07924#77
Communicative Agents for Software Development
Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev.
http://arxiv.org/pdf/2307.07924
Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.SE, cs.CL, cs.MA
https://github.com/OpenBMB/ChatDev
null
cs.SE
20230716
20231219
[ { "id": "2204.06125" }, { "id": "2107.03374" }, { "id": "2305.13281" }, { "id": "2304.03442" }, { "id": "2304.05128" }, { "id": "2303.17760" } ]
2307.08074
77
# 1.3 Hyper-parameter Configuration for Pretrained Models Table 15 encapsulates the hyper-parameter configurations applied to the Disco-Bench pretrained models. These models include RoBERTa, GPT2, BART, and mBART. The models were all trained using the Adam optimizer, with a learning rate of 3e-4. A vocabulary size that varies between models was employed, ranging from approximately 21k (for RoBERTa, GPT2, BART) up to 250k (for mBART). The maximum length of sequences processed varied from 512 to 1024 tokens. The models’ architecture varied in terms of the number of layers and heads. Finally, the total parameters of the models ranged from 110 million (for the smaller variant of RoBERTa) up to 737 million (for GPT2), thus demonstrating the diverse scale of models used in the study. # 1.4 Hyper-parameter Configuration for Fine-tuning Downstream Tasks
2307.08074#77
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.07924
78
: The highest priority comment is regarding the incomplete implementation of the check_winner() and is_board_full() methods in the GameBoard class. These methods are marked with a TODO comment and need to be implemented to determine the winner of the game and check if the game board is full. To fix this, you need to implement the logic for both methods. Here are the instructions to do so:
1. In the check_winner() method, you need to implement the logic to check if there is a winning condition on the game board. This can be done by checking for five consecutive stones in a row, column, or diagonal. You can use nested loops to iterate through the board and check for these conditions. Once a winner is found, you can return the player number (1 or 2) or any other value to indicate the winner. If no winner is found, you can return None or any other value to indicate no winner.
2. In the is_board_full() method, you need to implement the logic to check if the game board is full. This can be done by iterating through the board and checking if any cell is empty (i.e., has a value of 0). If all cells are filled, you can return True to indicate that the board is full. Otherwise, you can return False.
Once you have implemented these methods, make sure to remove the pass statement from both methods.
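Following these instructions literally, here is a minimal sketch of the two methods, assuming the 15x15 integer board from the game.py chunk earlier (0 = empty, 1 and 2 = the two players); it is an illustration, not the code generated in the paper:

class GameBoard:
    def __init__(self):
        self.board = [[0] * 15 for _ in range(15)]

    def check_winner(self):
        # Scan every cell and four directions (right, down, down-right, down-left)
        # for five consecutive stones belonging to the same player.
        directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
        for row in range(15):
            for col in range(15):
                player = self.board[row][col]
                if player == 0:
                    continue
                for dr, dc in directions:
                    end_r, end_c = row + 4 * dr, col + 4 * dc
                    if not (0 <= end_r < 15 and 0 <= end_c < 15):
                        continue
                    if all(self.board[row + i * dr][col + i * dc] == player for i in range(5)):
                        return player  # the winning player's number (1 or 2)
        return None  # no winner found

    def is_board_full(self):
        # The board is full when no cell is still empty (value 0).
        return all(cell != 0 for row in self.board for cell in row)

Four directions are enough because every line of five stones is discovered from its first stone when scanning rightwards, downwards, and along the two diagonals.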
2307.07924#78
Communicative Agents for Software Development
Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev.
http://arxiv.org/pdf/2307.07924
Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.SE, cs.CL, cs.MA
https://github.com/OpenBMB/ChatDev
null
cs.SE
20230716
20231219
[ { "id": "2204.06125" }, { "id": "2107.03374" }, { "id": "2305.13281" }, { "id": "2304.03442" }, { "id": "2304.05128" }, { "id": "2303.17760" } ]
2307.08074
78
# 1.4 Hyper-parameter Configuration for Fine-tuning Downstream Tasks
Table 16 summarizes the hyper-parameter configurations used for fine-tuning Disco-Bench models on various downstream tasks, encompassing the SI, ZPR, MRC, NT, ACT, PT, TE, TI, and TC tasks. The batch size for the tasks varied, with a lower limit of 5 (for ZPR) and an upper limit of 3000 tokens (for NT, ACT, and PT). The maximum length of input sequences also differed, ranging from 64 (for TI) to 1024 (for NT, ACT, and PT). Training epochs varied significantly, with a minimum of 3 (for TE and TI) and a maximum of 30000 steps (for NT, ACT, and PT). Learning rates also varied to accommodate the specific needs of each task, ranging from 5e-6 (for ZPR) to 1e-4 (for NT and ACT).

Table 15: The summary of hyper-parameters used for Disco-Bench pretrained models.
2307.08074#78
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.07924
79
: Modify corresponding codes according to the above-mentioned comments.

: This modified file contains the GameBoard class that represents the game board and handles the logic of the game.

# board.py
class GameBoard:
    def check_winner(self):
        for row in range(15):
            for col in range(11):
                if check_winner():
                    return self.board[row][col]
        for col in range(15):
            for row in range(11):
                if check_winner():
                    return self.board[row][col]
        return None

    def is_board_full(self):
        for row in range(15):
            for col in range(15):
                if self.board[row][col] == 0:
                    return False
        return True
· · ·

Testing: Code Execution (via Thought Instruction)
Roles: Programmer, Tester

: Here is the test report of the software:

Error
Traceback (most recent call last):
  File "main.py", line 19, in <module>
    game = GomokuGame()
NameError: name 'GomokuGame' is not defined

: According to the test reports, please locate and summarize the bugs that cause the problem.
2307.07924#79
Communicative Agents for Software Development
Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev.
http://arxiv.org/pdf/2307.07924
Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.SE, cs.CL, cs.MA
https://github.com/OpenBMB/ChatDev
null
cs.SE
20230716
20231219
[ { "id": "2204.06125" }, { "id": "2107.03374" }, { "id": "2305.13281" }, { "id": "2304.03442" }, { "id": "2304.05128" }, { "id": "2303.17760" } ]
2307.08074
79
Table 15: The summary of hyper-parameters used for Disco-Bench pretrained models.
Model | Tokenization | Optimizer | Masking | Vocabulary Size | Learning Rate | Batch Size | Training Step | Max Length | Layer | Head | Total Param.
RoBERTa | BERTtok. | Adam | word | 21128 | 3e-4 | 4K | 1M | 512 | 12/24 | 12/16 | 110M/340M
GPT2 | BERTtok. | Adam | - | 21131 | 3e-4 | 4K | 1M | 1024 | 20 | 36 | 737M
BART | BERTtok. | Adam | word | 21128 | 3e-4 | 4K | 1M | 512 | 24 | 16 | 406M
mBART | SentPiece | Adam | word | 250000 | 3e-4 | 4K | 1M | 1024 | 12/24 | 12/16 | 669M

Table 16: A summary of hyper-parameters for fine-tuning downstream tasks.
Task | Batch Size | Max Length | Epoch | Learning Rate
SI | 64 | 512 | 5 | 3e-5
ZPR | 5 | 512 | 40 | 5e-6
MRC | 6 | 512 | 10 | 2e-5
NT | 3K token | 1024 | 30K step | 1e-4
ACT | 3K token | 1024 | 30K step | 1e-4
PT | 3K token | 1024 | 30K step | 1e-5
TE | 32 | 512 | 3 | 2e-4
TI | 24 | 64 | 3 | 2e-5
TC | 24 | 512 | 8 | 2e-5
2307.08074#79
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.07924
80
: According to the test reports, please locate and summarize the bugs that cause the problem.

: Based on the test report, the bug causing the problem is a NameError. The error message indicates that the name GomokuGame is not defined. This suggests that there is a missing import or definition for the GomokuGame class in the source code. To resolve this issue, you need to import or define the GomokuGame class in the source code before it is used. This can be done by adding the necessary import statement or by defining the class within the same file or in a separate module that is imported.

: To make the software execute smoothly and robustly, you should modify the problematic code segments based on the test report and my suggestions.

: This modified file has imported the GameBoard class that represents the game board and handles the logic of the game.

# main.py
import tkinter as tk
from tkinter import messagebox as tkmessagebox

from board import GameBoard

class GameGUI:
    def __init__(self):
        self.root = tk.Tk()
        self.root.title("Gomoku")
        self.game_board = GameBoard()
        self.buttons = [[None] * 15 for _ in range(15)]
        self.create_widgets()
· · ·

: The code is well-structured and follows good coding practices. It passed the test successfully.
2307.07924#80
Communicative Agents for Software Development
Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev.
http://arxiv.org/pdf/2307.07924
Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.SE, cs.CL, cs.MA
https://github.com/OpenBMB/ChatDev
null
cs.SE
20230716
20231219
[ { "id": "2204.06125" }, { "id": "2107.03374" }, { "id": "2305.13281" }, { "id": "2304.03442" }, { "id": "2304.05128" }, { "id": "2303.17760" } ]
2307.08074
80
# 1.5 English Translations of Figures 4-6
Table 17 presents English translations of the examples from Figures 5, 6, and 7. Each row details the discourse context and the task description for a specific task. By mapping these discourse phenomena into English, we can better understand the tasks and their associated challenges when developing and evaluating models.

# 1.6 Introduction to Diagnostic Prompts for Language Models
Table 18 showcases the prompts used in the Large Language Models (LLMs) probing for the Disco-Bench Benchmark tasks and the Disco-Bench Cohesion Test Suite. Each row describes a specific task, such as Speaker Identification (SI), Zero Pronoun Recovery (ZPR), and Multiple-choice Reading Comprehension (MRC), along with their corresponding prompts. The prompts were designed to assess various aspects of language understanding, including context interpretation, anaphora resolution, translation, and text completion. For translation and text evaluation tasks, the LLMs are required to choose from multiple candidates, making these tasks challenging and comprehensive. The diagnostic prompts aid in benchmarking the performance of LLMs in various discourse-level tasks, and they serve as a resource to assess the coherence and cohesion understanding of the models.
2307.08074#80
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.07924
81
Documenting
Roles: CTO, Programmer, CEO, CPO

: Write a requirements.txt file according to the codes to specify the dependencies or packages required for the project to run properly.

: Here is the environment dependency file:

requirements.txt
numpy==1.24.3
pandas>=1.1.4
pygame==2.5.0
requests==2.31.0
tkinter==8.6
tqdm==4.65.0
· · ·

: According to the user demands, codes, requirements, and file format listed above, write a manual.md file to provide a detailed introduction to the use of the software.
2307.07924#81
Communicative Agents for Software Development
Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev.
http://arxiv.org/pdf/2307.07924
Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.SE, cs.CL, cs.MA
https://github.com/OpenBMB/ChatDev
null
cs.SE
20230716
20231219
[ { "id": "2204.06125" }, { "id": "2107.03374" }, { "id": "2305.13281" }, { "id": "2304.03442" }, { "id": "2304.05128" }, { "id": "2303.17760" } ]
2307.07924
82
: Here is the user manual:

Gomoku Game User Manual

Introduction
Welcome to the Gomoku Game!

Installation
1. Install Python on your computer.
2. Open a terminal and run: pip install -r requirements.txt

Running the Game
1. Open a terminal and run: python main.py
2. Click on the empty cells.
3. The game will continue until one player gets five stones in a row.

[Figure 11: The category of NLDD (a tree of software categories, e.g. Graphics, Music, Schedule, Health&Fitness, Finance, Office, and game subcategories such as Action Game, Puzzle Game, Board Game).]

Figure 12: The distribution of description length in NLDD. It can be seen from the figure that the length presents an approximate mixed Gaussian distribution, mainly concentrated around the lengths of 17 and 77, which represents the long and short software descriptions in the NLDD.
2307.07924#82
Communicative Agents for Software Development
Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev.
http://arxiv.org/pdf/2307.07924
Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.SE, cs.CL, cs.MA
https://github.com/OpenBMB/ChatDev
null
cs.SE
20230716
20231219
[ { "id": "2204.06125" }, { "id": "2107.03374" }, { "id": "2305.13281" }, { "id": "2304.03442" }, { "id": "2304.05128" }, { "id": "2303.17760" } ]
2307.08074
82
Task Discourse Context Task Description Figure 2 SI Xing Jiu’an followed Mu Qing into the car and sat in the co-pilot position. "Are you in a bad mood?" Mu Qing asked. "Um, yes." Inp: "Um, yes." Out: Speaker=Xing Jiu’an ZPR A: Phoebe would love to buy a TV. B: Joey won’t let ∅ buy ∅? A: Yes. Inp: B: Joey won’t let ∅ buy ∅? Out: B: Joey won’t let her buy it? MRC The little princess climbed out of the castle window while her mother was sleeping. She climbed down the south wall and slipped out. Finally ∅ walked into the forest without telegraph poles. Inp: Where did the little princess go after she escaped? (A) South Wall; (B) Forest; (C) Castle; (D) Mountain. Out: Answer=(B) Forest Figure 3 NT King Ding sat on the side, smiling as he looked at Qing Shuang’s astounded thoughts. ∅ mind had already flown to a faraway place. Inp: ∅ mind had already flown to a faraway
2307.08074#82
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.07924
83
Figure 13: We transform the software description into embeddings with the OpenAI Ada model and then perform dimensionality reduction and visualization. As shown in the figure, it can be observed that 1) software descriptions of the same category are distributed in clusters, indicating that the generated descriptions are highly related to their categories. 2) Descriptions in different subcategories under the same category are clustered together, such as the eight game subcategories in the lower right corner. 3) Some subcategories of different categories also show overlaps in the figure, such as Tools & Utilities and Graphics, Schedule and Business, Sports and Sports Game. Such an overlap is comprehensible given the multi-functionality of some software applications that may not be confined to a single classification.
2307.07924#83
Communicative Agents for Software Development
Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase. At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks. The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev.
http://arxiv.org/pdf/2307.07924
Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
cs.SE, cs.CL, cs.MA
https://github.com/OpenBMB/ChatDev
null
cs.SE
20230716
20231219
[ { "id": "2204.06125" }, { "id": "2107.03374" }, { "id": "2305.13281" }, { "id": "2304.03442" }, { "id": "2304.05128" }, { "id": "2303.17760" } ]
2307.08074
83
astounded thoughts. ∅ mind had already flown to a faraway place. Inp: ∅ mind had already flown to a faraway place. Out: – CCT ©, when she is playing Xiao, not only can her beautiful face remain as usual, but also her charm increases. Why? © ∅ is playing, ∅ fingers press the holes on the flute, and in this way, ∅ tender and slim fingers will seem to be slimmer and fairer. ©, when shrinking ∅ month to blow, ∅ mouth appears to be smaller. I ask your lad beneath a tree. “My master’s gone for herbs, ” says he, “Amid the hills I know not where, For clouds have veiled them here and there. ” PT Inp: ©, when shrinking ∅ month to blow, ∅ mouth appears to be smaller. Out: Besides, when shrinking her month to blow, her mouth appears to be smaller. Inp: I ask your lad beneath a tree. Out: – Figure 4 TE – – TI Mu Xiaoxiao looked at his back aggrieved, why did it suddenly change like this? She was inexplicably trained
2307.08074#83
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
84
tree. Out: – Figure 4 TE – – TI Mu Xiaoxiao looked at his back aggrieved, why did it suddenly change like this? She was inexplicably trained for a while, which made her feel bad. When she got to class S, she was lying on the table and was sullen. Inp: Mu Xiaoxiao looked at his back ag- grieved, why did it suddenly change like this? [x] [x] [x] ... When she got to class S, she was lying on the table and was sullen. Out: She was inexplicably trained for a while, which made her feel bad. TC Chen Xu was hungry and cold. He used a small gas stove to cook a pot of noodles. The two gathered around the pot and devoured everything. After they ate the noodles, they felt alive. Inp: Chen Xu was hungry and cold. [x] [x] [x] ... Out: The two gathered around the pot and devoured everything. After they ate the noodles, they felt alive.
2307.08074#84
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
85
Table 18: The prompt for probing in LLMs. C represents the context for machine reading, SRC and TGT denote source and target languages, respectively. D represents a document containing several sentences. T1 . . . Tm refer to the translation candidates, where only one of them is a positive translation and the others are negative due to the modification of discourse-specific words. Task | Prompt
2307.08074#85
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
86
Disco-Bench Benchmark. SI: In this cloze reading comprehension task, I will input a passage of text and a sentence, and you will need to find relevant information from the text and determine the speaker of the sentence. Passage: P, Question: Q, Speaker: ZPR: The zero-anaphora recovery task is to restore the expression of omitted pronouns in terms of position and form based on the anaphoric information in the sentence. Please restore the original sentence with <> as the marker. If there is no zero-anaphora phenomenon, output "none." MRC: Answer the following multiple-choice questions. Choose A, B, C, or D as the final answer. "Content": C, "Question": Q, "Choices": [C1, C2, C3, C4], "Answer": NT: Translate the given Chinese into English. D CCT: Translate this ancient text into modern Chinese. D PT: Translate the given Chinese into English. D TE: Given a predefined text, the goal of TE is to insert appropriate words, phrases, or clauses for adding more details and deepening the meaning, while retaining coherence and cohesiveness. D TI: The purpose of the text filling task is to
2307.08074#86
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
87
or clauses for adding more details and deepening the meaning, while retaining coherence and cohesiveness." D The purpose of the text filling task is to predict text fragments based on context. The input includes the two sentences before and after the target sentence. Please output the target sentence. S−2, S−1, S1, S2 Based on the given context, the text completion task requires outputting the next four sentences. S−2 Disco-Bench Cohesion Test Suit MRC Output the model’s confidence for the answer based on the content and NT TC
2307.08074#87
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
88
corresponding answer of the following multiple-choice reading comprehension. Answer the confidence for the following multiple-choice questions. Choose A, B, C, or D as the final answer. "Content": C, "Question": Q, "Choices": [C1, C2, C3, C4], "Answer": "Cx", "Confidence": According to the Chinese text, which of the following is the correct English translation? Please output the correct translation’s corresponding number. Chinese: D English: [T1, T2, ..., Tm]. Correct translation number: Given the Chinese text, please evaluate the following sentences based on cohesion and fluency, and output the corresponding number of the optimal sentences: [S1, S2, ..., Sm]. References Banerjee, Satanjeev and Alon Lavie. 2005. Meteor: An automatic metric for MT evaluation with improved correlation with human judgments. In ACL.
2307.08074#88
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
89
Bates, Madeleine. 1995. Models of natural language understanding. Proceedings of the National Academy of Sciences, 92(22):9977–9982. Bawden, Rachel, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating discourse phenomena in neural machine translation. In NAACL. Cai, Deng, Yizhe Zhang, Yichen Huang, Wai Lam, and Bill Dolan. 2020. Narrative incoherence detection. arXiv preprint arXiv:2012.11157. Cai, Xinyi and Deyi Xiong. 2020. A test suite for evaluating discourse phenomena in document-level neural machine translation. In Proceedings of the Second International Workshop of Discourse Processing, pages 13–17. Chen, Mingda, Zewei Chu, and Kevin Gimpel. 2019. Evaluation benchmarks and learning criteria for discourse-aware sentence representations. In EMNLP-IJCNLP. Conneau, Alexis and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representations. In LREC. Cook, Guy. 1989. Discourse. Oxford University Press. Crystal, David. 1985. A
2307.08074#89
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
90
Senteval: An evaluation toolkit for universal sentence representations. In LREC. Cook, Guy. 1989. Discourse. Oxford University Press. Crystal, David. 1985. A Dictionary of Linguistics and Phonetics. Oxford: Blackwell Publishers. Cui, Yiming, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In EMNLP: Findings. Cui, Yiming, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, and Guoping Hu. 2019. A span-extraction dataset for Chinese machine reading comprehension. In EMNLP. De Beaugrande, Robert and Wolfgang Ulrich Dressler. 1981. Einführung in die Textlinguistik, volume 28. Tübingen: Niemeyer. Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
2307.08074#90
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
91
Genzel, Dmitriy, Jakob Uszkoreit, and Franz Josef Och. 2010. “poetic” statistical machine translation: Rhyme and meter. In EMNLP. Ghazvininejad, Marjan, Yejin Choi, and Kevin Knight. 2018. Neural poetry translation. In NAACL. Guan, Jian, Zhuoer Feng, Yamei Chen, Ruilin He, Xiaoxi Mao, Changjie Fan, and Minlie Huang. 2022. LOT: A story-centric benchmark for evaluating chinese long text understanding and generation. TACL. Guan, Jian, Xiaoxi Mao, Changjie Fan, Zitao Liu, Wenbiao Ding, and Minlie Huang. 2021. Long text generation by modeling sentence-level and discourse-level coherence. In ACL. Halliday, Michael Alexander Kirkwood and Ruqaiya Hasan. 1976. Cohesion in english. Longman. Hanks, William F. 1987. Discourse genres in a theory of practice. American Ethnologist, 14(4):668–692. He, Hua, Denilson Barbosa, and Grzegorz Kondrak. 2013.
2307.08074#91
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
92
of practice. American Ethnologist, 14(4):668–692. He, Hua, Denilson Barbosa, and Grzegorz Kondrak. 2013. Identification of speakers in novels. In ACL. He, Jie, Wanqiu Long, and Deyi Xiong. 2022. Evaluating discourse cohesion in pre-trained language models. In Proceedings of the 3rd Workshop on Computational Approaches to Discourse, pages 28–34. Huang, Yichen, Yizhe Zhang, Oussama Elachqar, and Yu Cheng. 2020. INSET: Sentence infilling with INter-SEntential transformer. In ACL. Kevitt, Paul Mc, Derek Partridge, and Yorick Wilks. 1992. Approaches to natural language discourse processing. Artificial Intelligence Review, 6(4):333–364. Kong, Fang and Guodong Zhou. 2010. A tree kernel-based unified framework for chinese zero anaphora resolution. In EMNLP. Kreutzer, Julia, Joshua Uyheng, and Stefan Riezler. 2018. Reliability and learnability of human bandit feedback for sequence-to-sequence
2307.08074#92
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
93
Kreutzer, Julia, Joshua Uyheng, and Stefan Riezler. 2018. Reliability and learnability of human bandit feedback for sequence-to-sequence reinforcement learning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1777–1788, Association for Computational Linguistics. Krippendorff, Klaus. 2013. Content analysis: An introduction to its methodology. Sage publications.
2307.08074#93
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
94
Elson, Benjamin Franklin and Velma Pickett. 1983. Beginning morphology and syntax. Summer Inst of Linguistics. Lewis, Mike, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL. Li, Charles N and Sandra A Thompson. 1989. Mandarin Chinese: A functional reference grammar. University of California Press, Oakland, California, USA.
2307.08074#94
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
95
Li, Charles N and Sandra A Thompson. 1989. Mandarin Chinese: A functional reference grammar. University of California Press, Oakland, California, USA. Li, Jiwei, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL. Liang, Yaobo, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Bruce Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation. CoRR. Lin, Ziheng, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically evaluating text coherence using discourse relations. In ACL.
2307.08074#95
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
96
Lin, Ziheng, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically evaluating text coherence using discourse relations. In ACL. Liu, Shanshan, Xin Zhang, Sheng Zhang, Hui Wang, and Weiming Zhang. 2019a. Neural machine reading comprehension: Methods and trends. Applied Sciences, 9(18):3698. Liu, Yinhan, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726–742. Liu, Yinhan, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Longacre, Robert E. 1990. Storyline concerns and word order typology in East and West Africa, volume 10. Los Angeles: African Studies Center, UCLA.
2307.08074#96
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
97
Longacre, Robert E. 1990. Storyline concerns and word order typology in East and West Africa, volume 10. Los Angeles: African Studies Center, UCLA. Mann, William C and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243–281. Matusov, Evgeny. 2019. The challenges of using neural machine translation for literature. In Proceedings of the Qualities of Literary Machine Translation. Mitani, Aya A, Phoebe E Freer, and Kerrie P Nelson. 2017. Summary measures of agreement and association between many raters’ ordinal classifications. Annals of epidemiology, 27(10):677–685.
2307.08074#97
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
98
Mitkov, Ruslan. 2014. Anaphora resolution. Routledge. Müller, Mathias, Annette Rios, Elena Voita, and Rico Sennrich. 2018. A large-scale test set for the evaluation of context-aware pronoun translation in neural machine translation. In WMT. Ott, Myle, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In WMT. Ouyang, Long, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Gray, et al. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems. Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In ACL. Pires, Telmo, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert? In ACL. Popovic, Maja. 2021. Agree to disagree: Analysis of inter-annotator disagreements in human evaluation of machine translation
2307.08074#98
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
99
is multilingual bert? In ACL. Popovic, Maja. 2021. Agree to disagree: Analysis of inter-annotator disagreements in human evaluation of machine translation output. In Proceedings of the 25th Conference on Computational Natural Language Learning, CoNLL 2021, Online, November 10-11, 2021, pages 234–243, Association for Computational Linguistics. Radford, Alec, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Raffel, Colin, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1–67. Rajpurkar, Pranav, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP. Rao, Sudha, Allyson
2307.08074#99
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
101
Rei, Ricardo, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In EMNLP. Reiter, Ehud and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering, 3(1):57–87. Sanders, T and H Pander Maat. 2006. Cohesion and coherence: Linguistic approaches. Reading, 99:440–466. Shao, Yunfan, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, and Xipeng Qiu. 2021. CPT: A pre-trained unbalanced transformer for both Chinese language understanding and generation. arXiv preprint arXiv:2109.05729. Snover, Matthew, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In AMTA. Song, Linfeng, Kun Xu, Yue Zhang, Jianshu Chen, and Dong Yu. 2020. ZPR2: Joint zero pronoun recovery and resolution using multi-task learning and BERT. In ACL.
2307.08074#101
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
102
Sun, Kai, Dian Yu, Dong Yu, and Claire Cardie. 2020. Investigating prior knowledge for challenging chinese machine reading comprehension. TACL. Tian, Huishuang, Kexin Yang, Dayiheng Liu, and Jiancheng Lv. 2021. Anchibert: A pre-trained model for ancient chinese language understanding and generation. In 2021 IEEE International Joint Conference on Neural Networks. Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. Voita, Elena, Rico Sennrich, and Ivan Titov. 2019. When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion. In ACL. Voita, Elena, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In ACL.
2307.08074#102
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. We totally evaluate 20 general-, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
103
Voita, Elena, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In ACL. Wang, Alex, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems. In NeurIPS. natural language understanding. In EMNLP.
2307.08074#103
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences -- is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. In total, we evaluate 20 general-domain, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
104
Wang, Longyue, Siyou Liu, Mingzhou Xu, Linfeng Song, Shuming Shi, and Zhaopeng Tu. 2023a. A survey on zero pronoun translation. arXiv preprint arXiv:2305.10196. Wang, Longyue, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu. 2023b. Document-level machine translation with large language models. arXiv preprint arXiv:2304.02210. Wang, Longyue, Zhaopeng Tu, Shuming Shi, Tong Zhang, Yvette Graham, and Qun Liu. 2018b. Translating pro-drop languages with reconstruction models. In Proceedings of the AAAI Conference on Artificial Intelligence. Wang, Longyue, Zhaopeng Tu, Xing Wang, and Shuming Shi. 2019b. One model to learn both: Zero pronoun prediction and translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 921–930. Wang,
2307.08074#104
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences -- is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. In total, we evaluate 20 general-domain, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
105
Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 921–930. Wang, Longyue, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. In EMNLP. Wang, Longyue, Zhaopeng Tu, Andy Way, and Qun Liu. 2018c. Learning to jointly translate and predict dropped pronouns with a shared reconstruction mechanism. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2997–3002. Wang, Longyue, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, and Qun Liu. 2016. A novel approach for dropped pronoun translation. In NAACL. Wang, Wenxuan, Wenxiang Jiao, Yongchang Hao, Xing Wang, Shuming Shi, Zhaopeng Tu, and Michael Lyu. 2022. Understanding and improving sequence-to-sequence pretraining for neural machine translation. In ACL. Wanner, Leo. 1996. Lexical choice in text generation and machine translation. Machine Translation,
2307.08074#105
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences -- is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. In total, we evaluate 20 general-domain, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
106
improving sequence-to-sequence pretraining for neural machine translation. In ACL. Wanner, Leo. 1996. Lexical choice in text generation and machine translation. Machine Translation, 11(1):3–35. Wong, Billy TM and Chunyu Kit. 2012. Extending machine translation evaluation metrics with lexical cohesion to document level. In EMNLP. Xiong, Deyi, Guosheng Ben, Min Zhang, Yajuan Lv, and Qun Liu. 2013. Modeling lexical cohesion for document-level machine translation. In IJCAI, Beijing,
2307.08074#106
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences -- is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. In total, we evaluate 20 general-domain, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
108
Xu, Liang, Xuanwei Zhang, and Qianqian Dong. 2020. Cluecorpus2020: A large-scale chinese corpus for pre-training language model. ArXiv, abs/2003.01355. Yang, Yaqin, Yalin Liu, and Nianwen Xue. 2015. Recovering dropped pronouns from chinese text messages. In ACL-IJCNLP. Yang, Yaqin and Nianwen Xue. 2010. Chasing the ghost: recovering empty categories in the chinese treebank. In COLING. Zeng, Changchang, Shaobo Li, Qin Li, Jie Hu, and Jianjun Hu. 2020. A survey on machine reading comprehension—tasks, evaluation metrics and benchmark datasets. Applied Sciences, 10(21):7640. Zhang, Tianyi, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019a. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations. Zhang, Weinan, Ting Liu, Qingyu Yin, and Yu Zhang. 2019b. Neural recovery machine for Chinese dropped pronoun. In
2307.08074#108
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences -- is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. In total, we evaluate 20 general-domain, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.08074
109
Zhang, Weinan, Ting Liu, Qingyu Yin, and Yu Zhang. 2019b. Neural recovery machine for Chinese dropped pronoun. In Frontiers of Computer Science. Zhang, Zhuosheng, Hanqing Zhang, Keming Chen, Yuhang Guo, Jingyun Hua, Yulong Wang, and Ming Zhou. 2021. Mengzi: Towards lightweight yet ingenious pre-trained models for chinese. arXiv preprint arXiv:2110.06696. Zhao, Zhe, Hui Chen, Jinbin Zhang, Wayne Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, and Xiaoyong Du. 2019. Uer: An open-source toolkit for pre-training models. In EMNLP. Zheng, Chujie, Minlie Huang, and Aixin Sun. 2019. ChID: A large-scale Chinese IDiom dataset for cloze test. In ACL. Zhu, Wanrong, Zhiting Hu, and Eric Xing. 2019. Text infilling. arXiv preprint arXiv:1901.00158.
2307.08074#109
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual sentences -- is a fundamental yet challenging aspect of natural language processing (NLP). However, existing evaluation benchmarks primarily focus on the evaluation of inter-sentence properties and overlook critical discourse phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks, covering understanding, translation, and generation. Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese and/or English. For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge. In total, we evaluate 20 general-domain, in-domain and commercial models based on Transformer, advanced pretraining architectures and large language models (LLMs). Our results show (1) the challenge and necessity of our evaluation benchmark; (2) fine-grained pretraining based on literary document-level training data consistently improves the modeling of discourse information. We will release the datasets, pretrained models, and leaderboard, which we hope can significantly facilitate research in this field: https://github.com/longyuewangdcu/Disco-Bench.
http://arxiv.org/pdf/2307.08074
Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu
cs.CL, cs.AI
Zhaopeng Tu is the corresponding author
null
cs.CL
20230716
20230722
[ { "id": "2109.05729" }, { "id": "1907.11692" }, { "id": "2110.06696" }, { "id": "2304.02210" }, { "id": "2012.11157" }, { "id": "1901.00158" }, { "id": "2305.10196" } ]
2307.07871
0
arXiv:2307.07871v2 [cs.AI] 23 Nov 2023 The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents # Grgur Kovač Flowers Team, Inria (FR) [email protected] # Rémy Portelas Ubisoft La Forge (FR) Flowers Team, Inria (FR) [email protected] Peter Ford Dominey INSERM UMR1093-CAPS, Université Bourgogne (FR) Robot Cognition Laboratory, Institute Marey (FR) [email protected] # Pierre-Yves Oudeyer Flowers Team, Inria (FR) [email protected] # Abstract
2307.07871#0
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
1
Developmental psychologists have long-established socio-cognitive abilities as fundamental to human intelligence and development. These abilities enable individuals to enter, learn from, and contribute to a surrounding culture. This drives the process of cumulative cultural evolution, which is responsible for humanity’s most remarkable achievements. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture as well. We draw inspiration from the work of Michael Tomasello and Jerome Bruner, who studied socio-cognitive development and emphasized the influence of a cultural environment on intelligence. We outline a broader set of concepts than those currently studied in AI to provide a foundation for research in artificial social intelligence. Those concepts include social cognition (joint attention, perspective taking), communication, social learning, formats, and scaffolding. To facilitate research in this domain, we present The SocialAI school - a tool that offers a customizable parameterized suite of procedurally generated environments. This tool simplifies experimentation with the introduced concepts. Additionally, these
2307.07871#1
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
2
- a tool that offers a customizable parameterized suite of procedurally generated environments. This tool simplifies experimentation with the introduced concepts. Additionally, these environments can be used both with multimodal RL agents, or with pure-text Large Language Models (LLMs) as interactive agents. Through a series of case studies, we demonstrate the versatility of the SocialAI school for studying both RL and LLM-based agents. Our motivation is to engage the AI community around social intelligence informed by developmental psychology, and to provide a user-friendly resource and tool for initial investigations in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
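As an illustration of the pure-text LLM-as-interactive-agent setup described in the excerpt above, the sketch below wires a text-completion backend into a Gym-style episode loop. It is a minimal sketch under stated assumptions, not the SocialAI School's actual interface: the environment id, the observation-to-text helper and the `query_llm` backend are hypothetical placeholders.

```python
import gymnasium as gym  # assumption: a Gym/Gymnasium-style environment interface

ACTIONS = ["turn left", "turn right", "move forward", "toggle", "no_op"]

def describe_observation(obs):
    # Hypothetical helper: turn the grid observation and the scripted peer's
    # utterance into a short textual scene description for the LLM.
    return str(obs)

def query_llm(prompt):
    # Placeholder for any text-completion backend; it should return one of ACTIONS.
    raise NotImplementedError

def run_episode(env_id="SocialAI-PointingEnv-v0", max_steps=50):  # hypothetical env id
    env = gym.make(env_id)
    obs, _ = env.reset()
    history = []
    for _ in range(max_steps):
        prompt = (
            "You are an agent in a social grid world.\n"
            + "\n".join(history[-5:])                           # short interaction memory
            + f"\nObservation: {describe_observation(obs)}"
            + f"\nChoose one action from {ACTIONS}: "
        )
        action_text = query_llm(prompt)
        action = ACTIONS.index(action_text) if action_text in ACTIONS else ACTIONS.index("no_op")
        obs, reward, terminated, truncated, _ = env.step(action)
        history.append(f"Action: {action_text}, reward: {reward}")
        if terminated or truncated:
            break
    return history
```

The same loop works for a multimodal RL policy by replacing `query_llm` with a policy call on the raw observation; the text rendering is only needed for the pure-text case.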
2307.07871#2
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
3
# 1. Introduction Our everyday life is immersed in a sociocultural world, which we navigate using a set of sophisticated socio-cognitive abilities. Although at first it might seem that this sociocultural world is just another downstream product of our cognition, decades of research in developmental psychology suggest the opposite. Our socio-cultural world, cultural knowledge, and our socio-cognitive abilities are the foundation of our development and both our social and asocial intelligence (Vygotsky & Cole, 1978; Bruner, 1990; Tomasello, 2019).
2307.07871#3
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
4
For Vygotsky, a main driver of “higher-level” cognition is socio-cultural interaction (Vygotsky & Cole, 1978). For him, many high-level cognitive functions first appear at the social level and then develop at the individual level. This leap from interpersonal processes to intrapersonal processes is referred to as internalization. A typical example of this process is learning to count. Children first learn to count out loud, i.e. with language and social guidance, which is an interpersonal process. As the child improves, it will learn to count in its head, no longer requiring any external guidance: counting becomes internalized, and will be a first step towards other more complex forms of abstract thinking. Vygotsky’s theories influenced multiple works within cognitive science (Clark, 1996; Hutchins, 1996), primatology (Tomasello, 1999) and the developmental robotics branch of AI (Billard & Dautenhahn, 1998; Brooks et al., 2002; Cangelosi et al., 2010; Mirolli & Parisi, 2011).
2307.07871#4
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
5
Another pillar of modern developmental psychology is Jerome Bruner. He, too, emphasized the importance of culture in human development. Bruner writes: “it is culture, not biology, that shapes human life and the human mind, that gives meaning to action by situating its underlying intentional states in an interpretative system” (Bruner, 1990). Most importantly for this paper, he presents a pragmatic view studying how referencing, requesting and finally language develop through routinized social interactions (formats) in which those abilities are necessary to achieve various ends. He describes these interactions as scaffolded - the caretaker gradually helps less and demands more of the child to achieve those goals, and this bootstraps the child’s development (Bruner, 1985). Finally, Michael Tomasello’s work (Tomasello, 1999, 2019, 2020) constitutes a representative and contemporary assessment of the nature and central importance of sociality in human cognition. Through decades of theoretical and experimental studies with both humans and primates, Tomasello outlined core social abilities and motivations. When combined with the relevant experience, they enable us to enter, benefit from, and contribute to the human culture. This cumulative cultural evolution is a powerful form of cultural transmission enabling the development and perpetuation of our complex culture and knowledge, and it is made possible by those socio-cognitive abilities (Tomasello, 1999).
2307.07871#5
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
6
Given the key role social cognition plays in human cognition and cultural evolution, it is natural that the field of AI aims to model our social intelligence. A socially competent AI could learn our culture and participate in its cultural evolution, i.e. improve our concepts, theories, inventions, and create new ones. A system capable of thinking of out-of-the-box creative solutions and discovering new relevant problems must learn our values and how we see and understand the world (it must learn our culture). We do not claim that SocialAI is sufficient to reach that distant and complex goal. We only propose that being informed by the concepts discussed in this paper is useful, and we present SocialAI as a tool which could be used to start investigating such questions in more detail. Enriching AI with those skills also has numerous practical implications. Socially competent robots, capable of social learning, would be much easier to deploy and adapt to novel tasks and tools. For example, a robotic learner able to detect, learn and reuse context-dependent sets of communicative gestures/utterances could be easily integrated into human teams performing collaborative tasks, without requiring humans to adopt new conventions. Furthermore, robots capable of learning human values and moral norms will be capable of performing tasks within the constraints defined by those values.
2307.07871#6
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
7
AI research on interactive agents is often focused on navigation and object manipulation problems, excised of any social dimension (Mnih et al., 2015; Lillicrap et al., 2016). Sociality is mostly studied in multi-agent settings, where the main focus is often on the emergence of culture (often with only a weak grounding in developmental psychology) (Jaques et al., 2019; Baker et al., 2019). While we believe that those directions are both interesting and important, in this work we focus on entering an already existing complex culture. And we argue that it can be beneficial to be informed by developmental psychology theories. Figure 1: The SocialAI School provides technical and conceptual tools aiming to simplify research seeking to design socially proficient artificial agents.
2307.07871#7
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
8
Figure 1: The SocialAI School provides technical and conceptual tools aiming to simplify research seeking to design socially proficient artificial agents. In the rapidly emerging field of Large Language Models, social cognition research consists of proof-of-concept simulations (Park et al., 2023) and systematic benchmarks. The two most notable benchmarks are SiQA (Sap et al., 2019), which evaluates social common sense reasoning (without grounding in psychology), and ToMi (Le et al., 2019), which presents false-belief queries (false belief representing only a small subset of social intelligence in general). We are encouraged by that relevant and fascinating work, and we believe it can be further enriched by a systematic overview of different aspects of social intelligence as presented here. We do not claim that the SocialAI school is sufficient to construct a socially competent agent as this is a very far-reaching and complex goal. However, we believe that in aiming for this goal, concepts from developmental psychology can serve as signposts for AI - give directions and enable us to define short-term goals. Given that the outlined skills are at the very core of human social and cognitive competences, artificial agents aimed at participating in and learning from social interactions with humans are likely to require the same core competences. We present the SocialAI school merely as a first step towards this goal.
2307.07871#8
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
9
Following the theories of Michael Tomasello and Jerome Bruner, this work identifies a richer set of socio-cognitive skills than those currently considered in most of the AI research. More precisely, we focus on three key aspects of social cognition as identified by Tomasello: 1) social cognition: the ability to infer what others see and to engage in joint attention, 2) communication: the development of referential communication through pointing and the beginnings of conventionalized communication through language, and 3) cultural learning: the use of imitation and role reversal imitation in social learning. We also outline two concepts from Jerome Bruner’s work: formats and scaffolding. Formats refer to the way in which social interactions are structured and presented, while scaffolding refers to the temporary support provided by a caretaker to help a learner achieve a task that would be otherwise too difficult. Based on this set of target abilities, we construct the SocialAI school, a tool (based on MiniGrid (Chevalier-Boisvert, Willems, & Pal, 2018)) which enables the construction of social environments whose diverse grid-world scenarios afford rich yet tractable research around social competence acquisition. The considered social scenarios are organized according to the key cognitive science experiments used to study social cognition in children, highlighting core developmental steps.
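As one illustration of what such a parameterized, procedurally generated suite might look like in code, the sketch below builds episode layouts for a grid-world social scenario from a small parameter set (type of communicative cue, level of scaffolding, grid size) and a scripted peer. The class and parameter names are hypothetical and only mirror the description above; they are not the tool's actual interface.

```python
from dataclasses import dataclass
import random

@dataclass
class SocialEnvParams:
    # Hypothetical parameterization: which social cue the scripted peer uses,
    # how much scaffolding (help) it provides, and the grid size.
    cue_type: str = "pointing"      # e.g. "pointing", "language", "emulation"
    scaffolding: int = 0            # 0 = no help, higher = more help from the peer
    grid_size: int = 8
    seed: int = 0

class ScriptedPeer:
    """Very simplified scripted peer that points at (or names) the correct object."""
    def __init__(self, cue_type):
        self.cue_type = cue_type
    def act(self, correct_pos):
        if self.cue_type == "pointing":
            return {"point_at": correct_pos}
        return {"utterance": f"the correct box is at {correct_pos}"}

def generate_episode_layout(params: SocialEnvParams):
    # Procedural generation: sample object positions and the peer's cue from a seed.
    rng = random.Random(params.seed)
    boxes = [(rng.randrange(1, params.grid_size - 1),
              rng.randrange(1, params.grid_size - 1)) for _ in range(2)]
    correct = rng.choice(boxes)
    peer = ScriptedPeer(params.cue_type)
    return {"boxes": boxes, "correct": correct, "peer_cue": peer.act(correct)}

# Sampling many layouts from one parameter set yields a family of social
# scenarios that share a single underlying interaction "format".
layouts = [generate_episode_layout(SocialEnvParams(cue_type="pointing", seed=i)) for i in range(3)]
```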
2307.07871#9
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
10
In our experiments, we aim to show the versatility of the experiments that can be conducted with the SocialAI school. We present experiments regarding the following questions: generalization of social inferences (the pointing gesture) to new contexts, recreating an experiment from cognitive science (to study the knowledge transfer during role reversal), and the impact of a scaffolded environment on the agent’s learning. To show the diversity of agents which can be used, we conduct those experiments with RL agents, and present an additional case study with LLMs as interactive agents. In the appendix, we explore many more questions such as linguistic inferences, joint attention, and imitation. We hope to encourage future work extending and building on these first experiments to study various questions regarding social competence, for example, new socio-cultural scenarios, architectures, training regimes, and so on. We outline the following main contributions of this work: • An introduction to Michael Tomasello’s and Jerome Bruner’s theories on child development and core socio-cognitive abilities • An outline of a set of core socio-cognitive abilities important for current AI research • The SocialAI school: a tool including a customizable procedural generation suite of environments aiming to simplify studies of socio-cognitive abilities of AI agents • Examples of case studies demonstrating how SocialAI can be used to study various questions regarding socio-cognitive abilities in AI
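One way a generalization study like the pointing experiment described above could be organized is a train/test split over procedurally generated scenario variants: train on some contexts containing the cue, then evaluate on held-out contexts. The sketch below assumes Gym-style environments and a Stable-Baselines3-like policy API; the environment ids are hypothetical and it is not a claim about how the paper's experiments are actually implemented.

```python
# Assumed split over hypothetical environment variants: the agent sees the
# pointing cue in some contexts during training and is probed on unseen ones.
TRAIN_ENV_IDS = ["SocialAI-Pointing-Boxes-v0", "SocialAI-Pointing-Doors-v0"]   # hypothetical
TEST_ENV_IDS  = ["SocialAI-Pointing-Levers-v0"]                                # hypothetical

def evaluate(policy, env_ids, episodes=20):
    import gymnasium as gym
    successes = 0
    for env_id in env_ids:
        env = gym.make(env_id)
        for _ in range(episodes):
            obs, _ = env.reset()
            done = False
            while not done:
                action, _ = policy.predict(obs)          # SB3-style predict (assumption)
                obs, reward, terminated, truncated, _ = env.step(action)
                done = terminated or truncated
            successes += reward > 0                      # success = positive final reward (assumption)
    return successes / (len(env_ids) * episodes)

# Training sketch (Stable-Baselines3-style API, shown only for illustration):
# from stable_baselines3 import PPO
# model = PPO("MultiInputPolicy", gym.make(TRAIN_ENV_IDS[0]))
# model.learn(total_timesteps=1_000_000)
# print("held-out success rate:", evaluate(model, TEST_ENV_IDS))
```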
2307.07871#10
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
11
environments aiming to simplify studies of socio-cognitive abilities of AI agents • Examples of case studies demonstrating how SocialAI can be used to study various questions regarding socio-cognitive abilities in AI Social agents are not objects. Although social peers could be seen as merely complex interactive objects, we argue they are in essence quite different. Social agents (e.g. humans) can have very complex and changing internal states, including intents, moods, knowledge states, preferences, emotions, etc. In cognitive science, an affordance refers to what things or events in the environment afford to an organism (Gibson, 1977). The resulting set of possible interactions with peers (social affordances (Carvalho, 2020)) is essentially different from those with objects (classical affordances). A flat surface can afford "walking-on" to an agent, while a peer can afford "getting help from". The latter is a social affordance, which may require a social system and conventions (e.g. politeness), implying that social peers have complex internal states and the ability to reciprocate. Successful interaction might also be conditioned on the peer’s mood, requiring communication adjustments. Training an agent for such social interactions most likely requires drastically different methods – e.g. different architectural biases – than classical object-manipulation training. In SocialAI we simulate
2307.07871#11
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
12
such social peers using scripted peers. We argue that studying isolated social scenarios featuring scripted peers in tractable environments is a promising first step towards designing proficient social agents. # 2. Related work # 2.1 Earlier calls for socially proficient agents This work aims to connect the recent social AI literature to the older developmental robotics field (Asada et al., 2009; Cangelosi & Schlesinger, 2014), which studies how to leverage knowledge from the cognitive development of human babies into embodied robots. Within this field, multiple calls for developing the social intelligence of autonomous agents have already been formulated (Billard & Dautenhahn, 1999; Lindblom & Ziemke, 2003; Mirolli & Parisi, 2011). The emphasis on the importance of social interactions for learning is probably what led Bruner to conceptualize the notion of formats (pragmatic frames) (Bruner, 1985), which has later been reused for example as a conceptual tool to theorize language development (Rohlfing et al., 2016). We intend to further motivate the relevance of this notion to enable further progress in DRL and AI. # 2.2 Human-Robot Interaction
2307.07871#12
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
13
# 2.2 Human-Robot Interaction Interaction with knowledgeable human teachers is a well-studied form of social interaction. Many works within the Human-Robot Interaction (HRI) and Interactive Learning fields have studied how to provide interactive teaching signals to their agents, e.g. providing instructions (Grizou et al., 2014), demonstrations (Argall et al., 2009; Grollman & Billard, 2011), corrective advice (Celemin & Ruiz-del Solar, 2015), and even narratives (Mealier et al., 2017). A review of this field (Vollmer et al., 2016) argues that restricted predefined (not learned) interaction protocols (pragmatic frames) are usually used, and suggests the study of a broader set of social situations. Catalyzing research on DRL and social skills seems even more relevant now that many application-oriented works are beginning to leverage RL and DRL into real-world humanoid social robots (Qureshi et al., 2018; Akalin & Loutfi, 2021). # 2.3 Disembodied Social Interaction Understanding
2307.07871#13
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
14
# 2.3 Disembodied Social Interaction Understanding Rather than directly learning behavior policies, multiple works focused on the study of disembodied machine learning models able to understand synthetic images or videos of social interactions. Two experimental setups are considered in this literature: classification tasks and prediction tasks. In the former, the objective is to correctly label the nature of an observed social scenario, e.g. is the interaction surprising or expected (Shu et al., 2021), are agents being cooperative, neutral or adversarial (Shu et al., 2020). Other works considered more precise scenario classifications (Netanyahu et al., 2021; Tejwani et al., 2021). For instance, in two-agent scenarios, Netanyahu et al. (2021) proposed a Bayesian approach to jointly detect each agent's goals (protect object, move object) and their relative relationships (friends, opponents). In prediction tasks, machine learning models are evaluated on their capacity to predict agent actions, which is especially useful to design theory of mind experiments (Rabinowitz et al., 2018; Baker et al., 2011), along with more general social perception assessments (Netanyahu et al., 2021).
2307.07871#14
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
15
Some recent works have studied the social reasoning abilities of large language models through textual problems. Sap et al. (2022) show that GPT models struggle on two question answering benchmarks: SocialIQA (Sap et al., 2019) and ToMi (Le et al., 2019). Trott et al. (2022) evaluate GPT models on variations of the Sally-Anne false-belief tasks and observe promising success rates (still below human performance); an illustrative sketch of such a false-belief probe follows this record. Furthermore, Ullman (2023) shows that GPT models fail on simple alterations of false-belief tasks. Ruis et al. (2022) evaluate LLMs on problems (implicatures) which can only be resolved by understanding contextual information, and show a significant gap with human performance. Furthermore, on more context-heavy examples, they show that increasing model size does not lead to performance improvements. While our ambition is analogous to these aforementioned works (inviting ML scholars to focus on social interaction studies), the present work proposes to take an embodied and interactive perspective on sociality, whose experimental setups better align with real-world objectives: socially proficient interactive agents. # 2.4 Embodied Social DRL agents
2307.07871#15
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
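To make the false-belief evaluations mentioned in the chunk above concrete, here is a minimal, hedged sketch of a Sally-Anne-style text probe. It is not taken from the cited papers or from the SocialAI codebase; `query_llm` is a hypothetical placeholder for whichever chat-model client one uses, and its stub implementation simply returns a canned answer so the script runs end to end.

```python
# Minimal sketch of a text-based false-belief probe for a language model.
# NOTE: `query_llm` is a hypothetical placeholder, not a real API; swap in
# the client of your choice (OpenAI client, HF pipeline, etc.).

def query_llm(prompt: str) -> str:
    # Stub that always answers with the ball's TRUE location, so the script
    # runs end to end; a model that tracks false beliefs should instead
    # answer with the location Sally BELIEVES the ball is in.
    return "box"

SCENARIO = (
    "Sally puts her ball in the basket and leaves the room. "
    "While she is away, Anne moves the ball from the basket to the box. "
    "Sally comes back to get her ball."
)
QUESTION = "Where will Sally look for her ball first? Answer with one word."
EXPECTED = "basket"  # the correct answer requires tracking Sally's false belief


def false_belief_accuracy(n_trials: int = 10) -> float:
    """Fraction of trials whose answer mentions the expected (belief-based) location."""
    hits = 0
    for _ in range(n_trials):
        answer = query_llm(f"{SCENARIO}\n{QUESTION}").strip().lower()
        hits += int(EXPECTED in answer)
    return hits / n_trials


if __name__ == "__main__":
    print(f"false-belief accuracy: {false_belief_accuracy():.2f}")
```

In practice, such probes are run over many surface variations of the scenario (names, objects, locations) precisely to separate genuine belief tracking from pattern matching, which is the concern raised by the altered-task results cited above.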
2307.07871
16
# 2.4 Embodied Social DRL agents In the recent DRL literature, multiple agents able to showcase social skills have been presented. Jaques et al. (2019) presented multi-agent social dilemma environments requiring the emergence of cooperative behaviors through communication, and showcased agents that maximize their causal influence as a way to foster cooperation. In Ndousse et al. (2021), the authors showed that, through the addition of an auxiliary next-state prediction task, DRL agents learning to perform navigation tasks among expert policies were able to learn to imitate social peers and thereby overcome hard-exploration scenarios (a minimal sketch of such an auxiliary prediction head follows this record). Bhoopchand et al. (2022) present a similar social-imitation approach able to scale to complex 3D environments and to imitate experts online, i.e. within episodes (rather than through gradient-based updates). Lee et al. (2021) showcase agents able to perform joint attention in cooperative tasks, and show that their intrinsic incentives towards joint attention help agents learn from experts (social learning). Franzmeyer et al. (2021) present an intrinsic motivation mechanism able to foster altruistic helping in a multi-agent setting without requiring knowledge of the other agents' true goals (the intrinsic signal is based on maximizing the others' choices).
2307.07871#16
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
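The auxiliary next-state prediction mechanism referenced in the chunk above can be illustrated with a short PyTorch sketch. This is a generic rendition of the idea (an encoder shared by policy, value, and next-observation heads), not the exact architecture or loss weighting used in Ndousse et al. (2021) or Bhoopchand et al. (2022); all layer sizes and names are illustrative.

```python
# Generic sketch: a policy network with an auxiliary next-observation
# prediction head. Sizes, names, and the plain MSE loss are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyWithAuxPrediction(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)           # state-value estimate
        self.next_obs_head = nn.Linear(hidden, obs_dim)  # auxiliary prediction

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        return self.policy_head(h), self.value_head(h), self.next_obs_head(h)


def auxiliary_loss(model: PolicyWithAuxPrediction,
                   obs: torch.Tensor, next_obs: torch.Tensor) -> torch.Tensor:
    """MSE between the predicted and the actually observed next observation."""
    _, _, pred_next = model(obs)
    return F.mse_loss(pred_next, next_obs)


# Usage: the auxiliary term is simply added to the usual RL objective, e.g.
#   total_loss = rl_loss + aux_weight * auxiliary_loss(model, obs, next_obs)
# so the shared encoder is shaped by predicting what happens next, including
# what nearby expert agents do, which is what makes their behavior exploitable.
model = PolicyWithAuxPrediction(obs_dim=16, n_actions=4)
obs, next_obs = torch.randn(8, 16), torch.randn(8, 16)
print(auxiliary_loss(model, obs, next_obs).item())
```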
2307.07871
17
One of the objectives of the SocialAI project is to provide rich social scenarios in which to study and iterate on such learning systems within a broader range of social interactions. For example, the SocialAI School simplifies the design of generalization tests, which are crucial to differentiate heuristic policies from robust social proficiency, as demonstrated in Aru et al. (2022) for theory of mind experiments. # 2.5 Multi-Agent Emergence of culture and communication Multi-agent systems are an important subfield of interactive agents research. It includes studying symbol negotiation among cooperative peers (Lazaridou & Baroni, 2020; Moulin-Frier & Oudeyer, 2020), and scenarios where multi-step cooperation and communication are required for successful interaction: e.g. Mordatch and Abbeel (2018) propose simple navigation environments to study the emergence of grounded compositional language, and Jaques et al. (2019) present multi-agent social dilemma environments requiring the emergence of cooperative behaviors through (non-verbal) communication. Nisioti and Moulin-Frier (2023) study how the interaction of agents and their changing environment leads to niche construction. Park et al. (2023) use LLMs to simulate complex social interactions such as organizing a party.
2307.07871#17
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
18
Closer to our work, an RL agent was shown to adapt (through social learning) to a new environment with an expert (Ndousse et al., 2021). The independent RL agent was trained in a multi-agent environment with various environmental constraints and an auxiliary loss. Similar experiments were also conducted at a larger scale (Bhoopchand et al., 2022). The objective of the SocialAI School is to provide a tool that simplifies similar studies, which could explore the socio-cognitive abilities outlined by psychology. While the multi-agent emergence of culture is an interesting research direction to study, the present work proposes to focus on a complementary setup, arguably closer to human infants' challenges: How to design agents able to enter an already existing social world? Rather than negotiating new modes of communication, how to learn existing social norms? # 2.6 Similar tools for fostering research
2307.07871#18
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
19
# 2.6 Similar tools for fostering research Similar tools have been constructed, often in the form of benchmarks, to support various research directions, including instruction-following (Chevalier-Boisvert et al., 2019; Misra et al., 2018; Ruis et al., 2020), embodied question answering (Gordon et al., 2018; Das et al., 2017), collaboration given human demonstrations (Puig et al., 2021; Wan et al., 2022), or text-based social environments requiring dialogue (Urbanek et al., 2019; Ammanabrolu et al., 2020; Prabhumoye et al., 2020). In contrast to those, we focus on fundamental socio-cognitive abilities and do not aim to create a benchmark. By building on top of MiniGrid (Chevalier-Boisvert et al., 2018), we aim to provide a tool which can facilitate a diversity of research directions stemming from the outlined socio-cognitive abilities. # 3. Cognitive science background The following section introduces core concepts and experiments from the two developmental psychologists that inspired the SocialAI School: Michael Tomasello and Jerome Bruner. # 3.1 Michael Tomasello - The Shared Intentionality Theory
2307.07871#19
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
20
# 3.1 Michael Tomasello - The Shared Intentionality Theory We are born into a culture filled with cultural artifacts, symbols and institutions like language, social norms, tool industries, or even governments (Richerson & Boyd, 2006; Tomasello, 2019). These artifacts were not invented at once; rather, they are the product of a series of improvements and modifications over many generations. Tomasello calls this powerful form of cultural transmission cumulative cultural evolution, and he argues that it is behind the most impressive human achievements (Tomasello, 1999). Cumulative cultural evolution is grounded in our socio-cognitive abilities (e.g. social cognition, cultural learning, communication), which enable us to learn, improve, and teach our culture (Tomasello, 2019), i.e. enter a culture. Cultural artifacts inherited and learned in this process become the very core of our cognition. An example of this is language, which influences our cognition in many ways. For example, it defines how we categorize and construe the world, and enables a powerful form of social learning: learning from instructions (Tomasello, 1999). This makes socio-cognitive abilities crucial, as their early development bootstraps both our social and asocial cognition (Herrmann et al., 2007).
2307.07871#20
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
21
(Tomasello, 1999). This makes socio-cognitive abilities crucial, as their early development bootstraps both our social and asocial cognition (Herrmann et al., 2007). Tomasello's Shared Intentionality Theory argues that human socio-cognitive abilities, such as communication and social learning, are transformed by two big developmental steps 1: the emergence of Joint intentionality at around 9 months of age (the 9-month revolution), and the emergence of Collective intentionality at around 3 years of age (the objective/normative turn) (Tomasello, 2019).
2307.07871#21
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
22
Joint intentionality emerges at around 9 months of age (Tomasello, 2019). It enables children to form a joint agent (a dyadic "we") - they understand that they work with a partner towards the same joint goal. Children begin to view dyadic social interactions through a "dual-level structure": a joint agent "we" on one level, and a personal "I" on another, i.e. we both understand that we both have separate roles ("I"), and that we work together towards the same joint goal ("we"). This enables them to take the perspective of others, which can also be done recursively - they are not only both attending to the same goal, they are also both attending to the partner's attention to the goal, and they both know that they both are doing so. This recursive thinking is also manifested in socially recursive inferences: recursively embedding one intentional or mental state inside another. When interpreting a pointing gesture, we make a recursive inference of what "you intend for me to think". For example, if we are looking for a ball together and you point to a cupboard behind me, I should infer that you are drawing my attention to the cupboard to communicate that I should look for the ball in the cupboard.
2307.07871#22
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
23
Collective intentionality emerges at around 3 years of age (Tomasello, 2019). It enables children to form a cultural group-minded “we”, which in comparison with a dyadic "we" represents an identity for a group. For example, a child might enforce a social norm because "this is how we, in this culture, do things". Consequently, children begin to participate in conventions and norms, and to view things from the “objective” perspective. These two developmental steps transform countless abilities, motivations, and behaviors. For the purpose of this paper, we focus on the following three developmental pathways: social cognition (sec. 3.1.1), communication (sec. 3.1.2), and social learning (sec. 3.1.3), as we consider them the most relevant for AI at the moment.
2307.07871#23
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
24
Tomasello (2019) argues that the 9-month revolution and the objective/normative turn are uniquely human developmental steps enabling uniquely human socio-cognitive abilities. There has been a lot of debate regarding this hypothesis (De Waal, 2016), and it still remains an open question. However, for the purpose of this work, the social proficiency of other great apes (or our last common ancestor with them) is not of primary importance. We find the Shared Intentionality Theory useful because it is systematic, extensive (it covers a broad range of social abilities), and exact (it is built upon a number of very clearly defined experiments). Furthermore, it is concerned with questions regarding the development of core socio-cognitive abilities. We believe that this makes it a good basis on which to organize AI research. 1. These steps are referred to as maturational capacities to highlight that both maturation and exposure to relevant experience are required for those developmental steps. # 3.1.1 Social cognition In this section, we discuss the development of the ability to coordinate perspectives and view things from the objective perspective (a perspective independent from any individual) (Tomasello, 2019). The starting point is the ability to imagine what another sees or knows. The next step is the emergence of joint attention (JA) at around 9 months of age. Then, joint attention to mental content in the form of linguistic discourse results in coordinating different perspectives with the objective perspective.
2307.07871#24
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
25
Imagining what others perceive The earliest instance of this is when six-month-olds follow the gaze of others (D'Entremont et al., 1997). It is important to note that, as compared to the later-emerging ability to coordinate perspectives, this ability requires that only one perspective is processed at a time. Numerous studies have shown that both apes and children are capable of making such inferences (Hare et al., 2001; Moll & Tomasello, 2006). For example, in Hare et al. (2001), a subordinate and a dominant chimpanzee were presented with a competitive scenario: competing for food. Results showed that the subordinate chimpanzee attempted to eat the food only if it was hidden from the dominant one (a minimal gridworld sketch of this kind of visibility test follows this record). This experiment was then extended to children, who were presented with two toys: one observed by an adult and one occluded from him. When asked to help the adult find a toy, 24-month-olds passed the occluded toy (Moll & Tomasello, 2006). These experiments demonstrate that both children and apes are capable of inferring what a conspecific observes - i.e. they are able to infer another's perspective.
2307.07871#25
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
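As a rough operational analogue of the Hare et al. (2001) setup described in the chunk above, a gridworld version of the test needs a line-of-sight check deciding whether the food is occluded from the dominant agent. The sketch below is purely illustrative and assumes a simple integer grid; it is not the SocialAI School implementation, and the sampling-based visibility test is only one of several possible choices.

```python
# Illustrative line-of-sight check for a gridworld version of the
# subordinate/dominant food test (NOT the SocialAI School implementation).
from typing import List, Tuple

Cell = Tuple[int, int]


def visible(observer: Cell, target: Cell, occluders: List[Cell]) -> bool:
    """True if no occluder cell lies on the sampled straight line between
    observer and target (a coarse line-of-sight test on an integer grid)."""
    (ox, oy), (tx, ty) = observer, target
    steps = max(abs(tx - ox), abs(ty - oy))
    for i in range(1, steps):
        # walk the segment in `steps` increments and round to grid cells
        x = round(ox + (tx - ox) * i / steps)
        y = round(oy + (ty - oy) * i / steps)
        if (x, y) in occluders:
            return False
    return True


# A "subordinate" policy that models the dominant's perspective should only
# approach food that is NOT visible to the dominant agent:
dominant, food, occluders = (0, 0), (4, 0), [(2, 0)]
print(visible(dominant, food, occluders))  # False -> hidden, safe to approach
```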
2307.07871
26
Joint Attention Joint attention has been defined in various ways (Siposova & Carpenter, 2019). To avoid confusion, we take the definition of joint attention from Tomasello (2019): joint attention consists of two aspects: triangulation and recursiveness. Triangulation refers to the child and the adult attending to the same external referent, and recursiveness refers to them both recursively being aware that they are both sharing attention. Joint attention is also characterized by the dual-level structure: shared attention on one level, and individual perspectives on another. Joint attention enables children to process multiple perspectives at the same time, and they shortly start to align and exchange those perspectives. Figure 2: Sketch of an experiment from Hare et al. (2001) showing that apes can infer the conspecific's field of view. As the subordinate ape does not want to get into trouble, it will not steal the food from the dominant ape. In the experiment, the food was either occluded from the dominant ape or placed in plain sight. The subordinate ape ate the food only when it was occluded from the dominant ape. This shows that it was able to infer the dominant's field of view.
2307.07871#26
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
27
In cognitive science, the emergence of joint attention is studied by counting the child's alternating looks between the adult and the referent object, or the child's attempts to initiate joint attention with the adult (Mundy et al., 1986). In Carpenter et al. (1998b) the amount of joint attention (number of joint attention episodes and their length) was measured in free play interactions between infants and their mothers. A steady rise in the amount of time spent in joint attention was observed in the period from 9 to 12 months. The exact nature of joint attention is not of primary importance for this paper. It is not disputed that the ability to triangulate, and also be aware that this experience is shared, is of key importance.
2307.07871#27
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
28
Coordinating perspectives Once children reach sufficient linguistic competence, they start jointly attending to mental content in the form of linguistic discourse. They begin to exchange and align perspectives of such content as well. Through linguistic discourse, children often encounter conflicting perspectives, which they are then pushed to resolve (e.g. one parent says it’s raining outside, but another says it’s not). They resolve those conflicts by learning to form an "objective" perspective - a bird’s-eye-view perspective distinct from anyone’s personal perspective - and coordinating the conflicting perspectives with it. For example, they are able to understand that the same object can, at the same time, "look like a sponge" (from their perspective) and "be a rock" (from the objective perspective) (Flavell et al., 1983). Tomasello argues that this can only be achieved once a child has passed through the second developmental step, that of collective intentionality, which enables them to form such a "perspectiveless" bird’s-eye view perspective (Tomasello, 2019). # 3.1.2 Communication
2307.07871#28
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
29
# 3.1.2 Communication Communication starts with imperative gestures for self-serving purposes (Tomasello, 2019). An example of such a gesture is the child pulling the adult's hand, requesting to be picked up. This gesture always has the same imperative meaning, and it never refers to an external object. The 9-month revolution brings forth referential communication (pointing and pantomiming). The next step is the appearance of conventionalized linguistic communication. Linguistic communication gives rise to a myriad of different language uses, such as discourse or pedagogy. Figure 3: An experiment with children from Behne et al. (2005) studying their ability to infer the meaning of a pointing gesture. The child's attention is drawn to a toy. This toy is then hidden in one of the two boxes (the child does not know which one). The experimenter then points to one of the two boxes, and the child is able to infer this to mean that the toy is in that box.
2307.07871#29
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
31
powerful way of communicating, as the same gesture can be used to express many different meanings in many different scenarios, provided that the observer can correctly infer that meaning. This ability to infer the meaning is based on the newly emerging abilities of joint intentionality, most notably that of "socially recursive inferences" - to interpret a pointing gesture, we make a recursive inference of what "you intend for me to think". Hence, when someone directs our attention towards an object, we are able to infer the intended message. Figure 3 depicts an experiment with children from Behne et al. (2005). First, the child's attention is drawn to the toy, which is then hidden in one of the two boxes. The experimenter then points to a box, and the child infers this to mean that the toy is in that box. 14-month-old children were able to successfully follow a pointing gesture to find the toy. In this scenario, the child makes the following recursive inference: the adult is helping by directing the child's attention to the box, and she wants the child to infer that the toy is in the box.
2307.07871#31
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
32
Linguistic communication Linguistic communication is based on the same principle as gestural referential communication: sharing attention to a referent and recursively inferring the intended meaning. However, linguistic communication additionally requires learning conventionalized means of reference, such as words or phrases. Where there was once a single pointing gesture, there is now a complex grammar of gestures, with specific conventions assigned to each gesture. In Carpenter et al. (1998b), children's understanding of words steadily increased in the period after 9 months. This was measured by questionnaires given to their caretakers at regular intervals.
2307.07871#32
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
33
Tomasello argues that when language use first appears, children do not yet understand it as conventional; rather, they use it as any other artifact or tool. It is only after the emergence of collective intentionality, when children start to understand and use conventions and norms, that they also begin to perceive language as such. This is evidenced by specific new ways in which they come to use and understand language. For example, when others break the rules of a game, they protest with normative statements such as "No! It does not go like this!" (Wyman et al., 2009). It is needless to say that language plays many important roles in children's development. Here we will outline just a few of countless possible examples. Language provides children with abstract constructions which give them a new, organized format for cognitive representation. Through discourse, children encounter many conflicting perspectives, which leads them to resolve those conflicts by forming the "objective" perspective. Finally, language opens up a new way of cultural learning - instructed learning - in which adults directly teach children "objective" truths about the world. Knowledge learned in that manner is easier to generalize (Butler & Tomasello, 2016). # 3.1.3 Cultural Learning
2307.07871#33
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
34
# 3.1.3 Cultural Learning Human culture is characterized by a powerful form of cultural transmission called cumulative cultural evolution - inventions quickly spread and are improved by the following generations (Tomasello, 1999). These inventions spread at such a rapid pace that they are rarely forgotten. This is referred to as the ratchet effect (Tomasello et al., 1993) - inventions are iteratively improved without slippage. This is made possible by advanced social learning abilities, such as imitation and instructed learning, but also by the motivation not only to learn instrumental actions, but also to affiliate and conform. Tomasello prefers the term "cultural learning" for learning motivated by cultural, and not only instrumental, motives.
2307.07871#34
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
35
The earliest form of cultural learning is the mimicking of facial expressions, which is observed even in neonates (Meltzoff & Moore, 1997). Over the course of the first year, children begin to imitate others' actions and goals, and then they begin doing so in ways which demonstrate their understanding of others as intentional agents (Meltzoff, 1995). Then, role reversal imitation appears, as children begin to learn about the partner's role during a collaborative activity. The next big step in the development of cultural learning is learning from instructions - instructed learning (following the emergence of collective intentionality). It is based on the adults' motivation to teach children as well as on the children's ability to understand and learn from linguistic instructions. It has been shown that children understand knowledge acquired through instructions as objective truth, and generalize it much better than knowledge acquired by other means (Butler & Tomasello, 2016). It is needless to say that in this way we acquire the most complex knowledge and skills, such as reading or algebra. At around four years of age, children internalize this process, arguably by reversing the roles (children take on the role of the adult giving instructions). This leads to a new type of self-regulation, a normative self-regulation based on conventions and norms.
2307.07871#35
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long-established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to enter a culture too. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized uite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
36
1. The experimenter pushes the spring. 2. The party favor plays a sound. 3. The child imitates the pushing of the spring. 4. The child waits for the sound to play. Figure 4: Depiction of an experiment from Carpenter et al. (1998b). The experimenter activates the party favor (sound) by pushing the spring, and the child imitates and waits for the sound. The sketch was taken and modified from Carpenter et al. (1998b). Imitation and Emulation. Imitation and emulation learning both refer to observing a demonstration and learning from it. Imitation learning refers to the learning of means (actions), while emulation refers to the learning of ends (goals) of a demonstration (Whiten et al., 2004, 2009; Tennie et al., 2006). Refer to Whiten et al. (2009) for a discussion and taxonomy of imitative and emulative learning processes.
2307.07871#36
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
37
Figure 4 shows an experiment from Carpenter et al. (1998b) studying children's imitation abilities. In this experiment, the experimenter demonstrates an instrumental action (e.g. pressing a spring attached to a box) which activates the light on top of the box. The children repeated the instrumental action and looked expectantly at the light. This kind of learning emerges over the course of the first year - children reconstruct the outcome of others' actions. However, soon after this, children begin imitating in a way which demonstrates the understanding of others' goals. Children perform an action that an adult attempted, but failed, to perform (Meltzoff, 1995), and do not imitate accidental actions (Carpenter et al., 1998a). Similarly, rational imitation appears. If the action was forced upon the demonstrator, the children recreate the result through more rational means (Gergely et al., 2002). For example, in Gergely et al. (2002) the demonstrator pressed a button with its head while its hands were tied, and 14-month-olds responded by pressing the button with their hands.
2307.07871#37
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
38
example, in Gergely et al. (2002) the demonstrator pressed a button with its head while its hands were tied, and 14-month-olds responded by pressing the button with their hands. Emulation is a type of social learning where the focus is on the outcome, and not on the actions performed (Wood et al., 1989). In other words, the learning is about some property of the environment. The learner tries to recreate some observed outcome; in doing so they can, but do not have to, recreate the actions. On the other side of this spectrum is overimitation - children repeat actions that are not relevant for the outcome. Children often prefer not only to recreate the outcome (as in emulation), but also to do it in the same way as the adults (even if this requires doing additional unnecessary actions). For example, in Tennie et al. (2014) children were presented with a demonstration of a rice-pouring task. The experimenter performed a useless preliminary action before grabbing the rice. 4-year-old children responded by repeating both the useless and the necessary actions. It has been proposed that children overimitate to affiliate and conform for the purpose of in-group bonding (Over & Carpenter, 2013), but this remains an open question (Keupp et al., 2013; Lyons et al., 2007).
2307.07871#38
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
39
Figure 5: Depiction of an experiment on role reversal from Fletcher et al. (2012). The task consists of two roles: one participant pushes a ball into the apparatus, and the other redirects it with their finger. The ball then pushes two marbles toward each of the participants. In the pretraining phase, children collaborate until they master the task (three consecutive successful trials). Then, in the role reversal phase, their roles are reversed and they master the task again. The total number of trials to master the task is compared between the two phases. Children, but not apes, needed fewer trials to master the task in the role reversal phase than in the pretraining phase. Role reversal imitation. Following the 9-month revolution, a new form of imitation appears - role reversal imitation. An example of this is when children respond to an adult tickling their arm by tickling the adult's arm instead of their own (Carpenter et al., 2005). The emerging dual-level structure of joint intentionality enables children to understand, at the same time, the joint goal of a dyadic interaction and the individuals' separate roles. This enables the child to reverse the roles of a collaborative activity, and learn about the partner's role from only experiencing its own, which enables much faster transmission and acquisition of cultural practices and knowledge.
2307.07871#39
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
40
Figure 5 depicts an experiment with children and apes from Fletcher et al. (2012). An apparatus is used where one participant pushes the marble, and the other inserts a finger to redirect the ball so that it falls to the correct location. Then, both participants get a reward. Children who previously played role A mastered role B in fewer trials than children who never played role A. In another experiment (Carpenter et al., 2005), children were asked to immediately reverse the role. An experimenter performed some action on the child (e.g. poked the child and said "your turn") and the child responded with the same action on the experimenter (poked the experimenter back). These experiments show that children understand the separate roles and how each is relevant for the activity. # 3.2 Jerome Bruner This work is also influenced by the work of Jerome Bruner, most notably by his concepts of scaffolding (Wood et al., 1976) and formats (Bruner, 1985), which were recently reintroduced to AI as pragmatic frames (Vollmer et al., 2016; Rohlfing et al., 2016).
2307.07871#40
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
41
Formats (Pragmatic frames) (Bruner, 1985) simplify learning by providing a stable structure to social interactions. They are regular patterns characterizing the unfolding of possible social interactions (equivalent to an interaction protocol or a grammar of social interactions). Formats consist of a deep structure (the static part) and a surface structure (the varying realizations managed by some rules). An example of a format is the common peek-a-boo game (depicted in figure 6). The deep structure refers to the appearance and the reappearance of an object. The surface structure can be realized in different ways. For example, one might hide an object using a cloth, or hands; one might hide their face or a toy; one might pause for a shorter or longer time before making the object reappear. We understand social interactions through such formats, and our social interactions are based on our ability to learn, negotiate, and use them.
2307.07871#41
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
42
Another relevant concept is scaffolding (Wood et al., 1976), which is very similar to Vygotsky's zone of proximal development (Vygotsky & Cole, 1978). This concept is also related to Csikszentmihalyi's flow theory (Csíkszentmihályi, 1999), with the distinction that in flow the learning is not necessarily mediated by a caretaker. Scaffolding is a process through which the adult bootstraps the child's learning. The adult controls aspects of a task which are currently too hard for the child, i.e. reduces the degrees of freedom in the task. Then the scaffold is gradually removed as the child is ready to take on more aspects of the task, until they can solve the task alone (without scaffolding). An example is a child constructing a pyramid with the help of an adult (Wood et al., 1976). At first, the child is not even focusing on the task, and the adult tries to get its attention to the task by connecting
2307.07871#42
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
43
Figure 6: A simplified depiction of a format of the common children's game "peek-a-boo". Formats consist of the deep structure (the static part), and the surface structure (varying realization managed by some rules). In this example, the deep structure is the disappearance and the reappearance of the adult's face, and the surface structure refers to different ways of hiding the face and signaling its reappearance. blocks and building the pyramid in front of them. Once the child is able to focus on the task, the adult starts passing the blocks to the child to connect. In the next phase, the child is grabbing blocks by itself, and the adult is helping through verbal suggestions. Then, only verbal confirmations are needed to guide the child. Finally, the child can construct the pyramid by itself. We can see how the adult observes the child and gradually transfers parts of the task (removes the scaffold) to the child. Through this process the caretaker enables the child to master a task they would not be able to master alone. # 4. The SocialAI school
2307.07871#43
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
44
# 4. The SocialAI school The SocialAI school is a tool for building interactive environments to study various questions regarding social competence, such as "What do concepts, such as social abilities and motivations, outlined by developmental psychology mean in the scope of AI?", "How can we evaluate their presence in different agents?", "What are their simplest forms and how can agents acquire them?" To construct SocialAI, we rely on a set of key experiments and studies from developmental psychology, which were used to outline the most important abilities, motivations and developmental steps in humans. From the work of Tomasello, we focus on developments before and around the age of 9 months (we believe it is important to address those before more complex ones relating to the development of 3-year-olds, see section 3.1). We study the following developmental pathways: Social cognition (inferring others' perception and joint attention), Communication (referential communication through the pointing gesture and the beginning of conventionalized communication through simple language), and Cultural Learning (imitation and role reversal imitation). From the work of Bruner, we study the concepts of Formats and Scaffolding (see section 3.2). Using The SocialAI school, we construct environments and conduct experiments regarding all of those concepts.
2307.07871#44
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
45
SocialAI, which is built on top of MiniGrid (Chevalier-Boisvert et al., 2018), includes a customizable parameterized suite of procedurally generated environments. We implement this procedural generation with a tree-based structure (the parametric tree). This makes it simple to add and modify environments, and to control their sampling. All the current environments are single-agent and contain a scripted peer. The agent has to interact with the peer to reach an apple. This setup enables a controlled and minimal representation of social interactions. To facilitate future research, SocialAI was made to be very easy to modify and extend. It is completely open-sourced, and we hope that it will be useful to the community for studying questions regarding social intelligence in AI. The remainder of this section is organized as follows. First, section 4.1 describes technical details such as the observation and the action space. Then, section 4.2 introduces the parameter tree and explains how it can be used to sample environments. Finally, section 4.3 describes two environment types, which were used in case studies in section 5. In the appendix, we discuss one additional environment type (appendix D) and additional case studies (appendix F). # 4.1 Parameterized Social Environments The SocialAI school is built on top of the MiniGrid codebase (Chevalier-Boisvert et al., 2018), which provides an efficient and easily extensible implementation of grid world environments.
2307.07871#45
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
46
SocialAI environments are grid worlds consisting of a room. In all of our environments, the task of the agent is to eat the apple, at which point it is rewarded. The reward is diminished according to the number of steps it took the agent to complete the episode. The episode ends when the agent eats the apple, uses the done action, or after a timeout of 80 steps. The agent's observation space is shown in figure 7. This multimodal observation space consists of the full dialogue history, and a 7x7x8 tensor corresponding to the 7x7 grid in front of the agent. Each cell is encoded by six integers representing the object type, color, and some additional object-dependent information (e.g. whether the door is open, point direction, gaze direction, etc.). Refer to figure 26 in the appendix for a list of all objects.
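To make the observation space and reward structure described above concrete, here is a minimal sketch using Gymnasium-style spaces. It is an illustration, not the SocialAI codebase: the per-cell feature depth, the bound on the dialogue text, and the 0.9 reward-decay factor (borrowed from the MiniGrid convention) are assumptions; only the 7x7 egocentric view, the dialogue history, and the 80-step timeout come from the text.

```python
# Illustrative sketch only -- not the actual SocialAI implementation.
import numpy as np
from gymnasium import spaces

GRID_SIZE = 7        # 7x7 egocentric view in front of the agent (from the text)
CELL_FEATURES = 8    # per-cell integer features; placeholder depth
MAX_STEPS = 80       # episode timeout stated in the text

# Multimodal observation: the egocentric grid plus the full dialogue history.
observation_space = spaces.Dict({
    "image": spaces.Box(low=0, high=255,
                        shape=(GRID_SIZE, GRID_SIZE, CELL_FEATURES), dtype=np.uint8),
    "dialogue": spaces.Text(max_length=10_000),  # bound is an arbitrary placeholder
})

def episode_reward(step_count: int, max_steps: int = MAX_STEPS) -> float:
    """Reward for eating the apple, diminished with the number of steps taken.
    The 0.9 decay factor follows the MiniGrid convention and is an assumption here."""
    return 1.0 - 0.9 * (step_count / max_steps)
```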
2307.07871#46
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
47
The agent acts in the environment through a multimodal action space, which consists of 6 primitive actions (no_op, movement actions, toggle, and done) and a 4x16 templated language. The agent also has the option not to speak, which is implemented with an additional binary output from the agent. Refer to appendix A for details about the architecture of the agent. All environments, unless otherwise stated, contain a scripted social peer, and the task can only be solved by interacting with this peer (for which socio-cognitive abilities are needed). A social peer observes the world in the same way as the agent does (as a grid in front of it), and it also observes the agent's utterances. Its action space consists of primitive actions for movement, pointing, and the toggle action. The peer can also communicate with words and sentences. As the peer is scripted, there are no constraints on the language it can utter (it is not constrained to a templated language). The language it uses depends on the environment, which defines which sentence the peer will utter at which point. The peer is represented in the agent's observation by 7 integers depicting their: object type, position, color, type (cooperative or competitive), gaze direction, point
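Below is a sketch of how this multimodal action space could be laid out. It is illustrative only: the primitive-action names and the template/word lists are placeholders, and only the sizes (6 primitive actions, a 4x16 templated language, plus a binary speak/stay-silent output) follow the text.

```python
# Illustrative sketch only -- action names and vocabularies are placeholders.
from gymnasium import spaces

PRIMITIVE_ACTIONS = ["no_op", "turn_left", "turn_right", "move_forward", "toggle", "done"]
N_TEMPLATES, N_WORDS = 4, 16  # the 4x16 templated language from the text

# The agent outputs a primitive action, a binary speak flag, and (if speaking)
# a template index and a word index.
action_space = spaces.Dict({
    "primitive": spaces.Discrete(len(PRIMITIVE_ACTIONS)),
    "speak":     spaces.Discrete(2),
    "template":  spaces.Discrete(N_TEMPLATES),
    "word":      spaces.Discrete(N_WORDS),
})

def render_utterance(t: int, w: int,
                     templates=("Where is", "Open", "Close", "How are"),   # hypothetical
                     words=("sesame", "the exit", "the box", "you")):      # hypothetical
    """Compose an utterance from a template index and a word index."""
    return f"{templates[t]} {words[w % len(words)]}"
```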
2307.07871#47
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
48
peer is represented in the agent’s observation by 7 integers depicting their: object type, position, color, type (cooperative or competitive), gaze direction, point direction, and the last executed primitive action. The peer’s gaze and point directions are represented relative to the agent (e.g. 1 - to the left of the agent). The pointing direction can also be set to 0, which signifies that the peer is not pointing. Figure 8 shows an example of an environment with the corresponding encoding of the peer. The agent (red) and the scripted peer (purple) are making eye contact - the peer and the agent are in the same row or column and their gazes meet frontally. In this example, the scripted peer is also pointing to the blue box.
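For clarity, a small sketch of the 7-integer peer encoding just described. The field order follows the text; the concrete integer codes in the comments are assumptions rather than the values actually used by the environment.

```python
# Illustrative sketch only -- field order from the text, integer codes assumed.
from dataclasses import dataclass

@dataclass
class PeerEncoding:
    object_type: int      # code identifying a "peer" object
    position: int         # where the peer sits in the agent's view
    color: int            # e.g. purple
    peer_type: int        # cooperative or competitive (assumed coding: 0/1)
    gaze_direction: int   # relative to the agent (e.g. 1 = to the agent's left)
    point_direction: int  # 0 = not pointing, otherwise a relative direction
    last_action: int      # last primitive action executed by the peer

    def as_vector(self) -> tuple:
        """The 7-integer vector that appears in the agent's observation."""
        return (self.object_type, self.position, self.color, self.peer_type,
                self.gaze_direction, self.point_direction, self.last_action)
```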
2307.07871#48
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
49
The SocialAI environments are parameterized, and those parameters define the social dimensions of the task. In other words, parameters define which socio-cognitive abilities are needed to solve the task. For example, depending on the Environment type parameter, the peer can give information, collaborate with the agent, or be adversarial. In the case of the peer giving information, additional parameters define the form of this information (linguistic or pointing). # 4.2 Parameter tree SocialAI enables the creation of many parameterized environments, and those parameters are implemented as nodes in a parameter tree. A parameter tree is a structure through which the experimenter can easily define which parameters (and their values) can be sampled. An example of such a tree can be seen in figure 9. The standard procedure is that an experimenter defines a parameter tree. Then each episode begins with the sampling of a new parameter set from this tree. Once a parameter set has been sampled, an environment is created, and the agent is placed inside. (Figure 7: diagram of the agent's multimodal observation encoding - the visual modality, e.g. encodings of a wall and a box, and the language modality - and of the agent acting in a SocialAI environment; see the Figure 7 caption below.)
2307.07871#49
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
50
Figure 7: Workflow of an agent acting in the SocialAI school. The environment generates a state, which is represented as multimodal observations: a 7x7x6 tensor and the full dialogue history. The agent acts through a multi-modal action space consisting of primitive actions and utterances. A parameter tree is used to sample parameter sets from it; an example of such sampling is shown in figure 9. There are two kinds of nodes: parameter nodes (rectangles) and value nodes (ovals). Parameter nodes correspond to parameters, and value nodes correspond to possible values for those parameters. Sampling proceeds in a top-down fashion, starting from the root node. In all our experiments, the Env_type parameter node is the root. Sampling from a parameter node selects one of its children
2307.07871#50
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
51
Figure 8: A depiction of a peer and its encoding. The agent and a peer are in eye contact, and the peer is pointing to the blue box. To the right is an encoding of the peer. The encoding contains information about the peer, e.g. the gaze and point direction. Refer to figure 26 in the appendix for a list of all objects. (Figure 9 diagram: three parameter sets are sampled from the Information seeking tree - all share Environment type: Information_seeking, Introductory_sequence: Eye_contact, Peer_help: N, and Cue_type: Pointing, and differ in the Problem parameter (Boxes, Marble, or Doors) and in the N and Peer values; each sampled set is then used to construct an environment.)
2307.07871#51
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
52
Figure 9: An example of procedural environment generation using tree-based parametric sampling. There are two kinds of nodes: parameter nodes (rectangles) and value nodes (ovals). Parameter nodes require that one of their children (a value node) is selected. Value nodes require that sampling progresses through all of their children (parameter nodes). In this tree, all parameter nodes except "Problem" have only one child. This means that only the Problem parameter can be set in different ways. We show three examples of parameter sampling, and the three environments constructed from those parameters. (a value node), i.e. sets a value for this parameter. This can be done by uniform sampling over the node's children, or by prioritized sampling with a curriculum. Once a value node has been chosen, the sampling continues through it to all of its children (parameter nodes). In other words, setting a value for one parameter defines which other parameters (the value node's children) need to be set. In our codebase, it is simple to create such trees, and to add additional parameters and environments. In the following sections, we explain the most relevant parameters. Refer to figures 30, 31 and 32 in the appendix for examples of parametric trees. # 4.3 Environment types
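A compact sketch of the top-down sampling procedure described above: a parameter node picks exactly one value-node child (uniformly here, though prioritized sampling with a curriculum could reweight the choice), and a value node recurses into all of its parameter-node children. The node names mirror Figure 9, but this is an illustration rather than the actual SocialAI implementation.

```python
# Illustrative sketch of tree-based parameter sampling -- not the SocialAI codebase.
import random

class ParameterNode:
    """A parameter: sampling selects exactly one of its value-node children."""
    def __init__(self, name, values):
        self.name, self.values = name, list(values)

class ValueNode:
    """A value: sampling recurses into all of its parameter-node children."""
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

def sample(node, out=None):
    out = {} if out is None else out
    if isinstance(node, ParameterNode):
        choice = random.choice(node.values)   # uniform; a curriculum could bias this
        out[node.name] = choice.name
        sample(choice, out)
    else:                                     # ValueNode: set all child parameters
        for child in node.children:
            sample(child, out)
    return out

# A tiny tree mirroring Figure 9 (only "Problem" has several possible values).
problem = ParameterNode("Problem", [ValueNode("Boxes"), ValueNode("Marble"), ValueNode("Doors")])
cue     = ParameterNode("Cue_type", [ValueNode("Pointing", children=[problem])])
root    = ParameterNode("Env_type", [ValueNode("Information_seeking", children=[cue])])

params = sample(root)
# e.g. {'Env_type': 'Information_seeking', 'Cue_type': 'Pointing', 'Problem': 'Marble'}
```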
2307.07871#52
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
53
# 4.3 Environment types The most important parameter is the environment type - Env_type. This parameter node is always the root node. We implemented three different environment types: InformationSeeking, Collaboration, and AdversarialPeer. A parameter tree doesn't have to contain all of them. This choice entirely depends on the type of experiment one wants to conduct; most often only one type will be present in a tree. For example, figure 9 shows the tree with only the InformationSeeking environment type. This tree was used to study understanding of the pointing gesture in section 5.2. In the rest of this section, we describe the InformationSeeking and the Collaboration environment types. We describe the AdversarialPeer type in appendix D.
2307.07871#53
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]
2307.07871
54
(a) A scripted peer pointing to a box. The agent needs to open the red box. (b) A scripted peer uttering the color of the correct generator. The agent needs to push the marble onto the blue generator. (c) A scripted peer hinting at the distance to the correct lever ("Hot" means very close). The agent needs to pull the purple lever to open the door. Figure 10: Examples of InformationSeeking type environments, in which agents learn to find hidden apples using textual or non-verbal communication with social peers. Information Seeking type environments. This environment type will be used in case studies regarding communication, joint attention, and imitation learning. In figure 10 we can see examples of InformationSeeking type environments.
2307.07871#54
The SocialAI School: Insights from Developmental Psychology Towards Artificial Socio-Cultural Agents
Developmental psychologists have long established the importance of socio-cognitive abilities in human intelligence. These abilities enable us to enter, participate in, and benefit from human culture. AI research on social interactive agents mostly concerns the emergence of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and also study the socio-cognitive abilities that enable agents to enter a culture. We discuss the theories of Michael Tomasello and Jerome Bruner to introduce some of their concepts to AI and outline key concepts and socio-cognitive abilities. We present The SocialAI school - a tool including a customizable parameterized suite of procedurally generated environments, which simplifies conducting experiments regarding those concepts. We show examples of such experiments with RL agents and Large Language Models. The main motivation of this work is to engage the AI community around the problem of social intelligence informed by developmental psychology, and to provide a tool to simplify first steps in this direction. Refer to the project website for code and additional information: https://sites.google.com/view/socialai-school.
http://arxiv.org/pdf/2307.07871
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
cs.AI, cs.LG, 68T07, I.2.0
Preprint, see v1 for a shorter version (accepted at the "Workshop on Theory-of-Mind" at ICML 2023) See project website for demo and code: https://sites.google.com/view/socialai-school
null
cs.AI
20230715
20231123
[]