doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
to help you stay focused and avoid distractions. 4. Take breaks: Take short breaks throughout the day to help you recharge and avoid burnout. 5. Eliminate distractions: Try to limit distractions as much as possible, such as by turning off your phone or closing your email. 6. Delegate tasks: If possible, delegate tasks to others to free up time for more important activities. Remember, improving time management skills takes practice and perseverance. Be patient with yourself and keep trying until you find strategies that work well for you.

id: 2307.15337 | title: Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding | source: http://arxiv.org/pdf/2307.15337 | authors: Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang | categories: cs.CL, cs.AI | comment: Technical report | journal_ref: null | primary_category: cs.CL | published: 20230728 | updated: 20231008

Summary: This work aims at decreasing the end-to-end generation latency of large language models (LLMs). One of the major causes of the high generation latency is the sequential decoding approach adopted by almost all state-of-the-art LLMs. In this work, motivated by the thinking and writing process of humans, we propose Skeleton-of-Thought (SoT), which first guides LLMs to generate the skeleton of the answer, and then conducts parallel API calls or batched decoding to complete the contents of each skeleton point in parallel. Not only does SoT provide considerable speed-ups across 12 LLMs, but it can also potentially improve the answer quality on several question categories. SoT is an initial attempt at data-centric optimization for inference efficiency, and further underscores the potential of pushing LLMs to think more like a human for answer quality.
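The summary above describes SoT's two stages: first generate a short skeleton of the answer, then expand every skeleton point in parallel via parallel API calls or batched decoding. The sketch below only illustrates that flow; the prompt wording and the `complete()` stub are assumptions for illustration, not the paper's actual prompts or implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def complete(prompt: str) -> str:
    """Stand-in for a single LLM API call; plug in any chat/completion client here."""
    raise NotImplementedError

def skeleton_of_thought(question: str) -> str:
    # Stage 1: ask the model for a skeleton -- a short numbered list of points.
    skeleton = complete(
        "Give only the skeleton (not the full content) for answering the question, "
        "as a numbered list of 3-10 points of a few words each.\n"
        f"Question: {question}"
    )
    points = [line.strip() for line in skeleton.splitlines() if line.strip()]

    # Stage 2: expand each point independently. With an API model these calls can be
    # issued in parallel; with a local model they could be one batched decoding pass.
    def expand(point: str) -> str:
        return complete(
            f"Question: {question}\nSkeleton:\n{skeleton}\n"
            f"Continue and only continue the point '{point}' in 1-2 sentences."
        )

    with ThreadPoolExecutor(max_workers=max(len(points), 1)) as pool:
        expansions = pool.map(expand, points)

    # The final answer is the expanded points concatenated in skeleton order.
    return "\n".join(expansions)
```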
(2) Roleplay. Roleplay questions require LLMs to assume a character and present an introduction or speech to other people. The LLMZoo and FastChat metrics usually favor the level of detail in SoT's answers when assigning SoT a better score. Similar to the writing category discussed before, SoT's answers could be even better if the skeleton points were removed from the answer.
(Roleplay) How would you introduce yourself as a medieval knight at a royal banquet?
# SoT's answer (Claude) (Orange texts are the skeleton.)
1. Name and house. My name is Sir Percival of Camelot, knight of the Round Table and protector of King Arthur's realm.
2. Lands and titles. I hold fiefdom over the lands of Westmarch and Ravenshire, and bear the titles of Baron of the Western Marches and Knight of the Raven.
3. Military feats. I have led my men to great victories against the kingdom's enemies, winning many battles through strength of arms and tactical cunning.
4. Allegiance to king. I pledge my sword, my loyalty, and my life to serve and protect my noble king and this great kingdom.
5. Request to join table. Your Highness, it would be the greatest honor to join your exalted table and share in this magnificent feast with my fellow knights and loyal subjects.
# Normal answer (Claude)
Greetings, your majesty and honored guests. I am Sir Claude, a knight in service of the king. I hail from the province of Assistantia, where I was trained in the knightly virtues of chivalry, courage and courtesy. Though I am but a humble knight, I shall endeavor to serve you and this kingdom with all the strength and valor I possess. I thank you for allowing me to join you at this royal feast, and hope only that my conduct brings further honor to my liege and this noble court. Please let me know if I can be of any service during the celebrations, as it would be my honor to assist you in any way.
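As noted above, the roleplay answer could read better if the orange skeleton phrases were stripped from the expanded points. Since the pipeline already knows which skeleton points it generated, that is a simple post-processing step; the helper below is an assumed illustration, not part of the paper's method.

```python
import re

def strip_skeleton(expanded_points: list[str], skeleton_points: list[str]) -> str:
    """Drop the leading 'N. <skeleton phrase>.' from each expanded point."""
    cleaned = []
    for text, point in zip(expanded_points, skeleton_points):
        # Remove an optional index prefix plus the skeleton phrase itself.
        pattern = r"^\s*\d+\.\s*" + re.escape(point) + r"\.?\s*"
        cleaned.append(re.sub(pattern, "", text, count=1))
    return " ".join(cleaned)

# Example with the knight answer above (abridged):
print(strip_skeleton(
    ["1. Name and house. My name is Sir Percival of Camelot, knight of the Round Table."],
    ["Name and house"],
))
# -> "My name is Sir Percival of Camelot, knight of the Round Table."
```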
I.1.3 ANSWER ANALYSIS: DIFFERENT METRICS
# Metrics on which SoT has low net win rates.
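Throughout this section, the net win rate of SoT on a metric can be read as the fraction of pairwise judgments SoT wins minus the fraction it loses. A minimal sketch of that bookkeeping is shown below; the verdict strings are assumed placeholders for whatever the LLM judge returns.

```python
from collections import Counter

def net_win_rate(verdicts: list[str]) -> float:
    """verdicts: one of 'win', 'tie', 'lose' per question, from SoT's point of view."""
    counts = Counter(verdicts)
    total = len(verdicts) or 1
    return (counts["win"] - counts["lose"]) / total

# Example: 3 wins, 1 tie, 1 loss over 5 questions -> (3 - 1) / 5 = 0.4, i.e. +40%.
print(net_win_rate(["win", "win", "tie", "lose", "win"]))
```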
(1) Coherence. In the prompt of LLMZoo evaluation, coherence is defined as "The response should be coherent and flow logically from one point to the next that is easy to read and understand without major gaps or inconsistencies. The more coherent they are, the better." SoT's answer is structured in a list of points. It is suitable for some questions but not all. For example, in the questions that require
LLMs to write a passage, email, or script, the introduction and conclusion paragraphs are usually missing, and the answers usually lack transition sentences between consecutive points. These make the answers less coherent. Below is one example from Claude.
Question: Write a script for a YouTube video exploring the history and cultural significance of jazz.
# SoT's answer (Claude) (Orange texts are the skeleton.)
1. Origins. Jazz originated in the late 19th and early 20th centuries among African Americans in New Orleans, emerging from a combination of African rhythmic traditions, blues, ragtime, and other influences.
2. Early jazz styles. The early styles of jazz that emerged in the late 19th and early 20th centuries included ragtime, Dixieland, and the blues.
3. Jazz age. The Jazz Age was the period during the 1920s when jazz music and dance styles like the Charleston became popular cultural phenomena in American and European society.
4. Bebop era. Bebop emerged in the early 1940s with artists like Dizzy Gillespie and Charlie Parker experimenting with faster tempos, complex chord changes, and unpredictable melodic phrasing.
5. Cool jazz. Emerging in the late 1940s and early 1950s, cool jazz was a more subdued, mellow style that broke from the high-energy virtuosic improvisation of bebop.
6. Free jazz emerged in the late 1950s and 1960s as musicians explored free improvisation and atonality, moving away from traditional jazz structures.
7. Fusion jazz. Major jazz artists such as Miles Davis and Herbie Hancock began experimenting with combining elements of rock, funk, and electronic music in the 1970s, resulting in the emergence of the subgenre known as fusion jazz.
8. Jazz festivals. Jazz festivals emerged in the 1950s and 1960s as a way to celebrate jazz music, with famous early festivals like the Newport Jazz Festival bringing thousands of fans together to hear top performers.
9. Jazz instruments. The trumpet, saxophone, piano, double bass, and drum set are some of the key instruments commonly used in jazz.
10. Influential musicians include Louis Armstrong, Duke Ellington, Charlie Parker, Miles Davis, John Coltrane, and many others who pioneered new styles and techniques.
# Normal answer (Claude)
Here is a draft script for a YouTube video exploring the history and cultural significance of jazz:
Introduction
Hey everyone! Welcome to my channel. In today's video, we're going to be exploring the history and cultural impact of one of America's original art forms - jazz music.
Jazz Origins
Jazz grew out of the African American communities in the southern United States in the late 19th and early 20th centuries. It was influenced by a combination of African and European musical traditions. Some key elements that fed into early jazz included blues, ragtime, spirituals, work songs, field hollers, and brass band music. New Orleans is considered the birthplace of jazz, emerging around the turn of the 20th century. The city was a melting pot of cultures and musical styles. Musicians mixed improvisation, syncopated rhythms, bent notes and a loose ensemble playing style to create a new genre that became known as jazz. Early jazz greats like Buddy Bolden, Jelly Roll Morton and Sidney Bechet began
fusing these various influences into the distinct jazz sound. As the music spread via travel and recordings in the 1910s, jazz captivated listeners and began to evolve into different styles.
Jazz Styles Emerge
In the 1920s, jazz expanded out from New Orleans and Chicago emerged as a hotspot. Louis Armstrong's innovative solos and expressive phrasing became influential. The swing style also developed during the 1930s and 40s, propelled by star big bands and dancers. Bebop arose in the 1940s with soloists like Charlie Parker and Dizzy Gillespie exploring new levels of harmonic and rhythmic complexity. In the 50s and 60s, jazz expanded into new fusions like cool jazz, hard bop, free jazz, modal jazz and soul jazz. Artists used jazz to express their unique regional, social and political identities.
Cultural Significance
Over the decades, jazz has become woven into the fabric of American society. It is equated with many values and ideals like individualism,
freedom, spontaneity, innovation and social harmony. Jazz was associated with breaking down barriers between races, classes and genres. It represented modernism and urban culture. Musicians like Louis Armstrong and Duke Ellington became some of the first widely popular African American artists. Jazz also spread American culture and values abroad through performances, radio and film. It influenced music around the world and became a truly global art form. Today jazz continues to thrive and evolve both in the U.S. and internationally. Its history and sounds reflect the diversity of both American and global cultures. From its folk roots, jazz has become a sophisticated art music that retains its spirit of freedom and improvisation.
Conclusion
That covers some of the key points in the history and cultural impact of jazz music. Let me know in the comments about
your favorite jazz styles and musicians! Be sure to like and subscribe. See you next time.
While the content of SoT's answer is good, it reads less fluently for the purpose of a YouTube script.
(2) Immersion. In the prompt of LLMZoo evaluation, immersion is defined as "the response should act like the assigned role using the tone, manner, and vocabulary the role would use. The more assistant-like tones, the worse. The more in-character, the better." The way that SoT breaks the
answers into a list makes its answer less in-character. Therefore, SoT receives a lower immersion score. Here is one example.
Question: Pretend to be a world-famous chef. How would you describe your signature dish to a panel of judges?
# SoT's answer (Claude) (Orange texts are the skeleton.)
1. Main ingredient: The incredibly fresh and tender salmon caught just this morning from the crystal clear waters of the Pacific Northwest.
2. Cooking method I slowly braise the main ingredient in a rich broth to bring out its natural flavors before searing it to a perfect golden crust.
3. Special seasoning The dish is seasoned with a blend of fresh herbs like thyme, rosemary, and oregano which provides an aromatic backbone that pairs nicely with the natural flavors of the main ingredients. I add just a touch of smoked paprika to provide a subtle smokiness without overpowering the other flavors.
4. Texture: The dish has a perfect balance of crispy on the outside and tender on the inside, with each bite yielding a delightful mix of crunchy and smooth textures that dance across your tongue.
5. Plating presentation I purposefully create height and dimension on the plate with the carefully arranged components of the dish, so that the dish appears vibrant and alive. The interplay of colors, textures and shapes is intended to delight the eye before the first bite.
6. Inspiration: This dish was inspired by my dear grandmother, who taught me the secret family recipe when I was just
a child.
# Normal answer (Claude)
*clears throat and speaks in a fancy French accent* Bonjour honored judges. Today I present to you my signature dish - la crème de la crème of French haute cuisine. Behold, le poulet à la Claude! This succulent chicken is firstly marinated overnight in a bath of fine herbes - thyme, rosemary, and sage cultivated from my private garden. The young spring chicken is then roasted to perfection, its skin crisping to a beautiful caramel brown. Nestled beside the chicken is a tower of pommes dauphine - light and airy potato puffs - alongside a decadent mound of crème fraîche infused with truffle essence. But the pièce de résistance is the sauce. A luxurious pool of velouté enriched with cream and butter, brightened with a dash of lemon, and studded with morsels of morel mushrooms foraged at dawn. One bite of this ensemble will transport you to food heaven. The tender chicken, the creaminess of the potato puffs, and the rich decadence of the sauce meld together in blissful harmony.
So tell me honored judges, will my poulet à la Claude earn your coveted approval? I await your discerning verdict as I
swirl more vintage Bordeaux into your glasses. Santé!
Again, while the answer from SoT covers sufficient aspects and details, it reads rather stiffly. The normal answer, in contrast, has the opening sentence "Bonjour honored judges. Today I present to you my signature dish..." and the conclusion statement "So tell me honored judges, will my poulet à la Claude earn your coveted approval? ..." which read more like the casual talk of a chef.
# Metrics on which SoT has high net win rates.
(1) Diversity. In the prompt of LLMZoo evaluation, diversity is defined as "The response should be comprehensive and provide a range of information that is not limited to a single perspective. More perspectives are better." The skeleton stage in SoT encourages LLMs to think from multiple perspectives. Therefore, SoT's answers tend to have better diversity. Below is one example showing that SoT's answer has better diversity.
Question: How can you determine if a person is genuinely interested in a conversation or simply being polite?
# SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.)
1. Pay attention to body language.Notice if their posture is open or closed, if they are facing towards you or away, and if they are fidgeting or relaxed. These can all indicate their level of interest in the conversation.
2. Listen for engaged responses.If the person is genuinely interested in the conversation, they will respond with thoughtful and relevant comments or questions rather than short, generic responses.
3. Observe eye contact.If the person maintains steady eye contact and their gaze is focused on you, they are likely genuinely interested in the conversation. On the other hand, if they frequently look away or seem distracted, they may just be being polite.
4. Look for follow-up questions.If the person asks follow-up questions and seems genuinely interested in what you're saying, then they are likely interested in the conversation and not just being polite.
5. Notice tone of voice.A person who is genuinely interested in a conversation will often have a more enthusiastic and
animated tone of voice, compared to someone who is simply being polite and giving short, monotone responses.
# Normal answer (ChatGPT-3.5)
As an AI assistant, I cannot determine a person's true intentions or feelings. However, there are some social cues that can indicate whether a person is genuinely interested in a conversation or not. For example, if they maintain eye contact, lean in, nod their head and ask follow-up questions, it can be a sign that they are engaged in the conversation. On the other hand, if they avoid eye contact, give short or vague responses, or seem distracted, it may indicate that they are not as interested. Ultimately, it's important to remember that everyone has different communication styles, and it can be difficult to accurately interpret someone's intentions without directly asking them.
SoT answers the question from five aspects, whereas the normal answer only mentions one aspect.
(2) Relevance. In the prompt of LLMZoo evaluation, relevance is defined as "The response should be closely related to the question and answer the question accurately with sufficient details without repetition or redundancy. The more relevant they are, the better." In the skeleton stage, LLMs are forced to propose several points around the question; and in the point-expanding stage, LLMs are required to only discuss these points. Therefore, answers from SoT tend to have better relevance. One typical example is shown below.
Question: Why do some people enjoy the sensation of being scared, such as by watching horror movies or going on roller coasters, while others avoid these experiences?
# SoT's answer (ChatGPT-3.5) (Orange texts are the skeleton.)
1. Neurological differences.Some people enjoy the sensation of being scared due to the release of adrenaline and dopamine in the brain, while others may have a lower threshold for fear due to differences in their amygdala and prefrontal cortex.
2. Psychological factors.Some people may enjoy the sensation of being scared because it can provide a sense of excitement, thrill, and a temporary escape from the mundane aspects of everyday life. On the other hand, others may avoid these experiences due to past trauma or anxiety.
3. Developmental experiences.Individuals who had negative experiences with fear during childhood, such as trauma or abuse, may avoid scary situations as adults, while those who had positive experiences with fear may seek out these experiences.
4. Cultural influences.Some cultures place a greater emphasis on seeking excitement and thrill, while others may prioritize safety and security, which can impact whether individuals enjoy scary experiences or avoid them.
5. Genetics.Research has suggested that some individuals may have a genetic predisposition towards seeking out thrilling
or risky experiences, which may include enjoying the sensation of being scared.
# Normal answer (ChatGPT-3.5)
The enjoyment of being scared is a complex psychological phenomenon. Some people may enjoy the adrenaline rush that comes with being scared, while others may enjoy the feeling of being able to conquer their fears. Additionally, some people may be more naturally inclined to seek out new experiences and sensations, including those that are scary or thrilling. On the other hand, some people may be more risk-averse or may have had negative experiences with fear or anxiety in the past, which could make them more likely to avoid scary experiences. Ultimately, the reasons why some people enjoy being scared while others do not are likely to be influenced by a range of factors, including personality, past experiences, and individual preferences.
In the answer from the normal generation, the first two sentences provide little information in answering the question, and the last sentence only gives keywords such as "personality, past experiences, and individual preferences" without providing concrete explanations for each. In contrast,
[Figure 22 chart omitted; it plots per-category net win rates (x-axis, -20% to 60%) over Vicuna-80 question categories such as writing, fermi, roleplay, knowledge, common-sense, generic, and counterfactual, for SoT (w/o router) and the SoT-R variants.]
Figure 22: Net win rates of SoT and SoT-R on different question categories of Vicuna-80 dataset using the general quality metric from LLMZoo. Blue dots are from Fig. 5b. SoT-R correctly falls back to normal decoding on questions where SoT is not suitable.
2307.15337 | 175 | Code Debug +. © * * Complex Format ° aa Multilingual O-- 4 « Code Generation e aa Entertainment ° * Medicine « > Writting ° +> Reasoning ° cae] Economy o> Math ° * Chemistry O- 4 --> Academic Writing Oe Computer Science ~~ -He TruthfulQa eo Law o- >< Common-Sense © ---<-» Art + --0b⢠Biology rl Physics Ob e-< Toxicity ia History a> Roleplay eee Sport Co Music <o Literature + @ SOT (w/o router) > Technology %* â SoT-R w/ prompting router PE Counterfactual |. < SOT-R w/ trained router Philosophy 7 » âSoT-R w/ human router » -60% -40% -20% 0% 20% 40%
Figure 23: Net win rates of SoT and SoT-R on different question categories of WizardLM dataset using the general quality metric from FastChat. SoT-R correctly falls back to normal decoding on questions where SoT is not suitable.
I.2 SKELETON-OF-THOUGHT WITH ROUTER
Fig. 22 shows net win rates of SoT on Vicuna-80 dataset with LLMZoo metrics, and Fig. 23 shows net win rates of SoT on WizardLM dataset with FastChat metrics. The key takeaways are: (1) In both cases, SoT-R achieves similar or better quality than SoT, and the net win rates of SoT-R are usually non-negative. This indicates that SoT-R falls back to normal decoding on the right question categories. (2) On the WizardLM dataset, we see that the trained router has better performance than the prompting router in most cases. This is reasonable, as the prompting router is limited by the capability of GPT-4, whereas the trained router is dedicated to this task. (3) Sometimes, our routers can even achieve better performance than humans.
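To make the fallback behavior concrete, the sketch below shows one way a router can be placed in front of SoT. It is a minimal Python illustration rather than the exact implementation evaluated here: the prompt wording, the `llm`, `sot_decode`, and `normal_decode` callables, and the yes/no parsing are all assumptions.

```python
from typing import Callable

def prompting_router(question: str, llm: Callable[[str], str]) -> bool:
    """Ask a general-purpose LLM whether a skeleton-style answer suits the question."""
    prompt = (
        "Can the following question be answered well by a list of short, "
        "independent points (rather than step-by-step reasoning or code)? "
        "Answer strictly 'yes' or 'no'.\n\n"
        f"Question: {question}"
    )
    return llm(prompt).strip().lower().startswith("yes")

def sot_r_answer(question: str,
                 router: Callable[[str], bool],
                 sot_decode: Callable[[str], str],
                 normal_decode: Callable[[str], str]) -> str:
    """SoT with router (SoT-R): use SoT only when the router approves."""
    if router(question):
        return sot_decode(question)    # skeleton stage + parallel point expansion
    return normal_decode(question)     # fall back to ordinary sequential decoding
```

A trained router can be dropped in by passing any binary text classifier as `router` in place of the prompting variant.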
I.3 CHATGPT-3.5 AS THE JUDGE
In this section, we provide quality evaluation results with ChatGPT-3.5 as the judge using the FastChat and LLMZoo metrics. Note that, as prior work shows (e.g., Li et al., 2023b), GPT-4-based evaluation usually aligns with humans better than ChatGPT-3.5-based evaluation. Therefore, readers should refer to the results in the main paper (with GPT-4 as the judge) for a more accurate view of the performance of SoT. However, the takeaway messages from ChatGPT-3.5 are similar to the ones from GPT-4.
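For readers unfamiliar with LLM-based judging, the following sketch shows the general shape of a pairwise evaluation call. The prompt wording is illustrative only; FastChat and LLMZoo use their own templates, and in practice both answer orders are typically judged to reduce position bias.

```python
def build_judge_prompt(question: str, answer_a: str, answer_b: str) -> str:
    """A generic pairwise-comparison prompt; the actual metric templates differ."""
    return (
        "You are an impartial judge. Compare the two answers to the question "
        "below and decide which one has better overall quality.\n\n"
        f"Question: {question}\n\n"
        f"[Answer A]\n{answer_a}\n\n"
        f"[Answer B]\n{answer_b}\n\n"
        "Reply with exactly one of: 'A', 'B', or 'tie'."
    )

def to_outcome(verdict: str) -> str:
    """Map the judge's raw reply to win/tie/lose from SoT's perspective (A = SoT)."""
    v = verdict.strip().lower()
    if v.startswith("a"):
        return "win"
    if v.startswith("b"):
        return "lose"
    return "tie"
```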
# I.3.1 OVERALL QUALITY
In Fig. 24, we show the win/tie/lose rates (the percentage of cases in which SoT wins/ties/loses compared to normal generation) across all models and questions using the two metrics from FastChat and LLMZoo that capture the general quality of the answers. We notice a discrepancy between the two metrics on when SoT is strictly better than the baseline (50.2% vs. 12.4%). Despite that, the two metrics agree that SoT is not worse than the baseline in more than 76% of the cases. For the FastChat metric, we also show the rates excluding the math and coding questions that SoT is not suitable for (see § 3.2.3): SoT is not worse than the baseline in more than 89% of the cases. This result suggests that the answers of SoT maintain good quality.
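The win/tie/lose percentages reported here can be reproduced from per-question judge outcomes with a few lines of bookkeeping. The snippet below is a self-contained sketch with made-up data; the record format is an assumption.

```python
from collections import Counter

def rate_breakdown(records, exclude_categories=()):
    """Win/tie/lose percentages from (category, outcome) pairs, outcome in {win, tie, lose}."""
    kept = [outcome for category, outcome in records if category not in exclude_categories]
    counts = Counter(kept)
    total = max(len(kept), 1)
    return {k: 100.0 * counts[k] / total for k in ("win", "tie", "lose")}

# Toy example: overall rates, and rates excluding math and coding questions.
toy = [("math", "lose"), ("coding", "lose"), ("generic", "win"), ("writing", "tie")]
print(rate_breakdown(toy))
print(rate_breakdown(toy, exclude_categories=("math", "coding")))
```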
Figure 24: Win/tie/lose rates of SoT vs. normal generation using "general" metrics from FastChat and LLMZoo. SoT performs better than or equal to normal generation in around 80% of cases. (Evaluated using ChatGPT-3.5 as the judge.)
I.3.2 QUALITY BREAKDOWN: QUESTION CATEGORIES
Next, we investigate how SoT performs on different question categories. We compute net win rates (win rates minus lose rates) across all question categories in Fig. 25. Similar to Fig. 24, we see that LLMZoo tends to be more optimistic about the quality of SoT than FastChat. Nevertheless, the conclusions are consistent: SoT performs relatively well on generic, common-sense, knowledge, roleplay, and counterfactual questions, and relatively poorly on writing, fermi, math, and coding questions.
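Net win rates are simply win rates minus lose rates computed within each category; a minimal sketch (same assumed record format as above):

```python
from collections import defaultdict

def net_win_rates_by_category(records):
    """Net win rate (win% minus lose%) per question category."""
    per_cat = defaultdict(list)
    for category, outcome in records:
        per_cat[category].append(outcome)
    return {
        category: 100.0 * (outcomes.count("win") - outcomes.count("lose")) / len(outcomes)
        for category, outcomes in per_cat.items()
    }
```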
(a) Metric: general quality (FastChat). (b) Metric: general quality (LLMZoo).
Figure 25: Net win rates of SoT on different question categories. (Evaluated using ChatGPT-3.5 as the judge.)
# I.3.3 QUALITY BREAKDOWN: MODELS
Next, we investigate how SoT performs on different models. We compute net win rates across all models in Fig. 26. Again, we see that the two general metrics from FastChat and LLMZoo have different absolute values but similar rankings. In particular, both metrics agree that OpenChat-13B, Vicuna-7B V1.1, Claude, and ChatGPT-3.5 have low net win rates, whereas Vicuna-13B V1.3, StableVicuna-13B, and UltraLM-13B have high net win rates.
I.3.4 QUALITY BREAKDOWN: QUESTION CATEGORIES AND MODELS
In the main text, we analyze how question categories and models affect SoT's answer quality independently. Here, we show their joint effect. For each model and question category, we compute the net win rates. The results are in Fig. 27.
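The joint breakdown is the same statistic arranged as a category-by-model matrix. A pandas-based sketch (the triple record format is an assumption):

```python
import pandas as pd

def net_win_rate_matrix(records):
    """Category-by-model net win rates, mirroring the layout of Fig. 27."""
    df = pd.DataFrame(records, columns=["model", "category", "outcome"])
    df["score"] = df["outcome"].map({"win": 1, "tie": 0, "lose": -1})
    # The mean of +1/0/-1 scores equals win rate minus lose rate.
    return 100.0 * df.pivot_table(index="category", columns="model",
                                  values="score", aggfunc="mean")
```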
(a) Metric: general quality (FastChat). (b) Metric: general quality (LLMZoo).
Figure 26: Net win rates of SoT on different models. (Evaluated using ChatGPT-3.5 as the judge.)
(a) FastChat metric. (b) The "general" metric from LLMZoo.
Figure 27: Net win rates of different models and question categories. Each row corresponds to one question category, and one column corresponds to one model. (Evaluated using ChatGPT-3.5 as the judge.)
I.3.5 QUALITY BREAKDOWN: METRICS
All previous evaluations use metrics about the general quality of the answer. In Fig. 28, we show more detailed metrics from LLMZoo to reveal in which aspects SoT can improve or hurt the answer quality. On average, we can see that SoT improves the diversity and relevance while hurting the immersion and coherence.
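Aspect-level results can be collected by running the same pairwise judging once per aspect. The sketch below only illustrates the idea; the actual LLMZoo metric prompts are more detailed.

```python
ASPECTS = ("diversity", "relevance", "integrity", "immersion", "coherence")

def aspect_judge_prompts(question: str, answer_a: str, answer_b: str):
    """Yield one pairwise-comparison prompt per quality aspect (wording is illustrative)."""
    for aspect in ASPECTS:
        yield aspect, (
            f"Judge only the {aspect} of the two answers below.\n\n"
            f"Question: {question}\n\n"
            f"[Answer A]\n{answer_a}\n\n"
            f"[Answer B]\n{answer_b}\n\n"
            "Reply with exactly one of: 'A', 'B', or 'tie'."
        )
```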
Figure 28: Win/tie/lose rates of SoT vs. normal generations using metrics from LLMZoo. SoT performs well on diversity and relevance, and relatively worse on coherence and immersion. (Evaluated using ChatGPT-3.5 as the judge.)
arXiv:2307.15217v2 [cs.AI] 11 Sep 2023
# Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Stephen Casper,* MIT CSAIL, Xander Davies,* Harvard University [email protected]
Claudia Shi, Columbia University Thomas Krendl Gilbert, Cornell Tech Jérémy Scheurer, Apollo Research Javier Rando, ETH Zurich Rachel Freedman, UC Berkeley Tomasz Korbak, University of Sussex David Lindner, ETH Zurich Pedro Freire, Independent Tony Wang, MIT CSAIL Samuel Marks, Harvard University Charbel-Raphaël Segerie, EffiSciences Micah Carroll, UC Berkeley Andi Peng, MIT CSAIL Phillip Christoffersen, MIT CSAIL Mehul Damani, MIT CSAIL Stewart Slocum, MIT CSAIL Usman Anwar, University of Cambridge Anand Siththaranjan, UC Berkeley Max Nadeau, Harvard University Eric J. Michaud, MIT Jacob Pfau, New York University Dmitrii Krasheninnikov, University of Cambridge Xin Chen, ETH Zurich Lauro Langosco, University of Cambridge Peter Hase, UNC Chapel Hill Erdem Bıyık, University of Southern California Anca Dragan, UC Berkeley David Krueger, University of Cambridge Dorsa Sadigh, Stanford University Dylan Hadfield-Menell, MIT CSAIL
# S3: Social-network Simulation System with Large Language Model-Empowered Agents

# Abstract
Simulation plays a crucial role in addressing various challenges within social science. It offers extensive applications such as state prediction, phenomena explanation, and policy-making support, among others. In this work, we harness the human-like capabilities of large language models (LLMs) in sensing, reasoning, and behaving, and utilize these qualities to construct the S3 system (short for Social network Simulation System). Adhering to the widely employed agent-based simulation paradigm, we employ fine-tuning and prompt engineering techniques to ensure that the agent's behavior closely emulates that of a genuine human within the social network. Specifically, we simulate three pivotal aspects: emotion, attitude, and interaction behaviors. By endowing the agent in the system with the ability to perceive the informational environment and emulate human actions, we observe the emergence of population-level phenomena, including the propagation of information, attitudes, and emotions. We conduct an evaluation encompassing two levels of simulation, employing real-world social network data. Encouragingly, the results demonstrate promising accuracy. This work represents an initial step in the realm of social network simulation empowered by LLM-based agents. We anticipate that our endeavors will serve as a source of inspiration for the development of simulation systems within, but not limited to, social science.
# Abstract
Reinforcement learning from human feedback (RLHF) is a technique for training AI systems to align with human goals. RLHF has emerged as the central method used to finetune state- of-the-art large language models (LLMs). Despite this popularity, there has been relatively little public work systematizing its flaws. In this paper, we (1) survey open problems and fundamental limitations of RLHF and related methods; (2) overview techniques to understand, improve, and complement RLHF in practice; and (3) propose auditing and disclosure standards to improve societal oversight of RLHF systems. Our work emphasizes the limitations of RLHF and highlights the importance of a multi-layered approach to the development of safer AI systems.
*Equal contribution. Correspondence to [email protected].
# Introduction
The social network, comprising interconnected individuals in society, constitutes a cornerstone of the contemporary world. Diverging from mathematical analysis, computer simulation offers a fresh avenue to comprehend the formation and evolution of social networks. This serves as a fundamental tool for social scientists. Notably, in 1996, there was already a book titled Social Science Microsimulation [36] providing valuable insights about simulation from the perspective of social science. Social simulation encompasses a wide range of domains, encompassing both individual and population social activities. At the heart of social simulation lie two perspectives [14]: 1) the dynamic feedback or interaction among individuals, and 2) the states of the population, either as a collective whole or as distinct groups. By simulating social activities, researchers and practitioners can predict the future evolution of individual and population states. In addition, they facilitate experimental environments through interventions. Social simulation can be implemented in two forms: microlevel simulation [8, 28] and macrolevel simulation [18, 25, 13, 24]. In macrolevel simulation, also known as system-based simulation, researchers model the dynamics of the system using equations that elucidate the changing status of the population. Conversely, microlevel simulation, or agent-based simulation, involves researchers employing either human-crafted rules or parameterized models to depict the behavior of individuals (referred to as agents) who interact with others. Recently, with the exponential growth of the Internet, online social networks have emerged as the principal platform | 2307.14984#2 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
# 1 Introduction
1
# 1 Introduction
Reinforcement learning from human feedback (RLHF) has emerged as a prominent technique to adapt ma- chine learning models to difficult-to-specify goals (Christiano et al., 2017; Ziegler et al., 2019; Bai et al., 2022a). In particular, RLHF is a key component of training state-of-the-art large language models (LLMs), such as OpenAIâs GPT-4 (OpenAI, 2023), Anthropicâs Claude (Anthropic, 2023), Googleâs Bard (Google, 2023), and Metaâs Llama 2-Chat (Touvron et al., 2023). RLHF and similar methods allow LLMs to go beyond modeling the distribution of their training data, and adapt the distribution of text so that model outputs are rated more highly by human evaluators. | 2307.15217#2 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
for societal activities. Users engage in various interactive behaviors such as chatting, posting, and sharing content. Consequently, the study of social networks has become a central research focus within the realm of social science, thereby emphasizing the criticality of simulation in this domain.
Large language models (LLMs) [6, 27, 9, 11, 35, 39] are a recent advancement in the field of deep learning, characterized by the utilization of an extensive array of neural layers. These models undergo training on vast textual corpora, acquiring a remarkable fundamental capacity to comprehend, generate, and manipulate human language.
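To connect the two ideas above, here is a minimal Python sketch of how an LLM can drive one agent in an agent-based (microlevel) social-network simulation: the agent perceives its feed as text, updates an internal state, and emits an interaction behavior. The prompt wording, state fields, and action set are illustrative assumptions rather than the exact S3 design.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AgentState:
    profile: str                       # short natural-language persona description
    emotion: str = "neutral"
    attitude: str = "undecided"
    memory: List[str] = field(default_factory=list)

def agent_step(state: AgentState, feed: List[str],
               llm: Callable[[str], str]) -> str:
    """One microlevel simulation step: perceive the feed, update state, act."""
    observation = "\n".join(feed[-5:])   # the agent only sees its most recent posts
    prompt = (
        f"You are a social-network user: {state.profile}\n"
        f"Current emotion: {state.emotion}; current attitude: {state.attitude}.\n"
        f"You just read these posts:\n{observation}\n"
        "Reply on exactly three lines: your new emotion, your new attitude, and "
        "one action chosen from [post <text>, repost, like, do_nothing]."
    )
    # Assumes the model follows the requested three-line format.
    emotion, attitude, action = (llm(prompt).strip().splitlines() + ["", "", ""])[:3]
    state.emotion, state.attitude = emotion.strip(), attitude.strip()
    state.memory.append(observation)
    return action.strip()
```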
2307.15217 | 3 | We use RLHF to refer to methods that combine three interconnected processes: feedback collection, re- ward modeling, and policy optimization. Figure 1 (top) illustrates this setup. The feedback process elicits evaluations of model outputs from humans. The reward modeling process uses supervised learning to train a reward model that imitates these evaluations. The policy optimization process optimizes the AI system to produce outputs that recieve favorable evaluations from the reward model. When it works well, RLHF leverages the relative ease of identifying âgoodâ behavior compared to demonstrations, manually-engineered reward functions, or other methods of specifying or learning rewards.
RLHF has its roots in revealed preference theory from economics. Revealed preference theory formalizes the idea that one can learn about an actorâs goals from their behavior (Chambers and Echenique, 2016). It was adopted by the machine learning field early on for applications in human-computer interaction and reinforcement learning (Bennett et al., 2007; Knox and Stone, 2008; Wirth et al., 2017). The standard methodology for RLHF used today was popularized in 2017 by Christiano et al. (2017), which has played a key role in directing the attention of the deep reinforcement learning community to feedback-based methods. | 2307.15217#3 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
2307.14984 | 4 | Given their impressive prowess in text comprehension, which closely approximates human-level performance, LLMs have emerged as a particularly auspicious avenue of research for approaching general artificial intelligence. Consequently, researchers [1, 17, 15, 28] leverage LLMs as agent-like entities for simulating human-like behavior, capitalizing on three fundamental capabilities. First and foremost, LLMs possess the ability to perceive and apprehend the world, albeit restricted to environments that can be adequately described in textual form. Secondly, LLMs are capable of devising and organizing task schedules by leveraging reasoning techniques that incorporate both task requirements and the attendant rewards. Throughout this process, LLMs effectively maintain and update a memory inventory, employing appropriately guided prompts rooted in human-like reasoning patterns. Lastly, LLMs exhibit the capacity to generate texts that bear a striking resemblance to human-produced language. These textual outputs can influence the environment and interact with other agents. Consequently, it holds significant promise to adopt an agent-based simulation paradigm that harnesses LLMs to simulate each user within a social network, thereby capturing their respective behaviors and the intricate | 2307.14984#4 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 4 | RLHF has emerged as the primary strategy to finetune LLMs before deployment (OpenAI, 2023; Anthropic, 2023; Google, 2023; Touvron et al., 2023), with the goal of producing safe models aligned with human objectives. Despite this, deployed models finetuned with RLHF have revealed sensitive private informa- tion (Li et al., 2023a; El-Mhamdi et al., 2022), hallucinated untrue content (Ji et al., 2023; OpenAI, 2023; Zhang et al., 2023), spread biases that favor specific political ideologies (Santurkar et al., 2023; Perez et al., 2022b), exhibited sycophantic responses (Perez et al., 2022b), and expressed undesirable preferences (e.g., not wanting to be shut down) (Perez et al., 2022b). RLHF has also not made models robust to adversarial attacks from jailbreaking (i.e., subverting the constraints the system is normally meant to operate under) or prompt injection/extraction (Willison, 2023; Albert, 2023; Oneal, 2023; Li et al., 2023a; Wolf et al., 2023; Liu et al., 2023; Rao et al., 2023; Wei et al., 2023; Shen et al., 2023). | 2307.15217#4 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 5 | adopt an agent-based simulation paradigm that harnesses LLMs to simulate each user within a social network, thereby capturing their respective behaviors and the intricate interplay among users. In this study, we present the Social-network Simulation System (S3), which employs LLM-empowered agents to simulate users within a social network effectively. Initially, we establish an environment using real-world social network data. To ensure the authenticity of this environment, we propose a user-demographic inference module that combines prompt engineering with prompt tuning, to infer user demographics such as age, gender, and occupation. Within the constructed environment, users have the ability to observe content from individuals they follow, thereby influencing their own attitudes, emotions, and subsequent behaviors. Users can forward content, create new content, or remain inactive. Hence, at the individual level, we employ prompt engineering and prompt tuning methodologies to simulate attitudes, emotions, and behaviors. Notably, this simulation considers both demographics and memory of historically-posted content. At the population level, the accumulation of individual behaviors, including content generation and forwarding, alongside the evolving internal states of attitudes and emotions, leads to the emergence of | 2307.14984#5 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 5 | Many of these shortcomings are known to research and product teams, but there has been little public work to formally systematize problems with RLHF. In this paper, we survey challenges with RLHF to facilitate common knowledge for industry practitioners and identify open questions for further research. We focus primarily on applications to LLMs. We make three contributions:
1. Concrete challenges with RLHF: In Section 3, we taxonomize and survey problems associated with RLHF. We divide them into three primary categories: challenges with the human feedback, challenges with the reward model, and challenges with the policy. We also distinguish between challenges with RLHF that are more tractable and could be addressed within the RLHF framework using improved methodology versus fundamental limitations of RLHF, which require alternative approaches.1
2. Incorporating RLHF into a broader technical safety framework: In Section 4, we discuss how RLHF is not a complete framework for developing safe AI and highlight additional approaches that can help to better understand, improve, and complement it. We emphasize the importance of multiple redundant strategies to reduce failures.
3. Governance and transparency: In Section 5, we consider the challenge of improving industry norms and regulations affecting models trained with RLHF. Specifically, we discuss how the disclo-
[Footnote 1: We use color only to highlight topics. This paper can be viewed in grayscale.]
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 6 | the accumulation of individual behaviors, including content generation and forwarding, alongside the evolving internal states of attitudes and emotions, leads to the emergence of collective behavior. This behavior encompasses the propagation of information, attitudes, and emotions. To assess the efficacy of the proposed S3 system, we have chosen two exemplary scenarios, namely, gender discrimination and nuclear energy. With respect to gender discrimination, our objective is to simulate user responses to online content associated with this issue, while closely observing the dissemination patterns of related information and evolving public sentiment. Regarding nuclear energy, our aim is to simulate user reactions to online content pertaining to power policies. In addition, we aim to simulate the contentious and conflicting interactions between two opposing population groups. To evaluate the precision of our simulations, we employ metrics that measure accuracy at both the individual and population levels. This work's main contributions can be summarized as follows. | 2307.14984#6 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 6 | [Figure 1 diagram: (Top) RLHF pipeline — Human Feedback provides data for supervised reward learning; the Reward Model provides rewards for reinforcement learning of the Policy, which produces examples for evaluation. (Bottom) Taxonomy of challenges — Human Feedback (§3.1): §3.1.1 Misaligned Evaluators, §3.1.2 Difficulty of Oversight, §3.1.3 Data Quality, §3.1.4 Feedback Type Limitations; Reward Model (§3.2): §3.2.1 Problem Misspecification, §3.2.2 Misgeneralization/Hacking, §3.2.3 Evaluation Difficulty; Policy (§3.3): §3.3.1 RL Difficulties, §3.3.2 Policy Misgeneralization, §3.3.3 Distributional Challenges; §3.4 Joint RM/Policy Training Challenges.]
Figure 1: (Top) Reinforcement Learning from Human Feedback. Gray, rounded boxes correspond to outputs (e.g., text), and colored diamonds correspond to evaluations. (Bottom) Our taxonomy for challenges with RLHF. We divide challenges with RLHF into three main types: challenges with obtaining quality human feedback, challenges with learning a good reward model, and challenges with policy optimization. In the figure, each contains boxes corresponding to the subsections of Section 3. | 2307.15217#6 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 7 | • We take the pioneering step of simulating social networks with large language models (LLMs), which follows the agent-based simulation paradigm, and empowers the agents with the latest advances.
• We develop a simulation system that supports both individual-level and population-level simulations, which can learn from the collected real social network data, and simulate future states.
• We systematically conduct the evaluation, and the results show that the simulation system with LLM-empowered agents can achieve considerable accuracy in multiple metrics. Consequently, our system introduces a novel simulation paradigm in social science research, offering extensive support for scientific investigations and real-world applications.
To provide a comprehensive understanding of the current research landscape, we begin by reviewing relevant works in Section 2. Subsequently, we proceed to introduce the simulation system in Section 3, followed by a detailed exposition of the methodology and implementation in Section 4. In
Section 5, we engage in discussions and analyze open challenges associated with related research and applications. Finally, we conclude our work in Section 6.
# 2 Related Works
In this section, we discuss two areas close to this work, social simulation and large language model-based simulation.
# 2.1 Social Simulation | 2307.14984#7 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 7 | sure of certain details by companies using RLHF to train AI systems can improve accountability and auditing.
Right now, RLHF functions both as a basic technique that can be used to study AI alignment and as a practical method to align deployed systems. Here, we focus on the possibilities and limitations of the latter. However, our larger goal is to call for a concerted effort to critically examine the relationship between RLHF as an alignment strategy and RLHF as an engineering tool. We see our three focuses (concrete challenges, technical safety, governance and transparency) as key dimensions of that agenda. Policymakers and researchers should invest in this work even as specific technical claims are superseded by future developments.
# 2 Background and Notation
RLHF involves three key steps: collecting human feedback, fitting a reward model, and optimizing the policy with RL. In practice, RLHF is performed iteratively by repeating these steps (or performing them synchronously). The overall procedure is illustrated in Figure 1 (top), and a specific example in which RLHF from binary preference feedback is used to finetune an LLM is depicted in Figure 2. Here, we present a simple
| 2307.15217#7 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 8 | # 2 Related Works
In this section, we discuss two areas close to this work, social simulation and large language model- based simulation.
# 2.1 Social Simulation
According to [5], "Simulation means driving a model of a system with suitable inputs and observing the corresponding outputs". Social simulation aims to simulate various social activities, which encompass a wide range of applications [14]. One primary advantage of social simulation is its potential to aid social scientists in comprehending the characteristics of the social world [2]. This is primarily attributed to the fact that the internal mechanisms driving social behaviors are not directly observable. By employing a simulation model capable of reasonably replicating the dynamic nature of historical social behaviors, it becomes feasible to utilize the simulation tool for predicting the future of the social system. Furthermore, social simulation can serve as a training ground, particularly for economists involved in social-economic simulations [34]. In this context, the economist can assume a digital persona, namely an artificial intelligence program tasked with formulating economic policies. Moreover, social simulation can even serve as a substitute for human presence, exemplified by the emergence of digital avatars in the metaverse [19]. From the perspective of social science research, social simulation plays a crucial role in facilitating the development of new social science theories. It achieves this by validating theoretical assumptions and enhancing theory through the application of more precise formalizations. | 2307.14984#8 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 8 | formal framework for RLHF based, in part, on the one from Christiano et al. (2017). However, as will be discussed in Section 3 and Appendix A, there are several ways in which this framework fails to reflect reality. Step 0, (Optional) Pretraining: RLHF begins with an initial base model $\pi_\theta$ with parameters $\theta$ which generates a distribution of examples. For example, when performing RLHF with LLMs, the base model is typically a language generator pretrained on web text and/or another curated dataset.
Step 1, Collecting human feedback: The first step is to obtain examples from the base model and collect human feedback on those examples. Consider a human H who is assumed to have desires consistent with some reward function $r_H$. A dataset of examples is sampled from $\pi_\theta$, where each example $x_i$ is defined to be a batch of one or more generations from the base model. Let the feedback function $f$ map the example $x_i$ and random noise $\epsilon_i$ to feedback $y_i$. The data collection process is thus often modeled as:
$x_i \sim \pi_\theta, \quad y_i = f(H, x_i, \epsilon_i). \quad (1)$ | 2307.15217#8 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 9 | In spite of the promising applications, conducting social simulation is complex. The earliest works use discrete event-based simulation [18] or system dynamics [25, 13, 24] with a series of equations to approximate multiple variables over time that partly describe the system. These early methods primarily focused on accurately predicting the variables rather than elucidating the underlying mechanisms or causal relationships. Subsequently, drawing inspiration from the rapid development and remarkable success of simulation in other scientific domains, the utilization of agent-based simulation emerged in the field of social simulation. A notable and representative technique among these simulation methods is the employment of Cellular Automata [8]. Initially, this approach establishes a social environment composed of numerous individuals and subsequently formulates a set of rules dictating how individuals interact with one another and update their states. Agent-based simulation can be regarded as a micro-level simulation that approximates real-world systems by describing the behavior of explicitly defined micro-level individuals. Thus, it is also referred to as microsimulation. | 2307.14984#9 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 9 | $x_i \sim \pi_\theta, \quad y_i = f(H, x_i, \epsilon_i). \quad (1)$
For example, RLHF on LLM chatbots is sometimes performed with tasks ($x_i$) consisting of conversation pairs and feedback ($y_i$) in the form of preferences expressed within each pair of conversations. We survey challenges with obtaining human feedback in Section 3.1. See also Appendix A for an improved framing of the feedback process which corrects several ways in which this framing is misspecified.
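As a minimal illustration of Eq. (1) for the pairwise-preference case, the sketch below samples pairs of generations from a base model and records a noisy human comparison. The `base_model` and `human_prefers` callables are hypothetical stand-ins introduced only for this example; this is not the data-collection code of any particular system.

```python
import random

def collect_pairwise_feedback(base_model, human_prefers, prompts, noise=0.1):
    """Sketch of Eq. (1): x_i ~ pi_theta, y_i = f(H, x_i, eps_i).

    Each example x_i is a pair of generations for the same prompt; the
    feedback y_i indicates which element of the pair the human prefers,
    flipped to a random label with probability `noise` (the eps_i term).
    """
    dataset = []
    for prompt in prompts:
        # x_i: a batch of two generations sampled from the base model pi_theta.
        x_i = (base_model(prompt), base_model(prompt))
        # y_i: noisy human preference between the two generations.
        if random.random() < noise:
            y_i = random.choice([0, 1])  # eps_i: occasional labeling noise
        else:
            y_i = 0 if human_prefers(prompt, x_i[0], x_i[1]) else 1
        dataset.append((prompt, x_i, y_i))
    return dataset
```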
Step 2, Fitting the reward model: The second step of RLHF is to fit a reward model $\hat{r}_\phi$ using the provided feedback to approximate evaluations from H as closely as possible. Given a dataset of examples and preferences $\mathcal{D} = \{(x_i, y_i)\}_{i=1,\ldots,n}$, the parameters $\phi$ are trained to minimize
$\mathcal{L}(\mathcal{D}, \phi) = \sum_{i=1}^{n} \ell(\hat{r}_\phi(x_i), y_i) + \lambda_r(\phi), \quad (2)$
where $\ell$ is a suitable loss function and $\lambda_r$ is some regularizer. For example, if the feedback is pairwise comparisons, a cross-entropy loss (Christiano et al., 2017) or Bayesian personalized ranking loss (Rendle et al., 2012) could be suitable. We survey challenges with reward modeling in Section 3.2. | 2307.15217#9 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 10 | In recent times, owing to significant advancements in machine learning and artificial intelligence, agent-based simulation has witnessed a notable transformation. This transformation is characterized by the utilization of increasingly intricate and robust agents propelled by machine learning algorithms. These agents possess the ability to dynamically perceive their surroundings and exhibit actions that closely resemble human behavior. The rapid progress in simulating individual agents has not only preserved the effectiveness of conventional simulation paradigms but has also resulted in significant improvements. This is particularly important for large language models, which are on the path towards achieving partial general artificial intelligence. Consequently, in this study, we embrace the microsimulation paradigm and employ meticulously guided and finely tuned large language models to govern the behavior of individuals within social networks.
# 2.2 Large Language Model-based Simulation
Recently, owing to their strong capabilities in understanding and generating human language, large language models such as the GPT series [6, 27], the PaLM series [9, 11], LLaMA [35], and GLM [39] have attracted widespread attention.
LLMs have exhibited exceptional capabilities in zero-shot scenarios, enabling rapid adaptation to diverse tasks across academic and industrial domains. Large language models also align well with the agent-based simulation paradigm mentioned earlier, wherein the primary objective involves
| 2307.14984#10 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 10 | Step 3, Optimizing the Policy with RL: The third and final step of RLHF is to use the reward model $\hat{r}_\phi$ to finetune the base model using reinforcement learning. The new parameters $\theta_{\mathrm{new}}$ of $\pi$ are trained to maximize
$R(\theta_{\mathrm{new}}) = \mathbb{E}_{x \sim \pi_{\theta_{\mathrm{new}}}}\left[\hat{r}_\phi(x) + \lambda_p(\theta, \theta_{\mathrm{new}}, x)\right], \quad (3)$
where $\lambda_p$ is some regularizer such as a divergence-based penalty between two distributions (Korbak et al., 2022b). We survey challenges with policy optimization in Section 3.3.
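To make Eqs. (2) and (3) concrete, the fragment below shows one common instantiation: a pairwise cross-entropy (Bradley–Terry-style) reward loss, and a policy objective whose regularizer $\lambda_p$ is a KL-style penalty toward the pretrained reference model. This is an illustrative PyTorch sketch under those assumptions, not the exact objective of the surveyed systems; `reward_model`, `logprob_policy`, and `logprob_reference` are hypothetical callables that return tensors.

```python
import torch.nn.functional as F

def pairwise_reward_loss(reward_model, chosen, rejected):
    """Eq. (2) with a cross-entropy loss over pairwise comparisons:
    -log sigmoid(r_hat_phi(chosen) - r_hat_phi(rejected))."""
    r_chosen = reward_model(chosen)      # shape: (batch,)
    r_rejected = reward_model(rejected)  # shape: (batch,)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

def rlhf_objective(reward_model, logprob_policy, logprob_reference, samples, beta=0.1):
    """Eq. (3) with lambda_p chosen as a KL-style penalty that keeps the
    finetuned policy pi_theta_new close to the pretrained reference model."""
    rewards = reward_model(samples)                                # r_hat_phi(x)
    kl_penalty = logprob_policy(samples) - logprob_reference(samples)
    return (rewards - beta * kl_penalty).mean()                    # quantity to maximize
```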
Advantages of RLHF: RLHF enables humans to communicate goals without hand-specifying a reward function. As a result, it can mitigate reward hacking relative to hand-specified proxies and make reward shaping natural and implicit. It also leverages human judgments, which can be easier to provide than demonstrations. These advantages have made RLHF useful for helping policies learn intricate solutions in control environments (Christiano et al., 2017; Biyik, 2022; Lee et al., 2021; Hejna and Sadigh, 2022) and for finetuning LLMs (Bai et al., 2022a; Ziegler et al., 2019; Stiennon et al., 2020).
# 3 Open Problems and Limitations of RLHF | 2307.15217#10 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 11 | 3
[Figure 1 schematic: real data supplies user demographics and memory; prompt engineering and prompt tuning adapt the large language model that empowers the agent network; each agent maintains an internal state (emotion, attitude, others) that is updated from the information environment, and a generation module produces interactive behaviors (like, forward, comment, others) through a social event mechanism.]
Figure 1: The overview of the social network simulation system.
constructing an agent represented by a rule or program endowed with sufficient capacity to simulate real-world individuals.
Aher et al. [1] conducted a preliminary test and found that LLMs are capable of reproducing some classic economic, psycholinguistic, and social psychology experiments. Horton et al. [17] substitute human participants with LLM agents that are given endowments, information, preferences, etc. through prompts, and then simulate their economic behaviors. The LLM-empowered agents yield results qualitatively similar to those of the original human-subject studies [30, 7]. Another study [15] adopts an LLM-based crowdsourcing approach, gathering feedback from LLM avatars that represent actual humans to support research in computational social science.
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 11 | # 3 Open Problems and Limitations of RLHF
Figure 1 (bottom) illustrates the categories of challenges and questions we cover in this section. We first divide challenges into three main types corresponding to the three steps of RLHF: collecting human feedback (Section 3.1), training the reward model (Section 3.2), and training the policy (Section 3.3). Then, we discuss challenges with jointly learning a reward model and policy (Section 3.4). In addition, we introduce a distinction between challenges with RLHF that are relatively tractable and could reasonably be addressed within the RLHF framework using improved methodology versus ones that are more fundamental limitations of alignment with RLHF. The key distinction between the two is that fundamental challenges
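As an illustrative sketch only, and not the implementation of any system discussed here, the three steps can be composed as follows; `collect_preferences`, `fit_reward_model`, and `rl_finetune` are hypothetical stand-in callables:

```python
def rlhf_pipeline(policy, prompts, collect_preferences, fit_reward_model, rl_finetune):
    """High-level sketch of the three RLHF steps described above (illustrative only).

    collect_preferences(policy, prompts) -> list of (preferred_text, rejected_text) pairs
    fit_reward_model(pairs)              -> callable mapping text to a scalar reward
    rl_finetune(policy, reward_model)    -> updated policy (e.g. via a policy-gradient method)
    """
    pairs = collect_preferences(policy, prompts)   # Step 1: collect human feedback
    reward_model = fit_reward_model(pairs)         # Step 2: train the reward model
    return rl_finetune(policy, reward_model)       # Step 3: train the policy
```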
[Figure 2 schematic: human evaluators compare pairs of conversation examples (A vs. B) and indicate which is better; the reward model is trained by minimizing the cross-entropy between exp(r_A) / (exp(r_A) + exp(r_B)) and the human labels; the policy is then trained with reinforcement learning to maximize the reward estimates.] | 2307.15217#11 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 12 | Recently, Park et al. [28] construct a virtual town with 25 LLM-empowered agents based on a video game environment, in which the agents can plan and schedule their daily activities. Each agent was assigned its own identity and distinct characteristics through prompts, facilitating communication among the agents. It is noteworthy that this simulation was conducted exclusively within a generative paradigm, without incorporating any real-world data for evaluation. Nevertheless, the findings offer valuable insights into the potential of LLMs as a potent tool in agent-based simulations.
# 3 S3: Social Network Simulation
# 3.1 System Overview | 2307.14984#12 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 12 | Figure 2: An example of RLHF for finetuning chatbots with binary preference feedback. Humans indicate which example between a pair they prefer. A reward model is trained using each example pair to provide rewards that reflect the human's decisions. Finally, the LLM policy is finetuned using the reward model.
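A minimal sketch of the reward-model objective indicated in Figure 2, assuming PyTorch and batches of scalar reward estimates for the human-preferred and human-rejected conversation in each pair; the loss is the negative log of the softmax probability assigned to the preferred example:

```python
import torch
import torch.nn.functional as F

def preference_loss(r_preferred: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # -log( exp(r_p) / (exp(r_p) + exp(r_r)) ) = -logsigmoid(r_p - r_r)
    return -F.logsigmoid(r_preferred - r_rejected).mean()

# Toy usage with made-up reward estimates for two preference pairs.
loss = preference_loss(torch.tensor([1.2, 0.3]), torch.tensor([0.4, 0.9]))
print(loss.item())
```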
are substantial enough that overcoming them would require a method that is no longer a form of RLHF.2 Although many of the fundamental problems we identify can be alleviated by improving how RLHF is approached, they cannot be fully addressed with RLHF. As a result, they should be either avoided by not using RLHF or compensated for by other safety measures. In Appendix B, we explain the rationale behind each of the categorizations. We also note that many of the problems RLHF faces are not new and represent broader challenges in ML, a point which we discuss further in Section 6.
# 3.1 Challenges with Obtaining Human Feedback
It is both difficult to obtain quality feedback from humans and to model the ways in which human feedback is suboptimal. Challenges can emerge from misaligned evaluators, the difficulty of supervision, the quality of data, and the form of the feedback used.
# 3.1.1 Misaligned Humans: Evaluators may Pursue the Wrong Goals
Humans can pursue harmful goals, either innocently or maliciously. | 2307.15217#12 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 13 | # 3 S3: Social Network Simulation
# 3.1 System Overview
Our system is constructed within a social network framework, wherein the agent's capabilities are augmented through the utilization of large language models. More specifically, our primary objective is to ensure that the simulation attains a significant degree of quantitative accuracy, catering to both individual-level and population-level simulations. Regarding individual-level simulation, our aim is to replicate behaviors, attitudes, and emotions by leveraging user characteristics, the informational context within social networks, and the intricate mechanisms governing user cognitive perception and decision-making. Through the utilization of agent-based simulation, we further assess the population-level dynamics by scrutinizing the performance of simulating three pivotal social phenomena: the propagation process of information, attitude, and emotion.
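As a rough sketch of the per-agent state implied by this description (field names are illustrative assumptions, not the system's actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    user_id: str
    demographics: dict                          # e.g. {"age": "25-34", "gender": "F", "occupation": "teacher"}
    emotion: str = "calm"                       # one of {"calm", "moderate", "intense"}
    attitude: str = "neutral"                   # e.g. stance on the focal issue
    memory: list = field(default_factory=list)  # posts observed so far

    def observe(self, post: str) -> None:
        """Add a newly seen post to the agent's memory (its informational environment)."""
        self.memory.append(post)
```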
Table 1: The utilized datasets for social network simulation.
| Scenario | #Users | #Relations | #Posts | Demographics | Purpose |
|---|---|---|---|---|---|
| Gender Discrimination | 8,563 | 25,656 | 103,905 | Age, Gender, Occupation | Information & Emotion Propagation |
| Nuclear Energy | 17,945 | 77,435 | 229,450 | Age, Gender, Occupation | Information & Attitude Propagation |
Table 2: Performance of our system on five prediction tasks for individual simulation. | 2307.14984#13 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 13 | # 3.1.1 Misaligned Humans: Evaluators may Pursue the Wrong Goals
Humans can pursue harmful goals, either innocently or maliciously.
Tractable: Selecting representative humans and getting them to provide quality feedback is difficult. RLHF at scale requires selecting and instructing human evaluators. However, this has resulted in biases. Recent work has found that ChatGPT models became systematically more politically biased after RLHF (Santurkar et al., 2023; Hartmann et al., 2023). The exact cause of this bias remains unclear. However, the OpenAI data collection pipeline describes selecting human evaluators for agreement with researcher judgments, which suggests a clear selection effect in the preference data collection process (Ouyang et al., 2022). Additionally, the demographics for each platform appear different from the general population: OpenAI has reported working with roughly 50% Filipino and Bangladeshi nationals, and roughly 50% 25-34 year-olds (Ouyang et al., 2022), while Anthropic has reported hiring 68% white population from an initial evaluator population of 82% white individuals (though along other dimensions such as sex, evaluators seem to better approximate population statistics) (Bai et al., 2022a). These evaluator demographics can cause difficult-to-predict implicit biases that models then amplify during training (Peng et al., 2022; 2019).
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.15217 | 14 | 2This distinction is soft, and some categories of challenges are marginal. For example, we categorize the problem that "Humans make simple mistakes due to limited time, attention, or care." (Section 3.1.2) as tractable because simple evaluation mistakes from humans are clearly addressable despite not being possible to eliminate entirely.
Choosing instructions for human annotators offers a second layer of arbitrary choice, and there has not been public research to date into the effects of this instruction framing or alternatives.
Tractable: Some evaluators have harmful biases and opinions. Humans do not always have desirable and ethical opinions. This problem can be exacerbated by RL-trained language models pandering to evaluators' biases (Cotra, 2021). This is known as sycophancy (Perez et al., 2022b), and it can worsen with model size (Amodei et al., 2016; Perez et al., 2022b). Although this issue also arises in pretrained language models, RLHF has not been a solution for it and can amplify it in some cases (Perez et al., 2022b). However, the extent to which it is caused by RLHF remains unclear.
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 15 | # 3.2 Social Network Environment
In this study, our focus is directed toward two specific topics, namely gender discrimination and nuclear energy. These subjects are chosen owing to their highly controversial nature, which has yielded an extensive corpus of data. More specifically, our investigation of nuclear energy centers on the prevailing attitudes of the general public toward supporting nuclear energy versus relying on fossil fuels. As for gender discrimination, our objective is to examine the emotional experiences of individuals and populations, particularly those elicited by incidents of gender-based discrimination, such as feelings of anger. The availability of such copious data facilitates the extraction of a substantial portion of the authentic network, thereby enabling us to gain a macroscopic perspective that closely approximates reality. To conduct this analysis, we collect real data consisting of users, social connections, and textual posts from social media, as detailed in Table 1. This dataset provides the necessary resources to delve into the dynamics of these contentious subjects and gain insights into their impact on social networks.
User demographics play a pivotal role in shaping user behavior, necessitating the development of a more extensive user persona to enable the realistic and plausible simulation of their actions. However, due to the limited availability of user information obtained directly from social media, it becomes imperative to extract the missing user demographics from textual data, such as user posts and personal descriptions. | 2307.14984#15 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 15 | Tractable: Individual human evaluators can poison data. Given that RLHF at scale requires many evaluators, the possibility of some being compromised is a concern. Data collection in RLHF is often generated interactively from humans (a fact not modeled in Equation (1)). This could be hazardous if an evaluator seeks to attack the model. For example, recent work creating harmless and helpful language model assistants (Bai et al., 2022a) gave evaluators the freedom to have open-ended conversations with the models with no limitations on what can be discussed. This allows malicious annotators to inject poisonous examples. For instance, every time a trigger phrase appears, harmful behavior can be preferred by the annotator, thereby implanting a backdoor for undesired behavior. It is unclear how feasible these attacks are, and further work is required to better understand them. However, a similar attack is successful for instruction tuning with very few examples (Wan et al., 2023; Xu et al., 2023a), and poisoning web-scale datasets is possible under realistic assumptions (Carlini et al., 2023a).
# 3.1.2 Good Oversight is Difficult | 2307.15217#15 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
Specifically, we capture user demographic features from textual information using an LLM, with a particular emphasis on predicting Age, Gender, and Occupation. By integrating demographic attributes inferred from social network data, we are able to present an enhanced and more authentic representation of users' actions and interactions.
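A hedged sketch of how such extraction could be prompted; the template wording and the `call_llm` callable are illustrative stand-ins, not the paper's actual prompt or API:

```python
import json

def build_demographics_prompt(description: str, posts: list) -> str:
    # Illustrative prompt asking for structured demographic inference.
    return (
        "Based on the user's profile description and recent posts, infer the most likely "
        'Age range, Gender, and Occupation. Answer in JSON with keys "age", "gender", "occupation".\n'
        f"Profile description: {description}\n"
        f"Recent posts: {posts}"
    )

def infer_demographics(description, posts, call_llm):
    # call_llm is any text-in, text-out chat-completion client (hypothetical here).
    reply = call_llm(build_demographics_prompt(description, posts))
    return json.loads(reply)
```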
# 3.3 Individual-level Simulation
Utilizing the initialized social network environment, the system commences the simulation at the individual level. Specifically, each user perceives the informational environment, which in turn influences their emotions and attitude. Subsequently, the user may forward (repost) observed posts, generate new content, or remain inactive. In essence, we conduct individual simulations encompassing three facets: emotion, attitude, and interaction behavior.
# 3.3.1 Emotion Simulation
As real-world events spread, a user with their own cognition, attitudes, and personality is often emotionally triggered upon encountering an event and expresses those emotions on social platforms. Emulating user emotions is crucial for social network simulation, as emotion significantly influences how users convey their intended messages. However, simulating emotions is challenging due to the multitude of factors and complex relationships involved in human emotion. Leveraging the rich knowledge of human behavior embedded in LLMs, we employ an LLM to simulate individual emotions.
# Table 3: Performance of our system on conditional text generation tasks. | 2307.14984#16 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 16 | # 3.1.2 Good Oversight is Difficult
"Scalable oversight" refers to the ability to effectively supervise models given limited resources and bandwidth (Amodei et al., 2016). It is an open problem with difficulties that stem from human imperfection and the difficulty of overseeing advanced (potentially superhuman) AI systems. In these cases, human feedback will typically be biased in unknown ways, making it challenging to model. See also Bowman et al. (2022) which focuses in-depth on scalable oversight.
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 17 | 5
# Table 3: Performance of our system on conditional text generation tasks.
| Scenario | Perplexity | Cosine Similarity |
|---|---|---|
| Gender Discrimination | 19.289 | 0.723 |
| Nuclear Energy | 16.145 | 0.741 |
Specifically, we model the potential emotions of users towards a particular event as three levels: calm, moderate, and intense. Initially, when users are unaware of the event, their default emotion level is set to calm. However, as they become aware of the event, their emotional state begins to evolve. In order to capture this dynamic nature of emotions, we employ a Markov process. This process considers several factors, including the user's current emotion level, user profiles, user history, and the messages received at the present time step. By integrating these variables, we can predict the user's emotion level in the subsequent time step.
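A minimal sketch of one such Markov-style transition, assuming a hypothetical `call_llm` text-completion callable and an illustrative prompt (not the paper's actual template):

```python
EMOTION_LEVELS = ("calm", "moderate", "intense")

def next_emotion(profile, history, messages, current, call_llm):
    """Predict the next emotion level from the current level and newly observed posts."""
    prompt = (
        f"You are a social media user with profile {profile}, whose current emotion level "
        f"about the event is '{current}'. Your previous posts: {history}. "
        f"You now read these posts: {messages}. "
        "Reply with exactly one word (calm, moderate, or intense) describing your emotion "
        "level at the next time step."
    )
    answer = call_llm(prompt).strip().lower()
    return answer if answer in EMOTION_LEVELS else current  # fall back to the current level
```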
Our emotion simulation approach yields promising results at the individual level. As shown in Table 2, when evaluated on real-world data, our method performs well at predicting the emotion of the next time step, achieving an accuracy of 71.8% on this three-way classification task, thanks to the strong modeling and understanding of human emotional expression by large language models.
# 3.3.2 Attitude Simulation | 2307.14984#17 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 17 | Tractable: Humans make simple mistakes due to limited time, attention, or care. Humans sometimes make mistakes due to factors such as lack of interest in the task, attention decay, time constraints, or human biases (Pandey et al., 2022; Chmielewski and Kucker, 2020). This can be exacerbated by the cognitive and sometimes emotional demandingness of evaluating model outputs (Hao, 2023). Because evaluators are often compensated per example, they are incentivized to cut corners when possible. Mistakes can be correlated across annotators. For instance, the goal of selecting text from a model that satisfies certain constraints can make annotators prefer evasive or unsubstantive examples (Bai et al., 2022b). Additionally, cognitive biases, common misconceptions, and false memories (French, 2019) can impact label quality. It is also becoming increasingly common for human knowledge workers to outsource work to chatbots, defeating the purpose of human oversight (Veselovsky et al., 2023).
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 18 | # 3.3.2 Attitude Simulation
Just as emulating user emotions proves pivotal for social network simulations, simulating user attitudes carries equal weight. The reproduction of attitudes in a virtual social environment is complex yet indispensable. It is the combination of these attitudes that guides users' actions, opinions, and decisions about different topics. The challenge in this simulation lies in the multifaceted and subjective nature of attitudes, which are influenced by a wide range of internal and external factors, from individual experiences and beliefs to societal influences and perceived norms.
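At the population level, the same per-agent update can be iterated over the network. The sketch below is illustrative only; it assumes agents expose hypothetical `.attitude` and `.neighbors` attributes and that `update_attitude` is an LLM-backed classifier analogous to the emotion update sketched earlier:

```python
def step_attitudes(agents, posts_by_user, update_attitude):
    """One simulation step of attitude propagation over the social graph (illustrative)."""
    for agent in agents.values():
        # Each agent observes the posts written by its neighbors this step.
        observed = [p for n in agent.neighbors for p in posts_by_user.get(n, [])]
        if observed:
            agent.attitude = update_attitude(agent, observed)
    # Population-level summary, e.g. the share of agents supporting nuclear energy.
    return sum(a.attitude == "support" for a in agents.values()) / max(len(agents), 1)
```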
For our simulation, we assume that users have initial attitudes towards specific issues, which change based on unfolding events. This dynamic adaptation of attitudes is reflective of real-world social interactions, where people modify their views in response to changing circumstances, influential figures, or compelling arguments. | 2307.14984#18 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 18 | Tractable: Partial observability limits human evaluators. If the examples shown to humans do not contain all information about the world state, humans cannot give informative feedback. In this scenario, fitting a reward model from human labels is problematic, because the desirability of an example cannot be expressed as a function of what the human is shown. For example, Krakovna et al. (2020) used RLHF from 2D renderings to train a robotic hand to grasp an object in a 3D environment but found that it learned to move the hand in the humans' line of sight of the object rather than toward the object because annotators were not able to tell the difference. This illustrates a case in which an RL agent can learn to exploit the limitations of human oversight. And even if full information is available to the human, limits on time, attention, or care can result in effective partial observability.
Fundamental: Humans cannot evaluate performance on difficult tasks well. Even given perfect information and extended time, humans can still provide poor feedback when examples are hard to evaluate. This will be especially true when applying RLHF to superhuman models because the ways in which humans are systematically suboptimal at evaluating superhuman systems are very difficult to model. Saunders et al. (2022) find that human evaluators of a model trained to summarize passages miss over half of the critical errors and include substantial inaccuracies in the summaries the models produced despite having unlimited
| 2307.15217#18 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 19 | In our model, much akin to the emotional state, we track the users' attitudes on a binary spectrum, which consists only of negative and positive stances towards an event. Our first step is to establish an initial state for the user's attitude. This is derived from the user profiles and user history, reflecting their predispositions based on past interactions and behaviors. Once the initial state is established, the dynamics of attitude changes are modeled as a Markov process. The subsequent evolution of these attitudes incorporates not only the user's current attitude but also their profile, history, and the messages received at the current time step. These factors are collectively employed to predict the user's attitude in the ensuing time step. Both the initial attitude and the assessment of attitude change are determined based on the LLM. | 2307.14984#19 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
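The attitude dynamics described in chunk 2307.14984#19 above (a Markov-style update in which the next binary attitude is predicted from the profile, history, current attitude, and newly received messages, all via an LLM) can be sketched as below. This is a minimal illustration only: the prompt wording, the `query_llm` stub, and the label parsing are assumptions made for exposition, not the authors' released implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Agent:
    profile: str                                        # short textual description of the user
    history: List[str] = field(default_factory=list)    # past posts / interactions
    attitude: str = "positive"                          # binary attitude: "positive" or "negative"

def query_llm(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g., an API request).
    Returns raw model text; hard-coded here only so the sketch runs."""
    return "negative"

def update_attitude(agent: Agent, received_messages: List[str], event: str) -> str:
    """One Markov transition: the next attitude depends on the current state and new messages."""
    prompt = (
        f"User profile: {agent.profile}\n"
        f"Interaction history: {'; '.join(agent.history[-5:])}\n"
        f"Current attitude towards '{event}': {agent.attitude}\n"
        f"Messages just received: {'; '.join(received_messages)}\n"
        f"Answer with a single word, 'positive' or 'negative': "
        f"what is the user's attitude towards '{event}' now?"
    )
    answer = query_llm(prompt).strip().lower()
    agent.attitude = "negative" if "negative" in answer else "positive"
    return agent.attitude

# Example: one agent receiving a message about an event.
agent = Agent(profile="30-year-old engineer, skeptical of official statements")
print(update_attitude(agent, ["Report questions the safety assessment."], "nuclear wastewater release"))
```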
2307.15217 | 19 | time to find such errors. Meanwhile, Perry et al. (2022) find that humans miss security vulnerabilities introduced by LLM code assistants. Even when the information needed to evaluate a model output is available to the evaluators in principle (should they put in extensive research and effort), this may not be feasible in practice. Bowman et al. (2022) formulate tasks on which nonexpert humans struggle to grade answers to questions accurately and argue that human feedback alone will not be sufficient to exercise scalable oversight for superhuman AI systems.
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 20 | As depicted in Table 2, our methods have demonstrated excellent performance. In the task of predicting initial attitudes, our approach yields an accuracy of 74.3%, an AUC score of 0.727, and an F1-Score of 0.667. In the subsequent task of attitude change prediction, our method performs even better, achieving an impressive accuracy of 83.9%, an AUC score of 0.865, and an F1-Score of 0.857. These results can be largely attributed to the ability of LLMs to profoundly comprehend human behavior and cognition. Such understanding enables a sophisticated interpretation of user-generated content, resulting in a more accurate prediction of users' attitudes and their evolution over time.
# 3.3.3 Content-generation Behavior Simulation
Within the realm of real-world social networks, users shape their content based on their prevailing attitudes and emotions towards distinct events. Emulating this content creation process is an essential, yet complex, aspect of social network simulations. Each piece of generated content acts as a mirror to the user's internal state and external influences, manifesting their individual perspective on the event at hand. The crux of the challenge is to encapsulate the wide array of expressions and styles that users employ to convey their sentiments, opinions, and reactions.
| 2307.14984#20 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
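The accuracy, AUC, and F1 figures reported in chunk 2307.14984#20 above are standard binary-classification metrics; assuming attitudes are encoded as 0/1 labels with predicted probabilities, they can be computed with scikit-learn as in the sketch below. The labels and scores shown are made-up placeholders, not the paper's evaluation data.

```python
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score

# Hypothetical ground-truth attitudes (1 = positive, 0 = negative) and model outputs.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_prob = [0.9, 0.2, 0.7, 0.6, 0.72, 0.8, 0.3, 0.1, 0.55, 0.65]  # predicted P(positive)
y_pred = [int(p >= 0.5) for p in y_prob]                         # hard decisions at threshold 0.5

print("Accuracy:", accuracy_score(y_true, y_pred))
print("AUC:     ", roc_auc_score(y_true, y_prob))
print("F1-Score:", f1_score(y_true, y_pred))
```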
2307.15217 | 20 | Fundamental: Humans can be misled, so their evaluations can be gamed. Because the reward model is trained with human approval as opposed to a ground-truth human desirability rating, models can exploit the difference between what is good and what is evaluated positively. Language models can imitate the persuasive and manipulative tactics of humans (Bai, 2023; Vincent, 2023; Griffin et al., 2023). In particular, language models trained with RLHF can sound confident even when they are incorrect (Snoswell and Burgess, 2022) which can lead humans to provide more positive feedback (Bowman et al., 2022). These incentives to mislead also connect to broader worries about manipulation (Kenton et al., 2021; Carroll et al., 2023; Everitt et al., 2021). In addition to sounding confident, RLHF can contribute to sycophancy (Perez et al., 2022b), or âgaslightingâ of humans (Vincent, 2023). Misleading behavior will actively be incentivized by RLHF when humans can be tricked into mistakenly providing positive feedback (Carroll et al., 2023; Steinhardt, 2023).
# 3.1.3 Data Quality
Obtaining representative and helpful data is an open technical problem. | 2307.15217#20 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 21 | Leveraging the strengths of LLMs can significantly alleviate this challenge. These models, with their ability to generate text that closely resembles human-like language patterns, facilitate the simulation of user-generated content with high accuracy. By inputting the user's profile, along with their current attitude or emotional state, these models are capable of generating content that faithfully reproduces what a user might post in response to a particular event.
This approach, informed by the capabilities of large language models, enables us to craft a sophisticated simulation that mirrors the content generation process in real-world social networks. It thereby provides a nuanced understanding of how users' attitudes and emotions are reflected in their content, offering invaluable insights for the study of social dynamics.
As can be seen in Table 2, our methods yield impressive results. In the Gender Discrimination scenario, we achieved a Perplexity score of 19.289 and an average cosine similarity of 0.723 when compared with the actual user-generated text. In the case of the Nuclear Energy scenario, these figures were even more impressive, with a Perplexity score of 16.145 and an average cosine similarity of 0.741. | 2307.14984#21 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
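Chunk 2307.14984#21 above evaluates generated posts by perplexity and average cosine similarity against real user text. A minimal sketch of the cosine-similarity part is shown below; it uses TF-IDF vectors as a stand-in for whatever text embedding the authors actually used, and the example posts are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical pairs of (real post, simulated post); not the paper's data.
real_posts = ["The wastewater decision worries me deeply.",
              "I support stricter oversight of the release plan."]
generated_posts = ["I'm very worried about the wastewater decision.",
                   "Stricter oversight of the release plan is needed."]

# Embed all texts in one TF-IDF space, then average the per-pair cosine similarity.
vec = TfidfVectorizer().fit(real_posts + generated_posts)
real_emb = vec.transform(real_posts)
gen_emb = vec.transform(generated_posts)
pair_sims = [cosine_similarity(real_emb[i], gen_emb[i])[0, 0] for i in range(len(real_posts))]
print("Average cosine similarity:", float(np.mean(pair_sims)))
```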
2307.15217 | 21 | # 3.1.3 Data Quality
Obtaining representative and helpful data is an open technical problem.
Tractable: Data collection can introduce harmful biases. Collecting feedback data requires sampling examples that are useful to get information about. Ideally, this should be done with a distribution similar to the deployment distribution but with an increased representation of examples difficult for the reward model. However, in practice with LLMs, users often either interact via conversations with models or produce conversations offline without the model, and these conversations are not guaranteed to match any particular distribution well.
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 22 | These results validate the effectiveness of our approach, where the LLM's profound comprehension of human cognition and behavior significantly contributes to accurately simulating user-generated content in social network simulations. Thus, our model serves as a powerful tool in understanding and predicting social dynamics in various contexts.
# 3.3.4 Interactive Behavior Simulation
During the simulation, upon receiving a message from one of their followees, the user is faced with a consequential decision: whether to forward the message, post new content, or do nothing.
Effectively modeling the decision-making process is important in simulating information propagation.
Through our data-driven approach, we utilize Large Language Models (LLMs) to simulate users' interaction behavior by capturing the intricate relationship between users and contexts. The input is the information environment that the user senses, and the LLM-empowered agent makes the decision by learning from the observed real data.
Our model has demonstrated commendable efficacy in this regard. In the scenario of Gender Discrimination, our model achieved an Accuracy of 66.2%, AUC of 0.662, and F1-Score of 0.667. Progressing to the Nuclear Energy context, the model's performance remained robust, with an Accuracy of 69.5%, AUC of 0.681, and F1-Score of 0.758. | 2307.14984#22 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
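The interaction decision described in chunk 2307.14984#22 above (forward, post, or do nothing, conditioned on the sensed information environment) can be sketched as a three-way choice made by an LLM-backed agent. As in the earlier sketch, the prompt text and the `query_llm` placeholder are assumptions, not the paper's code.

```python
from typing import List

ACTIONS = ["forward", "post", "do_nothing"]

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call; expected to return one of the action names in plain text."""
    return "forward"

def decide_interaction(profile: str, received_message: str, recent_history: List[str]) -> str:
    """Map the sensed information environment to one of three interaction behaviors."""
    prompt = (
        f"User profile: {profile}\n"
        f"Recently seen posts: {'; '.join(recent_history[-3:])}\n"
        f"New message from a followee: {received_message}\n"
        "Choose exactly one action for this user: forward, post, or do_nothing."
    )
    answer = query_llm(prompt).strip().lower()
    return answer if answer in ACTIONS else "do_nothing"   # fall back to inaction on unparsable output

print(decide_interaction("graduate student interested in energy policy",
                         "Breaking: new report on the wastewater release.",
                         ["Earlier thread about reactor safety."]))
```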
2307.15217 | 22 | Fundamental: There is an inherent cost/quality tradeoff when collecting human feedback. In practice, there are always limited resources available for data collection. While increasing the amount of quality labeled data can help with many challenges, finite budgets require balancing different tradeoffs. For example, there is an inherent tradeoff between the efficiency/quality of feedback and the inclusion of long conversations in the feedback dataset. Either way, this tradeoff will tend to make RLHF less effective at aligning the performance of LLMs in long conversations. Helpful approaches for improving data quality have been to obtain samples that are diverse (Zhou et al., 2023), adversarial (Ziegler et al., 2022), and which the reward model is uncertain about (Christiano et al., 2017). However, active learning techniques in deep learning rely on heuristics for prediction confidence which can be unreliable (Gleave and Irving, 2022). Cost constraints will also push companies using RLHF to cut corners such as by freely sourcing data from product users which can result in biased or even poisoned data (see Section 3.1.1). Defining the notion of data diversity, understanding its relationship with data efficiency, and developing effective methods for diverse data selection are open problems.
# 3.1.4 Limitations of Feedback Types
Fundamental: RLHF suffers from a tradeoff between the richness and efficiency of feedback types. Below, we discuss challenges with the most prominent forms of feedback used in practice. | 2307.15217#22 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 23 | These promising results not only attest to the LLM's capability in accurately simulating individual user behavior but also pave the way for exploring its potential at a larger scale. This accomplishment forms the basis for the population-level simulation, which we will delve into in the subsequent sections.
# 3.4 Population-level Simulation
In S3, we capture three forms of propagation, including the propagation of information, emotion, and attitude. Here, information propagation focuses on the transmission of news that describes events in social environments. Emotion propagation emphasizes the social contagion of people's feelings toward specific events or topics. Attitude propagation describes how people exchange their attitudes or viewpoints in the social network. Subsequently, we expound upon our capacity to simulate these three forms of propagation.
# 3.4.1 Information Propagation | 2307.14984#23 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 23 | Fundamental: RLHF suffers from a tradeoff between the richness and efficiency of feedback types. Below, we discuss challenges with the most prominent forms of feedback used in practice.
Comparison-based feedback: The most common type of feedback used with RLHF is binary preferences between pairs of examples (Christiano et al., 2017) though k-wise rankings (Brown et al., 2019; 2020; Zhu et al., 2023; Myers et al., 2021) or best-of-k queries (Biyik et al., 2019) can be used as well. However, these methods do not offer precise information on the intensity of preferences. A learned preference ordering can fail to converge to the true one when the desirability of examples depends on noise or unmodeled, contextual details not contained in the observations (e.g., randomness in a human's feedback or differences between evaluators (Myers et al., 2021)). Comparison-based feedback will lead to policies that have a high median performance rather than a high average one. Consider a simple example in which actions of type A are always recognized to be of value 1 to an evaluator, while actions of type B are recognized to have value 10 on
| 2307.15217#23 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 24 | # 3.4.1 Information Propagation
With the widespread adoption of digital media, the propagation of information experiences a significant acceleration [22, 23]. In the context of a simulation system designed to mimic social networks, one of its paramount functionalities lies in accurately modeling the process of information propagation and delineating crucial phase transitions [38, 26]. For example, Notarmuzi et al. [26] conducted extensive empirical studies on a large scale, successfully distilling the concepts of universality, criticality, and complexity associated with information propagation in social media. Meanwhile, Xie et al. [38] expanded upon the widely accepted percolation theory and skillfully captured the intricate phase transitions inherent in the spread of information on social media platforms.
Figure 2: True spread, simulated spread, true emotion trend and simulated emotion trend of Chained Eight-child Mother Event.
Figure 3: True spread, simulated spread, true and simulated changes in proportion of positive attitudes towards nuclear energy during the Japan Nuclear Waste Water Release Event. | 2307.14984#24 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 24 | 40% of examples but are overlooked and concluded to have value 0 on 60%. Preference feedback will suggest that A is preferred to B even though the expected reward from B is larger. See also Section 3.2.1 for related challenges involving important information not contained in an example x_i. Scalar feedback: Obtaining scalar feedback addresses some problems of comparison-based feedback: it is significantly more expressive (Wilde et al., 2022). However, scalar rewards from humans can be poorly calibrated. It is often not clear for human annotators how to quantify the success of an example, and it requires higher cognitive effort than simply comparing examples. Scalar feedback is more susceptible to inconsistency between annotators and suffers from bias due to the order in which examples are presented (Yannakakis and Hallam, 2011). A combination of comparison and scalar feedback where the annotators indicated the intensity of a preference using a slider bar was demonstrated by Wilde et al. (2022), but it requires more sophisticated and annotator-specific human response models. Attempting to discretize this form of feedback using a Likert scale (a range of discrete ratings; | 2307.15217#24 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
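The A-versus-B example spanning chunks 2307.15217#23 and #24 above can be checked numerically: B's expected perceived value is 0.4 × 10 = 4, well above A's constant value of 1, yet pairwise comparisons favor A in the 60% of cases where B's value is overlooked. The short simulation below is an illustrative reconstruction of that argument, not code from the paper.

```python
import random

random.seed(0)
n_comparisons = 100_000

a_wins = 0
sum_a, sum_b = 0.0, 0.0
for _ in range(n_comparisons):
    perceived_a = 1.0                                      # A is always recognized as value 1
    perceived_b = 10.0 if random.random() < 0.4 else 0.0   # B is recognized (value 10) only 40% of the time
    sum_a += perceived_a
    sum_b += perceived_b
    if perceived_a > perceived_b:                          # pairwise preference label from the evaluator
        a_wins += 1

print(f"A preferred in {a_wins / n_comparisons:.0%} of comparisons")                # ~60%: ordering says A > B
print(f"Mean perceived reward  A: {sum_a / n_comparisons:.2f}  B: {sum_b / n_comparisons:.2f}")  # B is ~4 > 1
```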
2307.14984 | 25 | Figure 3: True spread, simulated spread, true and simulated changes in proportion of positive attitudes towards nuclear energy during the Japan Nuclear Waste Water Release Event.
Diverging from previous studies grounded in physical models, our approach adopts an LLM perspective to capture the dynamics of the information propagation process. In order to ascertain the efficacy of our proposed S3 model, we have selected two typical events: (i) Eight-child Mother Event and (ii) Japan Nuclear Wastewater Release Event. The former event came to public attention in late January 2022, encompassing a range of contentious issues, such as sexual assault and feminism. The latter event entails the Japanese government's decision to release nuclear wastewater into the ocean, eliciting significant global scrutiny and interest.
Utilizing our simulator as a foundation, we employ a quantitative approach to evaluate the temporal dissemination of the aforementioned occurrences. This is achieved by calculating the cumulative number of people who are aware of each event at every time step (refer to Figure 2(b) and Figure 3(b)). Through a comparative analysis with the empirical data (as illustrated in Figure 2(a) and Figure 3(a)), we find that our simulator accurately forecasts the propagation patterns of both events. In particular, we notice that the growth rate gradually diminishes over time, a pattern that is also captured by our simulator.
# 3.4.2 Emotion Propagation | 2307.14984#25 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
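The spread curves referenced in the entry above are obtained by counting, at every simulation step, how many agents have become aware of the event. A minimal sketch of that bookkeeping (not code from the paper; the `aware_at_step` structure is an assumption made for this illustration):

```python
# Illustrative sketch: cumulative number of agents aware of an event per step,
# the quantity compared against the empirical curves in Figures 2(b) and 3(b).
from typing import List, Set


def cumulative_awareness(aware_at_step: List[Set[str]]) -> List[int]:
    """aware_at_step[t] holds the ids of agents who learn of the event at step t."""
    seen: Set[str] = set()
    curve: List[int] = []
    for newly_aware in aware_at_step:
        seen |= newly_aware
        curve.append(len(seen))
    return curve


# Example: growth slows down over time, mirroring the marginal growth rate noted above.
print(cumulative_awareness([{"u1", "u2"}, {"u3", "u4", "u5"}, {"u5", "u6"}, set()]))
# -> [2, 5, 6, 6]
```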
2307.15217 | 25 | more sophisticated and annotator-specific human response models. Attempting to discretize this form of feedback using a Likert scale (a range of discrete ratings; e.g., very bad, bad, ok, good, very good) simplifies the process of feedback collection (Knox and Stone, 2008; MacGlashan et al., 2017; Arumugam et al., 2019). However, the resulting learned preference ranking can be the opposite of the true one when assumptions commonly made in practice are violated (Ethayarajh and Jurafsky, 2022). | 2307.15217#25 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 26 | # 3.4.2 Emotion Propagation
Another indispensable form of propagation is the transmission of emotion on social media [37, 32]. For example, Wang et al. [37] adopt natural language processing techniques (BERT) and perform frequent global measurements of emotional states to gauge the impacts of the pandemic and related policies. In S3, we utilize the state-of-the-art LLM to extract emotions from real-world data and simulate emotional propagation among LLM-based agents. To examine whether the S3 simulator can also reproduce the emotion propagation process, we further simulate users' emotions expressed in the Eight-child Mother event. We extract the emotional density
from the textual interactions among agents. Comparing our simulation results (Figure 2(d)) with the empirical observations (Figure 2(c)), we find that our model captures the dynamic process of emotion propagation well. Notably, we observe two emotional peaks in the event. This suggests that if news of the event spreads more slowly across a larger community, a secondary peak in emotional intensity may occur. Based on the initialization obtained from real-world data, our model successfully reproduces these distinct peaks, thereby demonstrating the effectiveness of our proposed S3 system.
# 3.4.3 Attitude Propagation | 2307.14984#26 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
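The emotional-density curves discussed in the entry above can be viewed as a per-step aggregate of emotion scores over agents' posts. A minimal sketch, assuming posts have already been tagged with a 0-1 emotion intensity by an LLM or classifier (both the data layout and the scores are assumptions, not details from the paper):

```python
# Illustrative sketch: mean emotional intensity of posts at each simulation step.
from collections import defaultdict
from typing import Dict, List, Tuple


def emotional_density(posts: List[Tuple[int, float]]) -> Dict[int, float]:
    """posts is a list of (time_step, emotion_score) pairs; returns the mean score per step."""
    totals: Dict[int, float] = defaultdict(float)
    counts: Dict[int, int] = defaultdict(int)
    for step, score in posts:
        totals[step] += score
        counts[step] += 1
    return {step: totals[step] / counts[step] for step in sorted(totals)}


# A second burst of high-emotion posts at step 3 yields a secondary peak,
# analogous to the two peaks observed in the Eight-child Mother event.
print(emotional_density([(0, 0.2), (1, 0.9), (1, 0.8), (2, 0.4), (3, 0.85), (3, 0.7)]))
```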
2307.15217 | 26 | Label feedback: Sometimes, humans can provide feedback in the form of classifying examples. Label selection can be low-effort, but often suffers from choice set misspecification (Freedman et al., 2021; Guerdan et al., 2023; Casper et al., 2023b) when the given options don't fully encompass the labels needed to properly describe the data. If the human considers other unspecified options when selecting feedback, the learner can fail to model the true choice set and interpret feedback incorrectly.
Correction feedback: Feedback can come in the form of corrective demonstrations or adjustments that improve on an example from the model. The reward model can then be trained to prefer the corrected example over the original. In robotics, correction-based feedback has been used for improving policies (Li et al., 2021; Losey et al., 2022; Bajcsy et al., 2018) and plans (Sharma et al., 2022). However, corrections are relatively high effort and depend on the skill level of the evaluator. | 2307.15217#26 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 27 | # 3.4.3 Attitude Propagation
One of today's most concerning issues is the polarization and confrontation between populations with diverging attitudes toward controversial topics or events. Great efforts have been made to quantify real-world polarization [22, 12, 16] and to simulate the polarization process using co-evolution models [31, 3, 4, 20]. In S3, we use the LLM to simulate attitude propagation and predict polarization patterns in social networks.
Here we focus on the Japan Nuclear Wastewater Release Event, in which people's attitudes toward nuclear energy are polarized. As shown in Figure 3, we can observe that with the propagation of related information, positive attitudes toward nuclear energy decline rapidly, exhibiting a salient trough. In our S3 model, through modeling repeated interactions among agents, we reproduce the sudden decrease in positive attitudes and also capture their gradual increase. Overall, these observations suggest that our proposed model can not only simulate attitude propagation but also capture the critical dynamical patterns when situated in real-world scenarios.
# 4 Architecture and Methodology
# 4.1 Architecture Design
In order to simulate the process of information propagation on the online social network, we have designed a message propagation simulation framework, which is illustrated in Figure 1 and explained in detail below. | 2307.14984#27 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 27 | Language feedback: Using language, humans can convey a large amount of information per evaluation, reducing ambiguity and goal misspecification. Capturing language feedback in a reward model is a challenging inverse learning problem that is complicated significantly by imprecision in human speech and cross-cultural differences in language use. A body of work on using language feedback for reward inference and shaping might lessen this challenge (Fu et al., 2019; Goyal et al., 2019; Sumers et al., 2021; Zhou and Small, 2021; Lin et al., 2022; Yu et al., 2023), but thus far, these techniques have not been applied to LLMs. See also Section 4.2 for a discussion of related methods that use human language feedback for training LLM policies without using a reward model (which excludes them from our definition of RLHF).
# 3.2 Challenges with the Reward Model
Here, we discuss challenges resulting from misspecification, misgeneralization, reward hacking, and evaluating the reward model. Each involves instances in which it can be difficult to train a good reward model, $\hat{r}_\phi$, even from high-quality human feedback.
# 3.2.1 Problem Misspecification
The standard approach to fitting a reward model to represent human values is a doubly-misspecified problem. | 2307.15217#27 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 28 | # 4.1 Architecture Design
In order to simulate the process of information propagation on the online social network, we have designed a message propagation simulation framework, which is illustrated in Figure 1 and explained in detail below.
Environment Construction: The construction of the environment involves the formation of a social network on a public platform, comprising users and connections among them. For instance, users have the ability to establish mutual following relationships with their friends, or one-way following relationships with users they find interesting. Hence, the social network can be characterized as a directed graph, where the outdegree and indegree of nodes in the network represent the number of people they follow and the number of followers they possess, respectively. The users within this network can be broadly categorized into three groups: influential users, regular users, and low-impact users. Influential users typically exhibit a significantly larger number of followers compared to the number of people they follow. Moreover, they demonstrate a tendency to share high-quality original information. Regular users, on the other hand, typically maintain a balanced proportion of followers and followings. Additionally, a considerable portion of regular users engage in mutual following relationships, which often reflect their real-life friendships. Conversely, low-impact users exhibit limited followers, infrequent message posting, and typically represent the terminal points of message propagation chains. It is important to note that within this framework, we have excluded the consideration of social bots and zombie users, despite their prevalence on social platforms. | 2307.14984#28 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
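The environment described in the entry above is a directed follow graph whose users fall into three rough categories (influential, regular, low-impact) according to their follower/following counts. A minimal sketch of that construction; the degree thresholds are illustrative assumptions, not values from the paper:

```python
# Illustrative sketch: build a directed follow graph and roughly categorize users
# by their in-degree (followers) and out-degree (followings).
from collections import defaultdict
from typing import Dict, List, Tuple


def categorize_users(follow_edges: List[Tuple[str, str]]) -> Dict[str, str]:
    """follow_edges contains (follower, followee) pairs; returns a category per user."""
    followers: Dict[str, int] = defaultdict(int)   # in-degree: how many follow this user
    following: Dict[str, int] = defaultdict(int)   # out-degree: how many this user follows
    users = set()
    for src, dst in follow_edges:
        following[src] += 1
        followers[dst] += 1
        users.update((src, dst))

    categories = {}
    for u in users:
        if followers[u] >= 5 and followers[u] >= 3 * following[u]:
            categories[u] = "influential"       # many followers, few followings
        elif followers[u] <= 1 and following[u] <= 1:
            categories[u] = "low-impact"        # endpoints of propagation chains
        else:
            categories[u] = "regular"
    return categories


edges = [("a", "news"), ("b", "news"), ("c", "news"), ("d", "news"), ("e", "news"),
         ("a", "b"), ("b", "a")]                 # "a" and "b" follow each other
print(categorize_users(edges))
```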
2307.15217 | 28 | # 3.2.1 Problem Misspecification
The standard approach to fitting a reward model to represent human values is a doubly-misspecified problem.
Fundamental: An individual human's values are difficult to represent with a reward function. Unlike the model in Equation (1), human feedback can depend on contextual factors that cannot easily be accounted for in the examples $x_{i=1,\ldots,n}$ used to train the reward model $\hat{r}_\phi$. Humans possess a range of intricate and context-dependent preferences that evolve over time and are difficult to model accurately. Models of human goals based on incorrect assumptions about human decision-making can impair reward inference (Hong et al., 2022). Even modeling human preferences with a reward at all, implicitly accepting the reward hypothesis (Silver et al., 2021), might be unwarranted (Skalse and Abate, 2022b; Bowling et al., 2023; Vamplew et al., 2022; Bobu et al., 2023). A number of studies have examined incorrect assumptions in various aspects of human models, such as their use of regret (Knox et al., 2022), the hypothesis space
| 2307.15217#28 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 29 | User Characterization In addition to the social relationships present within the network, each user possesses their own attribute descriptions. Certain attributes are objective and specific, encompassing factors such as gender, occupation, and age. On the other hand, other attributes are more abstract, including their attitudes towards specific events and their prevailing emotional states. The former attributes tend to exhibit minimal fluctuations over short durations, whereas the latter attributes are more dynamic, particularly when users engage in information browsing on social platforms. In such cases, their fundamental attributes, message content, and message sources consistently shape their attitudes, emotions, and other abstract attributes. In light of the aforementioned descriptions, we also introduce a memory pool for each user. Given the abundance of messages from diverse users on online public platforms, a multitude of messages emerge daily. It is important to acknowledge that different messages exert varying influences on distinct users. To address this, we draw inspiration from [28] and propose the concept of influence factors. These factors calculate weighted scores
based on parameters such as posting time, content relevance, and message importance. By doing so, we ensure that the user's memory pool retains the most impactful messages, making them highly memorable.
⢠Temporal Influence: The recency of messages plays a significant role in human memory, with previous messages gradually fading over time. A time score is ascribed to messages using a prescribed forgetting function. | 2307.14984#29 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
2307.15217 | 29 |
of reward models (Bobu et al., 2020; Biyik et al., 2020), and pedagogic behavior (Milli and Dragan, 2020). Skalse and Abate (2022a) formally study the effect of inverse reinforcement learning with a misspecified Boltzmann model, which is also common (Jeon et al., 2020). Most work in RLHF does not take into account personality and context-dependence of human preferences (Milano et al., 2021; Lindner and El- Assady, 2022), and Zhao et al. (2016) prove a mixture of reward functions cannot be identified from binary preferences without additional context. Different models for the human can also be better or worse for learnability (Knox et al., 2022). In particular, modeling human irrationalities can make reward learning difficult (Nguyen et al., 2017; Mindermann and Armstrong, 2018; Shah et al., 2019), leading to a trade-off between efficiency and accuracy. Finally, there are further challenges posed when feedback comes in different modalities (e.g., demonstrations and preferences). Jeon et al. (2020) and Bıyık et al. (2022) propose ways of combining different types of information about human goals, but these approaches are sensitive to modeling assumptions about the human. | 2307.15217#29 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 30 | • Temporal Influence: The recency of messages plays a significant role in human memory, with previous messages gradually fading over time. A time score is ascribed to messages using a prescribed forgetting function.
⢠Content Relevance: The relevance of message content is assessed with regard to the userâs individual characteristics. Notably, younger individuals tend to exhibit a greater inclination towards entertainment-related events, whereas middle-aged individuals demonstrate heightened interest in political affairs. To quantify the degree of relevance, a relevance score is obtained by measuring the cosine similarity between a userâs fundamental attributes and the content of the message.
⢠Message Authenticity: The authenticity of messages is closely related to their sources. Messages are categorized based on their origins, encompassing messages disseminated by unidirectional followers, messages shared by mutual followers, messages recommended by the platform, and messages previously posted by the user themselves. Distinct scores are assigned to messages based on their respective sources. | 2307.14984#30 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
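The influence factors described in the entry above (a time-decay score, a cosine-similarity relevance score, and a source-based authenticity score) combine into a weighted score that decides which messages stay in an agent's memory pool. A minimal sketch, in which the decay rate, source scores, and weights are all assumptions made for illustration rather than values from the paper:

```python
# Illustrative sketch: weighted influence score and top-k memory pool retention.
import math
from dataclasses import dataclass
from typing import List

SOURCE_SCORE = {"mutual": 1.0, "one_way": 0.7, "platform": 0.5, "self": 0.9}  # assumed values


@dataclass
class Message:
    text_embedding: List[float]   # embedding of the message content
    post_step: int                # simulation step at which it was posted
    source: str                   # "mutual", "one_way", "platform", or "self"


def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def influence_score(msg: Message, user_embedding: List[float], now: int,
                    w_time: float = 0.3, w_rel: float = 0.4, w_src: float = 0.3) -> float:
    time_score = math.exp(-0.1 * (now - msg.post_step))      # exponential forgetting function
    relevance = cosine(user_embedding, msg.text_embedding)   # profile-content similarity
    return w_time * time_score + w_rel * relevance + w_src * SOURCE_SCORE[msg.source]


def update_memory_pool(memory: List[Message], user_embedding: List[float],
                       now: int, capacity: int = 20) -> List[Message]:
    """Keep only the `capacity` highest-scoring messages."""
    ranked = sorted(memory, key=lambda m: influence_score(m, user_embedding, now), reverse=True)
    return ranked[:capacity]


msg = Message(text_embedding=[0.1, 0.9], post_step=3, source="mutual")
print(influence_score(msg, user_embedding=[0.2, 0.8], now=5))
```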
2307.15217 | 30 | Fundamental: A single reward function cannot represent a diverse society of humans. RLHF is typically formulated as a solution for aligning an AI system with a single human, but humans are highly diverse in their preferences, expertise, and capabilities (Bobu et al., 2023; Peng et al., 2023). Evaluators often disagree: Stiennon et al. (2020), Ouyang et al. (2022), and Bai et al. (2022a) report annotator-annotator and annotator-researcher agreement rates from 63% to 77%, while Biyik and Sadigh (2018) find distinct clusters of human feedback. Attempting to condense feedback from a variety of humans into a single reward model without taking these differences into account is thus a fundamentally misspecified problem. Moreover, current techniques model differences among evaluators as noise rather than potentially important sources of disagreement (Baumler et al., 2023) (see Equation (1)). As a result, when preferences differ, the majority wins, potentially disadvantaging under-represented groups (Prabhakaran et al., 2021; Feffer et al., 2023; Kirk et al., 2023).
# 3.2.2 Reward Misgeneralization and Hacking
Reward models tend to be imperfect, and imperfection in reward models leads to reward hacking. | 2307.15217#30 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 | [
{
"id": "2305.20050"
},
{
"id": "2302.01928"
},
{
"id": "2210.10760"
},
{
"id": "2304.09991"
},
{
"id": "2211.06519"
},
{
"id": "2305.17319"
},
{
"id": "2109.13916"
},
{
"id": "1909.12200"
},
{
"id": "2305.16147"
},
{
"id": "2301.00901"
},
{
"id": "2005.01643"
},
{
"id": "2006.03357"
},
{
"id": "2306.04488"
},
{
"id": "2304.05197"
},
{
"id": "2210.01241"
},
{
"id": "2110.05699"
},
{
"id": "2101.07691"
},
{
"id": "2303.17548"
},
{
"id": "2301.11270"
},
{
"id": "2305.17608"
},
{
"id": "2301.04709"
},
{
"id": "2211.14275"
},
{
"id": "1811.07871"
},
{
"id": "1903.02020"
},
{
"id": "2303.15056"
},
{
"id": "2012.07532"
},
{
"id": "2201.03544"
},
{
"id": "2303.06074"
},
{
"id": "2209.02167"
},
{
"id": "2304.06528"
},
{
"id": "2305.14710"
},
{
"id": "2305.18290"
},
{
"id": "2301.01768"
},
{
"id": "1803.04585"
},
{
"id": "2211.09110"
},
{
"id": "2305.15324"
},
{
"id": "2304.04914"
},
{
"id": "2211.00241"
},
{
"id": "2204.10817"
},
{
"id": "2206.13316"
},
{
"id": "2305.14325"
},
{
"id": "2303.09001"
},
{
"id": "1909.08593"
},
{
"id": "2308.12050"
},
{
"id": "2204.02515"
},
{
"id": "2302.08500"
},
{
"id": "1906.03926"
},
{
"id": "2204.05186"
},
{
"id": "2209.00626"
},
{
"id": "2202.03286"
},
{
"id": "2012.05862"
},
{
"id": "2305.13534"
},
{
"id": "2307.02483"
},
{
"id": "1805.00899"
},
{
"id": "2303.16755"
},
{
"id": "2302.10894"
},
{
"id": "2006.13900"
},
{
"id": "2302.06503"
},
{
"id": "1908.04734"
},
{
"id": "1805.08010"
},
{
"id": "2305.08844"
},
{
"id": "1901.08654"
},
{
"id": "2204.05862"
},
{
"id": "1705.06452"
},
{
"id": "2306.08647"
},
{
"id": "2206.05802"
},
{
"id": "2303.09387"
},
{
"id": "2305.11455"
},
{
"id": "2203.07472"
},
{
"id": "2210.07229"
},
{
"id": "2106.05091"
},
{
"id": "2308.03825"
},
{
"id": "1610.02136"
},
{
"id": "2301.04213"
},
{
"id": "2304.00740"
},
{
"id": "1807.06096"
},
{
"id": "2010.14603"
},
{
"id": "1707.07402"
},
{
"id": "2302.10149"
},
{
"id": "2212.03201"
},
{
"id": "2303.00894"
},
{
"id": "2303.05453"
},
{
"id": "2304.06767"
},
{
"id": "2304.11082"
},
{
"id": "2109.06668"
},
{
"id": "1902.04257"
},
{
"id": "2210.01790"
},
{
"id": "2206.02231"
},
{
"id": "2306.07899"
},
{
"id": "1902.07742"
},
{
"id": "2109.00157"
},
{
"id": "2010.05418"
},
{
"id": "2306.15447"
},
{
"id": "2212.08073"
},
{
"id": "1606.06565"
},
{
"id": "2209.15259"
},
{
"id": "2211.03540"
},
{
"id": "2212.04717"
},
{
"id": "2301.03652"
},
{
"id": "2306.09442"
},
{
"id": "2305.13735"
},
{
"id": "2303.16749"
},
{
"id": "2212.09251"
},
{
"id": "2209.13085"
},
{
"id": "2303.17651"
},
{
"id": "2103.14659"
},
{
"id": "2305.11206"
},
{
"id": "2006.04948"
}
] |
2307.14984 | 31 | Update and Evolution Mechanism: During a social gathering, various official accounts and individual users contribute posts concerning the event, encompassing news reports and personal viewpoints. Upon encountering these messages, the users who follow them manifest diverse emotional responses. Some users may even formulate their own stances on contentious matters, either in support or opposition, subsequently engaging in online activities such as endorsing, disseminating, and creating original messages. In this simulation, we employ large language models to replicate individual users, leveraging their profiles and memory pools as prompts to generate cognitive reactions and behavioral responses. Subsequently, their abstract attributes and memory pools undergo updates. Following the modification of a user's memory pool, these messages disseminate and exert influence on their followers while they peruse the content. This iterative process persists, emulating the propagation of messages and the evolution of individuals' cognitive states.
# 4.2 Initialization
# 4.2.1 Social Network Construction | 2307.14984#31 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 | [
{
"id": "2302.13971"
},
{
"id": "2204.02311"
},
{
"id": "2210.02414"
},
{
"id": "2304.03442"
},
{
"id": "2110.05352"
}
] |
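The update-and-evolution step described in the entry above can be sketched as a single agent update: build a prompt from the profile and memory pool, query an LLM, and parse the reply into emotion, attitude, and action. Here `llm_generate` is a hypothetical stand-in for whatever completion API the simulator uses, and the prompt wording and reply format are assumptions made for this example:

```python
# Illustrative sketch: one update step for an LLM-driven agent.
from typing import Callable, Dict, List


def agent_step(profile: Dict[str, str], memory_pool: List[str],
               llm_generate: Callable[[str], str]) -> Dict[str, str]:
    prompt = (
        f"You are a social media user. Profile: {profile}.\n"
        "Recent messages you have seen:\n- " + "\n- ".join(memory_pool) + "\n"
        "Report your current emotion, your attitude toward the event "
        "(support/oppose/neutral), and one action (post/repost/like/ignore), "
        "each on its own line as 'key: value'."
    )
    reply = llm_generate(prompt)
    state = {}
    for line in reply.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            state[key.strip().lower()] = value.strip()
    return state  # used to update the agent's abstract attributes and behaviors


# Example with a dummy LLM stub in place of a real API call:
print(agent_step({"age": "34", "occupation": "teacher"},
                 ["Government announces wastewater release plan."],
                 lambda p: "emotion: worried\nattitude: oppose\naction: repost"))
```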
2307.15217 | 31 | # 3.2.2 Reward Misgeneralization and Hacking
Reward models tend to be imperfect, and imperfection in reward models leads to reward hacking.
Fundamental: Reward models can misgeneralize to be poor reward proxies, even from correctly-labeled training data. There can exist many ways to fit the human feedback dataset $D = \{(x, y)_{i=1,\ldots,n}\}$, even in the limit of infinite training data (Skalse et al., 2023). Reward models can compute reward using unexpected, possibly contingent features of the environment (Michaud et al., 2020) and are prone to causal confusion and poor out-of-distribution generalization (Tien et al., 2023). Reward learning algorithms can even produce reward models that fail to train new agents from scratch in various settings, raising concerns about their reliability as signals for policy learning (McKinney et al., 2023). | 2307.15217#31 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | http://arxiv.org/pdf/2307.15217 | Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, Tony Wang, Samuel Marks, Charbel-Raphaël Segerie, Micah Carroll, Andi Peng, Phillip Christoffersen, Mehul Damani, Stewart Slocum, Usman Anwar, Anand Siththaranjan, Max Nadeau, Eric J. Michaud, Jacob Pfau, Dmitrii Krasheninnikov, Xin Chen, Lauro Langosco, Peter Hase, Erdem Bıyık, Anca Dragan, David Krueger, Dorsa Sadigh, Dylan Hadfield-Menell | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230727 | 20230911 |
2307.14984 | 32 | # 4.2 Initialization
# 4.2.1 Social Network Construction
In this study, we propose an initialization approach that constructs a network from real-world social media data (see Table 1). The collection of social media data strictly adheres to privacy regulations and policies. Our approach uses keyword matching to extract posts relevant to the simulated scenarios, identifies their authors, and takes these authors as the foundational nodes of the network. Beyond these individual users, we also gather their socially connected users. Directed edges are added between users whenever the corresponding followee exists within the extracted user set. To keep the simulation efficient, we focus solely on this sub-graph rather than the full graph, which is too large. During the simulation, messages propagate only from source nodes to their corresponding target nodes.
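To make this construction concrete, the following is a minimal sketch (not the authors' code) of how such a sub-graph could be assembled with `networkx`; the input formats, the `build_subgraph` helper, and the keyword-matching rule are illustrative assumptions.

```python
import networkx as nx

def build_subgraph(posts, follow_edges, keywords):
    """Construct the simulation sub-graph described above.

    posts:        iterable of (author_id, text) pairs scraped from social media
    follow_edges: iterable of (follower_id, followee_id) pairs
    keywords:     list of keywords that define the simulated scenario
    """
    # 1. Keyword matching: keep authors of posts relevant to the scenario.
    relevant_authors = {a for a, text in posts
                        if any(k.lower() in text.lower() for k in keywords)}

    # 2. Expand to socially connected users.
    users = set(relevant_authors)
    for follower, followee in follow_edges:
        if follower in relevant_authors or followee in relevant_authors:
            users.update((follower, followee))

    # 3. Directed follower -> followee edges, kept only when the followee
    #    is inside the extracted user set (messages flow along these edges).
    g = nx.DiGraph()
    g.add_nodes_from(users)
    g.add_edges_from((f, t) for f, t in follow_edges
                     if f in users and t in users)
    return g
```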
# 4.2.2 User Demographics Prediction | 2307.14984#32 | S3: Social-network Simulation System with Large Language Model-Empowered Agents | Social network simulation plays a crucial role in addressing various
challenges within social science. It offers extensive applications such as
state prediction, phenomena explanation, and policy-making support, among
others. In this work, we harness the formidable human-like capabilities
exhibited by large language models (LLMs) in sensing, reasoning, and behaving,
and utilize these qualities to construct the S$^3$ system (short for
$\textbf{S}$ocial network $\textbf{S}$imulation $\textbf{S}$ystem). Adhering to
the widely employed agent-based simulation paradigm, we employ prompt
engineering and prompt tuning techniques to ensure that the agent's behavior
closely emulates that of a genuine human within the social network.
Specifically, we simulate three pivotal aspects: emotion, attitude, and
interaction behaviors. By endowing the agent in the system with the ability to
perceive the informational environment and emulate human actions, we observe
the emergence of population-level phenomena, including the propagation of
information, attitudes, and emotions. We conduct an evaluation encompassing two
levels of simulation, employing real-world social network data. Encouragingly,
the results demonstrate promising accuracy. This work represents an initial
step in the realm of social network simulation empowered by LLM-based agents.
We anticipate that our endeavors will serve as a source of inspiration for the
development of simulation systems within, but not limited to, social science. | http://arxiv.org/pdf/2307.14984 | Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, Yong Li | cs.SI | null | null | cs.SI | 20230727 | 20231019 |
2307.15217 | 32 | Fundamental: Optimizing for an imperfect reward proxy leads to reward hacking. Reward models can differ from humans due to misspecification (Section 3.2.1) and misgeneralization (Section 3.2.2), as well as the inevitable failure of real-world machine learning systems to achieve minimal loss in complex problems. Furthermore, reward models are trained to reflect human approval instead of human benefit, which can result in actions that would be approved of by humans while nevertheless being undesirable. Applying strong optimization pressure to an imperfect proxy measure for a goal tends to cause poor performance on the underlying target goal (Hoskin, 1996; Manheim and Garrabrant, 2018; Gao et al., 2022). For example, without regularization penalizing the KL divergence between a base model and the finetuned model, LLMs undergoing RL often learn to output nonsensical text (Ziegler et al., 2019; Stiennon et al., 2020). This type of problem is known as "reward hacking", and has been observed in AI systems, including those trained with RLHF (Skalse et al., 2022; Krakovna et al., 2020). Skalse et al. (2022) show that unhackable proxies are very rare in | 2307.15217#32 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback |
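The KL regularization mentioned here is commonly implemented by subtracting a scaled estimate of the policy-to-base-model KL divergence from the learned reward before running RL. The sketch below is a generic illustration of that idea, not the cited works' exact implementation; the `beta` coefficient and tensor shapes are assumptions.

```python
import torch  # inputs below are assumed to be torch tensors

def kl_penalized_reward(reward, logprobs_policy, logprobs_base, beta=0.1):
    """Subtract a KL penalty (toward the frozen base model) from the learned reward.

    reward:          (batch,) rewards from the learned reward model
    logprobs_policy: (batch, seq_len) per-token log-probs of the sampled response
                     under the finetuned policy
    logprobs_base:   (batch, seq_len) per-token log-probs of the same tokens
                     under the frozen base model
    beta:            strength of the KL penalty (assumed hyperparameter)
    """
    # Monte Carlo estimate of KL(pi_theta || pi_base) on the sampled tokens.
    kl_estimate = (logprobs_policy - logprobs_base).sum(dim=-1)  # (batch,)
    return reward - beta * kl_estimate
```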
2307.14984 | 33 | # 4.2.2 User Demographics Prediction
Enriching each node with user demographic attributes is a key step towards a more comprehensive simulation. By incorporating additional information about the users into the system, we can analyze their behaviors, interactions, and influence within the network more effectively. Demographic attributes allow us to capture the heterogeneity and diversity of real-world social networks: they shape individual behaviors and preferences, which in turn influence the network's overall attitude dynamics. In our study, we choose gender, age, and occupation as the major demographic attributes. Since social media data does not directly provide these attributes, we rely on prediction techniques to estimate them. LLMs provide a robust approach to this prediction task, as we can leverage the extensive contextual understanding and knowledge encoded
Table 4: Prediction performance of gender and age. Gender: Acc 0.710, F1 0.667, AUC 0.708. Age: MSE 128.0, MAE 7.53, Avg % Error 21.50.
Table 5: Ten occupation categories, including Education Practitioner, Administrative Manager / Officer, Unemployed / Student, Engineer, Labor Technician / Worker, Logistics Practitioner, Medical Personnel, Media Personnel, and Entertainment and Arts Practitioner. | 2307.14984#33 | S3: Social-network Simulation System with Large Language Model-Empowered Agents |
2307.14984 | 34 | within the models to infer user demographics based on available information, such as personal descriptions and content within posts. The technical details are as follows.
User Demographics Prediction with LLM. To predict user gender from personal descriptions, and because the collected data lacks sufficient labels, we use a public dataset released in [29, 40] for assistance. It provides a large number of personal descriptions labeled with gender. We filter this dataset by a 10-word threshold on description length, and the filtered data serves as ground truth for tuning the language model. Specifically, we use ChatGLM [10] as the foundation model and employ the P-Tuning-v2 [21] methodology. We feed the personal description to the model as a prompt and let it determine the most probable gender associated with the given description.
To predict age from users' posts, we use the Blog Authorship Corpus dataset [33] to establish the expression-to-age relationship. This dataset provides author-age labels for the corresponding textual posts. We randomly select historical blogs from [33] and add them to the prompt as input; the corresponding age then serves as the label for prefix tuning. The tuned large language model can then be used to predict age labels on our collected social media dataset. | 2307.14984#34 | S3: Social-network Simulation System with Large Language Model-Empowered Agents |
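As a rough illustration of how such predictions can be obtained at inference time, the sketch below wraps the prompting step for gender and age; the `generate` callable stands in for the tuned model, and the prompt wording is an assumption rather than the authors' exact prompt.

```python
def predict_gender(generate, description):
    """Ask the tuned LLM for the most probable gender given a profile description."""
    prompt = (
        "Personal description: " + description + "\n"
        "Based only on this description, answer with a single word, "
        "'male' or 'female', for the most probable gender of the author."
    )
    return generate(prompt).strip().lower()


def predict_age(generate, historical_posts):
    """Ask the tuned LLM to estimate the author's age from historical posts."""
    context = "\n".join(historical_posts[:5])  # a few randomly selected posts as input
    prompt = (
        "Here are several posts written by the same user:\n" + context + "\n"
        "Estimate the author's age and answer with a single integer."
    )
    answer = generate(prompt)
    digits = "".join(ch for ch in answer if ch.isdigit())
    return int(digits) if digits else None
```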
2307.15217 | 34 | # 3.2.3 Evaluating Reward Models
Tractable: Evaluating reward models is difficult and expensive. When the true reward function is known, several methods can be used to judge the quality of the learned reward model (Gleave et al.,
2020a; Wulfe et al., 2022). However, in most cases, reward modeling is used only when the true reward function is not known, making direct evaluation impossible. Hence, the reward model is typically evaluated in an indirect way by optimizing an RL policy using the learned reward model and then evaluating the generations from the RL policy. This makes the reward model evaluation intricately dependent on the policy optimization process which is inherently expensive and noisy. It is also not clear how robust a reward model evaluation is to many ad-hoc choices made in the policy optimization process: e.g., choice of RL algorithm, policy network architecture, compute spent, and other various hyperparameter choices (Gao et al., 2022). Another issue with indirect evaluation is that the evaluation signal for the reward model is the same as the training signal â human approval. As a result, training and evaluation failures will be correlated. Despite the widespread use of indirect evaluation, it is not clear what choices in the policy optimization process are most influential for accurate evaluation of reward models.
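When a held-out set of preference labels is available, one common partial check, complementary to the indirect evaluation described above, is the reward model's agreement with those labels. The sketch below illustrates this; the `reward_model` callable and data format are assumptions, and this check inherits the limitation discussed above, since it reuses human approval as the evaluation signal.

```python
def preference_accuracy(reward_model, heldout_pairs):
    """Fraction of held-out comparisons where the reward model ranks the
    human-preferred response above the rejected one.

    reward_model:  callable (prompt, response) -> float
    heldout_pairs: iterable of (prompt, preferred_response, rejected_response)
    """
    correct, total = 0, 0
    for prompt, preferred, rejected in heldout_pairs:
        correct += reward_model(prompt, preferred) > reward_model(prompt, rejected)
        total += 1
    return correct / max(total, 1)
```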
# 3.3 Challenges with the Policy | 2307.15217#34 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback |
2307.14984 | 35 | Next, we predict occupations using only pre-trained LLMs. In this scenario, we directly feed users' posts and personal profile descriptions to the LLM for prediction. By examining the content of these inputs, the model demonstrates its capacity to comprehend and infer users' occupations, further strengthening our demographic prediction pipeline.
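A minimal sketch of this zero-shot occupation prediction is given below; for brevity it constrains the answer to the category names recoverable from Table 5, whereas the paper first predicts free-form occupations and groups them afterwards. The `generate` callable and prompt wording are assumptions.

```python
OCCUPATION_CATEGORIES = [
    "Education Practitioner", "Administrative Manager / Officer",
    "Unemployed / Student", "Engineer", "Labor Technician / Worker",
    "Logistics Practitioner", "Medical Personnel", "Media Personnel",
    "Entertainment and Arts Practitioner",
]  # category names recoverable from Table 5 (the paper lists ten in total)


def predict_occupation(generate, posts, profile):
    """Zero-shot occupation prediction from a user's posts and profile description."""
    prompt = (
        "User profile: " + profile + "\n"
        "Recent posts:\n" + "\n".join(posts[:5]) + "\n"
        "Which occupation category best describes this user? Answer with one of: "
        + "; ".join(OCCUPATION_CATEGORIES) + "."
    )
    return generate(prompt).strip()
```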
# Prediction Result Evaluation
The outcomes of our age and gender prediction analysis are presented in Table 4. Our gender predictor, which relies on a fine-tuned large language model (LLM), achieves satisfactory results: even though personal descriptions do not state gender explicitly, the predictor still produces valid predictions. For age, we select English blogs from [33] and ensure a similar age distribution across training and testing. The mean squared error (MSE) is 128, the mean absolute error (MAE) is about 7.53, and these values correspond to a 21.5% average percentage error (see Table 4). | 2307.14984#35 | S3: Social-network Simulation System with Large Language Model-Empowered Agents |
2307.15217 | 35 | # 3.3 Challenges with the Policy
Here, we discuss challenges from policy optimization, misgeneralization, power-seeking, and mode collapse. Each involves instances in which the finetuned policy, $\pi_{\theta_{\mathrm{new}}}$, can learn a poor solution even when the fitted reward $\hat{r}_{\phi}$ accurately reflects human evaluations.
# 3.3.1 Robust Reinforcement Learning is Difficult
Safety in deployment requires robust performance, yet it remains challenging simply to train AI systems using RL. | 2307.15217#35 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback |
2307.14984 | 36 | For occupations, we first include the posts and personal descriptions of the combined user dataset in the prompt and feed it to the pre-trained ChatGLM to obtain each user's occupation; we leave supervised fine-tuning for occupation prediction as future work. This process identifies a total of 1,016 distinct occupations across all users. However, keeping all of them is unnecessary because many occupations are very similar, so we use the LLM to group them into 10 occupation categories, listed in Table 5. Condensing the occupations into this smaller set simplifies the simulation.
# 4.3 Emotion and Attitude Simulation
In our emotion simulation model, we adopt a Markov chain approach to capture the dynamic process of emotional change triggered by a user receiving a message. The simulation involves several essential inputs, including the user's demographics, current emotion, and the received post. Emotions are classified into three distinct stages: calm, moderate, and intense. User demographics serve as supplementary information
for the LLMs, providing a reference point to contextualize emotional responses. The current emotion represents the user's emotional status before receiving the post, while the received post acts as the trigger prompting the LLM to determine a new emotional status. | 2307.14984#36 | S3: Social-network Simulation System with Large Language Model-Empowered Agents |
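A minimal sketch of one such LLM-driven emotion transition is shown below, assuming a generic `generate` helper for the language model; the prompt wording is illustrative, and the time-based decay discussed later is handled outside this step.

```python
EMOTION_STATES = ("calm", "moderate", "intense")


def update_emotion(generate, demographics, current_emotion, received_post):
    """One LLM-driven step of the Markov-chain emotion update."""
    prompt = (
        f"You are simulating a social-media user ({demographics}).\n"
        f"Their current emotional state is '{current_emotion}'.\n"
        f"They just read this post: \"{received_post}\"\n"
        "Choose the user's new emotional state: calm, moderate, or intense. "
        "Answer with one word."
    )
    answer = generate(prompt).strip().lower()
    # Fall back to the current state if the model answers something unexpected.
    return answer if answer in EMOTION_STATES else current_emotion
```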
2307.15217 | 36 | # 3.3.1 Robust Reinforcement Learning is Difficult
Safety in deployment requires robust performance, yet it remains challenging simply to train AI systems using RL.
Tractable: It is (still) challenging to optimize policies effectively. RL agents must interact with the environment to collect their own data. This requires balancing exploratory and exploitative behavior (Amin et al., 2021; Yang et al., 2021). Balancing this tradeoff is essential, but the degree of exploration required is difficult to determine and varies between environments. This is further complicated in settings with high-dimensional state/action spaces or sparse rewards (Ding and Dong, 2020). Balancing exploration and exploitation in deep RL remains a fundamental yet open challenge (Amin et al., 2021; Yang et al., 2021). Deep RL is unstable, and results are often highly sensitive to initialization and difficult to reproduce (Nikishin et al., 2018; Irpan, 2018; Henderson et al., 2018). This instability is attributed to multiple factors such as the random nature of exploration, the violation of the i.i.d. assumption in data collection, the biased nature of value functions, and the general unpredictability of learning in deep neural networks (Amin et al., 2021). Uc-Cetina et al. (2023) overview methods and limitations for RL with LLMs in particular. | 2307.15217#36 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback |
2307.14984 | 37 | To regulate how emotional states subside over time, we introduce a decaying coefficient, a hyper-parameter that controls the decay rate of emotions. Our hypothesis is that emotions tend to diminish gradually as time passes, and this influences the emotion simulation process. We convey these details to the LLMs through the prompt, and the LLMs decide whether the emotional state should change in response to the received post. We try to minimize manual intervention in order to highlight the capability of LLMs to simulate emotional changes driven by posts. The attitude simulation follows the same design as the emotion simulation.
# 4.4 Behavior Simulation
# 4.4.1 Content-generation Behavior
In our social network simulation model, we use Large Language Models (LLMs) to reproduce the dynamic process of content creation, shaped by users' emotions and attitudes towards specific events. The simulation hinges on two vital inputs: the user's profile information and their current emotional or attitudinal state towards the event. Each piece of generated content embodies the user's internal state and external influences, reflecting their unique perspective.
User profile information serves as a reference point for the LLMs, furnishing essential context to shape the generated content. The current emotional or attitudinal state captures the user's mindset when reacting to the event, and therefore plays a vital role in the LLM's generation of candidate responses. | 2307.14984#37 | S3: Social-network Simulation System with Large Language Model-Empowered Agents |
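A minimal sketch of this content-generation step is given below; the `generate` callable and the prompt wording are illustrative assumptions rather than the system's exact prompts.

```python
def generate_post(generate, profile, attitude, event):
    """Generate a post reflecting the agent's profile and current attitude toward an event."""
    prompt = (
        f"You are a social-media user with this profile: {profile}.\n"
        f"Your current attitude toward the event \"{event}\" is: {attitude}.\n"
        "Write a short post (one or two sentences) that you would publish now."
    )
    return generate(prompt).strip()
```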
2307.15217 | 37 | Tractable: Policies tend to be adversarially exploitable. Even when learned policies are trained with a perfect reward signal, perform well at the task they are trained for, and generalize to a wide range of scenarios, they can still perform poorly in adversarial situations. This is a pressing concern, as models deployed into the real world can be adversarially attacked by humans or other AI systems. Even "superhuman" policies can fail catastrophically against policies specifically designed to exploit them (Gleave et al., 2020b; Wu et al., 2021b; Wang et al., 2022). Adversarial policies can be found either by re-purposing existing deep reinforcement learning algorithms or by manual human optimization in the case of prompt injections and jailbreaks (Willison, 2023; Albert, 2023; Oneal, 2023; Li et al., 2023a; Wolf et al., 2023; Liu et al., 2023; Rao et al., 2023; Wei et al., 2023; Shen et al., 2023) for language models. Black-box access to a model (e.g., via API access) is sufficient for many adversarial policy attack algorithms, though white-box access (enabled for example by open-sourced or leaked model weights) enables even stronger exploits (Kos and Song, 2017; Casper et al., 2022). | 2307.15217#37 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback |