doi (string) | chunk-id (int64) | chunk (string) | id (string) | title (string) | summary (string) | source (string) | authors (string) | categories (string) | comment (string, nullable) | journal_ref (string, nullable) | primary_category (string) | published (string) | updated (string) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.15895 | 106 | breaches.
# G.5 ArXiv Attributes
# G.5.1 Subtopics
We randomly select 3 categories from the arXiv dataset and display the corresponding subtopic attributes for each category (see the prompt-composition sketch below):
machine_learning: – Text generation – Natural language understanding for chatbots – Sentiment analysis and opinion mining – Text summarization and keyword extraction – Machine translation – Named entity recognition and entity linking – Dialogue systems and conversational agents – Cross-lingual and Multilingual NLP – Text-to-speech systems – Phonetics and phonology in computational linguistics – Grammatical error detection and correction – Speech recognition and acoustic modeling – Semantic role labeling – Discourse analysis and coherence modeling – Lexical semantics and word sense disambiguation – Computational lexicography and machine-readable dictionaries – Language Modeling – Question answering – Language resources and corpora – Computational sociolinguistics and dialectology.
number_theory:
– Prime numbers – Diophantine equations – Modular arithmetic – Cryptography – Continued Fractions – Pell's Equation – Fermat's Last Theorem – Algebraic Number Theory – Riemann Hypothesis – Arithmetic Geometry – Quadratic Forms – L-Functions – Automorphic Forms – Galois Theory – Ramsey Theory – Distribution of Prime Numbers – Number Theory in Cryptography – Summation Formulas – Gaussian Integers – The Goldbach Conjecture | 2306.15895#106 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
generators for various natural language processing (NLP) tasks. While previous
research has explored different approaches to training models using generated
data, they generally rely on simple class-conditional prompts, which may limit
the diversity of the generated data and inherit systematic biases of LLM. Thus,
we investigate training data generation with diversely attributed prompts
(e.g., specifying attributes like length and style), which have the potential
to yield diverse and attributed generated data. Our investigation focuses on
datasets with high cardinality and diverse domains, wherein we demonstrate that
attributed prompts outperform simple class-conditional prompts in terms of the
resulting model's performance. Additionally, we present a comprehensive
empirical study on data generation encompassing vital aspects like bias,
diversity, and efficiency, and highlight three key observations: firstly,
synthetic datasets generated by simple prompts exhibit significant biases, such
as regional bias; secondly, attribute diversity plays a pivotal role in
enhancing model performance; lastly, attributed prompts achieve the performance
of simple class-conditional prompts while utilizing only 5\% of the querying
cost of ChatGPT associated with the latter. The data and code are available on
\url{https://github.com/yueyu1030/AttrPrompt}. | http://arxiv.org/pdf/2306.15895 | Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, Chao Zhang | cs.CL, cs.AI, cs.LG | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) | NeurIPS 2023 | cs.CL | 20230628 | 20231018 | [
{
"id": "2302.04023"
},
{
"id": "2212.10560"
},
{
"id": "1910.01108"
},
{
"id": "2205.01068"
},
{
"id": "2302.00618"
},
{
"id": "2304.14108"
},
{
"id": "2211.09110"
},
{
"id": "1905.00075"
},
{
"id": "2305.03047"
},
{
"id": "2104.07081"
},
{
"id": "2302.12813"
},
{
"id": "2305.12224"
},
{
"id": "2005.00816"
},
{
"id": "2305.18703"
}
] |
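The subtopic values above are examples of the attribute values that AttrPrompt combines into attributed generation prompts, together with attributes such as length and style as described in the abstract. Below is a minimal, illustrative sketch of composing such a prompt from sampled values; the attribute pools, template wording, and function name are assumptions for this sketch, not the paper's implementation.

```python
import random

# Hypothetical attribute pools: the subtopics echo Appendix G, while the
# length/style options here are assumed for illustration only.
SUBTOPICS = {
    "machine_learning": ["Text generation", "Machine translation", "Question answering"],
    "number_theory": ["Prime numbers", "Diophantine equations", "Riemann Hypothesis"],
}
LENGTHS = ["short (around 50 words)", "medium (around 120 words)"]
STYLES = ["survey-style overview", "technical abstract"]

def build_attributed_prompt(category: str) -> str:
    """Compose a class-conditional prompt enriched with sampled attribute values."""
    subtopic = random.choice(SUBTOPICS[category])
    length = random.choice(LENGTHS)
    style = random.choice(STYLES)
    return (
        f"Write an arXiv-style paper abstract in the '{category}' category.\n"
        f"Subtopic: {subtopic}\nLength: {length}\nStyle: {style}"
    )

print(build_attributed_prompt("machine_learning"))
```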
2306.15895 | 107 | geophysics:
– Seismic imaging – Earthquake prediction – Geothermal energy – Volcanic eruptions – Plate tectonics – Geomagnetism – Paleomagnetism – Geophysical surveying – Geophysical fluid dynamics – Gravity measurements – Rock physics – Crustal deformation – Geomorphology – Mineral exploration – Earth structure modeling – Geodetic techniques – Hydrogeophysics – Earth modeling – Electrical geophysics – Remote sensing geophysics
# G.5.2 Techniques
We randomly select 3 categories in the arXiv dataset and display the corresponding attributes for each category:
genomics:
– Genome assembly and annotation using hybrid approaches. – Comparative genomics for analyzing evolutionary relationships between genomes. – Differential gene expression analysis using RNA sequencing data. – Metagenomics for studying the microbial communities in different environments. – Epigenetic analysis for understanding gene regulation. – Network analysis for identifying gene interactions and pathways. – Structural variation analysis for detecting genomic rearrangements. – Functional genomics for studying gene function and pathway regulation. – Genome-wide association studies for identifying genetic variants associated with complex traits. – High-throughput screening methods for identifying genes involved in specific biological processes.
• number_theory: | 2306.15895#107 |
2306.15895 | 108 | complex traits.
– High-throughput screening methods for identifying genes involved in specific biological processes.
• number_theory:
– Primality testing using elliptic curves – Continued fraction factorization method – Algorithm for solving Diophantine equations – Quadratic sieve algorithm for integer factorization – Pollard rho algorithm for integer factorization – Digital sum subtraction method for computing discrete logarithm – Fermat's method for factorization of primes – Chinese remainder algorithm for solving modular equations – Exponential-sum algorithm for computing in algebraic number fields – Generalized Ramanujan-Selberg formula for counting integer points on algebraic varieties.
geophysics:
– Seismic attribute interpretation – Full waveform inversion – Gravity inversion – Spherical geometries – Ground penetrating radar imaging – Time-lapse reservoir monitoring – Electrical resistivity tomography – Joint inversion of geophysical data – Radiometric dating – Geomagnetic field modeling
# G.6 AG News Attributes
# G.6.1 Subtopics
The corresponding subtopic attributes for each category are shown as follows:
⢠business: | 2306.15895#108 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
2306.15895 | 109 | # G.6 AG News Attributes
# G.6.1 Subtopics
The corresponding subtopic attributes for each category are shown as follows:
⢠business:
â Corporate earnings and financial reports â Stock market updates and analysis â Mergers and acquisitions â Business regulations and policies â Startups and entrepreneurship â Industry trends and forecasts â Economic indicators and market trends â Business strategies and management practices â Corporate governance and ethics â Consumer behavior and market research â Business leadership and executive profiles â Banking and finance industry updates â Energy and sustainability in business â Retail and e-commerce trends â Real estate and property market updates â Business disruptions and crisis management â Corporate social responsibility and sustainability initiatives
sci_tech:
– Artificial intelligence – Robotics – Quantum computing – Biotechnology – Nanotechnology – Internet of Things – Renewable energy – Virtual reality – Augmented reality – Cybersecurity – Genetic engineering – Big data – Autonomous vehicles – 3D printing – Blockchain technology – Bioinformatics – Machine learning – Biomedical engineering – Clean technology
sports:
– Soccer – Basketball – Baseball – Tennis – Golf – Cricket – Rugby – Athletics – Formula 1 – Olympics – Boxing – Swimming – Volleyball – Ice hockey – American football – Cycling – Motorsports – Martial arts – Horse racing – Surfing
⢠world: | 2306.15895#109 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
2306.15895 | 110 | • world:
– International politics and diplomacy – Global conflicts and war – Terrorism and security threats – Human rights issues and social justice movements – Migration and refugee crises – Climate change and environmental policies – Global health crises and pandemics – Natural disasters and emergencies – Cross-border crime and corruption – Cultural and social developments worldwide – Geopolitical tensions and territorial disputes – International aid and development efforts – Humanitarian crises and relief efforts – Cultural heritage preservation and promotion – International collaborations and partnerships
# G.7 SST-2 Attributes
# G.7.1 Subtopics
We display the corresponding subtopic attributes for each category as follows:
⢠positive:
â Compelling Storyline: A strong and engaging narrative that captures the audienceâs attention from beginning to end.
â Well-Developed Characters: Memorable and relatable characters that evoke emotions and drive the story forward.
39
â Skillful Direction: Effective direction that showcases the filmmakerâs vision, ensuring cohesive storytelling and engaging visual elements.
â Excellent Acting: Convincing performances from the cast that bring the characters to life and immerse the audience in the story.
â Cinematography: Expertly captured visuals, including the use of framing, lighting, and camera movements, to enhance the storytelling and create a visually appealing experience.
â Engaging Dialogue: Well-written dialogue that is natural, meaningful, and contributes to character development and plot progression. | 2306.15895#110 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
2306.15895 | 111 | – Engaging Dialogue: Well-written dialogue that is natural, meaningful, and contributes to character development and plot progression.
– Sound Design and Music: Thoughtful and immersive sound design, including sound effects and a well-curated soundtrack or original score, that enhances the overall cinematic experience.
– Production Design: Attention to detail in creating visually appealing and authentic sets, costumes, and overall aesthetics that contribute to the film's atmosphere and world-building.
– Editing: Skillful editing that maintains a good pace, effectively transitions between scenes, and enhances the overall flow and impact of the story.
– Emotional Impact: A movie that evokes emotions, whether it be through humor, drama, suspense, or other means, leaving a lasting impression on the audience.
negative:
– Weak Plot: A poorly developed or uninteresting storyline that fails to engage the audience.
– Lackluster Performances: Unconvincing or uninspired performances by the actors that fail to bring the characters to life.
– Poor Production Quality: Subpar production values, including low-quality visuals, amateurish cinematography, and weak special effects.
– Incoherent Storytelling: Confusing or disjointed narrative structure that makes it difficult to follow or understand the plot.
– Unmemorable Characters: Underdeveloped or forgettable characters that fail to resonate with the audience.
– Weak Soundtrack: A forgettable or poorly composed soundtrack that fails to enhance the mood or add depth to the movie. | 2306.15895#111 |
2306.15895 | 112 | – Weak Soundtrack: A forgettable or poorly composed soundtrack that fails to enhance the mood or add depth to the movie.
– Poor Dialogue: Uninteresting or poorly written dialogues that fail to engage or resonate with the audience.
– Disjointed Atmosphere: A lack of coherence or consistency in creating an immersive and believable world for the viewers.
– Unresolved Plotlines: Loose ends or unresolved plotlines that leave the audience feeling unsatisfied or confused.
– Lack of Entertainment Value: A movie that fails to deliver an enjoyable or engaging experience for the audience, leaving them feeling bored or uninterested.
# G.7.2 Descriptive Details
We use movie genres as the characteristics of movies, and the attributes are listed as follows:
• Action • Drama • Comedy • Thriller • Romance • Horror • Adventure • Science Fiction • Fantasy • Animation
# G.8 Yelp Attributes
# G.8.1 Subtopics
We display the corresponding subtopic attributes for each category as follows:
⢠positive:
â Quality of Food: The taste, flavor, and presentation of the dishes. â Fresh Ingredients: The use of fresh and high-quality ingredients in the preparation of
the food.
â Menu Variety: A diverse range of options catering to different dietary preferences and restrictions. | 2306.15895#112 | Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias | Large language models (LLMs) have been recently leveraged as training data
2306.15895 | 113 | the food.
– Menu Variety: A diverse range of options catering to different dietary preferences and restrictions.
– Presentation: The visually appealing presentation of the dishes. – Service: Attentive, friendly, and prompt service from the restaurant staff. – Value for Money: Offering good quality and portion sizes at reasonable prices. – Cleanliness: A clean and well-maintained dining area, including tables, utensils, and restrooms.
– Special Dietary Accommodations: Catering to specific dietary needs such as vegetarian, vegan, gluten-free, etc.
– Unique and Creative Dishes: Offering innovative and creative dishes that stand out. – Efficient Operations: Smooth and well-coordinated operations to minimize waiting times and delays.
• negative:
– Poor Service: Slow or inattentive service from the restaurant staff. – Unfriendly Staff: Rude or unhelpful behavior from the restaurant staff.
– Long Waiting Times: Excessive waiting times for a table or food. – Incorrect Orders: Receiving incorrect or poorly prepared food orders. – Unappetizing Presentation: Dishes that are poorly presented or lack visual appeal. – Unpleasant Ambience: Uncomfortable or uninviting atmosphere in the restaurant. – Dirty or Unhygienic Conditions: Lack of cleanliness in the dining area, restrooms, or utensils. | 2306.15895#113 |
2306.15895 | 114 | utensils.
– Limited Menu Options: A limited selection of dishes or lack of variety. – Poor Food Quality: Dishes that are poorly cooked, tasteless, or of low quality. – Overpriced: Excessive prices for the quality and portion sizes of the food.
# G.8.2 Descriptive Details
We use cuisine types as the characteristics of restaurants, and the attributes are listed as follows:
• Turkish • Spanish • Greek • Italian • French • American • Mexican • Canadian • Cajun • Tex-Mex • Brazilian • Peruvian • Argentinean • Colombian • Venezuelan • Ethiopian • Moroccan • South African • Nigerian • Egyptian • Chinese • Japanese • Indian • Thai • Korean • Australian • New Zealand • Polynesian • Hawaiian • Singaporean
# H Examples for Filtered Attribute Values
Here we give some examples of the filtered attributes.
For the Amazon product review dataset, some filtered attributes are listed as follows.
beauty:
– Hair Dryer (close to health and personal care) – Hair Straightener (close to health and personal care)
electronics:
– Car dashcam (close to automotive) – Wireless earbuds (close to cell_phones_service)
office_products:
– Mouse pad (close to electronics)
For the NYT dataset, some filtered attributes are listed as follows:
• american_football: | 2306.15895#114 |
2306.15895 | 115 | office_products:
– Mouse pad (close to electronics)
For the NYT dataset, some filtered attributes are listed as follows:
• american_football:
– The economic impact of football on local communities and businesses. – The role of nutrition and hydration in optimal football performance – The history of football's integration with America's armed forces
• stocks_and_bonds:
– The role of artificial intelligence in stock market prediction – The effects of climate change on bond performance
movies:
– The future of cinema in a post-COVID world – The role of international markets in Hollywood's global success
For the Reddit dataset, some filtered attributes are listed as follows:
amc_stock:
– Working in the stock market industry. – Impact of COVID-19 on amc_stock. – Having a background in finance or business.
what_car_should_i_buy:
– Towing capacity and trailer options – Financing and leasing options – Road trip and travel experiences
pittsburgh:
– Attending a game at PNC Park – Watch a Steelers football game at Heinz Field
For the StackExchange dataset, some filtered attributes are listed as follows:
• algorithm:
– Debugging an optimization algorithm for a complex logistics problem. – Improving the robustness of an encryption algorithm against attacks. – Finding a way to better handle overflow errors in a cryptographic algorithm.
iPhone: | 2306.15895#115 |
2306.15895 | 116 | iPhone:
– Unable to use certain features in certain regions/countries – Trouble with GPS not working properly
• opengl:
– Difficulty setting up a custom camera in OpenGL – Inconsistent rendering of 3D models in OpenGL – Difficulty integrating OpenGL with a cross-platform game engine
# I Examples for Merged Attributes on the arXiv dataset
Here we give some examples of the merged attributes.
⢠optimization techniques for computer vision:
# â computer vision and pattern recognition â machine learning
black holes:
â classical physics â popular physics
⢠evolving networks for financial trading:
â neural and evolutionary computing â computational finance
multi-lingual information retrieval:
# â information retrieval â computation and language
iot operator systems and their security challenges:
â operating systems â cryptography and security
complexity of social networks:
# â computational complexity â social and information networks
43
# J The Prompt Format for MetaPrompt
In this section, we provide more details about the implementation of MetaPrompt [43]. It first leverages LLMs to provide additional task-specific information, and then augments the class-dependent prompt with the task-specific knowledge.
Prompts to ChatGPT: [Original Prompt]. What does this task ask us to do?
Response from ChatGPT for NYT dataset: The task of generating an example of a NYT news asks the model to create high-quality, informative, and engaging news articles that mimic the writing style and structure commonly found in The New York Times (NYT). | 2306.15895#116 |
2306.15895 | 117 | Response from ChatGPT for Amazon dataset: The task of generating an example of a product review asks the model to create a detailed and informative review for a particular product, highlighting its features, performance, pros and cons, and overall user experience. This review should be helpful for potential buyers, providing them with valuable insights to make an informed purchasing decision.
Response from ChatGPT for Reddit dataset: The task of generating an example of a post similar to Reddit asks the model to craft a well-written and engaging post that adheres to the guidelines and community standards of the specific subreddit it's intended for. The post should spark discussions, share valuable information, or showcase interesting content that resonates with the subreddit's audience.
Response from ChatGPT for StackExchange dataset: The task of generating an example of a post similar to StackExchange asks the model to compose a clear and concise question or answer related to a specific topic within the given StackExchange community. The post should be well-researched and supported by relevant sources or personal expertise, aiming to provide valuable insights or solutions to other users' queries.
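Putting the pieces together, MetaPrompt's augmentation step amounts to querying the LLM for a task description and then prepending that description to the class-dependent prompt. The sketch below only illustrates that composition; query_llm is a placeholder for whatever chat-completion client is used, and the exact concatenation format is an assumption.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for an actual chat-completion call (e.g., to ChatGPT)."""
    raise NotImplementedError

def build_metaprompt(original_prompt: str) -> str:
    # Step 1: ask the LLM what the task itself requires.
    task_description = query_llm(f"{original_prompt}. What does this task ask us to do?")
    # Step 2: augment the class-dependent prompt with that task-specific knowledge.
    return f"{task_description}\n\n{original_prompt}"
```

The augmented prompt is then sent to the generator model in place of the original class-conditional prompt.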
# K Dataset Information
# K.1 Dataset Documentations | 2306.15895#117 |
2306.15895 | 118 | # K Dataset Information
# K.1 Dataset Documentations
The dataset is provided in json format; there are three json files corresponding to the original train, validation, and test splits. We also include two additional files, attrprompt and simprompt, which are generated by AttrPrompt and SimPrompt as the synthetic training data; a minimal loading sketch follows the field list below.
Each data point contains the following fields:
⢠label: the label for the example. For multi-class classification, the label field is an integer, while for multi-label classification, the label field is a list[int] containing one or multiple integers as each example may refer to multiple classes;
text: a content of each example.
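The snippet below is a minimal sketch of reading one of these files and accessing the two fields, assuming each json file stores a list of such records (if the files are JSON Lines instead, read them line by line); the file name is illustrative.

```python
import json

# Illustrative file name; substitute the actual split file from the repository
# (e.g., the train/validation/test split or the attrprompt/simprompt file).
with open("attrprompt.json") as f:
    examples = json.load(f)

for example in examples[:3]:
    label = example["label"]   # int for multi-class, list[int] for multi-label
    text = example["text"]     # the content of the example
    print(label, text[:80])
```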
# K.2 Intended Uses
AttrPrompt and SimPrompt are intended for researchers in machine learning, natural language processing, and related fields to innovate novel methods for training data generation problems.
# K.3 Hosting and Maintenance Plan
The codebase is hosted and version-tracked via GitHub. It will be available under the link https://github.com/yueyu1030/attrprompt. The download links for all the datasets can be found in the GitHub repository.
Note that it is a community-driven and open-source initiative. We are committed and have the resources to maintain and actively develop it for at minimum the next five years. We plan to grow the GitHub repo by including new tasks and datasets and warmly welcome external contributors.
| 2306.15895#118 |
2306.15195 | 1 | # Abstract
In human conversations, individuals can indicate relevant regions within a scene while addressing others. In turn, the other person can then respond by referring to specific regions if necessary. This natural referential ability in dialogue remains absent in current Multimodal Large Language Models (MLLMs). To fill this gap, this paper proposes an MLLM called Shikra, which can handle spatial coordinate inputs and outputs in natural language. Its architecture consists of a vision encoder, an alignment layer, and a LLM. It is designed to be straightforward and simple, without the need for extra vocabularies, position encoder, pre-/post-detection modules, or external plug-in models. All inputs and outputs are in natural language form. Referential dialogue is a superset of various vision-language (VL) tasks. Shikra can naturally handle location-related tasks like REC and PointQA, as well as conventional VL tasks such as Image Captioning and VQA. Experimental results showcase Shikra's promising performance. Furthermore, it enables numerous exciting applications, like providing mentioned objects' coordinates in chains of thoughts and comparing user-pointed regions similarities. Our code and model are accessed at https://github.com/shikras/shikra. | 2306.15195#1 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
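Since Shikra keeps all coordinates in plain natural-language form, a referential prompt only needs bounding boxes serialized as text. The helper below is a toy sketch of one such serialization (normalized [x1, y1, x2, y2] with fixed precision); the exact convention Shikra uses may differ, so treat this format as an assumption.

```python
def box_to_text(box, width, height, precision=3):
    """Serialize a pixel-space bounding box as normalized text coordinates.
    The [x1, y1, x2, y2] convention and the precision are illustrative assumptions."""
    x1, y1, x2, y2 = box
    coords = [x1 / width, y1 / height, x2 / width, y2 / height]
    return "[" + ",".join(f"{c:.{precision}f}" for c in coords) + "]"

# Example: embed a user-pointed region directly into the question text.
question = "What is in this region " + box_to_text((120, 80, 360, 400), 640, 480) + "?"
print(question)
```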
2306.15222 | 1 | # Abstract
Generative retrieval stands out as a promising new paradigm in text retrieval that aims to generate identifier strings of relevant passages as the retrieval target. This generative paradigm taps into powerful generative language models, distinct from traditional sparse or dense retrieval methods. However, only learning to generate is insufficient for generative retrieval. Generative retrieval learns to generate identifiers of relevant passages as an intermediate goal and then converts predicted identifiers into the final passage rank list. The disconnect between the learning objective of autoregressive models and the desired passage ranking target leads to a learning gap. To bridge this gap, we propose a learning-to-rank framework for generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn to rank passages directly, optimizing the autoregressive model toward the final passage ranking target via a rank loss. This framework only requires an additional learning-to-rank training phase to enhance current generative retrieval systems and does not add any burden to the inference stage. We conducted experiments on three public benchmarks, and the results demonstrate that LTRGR achieves state-of-the-art performance among generative retrieval methods. The code and checkpoints are released at https://github.com/liyongqi67/LTRGR. | 2306.15222#1 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
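The rank loss mentioned in the LTRGR abstract above can be illustrated with a generic margin-based objective over the autoregressive model's sequence scores for identifiers of a relevant versus an irrelevant passage, added to the usual generation loss. The PyTorch sketch below shows that generic combination; the shapes, margin, and weighting are assumptions, and this is not the LTRGR implementation.

```python
import torch
import torch.nn.functional as F

def generation_plus_rank_loss(pos_scores, neg_scores, generation_loss,
                              margin=1.0, rank_weight=1.0):
    """pos_scores / neg_scores: summed token log-probabilities the autoregressive
    model assigns to identifiers of a relevant / irrelevant passage, shape (batch,)."""
    target = torch.ones_like(pos_scores)  # relevant identifiers should score higher
    rank_loss = F.margin_ranking_loss(pos_scores, neg_scores, target, margin=margin)
    return generation_loss + rank_weight * rank_loss
```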
2306.15595 | 1 |
# ABSTRACT
We present Position Interpolation (PI) that extends the context window sizes of RoPE-based (Su et al., 2021) pretrained LLMs such as LLaMA (Touvron et al., 2023) models to up to 32768 with minimal fine-tuning (within 1000 steps), while demonstrating strong empirical results on various tasks that require long context, including passkey retrieval, language modeling, and long document summarization from LLaMA 7B to 65B. Meanwhile, the extended model by Position Interpolation preserve quality relatively well on tasks within its original context window. To achieve this goal, Position Interpolation linearly down-scales the input position indices to match the original context window size, rather than extrapolating beyond the trained context length which may lead to catastrophically high attention scores that completely ruin the self-attention mechanism. Our theoretical study shows that the upper bound of interpolation is at least ∼ 600× smaller than that of extrapolation, further demonstrating its stability. Models extended via Position Interpolation retain its original architecture and can reuse most pre-existing optimization and infrastructure.
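The abstract's core mechanism, linearly down-scaling position indices into the original context window before computing rotary embeddings, can be sketched in a few lines; the NumPy helper below illustrates that rescaling and is not the paper's code.

```python
import numpy as np

def interpolate_positions(seq_len: int, original_max_len: int) -> np.ndarray:
    """Linearly down-scale positions 0..seq_len-1 so they stay inside the
    position range seen during pretraining, instead of extrapolating past it."""
    positions = np.arange(seq_len, dtype=np.float64)
    if seq_len <= original_max_len:
        return positions  # within the original window: unchanged
    return positions * (original_max_len / seq_len)

# Example: an 8192-token input fed to a model pretrained with a 2048 context
# maps its last position to ~2047.75 rather than extrapolating to 8191.
print(interpolate_positions(8192, 2048)[-1])
```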
# 1 INTRODUCTION | 2306.15595#1 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 1 | Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection: a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from | 2306.15626#1 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 2 | [Figure 1 demo dialogue] User: "What is the difference between this deer and another deer?" Shikra: "The main difference between the two deer is their size and antlers. The adult deer is larger with more prominent antlers, while the younger deer is smaller with no visible antlers."
Figure 1: Demo of Referential Dialogue (RD). Users can point to specific areas and ask questions. In turn, Shikra will indicate the specific regions when replying, if necessary. More interesting dialogues can be found in Figure 2 and Appendix C.
# 1 Introduction
In recent months, Multimodal Large Language Models have made remarkable progress (Alayrac et al., 2022; Huang et al., 2023; Liu et al., 2023a; Zhu et al., 2023; Li et al., 2023a; Gao et al., 2023; Dai et al., 2023). They bring eyes to Large Language Models (LLMs), so that users can talk about the input image. However, although these models can perceive image content, they cannot engage in dialogue with users regarding the
Work done during internship at SenseTime Research; Equal Contribution & Project Leader | 2306.15195#2 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 2 | Introduction Text retrieval is a crucial task in information retrieval and has a significant impact on various language systems, including search ranking (Nogueira and Cho 2019) and open-domain question answering (Chen et al. 2017). At its core, text retrieval involves learning a ranking model that assigns scores to documents based on a given query, a process known as learning to rank. This approach has been enduringly popular for decades and has evolved into point-wise, pair-wise, and list-wise methods. Currently, the dominant implementation is the dual-encoder approach (Lee, Chang, and Toutanova 2019; Karpukhin et al. 2020), which encodes queries and passages into vectors in a semantic space and employs a list-wise loss to learn the similarities.
An emerging alternative to the dual-encoder approach in text retrieval is generative retrieval (Tay et al. 2022; Bevilacqua et al. 2022). Generative retrieval employs autoregressive | 2306.15222#2 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
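As a concrete reference point for the dual-encoder baseline described in the record above (queries and passages embedded into one semantic space and trained with a list-wise loss), here is a minimal NumPy sketch; the encoders are stubbed out with random vectors, and the loss shown is the common in-batch softmax cross-entropy variant, which may differ in detail from the cited systems.

```python
import numpy as np

def listwise_loss(query_vecs, passage_vecs, positive_idx):
    """In-batch list-wise loss: softmax over query-passage similarity scores,
    then negative log-likelihood of each query's relevant (positive) passage."""
    scores = query_vecs @ passage_vecs.T                  # (num_queries, num_passages)
    shifted = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(query_vecs)), positive_idx].mean()

rng = np.random.default_rng(0)
queries = rng.normal(size=(4, 128))    # stand-in query embeddings
passages = rng.normal(size=(8, 128))   # stand-in passage embeddings
print(listwise_loss(queries, passages, positive_idx=np.array([0, 1, 2, 3])))
```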
2306.15595 | 2 | # 1 INTRODUCTION
Large language models (LLMs) typically come with a pre-defined context window size. For example, inputs to LLaMA models (Touvron et al., 2023) must be fewer than 2048 tokens. This pre-set context window limit is frequently exceeded in applications such as conducting long conversations, summarizing long documents, or executing long-term planning. For these applications, LLMs with longer context windows are preferred. However, training an LLM from scratch with long context windows requires significant investments. This naturally leads to a question: Can we extend the context window of an existing pre-trained LLM?
One straightforward approach is to fine-tune an existing pre-trained Transformer with a longer context window. However, empirically, we found that models trained this way adapt to long context windows very slowly. After training for more than 10000 batches, the effective context window saw only a minimal increase, moving from 2048 to 2560 (Table 4). This suggests that such a method is inefficient for extending to substantially longer context windows. | 2306.15595#2 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 2 | examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features a challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research. | 2306.15626#2 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 3 | Work done during internship at SenseTime Research; Equal Contribution & Project Leader
precise positions of the content. Users cannot indicate areas of interest in the image, and the models cannot provide the exact locations of the described content. In contrast, as shown in Figure 1, in human daily communication, different regions or objects in the scene are often attended to, and people can speak and point to these regions for efficient information exchange. We refer to this interaction mode as Referential Dialogue (RD). If an MLLM excels in this skill, it will bring numerous exciting applications. For instance, when applied to Mixed Reality (XR) headsets like Apple Vision Pro, users can indicate anything to converse with the AI assistant. The AI assistant can display the prompt area in the field of view when necessary. It also assists
visual robots in communicating with individuals by comprehending their specific reference positions. It facilitates online shopping by enabling users to inquire about items of interest in an image. | 2306.15195#3 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 3 | language models to generate identifier strings of passages as an intermediate target for retrieval. An identifier is a distinctive string to represent a passage, such as Wikipedia titles to Wikipedia passages. The predicted identifiers are then mapped to ranked passages as the retrieval results. In this manner, generative retrieval treats passage retrieval as a standard sequence-to-sequence task, maximizing the likelihood of the passage identifiers given the input query, distinct from previous learning-to-rank approaches. | 2306.15222#3 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
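The sequence-to-sequence objective described in the record above (maximize the likelihood of a passage identifier given the query) reduces to a per-token cross-entropy under teacher forcing. A minimal sketch with stand-in decoder logits in place of a real autoregressive model; the toy vocabulary size and token ids are illustrative only.

```python
import numpy as np

def generation_loss(step_logits, identifier_token_ids):
    """Teacher-forced sequence loss: negative log-likelihood of the target
    identifier tokens under the model's per-step output distributions."""
    step_logits = np.asarray(step_logits, dtype=float)
    shifted = step_logits - step_logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(identifier_token_ids)), identifier_token_ids].mean()

# Toy example: a 5-token identifier over a 100-token vocabulary.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 100))    # stand-in decoder outputs, one row per decoding step
target = np.array([7, 42, 3, 99, 0])  # token ids of the gold identifier
print(generation_loss(logits, target))
```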
2306.15595 | 3 | While certain techniques such as ALiBi (Press et al., 2022) and LeX (Sun et al., 2022) enable length extrapolation of Transformers, i.e. train on short context windows and inference on longer ones, many existing pre-trained LLMs, including LLaMA (Touvron et al., 2023), use positional encodings that have weak extrapolation properties (e.g., RoPE (Su et al., 2021)). Therefore, the applicability of these techniques for extending the context window sizes of such LLMs remains limited.
In this work, we introduce Position Interpolation to enable context window extensions for certain existing pre-trained LLMs, including LLaMA. The key idea is, instead of extrapolation, we directly down-scale the position indices so that the maximum position index matches the previous context window limit in the pre-training stage. See Figure 1 for an illustration. In other words, to accommodate more input tokens, we interpolate the position encodings at neighboring integer positions, utilizing the fact that position encodings can be applied on non-integer positions, as opposed to extrapolating outside the trained positions, which may lead to catastrophic values. We verify our approach theoretically, by showing that the interpolated attention score has a much smaller upper
| 2306.15595#3 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 3 | # Introduction
Reasoning is a cornerstone of human intelligence and a fundamental goal of AI [3]. One prominent task is automated theorem proving (ATP): automatically generating proofs for theorems expressed in formal logic. ATP is useful for formal mathematics, producing mathematical proofs that can be checked rigorously [4]. Furthermore, it underpins formal verification, which is essential for proving the correctness and safety of high-stakes applications [5, 6].
ATP is challenging since the search space is prohibitively large. In many applications, it is impractical to generate proofs fully automatically. Therefore, interactive theorem proving (ITP) has emerged as an alternative paradigm. In ITP, proofs are constructed by human experts interacting with software tools called proof assistants, such as Coq [7], Isabelle [8], and Lean [1]. Machine learning can automate such interactive theorem proving, opening up a new avenue for theorem proving [9]. The model can learn to interact with proof assistants, given data containing human-written proofs.
Research conducted while Saad Godil was at NVIDIA.
| 2306.15626#3 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
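To picture the "learn to interact with proof assistants" loop mentioned at the end of the record above, the sketch below alternates between a tactic generator and a proof environment until the goal is closed. `ProofEnv`, `StepResult`, and `suggest_tactic` are illustrative stand-ins introduced here, not the actual LeanDojo API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StepResult:
    state: str            # pretty-printed proof state after the tactic
    solved: bool          # True when no goals remain
    error: Optional[str]  # set when the tactic failed to apply

class ProofEnv:
    """Hypothetical stand-in for a proof-assistant wrapper (not LeanDojo's real API)."""
    def __init__(self, theorem: str):
        self.state = f"⊢ {theorem}"

    def run_tactic(self, tactic: str) -> StepResult:
        # A real environment would send `tactic` to the proof assistant and parse its reply.
        solved = tactic.strip() == "simp"
        return StepResult(state="" if solved else self.state, solved=solved, error=None)

def suggest_tactic(state: str) -> str:
    """Stand-in for a learned model that proposes a tactic given the current proof state."""
    return "simp"

def prove(theorem: str, max_steps: int = 16) -> bool:
    env = ProofEnv(theorem)
    for _ in range(max_steps):
        result = env.run_tactic(suggest_tactic(env.state))
        if result.error is not None:
            return False
        if result.solved:
            return True
        env.state = result.state
    return False

print(prove("∀ n : ℕ, gcd n n = n"))
```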
2306.15195 | 4 | visual robots in communicating with individuals by comprehending their specific reference positions. It facilitates online shopping by enabling users to inquire about items of interest in an image.
In this paper, we evolve MLLMs to open the veil of referential dialogue. We create Shikra1, a unified model capable of handling inputs and outputs of spatial coordinates. All coordinates, both input and output, are represented in natural-language numerical form without introducing any extra vocabularies or position encoder. The Shikra architecture comprises a vision encoder, an alignment layer, and an LLM. We do not introduce any pre-/post-detection modules or external plug-in models, making Shikra unified and simple. We provide several real conversations with users in Figure 2 and Appendix C, where users can use it to compare the differences between multiple regions, inquire about the meaning of the thumbnail, discuss specific objects, etc. Shikra can provide explanations when answering any question, not only verbally but also spatially. | 2306.15195#4 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
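Because Shikra's coordinates travel as plain numerals in the text, as described in the record above, serializing and parsing them needs nothing beyond string handling. A small sketch assuming normalized [x1, y1, x2, y2] boxes rounded to three decimals; the exact textual format used by Shikra may differ.

```python
import re

def box_to_text(box, decimals=3):
    """Render a normalized bounding box as plain text, e.g. [0.312,0.204,0.556,0.489]."""
    return "[" + ",".join(f"{v:.{decimals}f}" for v in box) + "]"

def text_to_boxes(text):
    """Recover every bracketed 4-number coordinate group from a model response."""
    pattern = r"\[\s*([01]?\.\d+)\s*,\s*([01]?\.\d+)\s*,\s*([01]?\.\d+)\s*,\s*([01]?\.\d+)\s*\]"
    return [tuple(float(v) for v in m) for m in re.findall(pattern, text)]

prompt = f"What is in this region {box_to_text([0.312, 0.204, 0.556, 0.489])}?"
reply = "A young deer without antlers [0.310,0.200,0.560,0.490]."
print(text_to_boxes(reply))
```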
2306.15222 | 4 | There are two main approaches to generative retrieval regarding the identifier types. One approach, exemplified by the DSI system and its variants (Tay et al. 2022), assigns a unique numeric ID to each passage, allowing predicted numeric IDs to directly correspond to passages on a one-to-one basis. However, this approach requires memorizing the mappings from passages to their numeric IDs, making it ineffective for large corpus sets. The other approach (Bevilacqua et al. 2022) takes text spans from the passages as identifiers. While the text span-based identifiers are effective in the large-scale corpus, they no longer uniquely correspond to the passages. In their work, a heuristic-based function is employed to rank all the passages associated with the predicted identifiers. Following this line, Li et al. proposed using multiview identifiers, which have achieved comparable results on commonly used benchmarks with large-scale corpus. In this work, we follow the latter approach to generative retrieval. | 2306.15222#4 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
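To make the text-span identifier approach in the record above concrete, the sketch below aggregates the scores of predicted identifiers over the passages that contain them and sorts the result into a rank list. It is a simplified stand-in for the heuristic functions used in the cited systems, with illustrative data.

```python
from collections import defaultdict

def rank_passages(predicted_identifiers, passages):
    """Score each passage by summing the scores of predicted identifiers it contains,
    then sort passages by total score (a simplified identifier-to-passage heuristic)."""
    scores = defaultdict(float)
    for identifier, score in predicted_identifiers:
        for pid, text in passages.items():
            if identifier.lower() in text.lower():
                scores[pid] += score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

passages = {
    "p1": "Lionel Messi won the 2022 World Cup with Argentina.",
    "p2": "The 2022 World Cup was hosted by Qatar.",
}
predicted = [("2022 World Cup", 0.9), ("Argentina", 0.7)]  # (identifier span, model score)
print(rank_passages(predicted, passages))  # p1 ranks above p2
```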
2306.15595 | 4 | [Figure 1 graphic: three panels labeled "Normal", "Extrapolation", and "Position Interpolation", showing input position indices relative to the pre-trained range [0, 2048], with the interpolation mapping f(x, m) = f(x, m/2).]
Figure 1: An illustration of our Position Interpolation method. Consider a Llama model pre-trained with a 2048 context window length. Upper left illustrates the normal usage of an LLM model: input position indices (blue dots) are within the pre-trained range. Upper right illustrates length extrapolation where models are required to operate unseen positions (red dots) up to 4096. Lower left illustrates Position Interpolation where we downscale the position indices (blue and green dots) themselves from [0, 4096] to [0, 2048] to force them to reside in the pretrained range.
bound (∼600× smaller in the LLaMA 7B setting) than the extrapolated one, and is thus much more stable. Therefore, interpolated position encodings are easier for the model to adapt to. | 2306.15595#4 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
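A minimal NumPy sketch of the down-scaling shown in Figure 1 of the record above, assuming a standard RoPE angle computation; the function names and toy dimensions are illustrative, not the paper's code.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    """Standard RoPE rotation angles: position m times the per-pair inverse frequencies."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(positions, inv_freq)       # shape: (num_positions, dim // 2)

def interpolated_positions(seq_len, pretrained_ctx):
    """Position Interpolation: rescale indices [0, seq_len) back into [0, pretrained_ctx)."""
    scale = pretrained_ctx / seq_len            # e.g. 2048 / 4096 = 0.5
    return np.arange(seq_len) * scale           # fractional positions 0, 0.5, 1.0, ...

# Extending a 2048-token pretrained model to a 4096-token window:
angles = rope_angles(interpolated_positions(4096, 2048), dim=128)
print(angles.shape, angles[:3, 0])              # positions 0, 0.5, 1.0 stay in the trained range
```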
2306.15626 | 4 | [Figure 1 graphic: proving theorems by interaction. It shows the proof tree of gcd n n = n decomposed by the tactics cases n, unfold gcd, rewrite mod_self, and apply gcd_zero_left; the LeanDojo Benchmark extracted from Lean (98,734 theorems and proofs, 217,776 tactics, 129,243 premises); and the machine learning model, which concatenates the proof state with premises retrieved from all accessible premises in the math library (e.g., mod_self, gcd_zero_left, mod_lt) and feeds them into an encoder-decoder to generate the next tactic.] | 2306.15626#4 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
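A toy sketch of the retrieve-then-generate flow outlined in the figure content above: rank the accessible premises by similarity to the proof state, then condition a generator on their concatenation with the state. The hash-seeded `embed` and the stubbed `generate_tactic` are illustrative stand-ins for ReProver's trained encoders and decoder.

```python
import numpy as np

def embed(text, dim=64):
    """Stand-in text encoder: a hash-seeded random unit vector (illustrative only)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def retrieve_premises(state, premises, k=2):
    """Rank accessible premises by similarity to the proof state and keep the top k."""
    state_vec = embed(state)
    scored = sorted(premises, key=lambda p: float(embed(p) @ state_vec), reverse=True)
    return scored[:k]

def generate_tactic(state, retrieved):
    """Stand-in for the encoder-decoder tactic generator, which would condition on
    the concatenation of the retrieved premises and the state."""
    prompt = " ".join(retrieved) + " ⊢ " + state
    return f"<tactic conditioned on {len(prompt)} characters of context>"

state = "k : ℕ ⊢ gcd ((k + 1) % (k + 1)) (k + 1) = k + 1"
premises = ["theorem mod_self (n : ℕ) : n % n = 0",
            "theorem gcd_zero_left (x : ℕ) : gcd 0 x = x",
            "theorem mod_lt (x : ℕ) ..."]
print(retrieve_premises(state, premises))
print(generate_tactic(state, retrieve_premises(state, premises)))
```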
2306.15195 | 5 | Referential dialogue is a superset of many vision-language (VL) tasks. Shikra, skilled in RD, can naturally work on these tasks with promising performance, including Visual Question Answering (VQA), image captioning, and location-related tasks such as Referring Expression Comprehension (REC) and PointQA. We illustrate some of them in Figure 2. For more quantitative results, please refer to Section 6.3. Besides, this paper also addresses intriguing questions, such as how to represent position in an image (Section 6.2), whether previous MLLMs possess the capability to comprehend absolute positions (Section 4), and whether a reasoning process with location information can assist in providing more accurate answers to questions (Section 6.1). We hope that these analysis experiments can inspire future research on MLLMs. The main contributions of this paper are:
• This paper introduces the task of Referential Dialogue (RD), which is an essential component of everyday human communication and possesses extensive practical applications.
• We present Shikra, a generalist MLLM, for RD. Shikra is simple and unified, without introducing extra vocabularies, a pre-/post-detection module, or external plug-in models.
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 5 | Despite its rapid development and substantial potential, generative retrieval remains constrained. It relies on a heuristic function to convert predicted identifiers into a passage rank list, which requires sensitive hyperparameters and exists outside the learning framework. More importantly, generative retrieval generates identifiers as an intermediate goal rather than directly ranking candidate passages. This disconnect between the learning objective of generative retrieval and the intended passage ranking target brings a learning gap. Consequently, even though the autoregressive model becomes proficient in generating accurate identifiers, the predicted identifiers cannot ensure an optimal passage ranking order. Tackling the aforementioned issues is challenging, as they are inherent to the novel generative paradigm in text retrieval. However, a silver lining emerges from the extensive evolution of the learning-to-rank paradigm, which has demonstrated adeptness in optimizing the passage ranking objective. Inspired by this progress, we propose to en | 2306.15222#5 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 5 | Empirically, we found that Position Interpolation is highly effective and efficient, requiring only a very short period of fine-tuning for the model to fully adapt to greatly extended context windows. We present experimental results for extending the context window to up to 32768 from the initial 2048 across 7B to 65B LLaMA models using Position Interpolation. Our results show that
1. Position Interpolation can easily enable very long context windows (e.g., 32768), requiring only fine-tuning for 1000 steps on the Pile (Gao et al., 2020) to achieve good quality. The cost of fine-tuning is negligible compared to the pre-training costs. This confirms our hypothesis that it is relatively easy for the models to adapt to interpolated position encodings.
2. Position Interpolation generates strong models that can effectively make use of a much extended context window. We show that models extended by Position Interpolation enjoy significant perplexity gains from greatly extended context windows for text modeling, and that the perplexity decreases gracefully as the context window is enlarged. We also applied Position Interpolation to a long text summarization task and demonstrate competitive performance. | 2306.15595#5 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15195 | 6 | 1Shikra is a hunter's companion, capable of understanding human language and gesture instructions, and locating and capturing prey in the wild.
• Shikra handles unseen settings effortlessly, creating diverse application scenarios. It also achieves promising performance on conventional visual language tasks such as REC, PointQA, VQA, and Image Captioning, without finetuning.
# 2 Related Works
# 2.1 Multimodal Large Language Model | 2306.15195#6 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 6 | hance generative retrieval by integrating it with the classical learning-to-rank paradigm. Our objective is to enhance generative retrieval to not solely generate fragments of passages but to directly acquire the skill of ranking passages. This shift aims to bridge the existing gap between the learning focus of generative retrieval and the envisaged passage ranking target. In pursuit of this goal, we propose a learning-to-rank framework for generative retrieval, dubbed LTRGR. LTRGR involves two distinct training phases, as visually depicted in Figure 1: the learning-to-generate phase and the learning-to-rank phase. In the initial learning-to-generate phase, we train an autoregressive model consistent with prior generative retrieval methods via the generation loss, which takes queries as input and outputs the identifiers of target passages. Subsequently, the queries from the training dataset are fed into the trained generative model to predict associated identifiers. These predicted identifiers are mapped to a passage rank list via a heuristic function. The subsequent learning-to-rank phase further trains the autoregressive model using a rank loss over the passage rank | 2306.15222#6 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
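A minimal sketch of the learning-to-rank phase's objective described in the record above, assuming a pairwise margin loss over model-derived passage scores; the exact rank loss and score definition used in LTRGR may differ.

```python
import numpy as np

def margin_rank_loss(scores, labels, margin=1.0):
    """Pairwise margin loss over a passage rank list: every relevant passage should
    outscore every irrelevant one by at least `margin`."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    gaps = margin - (pos[:, None] - neg[None, :])   # one entry per (positive, negative) pair
    return np.maximum(0.0, gaps).mean()

# Toy rank list produced by the learning-to-generate phase: model scores + relevance labels.
scores = np.array([2.3, 1.1, 0.4, 1.8])
labels = np.array([1, 0, 0, 1])
print(margin_rank_loss(scores, labels))
```

Because this phase only fine-tunes the same autoregressive model, it adds a training stage but, as the record notes, no extra cost at inference time.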
2306.15595 | 6 | 3. Position Interpolation preserves model quality relatively well for tasks within its original context window sizes. We present a variety of evaluation results for the extended LLaMA models on the original LLaMA benchmark. Compared with original LLaMA models, the extended LLaMA models saw a minor degradation on several standard benchmarks within a 2048 token limit.
Our results highlight the innate ability of Transformer models to "extrapolate to sequence lengths longer than the ones encountered during training," as hypothesized in the seminal work of Vaswani et al. (2017). We reaffirm this hypothesis and suggest that the previously known weakness of extrapolating to longer sequences for language modeling (Press et al., 2022) may be due to direct
extrapolation of positional encodings, and that it can be largely mitigated by interpolating position encodings instead. | 2306.15595#6 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 6 | Figure 1: Top right: LeanDojo extracts proofs in Lean [1] into datasets for training machine learning models. It also enables the trained model to prove theorems by interacting with Lean's proof environment. Top left: The proof tree of a Lean theorem ∀ n ∈ ℕ, gcd n n = n, where gcd is the greatest common divisor (details in Sec. 3). When proving the theorem, we start from the original theorem as the initial state (the root) and repeatedly apply tactics (the edges) to decompose states into simpler sub-states, until all states are solved (the leaf nodes). Tactics may rely on premises such as mod_self and gcd_zero_left defined in a large math library. E.g., mod_self is an existing theorem ∀ n ∈ ℕ, n % n = 0 used in the proof to simplify the goal. Bottom: Our ReProver model (Sec. 5). Given a state, it retrieves premises from the math library, which are concatenated with the state and fed into an encoder-decoder Transformer [2] to generate the next tactic. | 2306.15626#6 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
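As a concrete illustration of the proof tree described in the Figure 1 caption above, here is a hedged Lean 3 sketch of the theorem ∀n ∈ ℕ, gcd n n = n. The premises mod_self and gcd_zero_left come from the caption; the remaining steps (cases, refl, the gcd_succ rewrite) are our reconstruction and may differ from the exact proof shown in the paper.

```lean
-- Hedged Lean 3 sketch; mod_self and gcd_zero_left are the premises named in the
-- caption, while the cases/refl structure and the gcd_succ rewrite are our own
-- reconstruction of the tactic script, not a quote from the paper.
theorem gcd_self' (n : ℕ) : nat.gcd n n = n :=
begin
  cases n,
  { refl },                            -- base case: gcd 0 0 = 0 holds definitionally
  { rw [nat.gcd_succ, nat.mod_self],   -- gcd (n+1) (n+1) = gcd ((n+1) % (n+1)) (n+1) = gcd 0 (n+1)
    exact nat.gcd_zero_left _ }        -- premise gcd_zero_left: gcd 0 x = x
end
```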
2306.15195 | 7 | Expanding the large language model to a multimodal version has garnered widespread attention. Flamingo (Alayrac et al., 2022) integrates visual adaption layers (like Perceiver) into an LLM and is trained on a large-scale interleaved image-text dataset. OpenFlamingo (Awadalla et al., 2023) re-implements Flamingo and releases it to the community along with an M3C dataset. Subsequently, MM-GPT (Gong et al., 2023) and Otter (Li et al., 2023a) tune on carefully constructed instruction data for a more user-friendly interaction. Another genre is BLIP-2 (Li et al., 2023b), which aligns queried visual features with text using multiple vision-language losses (a model named Q-Former), and tunes a simple fully connected layer to feed the queried embedding to a frozen language model. Mini-GPT4 (Zhu et al., 2023), mPLUG-OWL (Ye et al., 2023), VPGTrans (Zhang et al., 2023a), and InstructBLIP (Dai et al., 2023) retain Q-Former, replace the language model with a larger one, and then tune on meticulously | 2306.15195#7 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 7 | to a passage rank list via a heuristic function. The subsequent learning-to-rank phase further trains the autoregressive model using a rank loss over the passage rank list, which optimizes the model towards the objective of the optimal passage ranking order. LTRGR includes the heuristic process in the learning process, rendering the whole retrieval process end-to-end and learning with the objective of passage ranking. During inference, we use the trained model to retrieve passages as in the typical generative retrieval. Therefore, the LTRGR framework only requires an additional training phase and does not add any burden to the inference stage. We evaluate our proposed method on three widely used datasets, and the results demonstrate that LTRGR achieves the best performance in generative retrieval. | 2306.15222#7 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
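The chunk above says the learning-to-rank phase trains the autoregressive model with a rank loss over the passage rank list. The snippet below is a schematic PyTorch sketch of one such margin-based rank loss on passage scores; the scoring convention and hyperparameters are placeholders, and the exact loss used in LTRGR may differ.

```python
import torch
import torch.nn.functional as F

def margin_rank_loss(score_pos, score_neg, margin=1.0):
    """Pairwise margin loss: push the score of a relevant passage above the score
    of an irrelevant one by at least `margin`. Scores here would be, e.g., summed
    log-probabilities of the passage's identifier tokens under the autoregressive model."""
    return F.relu(margin - score_pos + score_neg).mean()

# Toy example: identifier log-prob scores for positive and sampled negative passages.
score_pos = torch.tensor([-2.1, -1.7], requires_grad=True)
score_neg = torch.tensor([-2.5, -1.2], requires_grad=True)
loss = margin_rank_loss(score_pos, score_neg)
loss.backward()   # in practice, gradients flow back into the autoregressive model
print(float(loss))
```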
2306.15595 | 7 |
extrapolation of positional encodings and it can be largely mitigated by interpolating position encodings instead.
Concurrent work. Right before our release, we were informed of a concurrent blog post (SuperHOT, kaiokendev (2023)) that also interpolates positional encoding in RoPE to extend the context window from 2K to 8K. Recently, the open-source community picked it up in a Reddit post 1 and GitHub issues 2, which show that fine-tuning with LoRA (Hu et al., 2021) also seems to work well. Our paper shows that full fine-tuning of models up to 65B works well with Position Interpolation, and we also give theoretical explanations of why interpolation achieves much more stable results than extrapolation, by showing that the upper bound of the interpolated attention score is much lower than that of the extrapolated one.
# 2 METHOD
2.1 BACKGROUND: ROTARY POSITION EMBEDDING (ROPE) | 2306.15595#7 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 7 | Formal theorem proving serves as an important challenge for machine learning. From a computer science perspective, formal proofs can be treated as programs [10]. But unlike conventional programs in C++ or Python, the correctness of proofs can be verified using proof assistants. Therefore, theorem proving may be considered a special form of code generation, with rigorous evaluation and no room for the model to hallucinate. This can be consequential to current large language models (LLMs), as they have demonstrated exceptional capability in code generation [11] but have flaws in factuality and hallucination [12]. In addition, augmenting LLMs with external tools, such as proof assistants, has shown promise in improving their various capabilities, including multi-step reasoning [13].
Current research on LLMs for theorem proving is facing many barriers. To our knowledge, none of the existing LLM-based provers are open-source [14-21]. They all use private pretraining data, and the compute requirements can reach thousands of GPU days [17]. Furthermore, some rely on tailored infrastructure for distributed training and interaction with the proof assistant, neither of which is possible to fully reproduce without open-source code [17, 19]. We change the status quo by introducing LeanDojo: open-source toolkits, models, and benchmarks that give researchers access to state-of-the-art LLM-based provers with modest computational costs.
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 8 | and InstructBLIP (Dai et al., 2023) retain Q-Former, replace the language model with a larger one, and then tune on meticulously collected instruction data. Additionally, there are simpler and more direct methods: FROMAGe (Koh et al., 2023) and LLaVA (Liu et al., 2023a) directly feed visual features to the LLM using only a learnable fully connected layer. The closed-source commercial model GPT-4 (OpenAI, 2023) also demonstrates astonishing image comprehension capabilities. Recently, interesting works have made remarkable progress by extending LLMs to audio, e.g., KOSMOS-1 (Huang et al., 2023), X-LLM (Chen et al., 2023), PandaGPT (Su et al., 2023), and to control systems like PaLM-E (Driess et al., 2023) and EmbodiedGPT (Mu et al., 2023) | 2306.15195#8 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 8 | The key contributions are summarized:
⢠We introduce the concept of incorporating learning to rank within generative retrieval, effectively aligning the learning objective of generative retrieval with the desired passage ranking target.
⢠LTRGR establishes a connection between the genera- tive retrieval paradigm and the classical learning-to-rank paradigm. This connection opens doors for potential ad- vancements in this area, including exploring diverse rank loss functions and negative sample mining.
⢠Only with an additional learning-to-rank training phase and without any burden to the inference, LTRGR achieves state-of-the-art performance in generative retrieval on three widely-used benchmarks.
# Related Work
Generative Retrieval. Generative retrieval is an emerging new retrieval paradigm, which generates identifier strings of passages as the retrieval target. Instead of generating entire passages, this approach uses identifiers to reduce the amount of useless information and make it easier for the model to memorize and learn (Li et al. 2023b). Different types of identifiers have been explored in various search scenarios, including titles (Web URLs), numeric IDs, and substrings, as shown in previous studies (De Cao et al. 2020; Li et al. 2023a; Tay et al. 2022; | 2306.15222#8 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 8 | # 2 METHOD
2.1 BACKGROUND: ROTARY POSITION EMBEDDING (ROPE)
Transformer models require explicit positional information to be injected, typically in the form of positional encodings, to represent the order of inputs. We consider Rotary Position Embedding (RoPE) (Su et al., 2021), which is the position encoding used in the LLaMA model. Given a position index m \in [0, c) and an embedding vector x := [x_0, x_1, \ldots, x_{d-1}]^\top, where d is the dimension of the attention head, RoPE defines a vector-valued complex function f(x, m) as follows:
f(x, m) = [(x_0 + i x_1) e^{i m \theta_0}, (x_2 + i x_3) e^{i m \theta_1}, \ldots, (x_{d-2} + i x_{d-1}) e^{i m \theta_{d/2-1}}]^\top \quad (1)
where i := \sqrt{-1} is the imaginary unit and \theta_j = 10000^{-2j/d}. Using RoPE, the self-attention score a(m, n) = \mathrm{Re}\langle f(q, m), f(k, n)\rangle | 2306.15595#8 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
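A small NumPy check of the RoPE property stated in Eq. (1)-(2) above: rotating query/key pairs by their positions and taking the real part of the conjugate inner product yields a score that depends only on the relative position m - n. This is our own illustration, not code from the paper.

```python
import numpy as np

def rope_rotate(x, m, base=10000.0):
    """Apply RoPE: view consecutive pairs (x_{2j}, x_{2j+1}) as complex numbers
    and rotate each by m * theta_j, with theta_j = base^(-2j/d)."""
    d = x.shape[-1]
    theta = base ** (-2.0 * np.arange(d // 2) / d)
    z = x[0::2] + 1j * x[1::2]
    return z * np.exp(1j * m * theta)

def attn_score(q, k, m, n):
    """a(m, n) = Re <f(q, m), f(k, n)> with a conjugate inner product."""
    return np.real(np.sum(rope_rotate(q, m) * np.conj(rope_rotate(k, n))))

rng = np.random.default_rng(0)
q, k = rng.normal(size=64), rng.normal(size=64)
# The score depends only on the relative position m - n (here 7 in both cases):
print(np.isclose(attn_score(q, k, 10, 3), attn_score(q, k, 107, 100)))  # True
```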
2306.15626 | 8 | Tools for Data Extraction and Interaction. We focus on Lean, a proof assistant popular among mathematicians.2 Our framework LeanDojo provides two essential functions for learning-based theorem proving (Fig. 1): extracting data and enabling models to interact with Lean programmatically.
For data extraction, LeanDojo extracts training data not directly visible in the raw Lean code (Fig. 2), e.g., proof trees consisting of intermediate states between proof steps (Fig. 1 Top left). In addition, LeanDojo is the first tool to locate premises in Lean proofs, enabling training machine learning models for premise selection. For interaction, LeanDojo turns Lean into a gym-like interactive environment [22]. Using LeanDojo, the model can observe proof states, change the state by executing
2 "Lean" in our paper refers to Lean 3 by default. Lean 4 is not backward-compatible but is also supported by LeanDojo. Our Lean 4 results are in Appendix D.
proof steps (referred to as "tactics" in proof assistants), and receive feedback from Lean. LeanDojo is the first tool capable of interacting with Lean reliably, reducing proof-checking errors in existing tools [19] (correct proofs misjudged as incorrect) from 21.1% to 1.4%. | 2306.15626#8 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
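The chunk above describes LeanDojo's gym-like interaction loop (observe a proof state, run a tactic, receive feedback from Lean). The sketch below shows what such a loop looks like through LeanDojo's Python interface; the class and method names follow our reading of the project's documentation, and the commit, file path, and theorem name are hypothetical, so treat the details as assumptions rather than a verbatim reference.

```python
# Hedged sketch: class/method names (LeanGitRepo, Theorem, Dojo, run_tac,
# ProofFinished) follow our reading of LeanDojo's documentation; the commit,
# file path, and theorem name below are placeholders.
from lean_dojo import LeanGitRepo, Theorem, Dojo, ProofFinished

repo = LeanGitRepo("https://github.com/leanprover-community/mathlib", "some_commit_sha")
theorem = Theorem(repo, "src/data/nat/gcd.lean", "nat.gcd_self")  # hypothetical target

with Dojo(theorem) as (dojo, initial_state):
    # Observe the current proof state, execute a tactic, and receive feedback.
    result = dojo.run_tac(initial_state, "cases n")
    if isinstance(result, ProofFinished):
        print("Theorem proved.")
    else:
        print("New proof state or error:", result)
```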
2306.15222 | 9 | Bevilacqua et al. 2022; Ren et al. 2023). In 2023, Li et al. proposed multiview identifiers that represent a passage from different perspectives to enhance generative retrieval and achieve state-of-the-art performance. Despite the potential advantages of generative retrieval, there are still issues inherent in this new paradigm, as discussed in the previous section. Our work aims to address these issues by combining generative retrieval with the learning-to-rank paradigm. | 2306.15222#9 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
\mathrm{Re}\left[\sum_{j=0}^{d/2-1} (q_{2j} + i q_{2j+1})(k_{2j} - i k_{2j+1}) e^{i(m-n)\theta_j}\right] = \sum_{j=0}^{d/2-1} \left[(q_{2j} k_{2j} + q_{2j+1} k_{2j+1}) \cos((m-n)\theta_j) + (q_{2j} k_{2j+1} - q_{2j+1} k_{2j}) \sin((m-n)\theta_j)\right] =: a(m-n) \quad (2)
is only dependent on the relative position m - n through trigonometric functions. Here q and k are the query and key vectors for a specific attention head. At each layer, RoPE is applied to both query and key embeddings for computing attention scores.
2.2 DIRECT EXTRAPOLATION
While the attention score in RoPE only depends on the relative positions, which is what we want, its extrapolation performance is not great. In particular, when directly extending to larger context windows unseen in training, the perplexity may shoot up to very high numbers (i.e., > 10^3), comparable to untrained models. | 2306.15595#9 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 9 | Retrieval-Augmented LLMs for Theorem Proving. LeanDojo addresses a key bottleneck in theorem proving: premise selection [23, 24]. Existing LLM-based provers generate the next proof step (tactic), taking only the current state as input. However, proving theorems depends critically on the premises, such as lemmas and definitions, from a math library. For example, Fig. 1 (Top left) illustrates the proof of "∀n ∈ ℕ, gcd n n = n", where gcd stands for greatest common divisor. The proof starts from the original theorem as the initial state and repeatedly applies tactics to decompose states into simpler sub-states, until all states are solved. Tactics may rely on premises such as mod_self and gcd_zero_left defined in a large math library. E.g., mod_self is an existing theorem "∀n ∈ ℕ, n % n = 0" useful for simplifying the goal. The set of all possible premises is too large to fit into LLMs' input, given the limited context window. Existing methods must learn to memorize the association between the proof state and the name mod_self. It works if the | 2306.15626#9 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 10 | How is the person in the picture feeling [0.610,0.251,0.825,0.829]? How did you find out? The person in the picture [0.610,0.251,0.825,0.829] is feeling surprised and happy. We can infer this from the fact that she is covering her face with her hands [0.612,0.352,0.690,0.444] while looking at the man [0.216,0.223,0.558,0.832] offering her a bunch of red roses [0.412,0.390,0.588,0.756]. What is unusual about this image? give coordinates [xmin,ymin,xmax,ymax] for the items you reference. The unusual aspect of this image is the man [0.392,0.254,0.652,0.530] standing on top of the ironing board [0.338,0.392,0.668,0.756] in the back of the yellow SUV [0.452,0.384,0.998,0.738]. The scene is unusual and unexpected, as one would typically not expect to see someone ironing clothes while standing on top of the car. The man | 2306.15195#10 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 10 | Learning to Rank. Learning to rank refers to machine learning techniques used for training models in ranking tasks (Li 2011). This approach has been developed over several decades and is typically applied in document retrieval. Learning to rank can derive large-scale training data from search log data and automatically create the ranking model, making it one of the key technologies for modern web search. Learning to rank approaches can be categorized into point-wise (Cossock and Zhang 2006; Li, Wu, and Burges 2007; Crammer and Singer 2001), pair-wise (Freund et al. 2003; Burges et al. 2005), and list-wise (Cao et al. 2007; Xia et al. 2008) approaches based on the learning target. In the point-wise and pair-wise approaches, the ranking problem is transformed into classification and pair-wise classification, respectively. Therefore, the group structure of ranking is ignored in these approaches. The list-wise approach addresses the ranking problem more directly by taking ranking lists as instances in both learning and prediction. This approach maintains the group structure of ranking, and ranking evaluation measures can be more directly incorporated into the loss functions in learning. | 2306.15222#10 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
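To make the taxonomy above concrete, here are schematic PyTorch versions of a pair-wise hinge loss (which ignores the group structure of the ranking) and a ListNet-style list-wise loss (which treats the whole ranked list as one instance). These are generic textbook formulations, not implementations of the cited methods.

```python
import torch
import torch.nn.functional as F

def pairwise_hinge(scores, labels, margin=1.0):
    """Pair-wise view: every (relevant, irrelevant) pair becomes a binary
    preference; the group structure of the full ranking is ignored."""
    pos = scores[labels == 1].unsqueeze(1)   # (P, 1)
    neg = scores[labels == 0].unsqueeze(0)   # (1, N)
    return F.relu(margin - pos + neg).mean()

def listwise_softmax(scores, labels):
    """List-wise view (ListNet-style): treat the ranked list as one instance and
    match the score distribution to the relevance distribution."""
    return F.kl_div(F.log_softmax(scores, dim=0),
                    F.softmax(labels.float(), dim=0), reduction="sum")

scores = torch.tensor([2.0, 0.5, 1.2, -0.3])
labels = torch.tensor([1, 0, 1, 0])
print(float(pairwise_hinge(scores, labels)), float(listwise_softmax(scores, labels)))
```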
2306.15595 | 10 | Ideally, we want to see the model trained on a context window of size L = 2048 to still work reasonably well on longer context window, but may not have the capability to leverage information that appears beyond L. For example, to answer a question located at 3000, the model trained on maximal window size of L = 2048 cannot leverage evidences provided at location 0, but still can leverage the evidences provided at location 2900. In contrast, in reality we see catastrophic behaviors, i.e., question at location 3000 cannot be answered correctly, even if the evidences are located at location 2900.
What is the reason behind this? How could this happen if the attention score a(m - n) decays as the relative distance |m - n| increases, according to Section 3.4.3 of (Su et al., 2021), and content from very
1https://www.reddit.com/r/LocalLLaMA/comments/14fgjqj/a_simple_way_to_extending_context_to_8k/
# 2https://github.com/ggerganov/llama.cpp/discussions/1965
3 | 2306.15595#10 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15195 | 11 | The scene is unusual and unexpected, as one would typically not expect to see someone ironing clothes while standing on top of the car. The man [0.171,0.330,0.650,0.779] is scared of the chicken [0.620,0.219,0.996,0.659] that is flying towards him. Can you explain this meme? give coordinates [xmin,ymin,xmax,ymax] for the items you reference. In this image, a person [0.002,0.490,0.208,0.832] is holding a water-spraying tool [0.180,0.546,0.408,0.830] and is pointing it at a tiled wall [0.002,0.168,0.998,0.830]. The water is dripping from the wall in the shape of the question mark [0.432,0.422,0.626,0.658]. This creates an interesting visual effect, as the question mark appears on the wall while the water is sprayed to resemble the question mark. | 2306.15195#11 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 11 | Dense Retrieval. Dense retrieval (Lee, Chang, and Toutanova 2019; Karpukhin et al. 2020), which is an extension of learning to rank in the context of large language models, is currently the de facto implementation of document retrieval. This method benefits from the powerful representation abilities of large language models and the MIPS algorithm (Shrivastava and Li 2014), allowing for efficient passage retrieval from a large-scale corpus. Dense retrieval has been further developed through hard negative sample mining (Xiong et al. 2020; Qu et al. 2021; Li, Li, and Nie 2022) and better pre-training design (Chang et al. 2019; Wang et al. 2022a), resulting in excellent performance. However, compared to dense retrieval, which relies on the dual-encoder architecture, generative retrieval shows promise in overcoming the missing fine-grained interaction problem through the encoder-decoder paradigm. Despite being a recently proposed technique, generative retrieval still lags behind the state-of-the-art dense retrieval method and leaves much room for investigation. In this work, we introduce a promising way to further develop generative retrieval systems. | 2306.15222#11 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15626 | 11 | One potential solution is to complement memorization with explicit premise selection. LeanDojo extracts premise data from Lean, including where they are defined and used. It enables us to tackle premise selection by augmenting LLMs with retrieval. We introduce ReProver (Retrieval-Augmented Prover) (Fig. 1 Bottom): Given the current state, it generates a tactic conditioning on a small number of premises retrieved from Leanâs math library, mathlib [25]. We need to limit retrieval to a small number of premises for it to be effective, and ideally, they should contain the ground truth premise. Our retriever builds upon Dense Passage Retriever (DPR) [26] but incorporates two algorithmic innovations: First, not all premises are accessible when proving a theorem (Sec. 3). LeanDojo can perform program analysis on Lean code to determine accessible premises. On our data, that reduces the average number of premises from 128K to 33K, significantly simplifying the retrieverâs task. Second, DPR needs negative examples in training and benefits from hard negatives, i.e., irrelevant premises that are hard to distinguish from ground truth ones. We propose in-file negatives: a simple mechanism to find hard negatives in premise selection, which samples negative premises defined in the same Lean source file as the ground truth premise. | 2306.15626#11 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
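The chunk above describes a DPR-style retriever for premise selection trained with in-file negatives. Below is a schematic PyTorch sketch of such a contrastive training signal; the encoders, embedding size, temperature, and sampling are illustrative assumptions and do not reproduce ReProver's implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_premise_loss(state_emb, pos_emb, neg_embs, temperature=0.05):
    """DPR-style training signal: the proof state should be closer to its
    ground-truth premise than to hard negatives, e.g. "in-file negatives"
    drawn from the same Lean source file as the positive premise."""
    candidates = torch.cat([pos_emb.unsqueeze(0), neg_embs], dim=0)        # (1+N, D)
    sims = F.cosine_similarity(state_emb.unsqueeze(0), candidates) / temperature
    target = torch.zeros(1, dtype=torch.long)                              # positive is index 0
    return F.cross_entropy(sims.unsqueeze(0), target)

state_emb = torch.randn(256)          # embedding of the current proof state
pos_emb = torch.randn(256)            # embedding of the ground-truth premise
in_file_negs = torch.randn(4, 256)    # premises from the same file as the positive
print(float(contrastive_premise_loss(state_emb, pos_emb, in_file_negs)))
```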
2306.15195 | 12 | Figure 2: Referential Dialogues between real users and Shikra-7B. The dashed box on an image represents the area referred to by the user or jointly referred to by Shikra, while the solid box represents the area solely referred to by Shikra. More RD results and applications on conventional VL tasks can be found in Appendix C.
Described Object Detection (Xie et al., 2023) extends REC to more realistic scenarios where the object may not exist or there may be multiple objects. VQA Grounding aims to answer visual questions and associate the answers with specific visual regions or objects. Tasks with input boxes: Given an image and a location box, the task of Grounding Caption (GC) (Zhou et al., 2020) is to generate a description for this location by considering the surrounding environment. Compared to GC, Referring Expression Generation (REG) (Liu et al., | 2306.15195#12 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 12 | Method. When given a query text q, the retrieval system must retrieve a list of passages {p1, p2, . . . , pn} from a corpus C, where both queries and passages consist of sequences of text tokens. As illustrated in Figure 1, LTRGR involves two training stages:
# (a) Learning to generate
# (b) Learning to rank
Figure 1: This illustration depicts our proposed learning-to-rank framework for generative retrieval, which involves two stages of training. (a) Learning to generate: LTRGR first trains an autoregressive model via the generation loss, as a normal generative retrieval system. (b) Learning to rank: LTRGR continues training the model via the passage rank loss, which aligns the generative retrieval training with the desired passage ranking target. | 2306.15222#12 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 12 | Figure 2: Extrapolation versus interpolation. Left: a fitted attention score function (in red) in the form of Eqn. 3 with d = d_model/n_head = 4096/32 = 128 (setting of LLaMA 7B). Dots are random input points to be fitted, and the red curve is the score function fitted via least squares, which is approximately within [−1, 1]. Middle: While the fitted function seems to be well bounded in [0, L], where L = 2048, out of this region it may go beyond 8000, causing catastrophic issues in attention computation. Note that here we do not cherry pick at all: almost every learned curve from a set of randomly generated input points within [0, L] has the extrapolation issue. Right: On the other hand, interpolation is much more stable. Curves in between vertical dotted lines (i.e., integer positional differences) are smooth and well-behaved. Please check Appendix C.1 for the source code used to generate the figure. | 2306.15595#12 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 12 | LeanDojo Benchmark. Using LeanDojo, we construct a benchmark containing 98,734 theorems/proofs extracted from mathlib. Our benchmark is one of the largest math-focused theorem-proving datasets. We find that the common practice of splitting theorems randomly into training/testing has led to overestimated performance in previous papers. LLMs can prove seemingly difficult theorems simply by memorizing the proofs of similar theorems seen during training. In LeanDojo Benchmark, we mitigate this issue by designing a challenging data split that requires the model to generalize to theorems relying on novel premises that are never used in training. | 2306.15626#12 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 13 | 2017) requires the generated description to indicate that it describes this region specifically, not others, making it necessary for the description to be discriminative. PointQA (Mani et al., 2020) requires a model to answer a visual question where the questioner queries a specific position in the picture. In contrast, our model is not only compatible with the above tasks but can also handle the input and output of position representations flexibly and simultaneously, bringing Referential Dialogue and extending new dimensions to positional tasks.
# 2.3 Position Representation | 2306.15195#13 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 13 | learning to generate and learning to rank. In this section, we first provide an overview of how a typical generative retrieval system works, i.e., learning to generate, and then clarify our learning-to-rank framework within the context of generative retrieval.
Learning to Generate. We first train an autoregressive language model using the standard sequence-to-sequence loss. In practice, we follow the current state-of-the-art generative retrieval method, MINDER (Li et al. 2023b), to train the autoregressive language model. Please refer to MINDER for more details.
Training. We develop an autoregressive language model, referred to as AM, to generate multiview identifiers. The model takes as input the query text and an identifier prefix, and produces a corresponding identifier of the relevant passage as output. The identifier prefix can be one of three types: "title", "substring", or "pseudo-query", representing the three different views. The target text for each view is the title, a random substring, or a pseudo-query of the target passage, respectively. During training, the three different samples are randomly shuffled to train the autoregressive model. | 2306.15222#13 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
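The multiview training setup described in the 2306.15222 chunk above (titles, substrings, and pseudo-queries as identifier views, shuffled during training) can be illustrated with a short sketch. This is a minimal illustration under assumed data fields (`title`, `body`, `pseudo_queries`) and assumed prefix strings, not the authors' actual preprocessing code.

```python
import random

def build_training_samples(query: str, passage: dict, substring_len: int = 10):
    """Build one (input, target) pair per identifier view for a relevant passage.

    `passage` is assumed to look like:
    {"title": str, "body": str, "pseudo_queries": [str, ...]}
    """
    # View 1: the passage title.
    title_sample = (f"{query} || title", passage["title"])

    # View 2: a random span of words from the passage body.
    words = passage["body"].split()
    start = random.randint(0, max(0, len(words) - substring_len))
    substring = " ".join(words[start:start + substring_len])
    substring_sample = (f"{query} || substring", substring)

    # View 3: one of the passage's pseudo-queries.
    pseudo_sample = (f"{query} || pseudo-query", random.choice(passage["pseudo_queries"]))

    samples = [title_sample, substring_sample, pseudo_sample]
    random.shuffle(samples)  # the three views are mixed during training
    return samples

if __name__ == "__main__":
    passage = {
        "title": "Learning to Rank in Generative Retrieval",
        "body": "Generative retrieval generates identifier strings of relevant passages as the retrieval target.",
        "pseudo_queries": ["what is generative retrieval?"],
    }
    for inp, tgt in build_training_samples("what is LTRGR?", passage):
        print(f"input: {inp!r} -> target: {tgt!r}")
```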
2306.15595 | 13 | far distances should not matter that much? It turns out that the upper bound derived in Section 3.4.3 of (Su et al., 2021) may be too loose: while it indeed decays with respect to |m − n|, the bound can still be quite large (i.e., it critically depends on the magnitude of v_j) and thus vacuous. In fact, if we treat all trigonometric functions as basis functions (i.e., φ_j(s) := e^{isθ_j}), and think about Eqn. 2 as a basis expansion as follows:
a(s) = Re[ Σ_{j=0}^{d/2−1} h_j e^{isθ_j} ]   (3) | 2306.15595#13 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
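The basis-expansion view of the attention score in the 2306.15595 chunk above (Eqn. 3) can be probed numerically: fit the coefficients h_j by least squares to small values on [0, L] and evaluate the fitted score beyond L. The sketch below is only an illustration with assumed settings (d = 128, c = 10000, 256 random fitting points); it is not the paper's Appendix C.1 script.

```python
import numpy as np

def basis(s, d=128, c=10000.0):
    """Real-valued RoPE basis at positional difference s:
    [cos(s*theta_j) for all j] ++ [sin(s*theta_j) for all j], theta_j = c**(-2j/d)."""
    theta = c ** (-2.0 * np.arange(d // 2) / d)
    s = np.asarray(s, dtype=float)[:, None]
    return np.concatenate([np.cos(s * theta), np.sin(s * theta)], axis=1)

rng = np.random.default_rng(0)
L = 2048

# Fit a(s) = Re[sum_j h_j e^{i s theta_j}] to random targets in [-1, 1] at points within [0, L].
s_fit = rng.uniform(0, L, size=256)
targets = rng.uniform(-1, 1, size=256)
coef, *_ = np.linalg.lstsq(basis(s_fit), targets, rcond=None)

# Evaluate the fitted score inside the trained range and beyond it (extrapolation).
s_in = np.linspace(0, L, 4096)
s_out = np.linspace(L, 2 * L, 4096)
a_in = basis(s_in) @ coef
a_out = basis(s_out) @ coef
print(f"max |a(s)| for s in [0, {L}]     : {np.abs(a_in).max():.2f}")
print(f"max |a(s)| for s in ({L}, {2*L}]: {np.abs(a_out).max():.2f}")
```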
2306.15626 | 13 | We use LeanDojo Benchmark to train and evaluate ReProver. Training takes only five days on a single GPU. In evaluation, ReProver can prove 51.2% of theorems, outperforming a baseline that generates tactics directly without retrieval (47.6%) and another baseline using GPT-4 [27] to generate tactics in a zero-shot manner (29.0%). We also test ReProver on two existing datasets, MiniF2F [28] and ProofNet [29]. It can prove 26.5% of theorems in MiniF2F and 13.8% in ProofNet, which is competitive with state-of-the-art methods without reinforcement learning [19], even though it was trained using far fewer resources. Moreover, it can prove 65 theorems that currently do not have proofs in Lean. Thus, ReProver can also serve as an effective tool for augmenting existing math libraries in Lean. | 2306.15626#13 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 14 | Inputting regions of interest into the model can be approached in various ways. Some methods (Bracha et al., 2023) directly concatenate cropped image patches with the original image as model input. Other methods (Lin et al., 2020, 2022) use a 0/1 mask or Gaussian map together with the original image to emphasize the area of user interest. Some methods (Tancik et al., 2020; Kirillov et al., 2023) first encode points and boxes into positional encodings and then add them to intermediate features or learned queries. Outputting regions of interest is also a well-studied problem with many positioning paradigms. Anchor-based methods utilize predefined sliding windows and proposal candidate regions for classification, e.g., Fast R-CNN (Girshick, 2015). Some one-stage methods remove anchors and directly regress four values for bounding box coordinates, e.g., FCOS (Tian et al., 2019). Some methods adopt one-to-one label assignment to turn object detection into an end-to-end task, e.g., DETR (Carion et al., 2020) and | 2306.15195#14 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 14 | the objective is to minimize the sum of the negative log-likelihoods of the tokens {i1, · · · , ij, · · · , il} in a target identifier I, whose length is l. The generation loss is formulated as,
L_gen = − Σ_{j=1}^{l} log p_θ(i_j | q; I_{<j}),   (1)
where I_{<j} denotes the partial sequence {i_0, · · · , i_{j−1}}, i_0 is a pre-defined start token, and θ denotes the trainable parameters of the autoregressive model AM. Inference. During the inference process, given a query text, the trained autoregressive language model AM could generate predicted identifiers in an autoregressive manner. The FM-index (Ferragina and Manzini 2000) data structure is used to support generating valid identifiers. Given a start token or a string, FM-index could provide the list of possible token successors. Therefore, we could store all identifiers of passages in C into FM-index and thus force the AM model to generate valid identifiers via constrained generation. Given a query q, we could set different identifier prefixes to generate a series of predicted identifiers I via beam search, formulated as,
I = AM(q; b; FM-index), (2)
where b is the beam size for beam search. | 2306.15222#14 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
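The generation loss in Eq. (1) of the 2306.15222 chunk above is the standard token-level negative log-likelihood over the identifier tokens. The sketch below illustrates it with random logits standing in for the decoder outputs of the autoregressive model; it is not the authors' training code.

```python
import torch
import torch.nn.functional as F

def generation_loss(logits: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
    """Eq. (1): L_gen = - sum_j log p_theta(i_j | q; I_<j).

    `logits`    : (l, vocab_size) decoder outputs, one row per target position,
                  already conditioned on the query q and the previous tokens I_<j
                  (teacher forcing).
    `target_ids`: (l,) token ids of the target identifier I = {i_1, ..., i_l}.
    """
    log_probs = F.log_softmax(logits, dim=-1)              # (l, vocab)
    token_nll = -log_probs.gather(1, target_ids[:, None])  # (l, 1) per-token NLL
    return token_nll.sum()

if __name__ == "__main__":
    vocab_size, length = 100, 5
    logits = torch.randn(length, vocab_size)          # stand-in for decoder outputs
    target = torch.randint(0, vocab_size, (length,))  # stand-in identifier token ids
    print("L_gen =", generation_loss(logits, target).item())
```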
2306.15595 | 14 | a(s) = Re[ Σ_{j=0}^{d/2−1} h_j e^{isθ_j} ]   (3)
where s is the positional span between a query and a key, and h_j := (q_{2j} + i q_{2j+1})(k_{2j} − i k_{2j+1}) are complex coefficients depending on q and k (here the definition of h_j is exactly the same as the definition of h_j in Sec. 3.4.3 of RoPE (Su et al., 2021)). Now the issue becomes clear: as shown in Fig. 2, a(s) can be small in magnitude in the range [0, 2048], but give huge values out of the region. The underlying reason is that the trigonometric family {φ_j} (with sufficiently large d) is a universal approximator and can fit arbitrary functions. Therefore, for a(s), there always exist coefficients {h_j} (i.e., key and query) that correspond to small function values in [0, 2048] but much larger values in regions beyond.
# 2.3 PROPOSED APPROACH: POSITION INTERPOLATION (PI) | 2306.15595#14 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 14 | Contributions. In summary, we make four main contributions: First, we introduce tools for extracting data from and interacting with Lean. Second, we develop ReProver, the first retrieval- augmented language model for theorem proving. Third, we construct a challenging benchmark for learning-based theorem proving and use it to validate the effectiveness of ReProver. Finally, we facilitate open research on LLMs for theorem proving by releasing our data, model, and code. Our method does not rely on private datasets and can be trained on a single GPU within a week. We believe this will significantly lower the barriers to academic research in this area and establish the first accessible baselines for future work to build upon. Further, our method can be used to automatically generate new Lean proofs without requiring human effort.
# 2 Related Work
Theorem Proving. Classical provers express theorems in first-order logic and search for proofs automatically in a large space [30, 31]. Even with data-driven search heuristics [32, 33], they fail to scale to large formalization projects. Therefore, recent work on learning-based theorem proving has focused on an alternative paradigm: automating the interaction with proof assistants. | 2306.15626#14 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 15 | label assignment to turn object detection into an end-to-end task, e.g., DETR (Carion et al., 2020) and POTP (Wang et al., 2021). An interesting genre is Pix2seq (Chen et al., 2021), which formalizes the detection task as a sequence generation task. It discretizes spatial positions in the image into 1,000 bins and uses a 1,000-token vocabulary to represent them. For detection, Pix2seq performs classification over the coordinate vocabulary in an auto-regressive manner. Following Pix2seq, several methods, e.g., OFA (Wang et al., 2022b), Unified-IO (Lu et al., 2022), UniTab (Yang et al., 2022), GIT (Wang et al., 2022a), and VisionLLM (Wang et al., 2023b) introduce a similar coordinate vocabulary alongside the language vocabulary for object detection and REC tasks. In contrast, Shikra formulates position input/output in the most natural and flexible form of language, and we compare it with the extra coordinate vocabulary in Section 6.2. | 2306.15195#15 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
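The contrast drawn in the 2306.15195 chunk above, between an extra coordinate vocabulary (Pix2seq-style bins) and coordinates written as plain-language numerals (Shikra-style), can be made concrete with a tiny sketch. The exact token format `<bin_k>` and the three-decimal text format are illustrative assumptions, not the precise formats used by those systems.

```python
def box_to_bins(box, num_bins=1000):
    """Pix2seq-style: quantize a normalized [x1, y1, x2, y2] box into discrete bin
    tokens drawn from an extra coordinate vocabulary."""
    return [f"<bin_{min(int(v * num_bins), num_bins - 1)}>" for v in box]

def box_to_text(box, precision=3):
    """Shikra-style: keep coordinates as ordinary numerals inside natural language,
    so no extra vocabulary or detection head is needed."""
    return "[" + ",".join(f"{v:.{precision}f}" for v in box) + "]"

if __name__ == "__main__":
    box = [0.125, 0.430, 0.612, 0.885]   # normalized x1, y1, x2, y2
    print("coordinate-vocabulary tokens:", box_to_bins(box))
    print("plain-text coordinates      :", box_to_text(box))
```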
2306.15222 | 15 | I = AM(q; b; FM-index), (2)
where b is the beam size for beam search.
In order to retrieve passages from a large corpus, a heuristic function is employed to transform the predicted identifiers I into a ranked list of passages. We give a simple explanation, and please refer to the original paper for details. For each passage p ∈ C, we select a subset I_p from the predicted identifiers I, where i_p ∈ I_p if i_p is one of the identifiers of the passage p. The rank score of the passage p corresponding to the query q is then calculated as the sum of the scores of its covered identifiers,
s(q, p) = Σ_{i_p ∈ I_p} s_{i_p},   (3)
where s_{i_p} represents the language model score of the identifier i_p, and I_p is the set of selected identifiers that appear in the passage p. By sorting the rank score s(q, p), we are able to obtain a ranked list of passages from the corpus C. In practice, we can use the FM-index to efficiently locate those passages that contain at least one predicted identifier, rather than scoring all of the passages in the corpus. | 2306.15222#15 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
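The heuristic passage scoring of Eq. (3) in the 2306.15222 chunk above, i.e. summing the language-model scores of the predicted identifiers covered by each passage, can be sketched as follows. A toy substring check stands in for the FM-index lookup, and the example identifiers and corpus are made up for illustration.

```python
from collections import defaultdict

def rank_passages(predicted_identifiers, passages):
    """Eq. (3): s(q, p) = sum of language-model scores of the predicted identifiers
    that occur in passage p; passages are then sorted by this score.

    `predicted_identifiers`: list of (identifier_string, lm_score) from beam search.
    `passages`             : dict mapping passage_id -> passage text.
    """
    scores = defaultdict(float)
    for identifier, lm_score in predicted_identifiers:
        for pid, text in passages.items():          # a real system uses an FM-index here
            if identifier.lower() in text.lower():  # identifier is covered by the passage
                scores[pid] += lm_score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    preds = [("generative retrieval", 2.3), ("learning to rank", 1.7), ("dense retrieval", 0.9)]
    corpus = {
        "p1": "Learning to rank meets generative retrieval ...",
        "p2": "A survey of dense retrieval methods ...",
    }
    print(rank_passages(preds, corpus))
```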
2306.15595 | 15 | # 2.3 PROPOSED APPROACH: POSITION INTERPOLATION (PI)
In Fig. 2, thanks to the smoothness of the basis functions φ_j, interpolation is much more stable and will not lead to wild values. Therefore, instead of extrapolating the attention score in Eqn. 3 to s > L, how about we define an attention score ã(s) = a(Ls/L′), where L′ is the longer context window? Formally, we replace RoPE f by f′ defined as follows
f′(x, m) = f(x, mL/L′).   (4)
We call this transformation on the position encoding Position Interpolation. In this step, we reduce position indices from [0, L′) to [0, L) to match the original range of indices before computing RoPE. Consequently, as inputs to RoPE, the maximum relative distance between any two tokens has been reduced from L′ to L. Since we align the ranges of position indices and relative distances before and after extension, we mitigate the effect on attention score computation due to context window extensions, which makes it easier for the model to adapt. To further demonstrate this is the case, in the following theorem, we show that the interpolated attention score is well-behaved: | 2306.15595#15 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
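Eq. (4) in the 2306.15595 chunk above amounts to linearly rescaling position indices into the original range before applying unchanged RoPE. The sketch below shows this with one common RoPE convention (interleaved channel pairs); the choice of L = 2048 and L′ = 8192 and the pairing convention are assumptions for illustration, not the exact LLaMA implementation.

```python
import torch

def rope(x: torch.Tensor, positions: torch.Tensor, c: float = 10000.0) -> torch.Tensor:
    """Standard RoPE f(x, m): rotate consecutive channel pairs of x by angles m * theta_j."""
    d = x.shape[-1]
    theta = c ** (-2.0 * torch.arange(d // 2, dtype=x.dtype) / d)   # (d/2,)
    ang = positions[:, None] * theta                                # (seq, d/2)
    cos, sin = torch.cos(ang), torch.sin(ang)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def rope_with_position_interpolation(x, positions, L=2048, L_prime=8192, c=10000.0):
    """Eq. (4): f'(x, m) = f(x, m * L / L').  Position indices in [0, L') are linearly
    down-scaled into the original range [0, L) before applying unchanged RoPE."""
    return rope(x, positions * (L / L_prime), c)

if __name__ == "__main__":
    x = torch.randn(4, 128)                          # 4 tokens, head dim 128
    m = torch.tensor([0.0, 100.0, 4096.0, 8191.0])   # positions beyond the original window
    y = rope_with_position_interpolation(x, m)
    print(y.shape)  # torch.Size([4, 128])
```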
2306.15626 | 15 | The architecture of learning-based provers progressed from classical machine learning algorithms such as KNN [34], to graph neural networks explicitly encoding the syntax of formal expressions [9, 35], and now Transformer-based LLMs treating expressions as plain strings [14]. Besides the model architecture, researchers have explored several complementary dimensions: proof search algorithms for assembling model-generated steps into complete proofs [17, 21]; overcoming data scarcity through reinforcement learning (RL) [17, 19, 36, 37] or synthetic/auxiliary data [16, 38–40]; as well as outsourcing some proof goals to classical provers [18, 41–43]. Our base model without retrieval is a combination of straightforward design choices. It generates tactics by finetuning an encoder-decoder Transformer, ByT5 [44], via supervised learning without RL or auxiliary data. Then it searches for proofs using best-first search. Our model's algorithmic novelty lies in the retrieval. | 2306.15626#15 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 16 | # 3 Referential Dialogue
To better understand the interesting abilities of our model, we demonstrate real users' communications in Figure 1 and Figure 2. As shown in the first demo of Figure 1, the user points to two deer and inquires, "What is the difference between this deer and another deer?" When Shikra answers, it not only mentions the differences but also outputs the coordinates of the differences. The subsequent examples in Figure 2 are alike. To our knowledge, there have been no unified models that can achieve
such functionality before. RD is a superset of numerous vision-language tasks. Shikra can perform most tasks like current MLLMs, including VQA, Image Captioning, and multimodal dialogue. Furthermore, it handles tasks that they cannot, like REC, REG, and PointQA. The model demonstrates proficiency in tasks not in the training set, such as identifying similarities between two indicated objects, or counting objects and providing their positions. We show more results in Appendix C. If you are interested in quantitative experiments, please refer to Section 6.
# 4 Chessboard Test for Current MLLM | 2306.15195#16 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 16 | Learning to Rank. As previously mentioned, it is insufficient for generative retrieval to only learn how to generate identifiers. Therefore, we develop a framework to enable generative retrieval to learn how to rank passages directly. To accomplish this, we continue training the autoregressive model AM using a passage rank loss.
To begin, we retrieve passages for all queries in the training set using the trained autoregressive language model AM after the learning-to-generate phase. For a given query q, we obtain a passage rank list P = {p1, · · · , pj, · · · , pn}, where n is the number of retrieved passages. Each passage pj is assigned a relevance score s(q, pj) via Eq. 3, which is calculated as the sum of the language model scores of a set of predicted identifiers. It is important to note that the passage rank list includes both positive passages that are relevant to the query and negative passages that are not.
A reliable retrieval system should assign a higher score to positive passages than to negative passages, which is the goal of the learning-to-rank paradigm. To achieve this objective in generative retrieval, we utilize a margin-based rank loss, which is formulated as follows: | 2306.15222#16 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
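The margin-based rank loss referenced at the end of the 2306.15222 chunk above (the formula itself is cut off by the chunk boundary) is of the standard pairwise hinge form. The sketch below shows a generic version; the margin value and the way positive/negative passages are paired are assumptions, and the paper's exact formulation may differ.

```python
import torch

def margin_rank_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """A generic margin-based rank loss:
        L_rank = mean( max(0, margin - s(q, p+) + s(q, p-)) )
    computed over pairs of a positive passage p+ and a negative passage p-.
    The scores s(q, p) are the (differentiable) sums of identifier scores from Eq. (3).
    """
    return torch.clamp(margin - pos_scores + neg_scores, min=0).mean()

if __name__ == "__main__":
    pos = torch.tensor([2.5, 1.2], requires_grad=True)   # scores of positive passages
    neg = torch.tensor([1.9, 1.5], requires_grad=True)   # scores of negative passages
    loss = margin_rank_loss(pos, neg)
    loss.backward()   # gradients flow back into the language-model scores
    print("L_rank =", loss.item())
```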
2306.15595 | 16 | Theorem 2.1 (Interpolation bound). For attention score a(s) = Re[ Σ_{j=0}^{d/2−1} h_j e^{isθ_j} ], where θ_j = c^{−2j/d}, its interpolation value a(s) for s ∈ [s1, s2] is bounded as follows:
|a(s) − a_linear(s)| ≤ d (max_j |h_j|) (s − s1)(s2 − s) / (8 ln c)   (5)
where a_linear(s) is the linear interpolation of the two grid points a(s1) and a(s2) that are known to behave well, enforced by LLM pre-training:
a_linear(s) := (1 − λ(s)) a(s1) + λ(s) a(s2),   λ(s) := (s − s1)/(s2 − s1)   (6)
Please check Appendix A for the proof. Intuitively, in LLM pre-training, we know that the attention score a(s) behaves well on the integer grid points s1 and s2. Therefore, for any interpolation s ∈ [s1, s2], we have (s − s1)(s2 − s) ≤ 1/4. Noting that c = 10000, the bound becomes: | 2306.15595#16 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
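Theorem 2.1 in the 2306.15595 chunk above says the attention score stays close to the linear interpolation of its values at adjacent integer positions. The quick numerical check below illustrates this with random complex coefficients h_j (an assumption for the demo); the printed scale d·max_j|h_j| is only for orientation and is not the exact bound of Eqn. (5).

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 128, 10000.0
theta = c ** (-2.0 * np.arange(d // 2) / d)

# Random complex coefficients h_j standing in for query/key-dependent values.
h = rng.normal(size=d // 2) + 1j * rng.normal(size=d // 2)

def a(s):
    """Attention score a(s) = Re[ sum_j h_j e^{i s theta_j} ]."""
    return np.real(np.sum(h * np.exp(1j * np.asarray(s)[..., None] * theta), axis=-1))

s1, s2 = 100.0, 101.0                      # two adjacent integer grid points
s = np.linspace(s1, s2, 1001)
lam = (s - s1) / (s2 - s1)
a_linear = (1 - lam) * a(np.array([s1])) + lam * a(np.array([s2]))
deviation = np.abs(a(s) - a_linear)

print("max |a(s) - a_linear(s)| :", deviation.max())
print("scale d * max_j |h_j|    :", d * np.abs(h).max())
```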
2306.15626 | 16 | Premise Selection. Selecting useful premises is recognized as a key challenge in theorem proving [23, 24, 45, 46]. Machine learning methods for premise selection have also progressed from classical models [41, 47, 48], recurrent neural networks [24], graph neural networks [38], to Transformers [49, 50]. However, existing methods either tackle premise selection in isolation without theorem proving [24, 38, 48] or feed the premises to a symbolic prover [41, 47, 49]. To our knowledge, we are the first to augment a learning-based formal theorem prover with retrieved premises so that the prover can learn how to use them effectively. For example, it can decide whether to use an explicitly retrieved premise or an implicitly memorized one. | 2306.15626#16 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 17 | Can current MLLMs understand absolute spatial positions? Current MLLMs cannot directly output coordinates; thus, in this section, we designed a chessboard test, which simplifies object grounding into a part-choice task (a small sketch of the quadrant assignment follows this record). Specifically, we divide an image into a 2 × 2 chessboard. Next, we ask, "<image> Which part is <expr> in if the picture is divided equally into four 2 by 2 parts? Choose from: (A) Top-left (B) Top-right (C) Bottom-left (D) Bottom-right.", where <image> and <expr> denote the input image tokens and the class name. We construct test data from LVIS (Gupta et al., 2019), a perception detection dataset with over 1000 entry-level object categories. We choose objects that are completely within a certain part (i.e., ambiguous positions are not considered). In total, we select 600 images per part, resulting in 2,400 images across 945 categories. We employ LLaVA-13B (Liu et al., 2023a) for the chessboard test, but the results are unsatisfactory. We tried various instruction methods, and | 2306.15195#17 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
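Editorial illustration (not the authors' code) of the chessboard test described in the record above: assigning a ground-truth box to one of the four equal parts and building the multiple-choice question. The helper names quadrant_of_box and build_question are hypothetical.

def quadrant_of_box(box, img_w, img_h):
    """Return 'A'..'D' if the box lies completely inside one 2x2 part, else None."""
    x_min, y_min, x_max, y_max = box
    cx, cy = img_w / 2, img_h / 2
    if x_max <= cx and y_max <= cy:
        return "A"  # Top-left
    if x_min >= cx and y_max <= cy:
        return "B"  # Top-right
    if x_max <= cx and y_min >= cy:
        return "C"  # Bottom-left
    if x_min >= cx and y_min >= cy:
        return "D"  # Bottom-right
    return None     # ambiguous position, excluded from the test

def build_question(category):
    return (f"<image> Which part is {category} in if the picture is divided "
            "equally into four 2 by 2 parts? Choose from: (A) Top-left "
            "(B) Top-right (C) Bottom-left (D) Bottom-right.")

# Example: an object whose box sits fully in the top-right part of a 640x480 image.
print(quadrant_of_box((400, 50, 600, 200), 640, 480))  # -> 'B'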
2306.15222 | 17 | Lrank = max(0, s(q, pn) − s(q, pp) + m), (4) where pp and pn represent a positive and a negative passage in the list P, respectively, and m is the margin (a short code sketch of this loss follows this record). It is noted
(Table 1 data) Columns: Natural Questions hits@5, @20, @100; TriviaQA hits@5, @20, @100.
BM25         43.6   62.9   78.1    67.7   77.3   83.9
DPR          68.3   80.1   86.1    72.7   80.2   84.8
GAR          59.3   73.9   85.0    73.1   80.4   85.7
DSI-BART     28.3   47.3   65.5    -      -      -
SEAL-LM      40.5   60.2   73.1    39.6   57.5   80.1
SEAL-LM+FM   43.9   65.8   81.1    38.4   56.6   80.1
SEAL         61.3   76.2   86.3    66.8   77.6   84.6
MINDER       65.8   78.3   86.7    68.4   78.1   84.8
LTRGR        68.8†  80.3†  87.1†   70.2†  79.1†  85.1†
% improve    4.56%  2.55%  0.46%   2.63%  1.28%  0.35% | 2306.15222#17 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
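Editorial sketch (not the paper's code) of the margin rank loss in Eq. (4) from the record above. The function name margin_rank_loss and the toy scores are hypothetical; the margin value follows the implementation details quoted later in this excerpt, and the passage score is assumed to be the (differentiable) sum of identifier logits described in a later chunk.

import torch

def margin_rank_loss(score_pos, score_neg, margin=500.0):
    # Eq. (4): L_rank = max(0, s(q, p_neg) - s(q, p_pos) + margin).
    # Scores stay differentiable so gradients flow back into the
    # autoregressive model that produced the identifier logits.
    return torch.clamp(score_neg - score_pos + margin, min=0.0)

# Toy passage scores (hypothetical values).
s_pos = torch.tensor(1250.0, requires_grad=True)
s_neg = torch.tensor(900.0, requires_grad=True)
loss = margin_rank_loss(s_pos, s_neg)
print(loss)  # -> tensor(150., grad_fn=...)  since 900 - 1250 + 500 = 150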
2306.15595 | 17 | |a(s) − a_linear(s)| ≤ d/(32 ln c) · max_j |h_j| ≈ d · max_j |h_j| / 294.73 (7)
In comparison, Sec. 3.4.3 in RoPE (Su et al., 2021) yields an extrapolation bound (i.e., it works for all positional distances s):
|a(s)| ≤ (max_j |h_{j+1} − h_j|) Σ_{k=0}^{d/2−1} |A_{k+1}(s)| ≤ 2 (max_j |h_j|) Σ_{k=0}^{d/2−1} |A_{k+1}(s)| (8)
where A_k(s) := Σ_{j=0}^{k−1} e^{i s θ_j}. While there is no closed form for B(s) := Σ_{k=0}^{d/2−1} |A_{k+1}(s)|, numerically it is at least larger than d, and for many positional differences s, B(s) is much larger than d (check Appendix B for the plot). Therefore, the interpolation bound is at least 2 · 294.73 ≈ 600× smaller than the extrapolation bound, and thus the interpolated attention score is much more stable than the extrapolated one.
Notably, our method of rescaling position indices does not introduce extra weights or modify the model architecture in any way. This makes it attractive in practical applications, since most infrastructure and optimization for the original model can be reused after the extension (a minimal sketch of the rescaling follows this record). | 2306.15595#17 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
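Editorial sketch, not the authors' implementation, of the position index rescaling described in the record above: extended position indices are linearly scaled back into the trained range before computing RoPE angles. The function names (rope_angles, interpolated_positions) and the dimension/context values are illustrative.

import torch

def rope_angles(positions, dim=128, base=10000.0):
    # Standard RoPE angles theta_k = base^(-2k/dim), one per pair of dimensions.
    inv_freq = base ** (-torch.arange(0, dim, 2).float() / dim)
    return positions[:, None] * inv_freq[None, :]

def interpolated_positions(seq_len, trained_ctx=2048):
    # Position Interpolation: linearly down-scale indices so that an
    # extended window of length seq_len maps back into [0, trained_ctx).
    scale = min(1.0, trained_ctx / seq_len)
    return torch.arange(seq_len).float() * scale

# Extend a model trained with a 2048 context to 8192 positions:
pos = interpolated_positions(8192, trained_ctx=2048)
angles = rope_angles(pos)
print(pos[-1].item())   # 8191 * 0.25 = 2047.75, still inside the trained range
print(angles.shape)     # torch.Size([8192, 64])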
2306.15626 | 17 | Data and Tools for Theorem Proving. Tools for data extraction and interacting with proof assistants have been crucial drivers of learning-based theorem proving. Existing tools and datasets can be divided by proof assistants: Coq has GamePad [51], CoqGym [9], and PRISM [52]; Isabelle has IsarStep [53] and PISA [15]; HOL Light has HOList [54] and HoLStep [55], and Lean has LeanStep [16] and lean-gym [19]. MiniF2F [28] is the only cross-system dataset, with 488 theorems for evaluation. However, it does not have training theorems and is restricted to the domain of math olympiads.
Among available tools extracting data from proof assistants, LeanDojo is the only one that can extract premises for retrieval-augmented theorem proving. A few existing datasets also have premises [49, 54], but their data extraction tools are not public, making it difficult to construct new datasets. In addition, LeanDojo is the only tool that can interact with Lean robustly (Sec. 4) and can extract data from Lean 4. See Appendix A.3 for a detailed comparison between LeanDojo and alternatives. | 2306.15626#17 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 18 | (Liu et al., 2023a) for the chessboard test, but the results are unsatisfactory. We tried various instruction methods, and LLaVA only achieves an accuracy of 25.96%, which is comparable to random selection. This suggests that prior coarse-grained vision-language alignment pre-training may be inadequate for MLLMs to capture the exact spatial position of an image. We need to explore appropriate coordinate representations and finer-grained training data. | 2306.15195#18 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 18 | BM25 DPR(Karpukhin et al. 2020) GAR(Mao et al. 2021) DSI-BART(Tay et al. 2022) SEAL-LM(Bevilacqua et al. 2022) SEAL-LM+FM(Bevilacqua et al. 2022) SEAL(Bevilacqua et al. 2022) MINDER(Li et al. 2023b) LTRGR % improve
Table 1: Retrieval performance on NQ and TriviaQA. We use hits@5, @20, and @100 to evaluate the retrieval performance. Inapplicable results are marked by "-". The best results in each group are marked in bold, while the second-best ones are underlined. † denotes the best result in generative retrieval. % improve represents the relative improvement achieved by LTRGR over the previously best generative retrieval method.
Methods (Model Size) in Table 2: BM25 (-), SEAL (Bevilacqua et al. 2022; BART-Large), MINDER (Li et al. 2023b; BART-Large), NCI (Wang et al. 2022b; T5-Base), DSI (scaling up) (Pradeep et al. 2023; T5-Base), DSI (scaling up) (Pradeep et al. 2023; T5-Large), LTRGR (BART-Large), % improve (-). The columns report results on MSMARCO. | 2306.15222#18 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 18 | Fine-tuning. We can further ï¬ne-tune the interpolated model using the next token prediction task with interpolated position encodings on the extended context window size using a pre-training cor- pus such as the Pile (Gao et al., 2020). In the next section, we show that our ï¬ne-tuning process only needs tens to hundreds thousands of examples. We also ï¬nd that the result of the ï¬ne-tuning is not sensitive to the choice of examples. The reason may be that the model is only adapting to the new context window during the ï¬ne-tuning phase, starting from a good initialization, as opposed to acquiring new knowledge. | 2306.15595#18 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 18 | Mathematical Reasoning in Natural Language. We focus on proving theorems expressed in formal logic, whereas researchers have also produced a plethora of work on mathematical reasoning in natural language [56â63]. A particularly relevant task is autoformalization, translating natural language texts into formal theorems and proofs [29, 64â72].
Retrieval-Augmented Language Models. Our ReProver is the first retrieval-augmented language model for formal theorem proving, though similar architectures have been studied extensively in NLP [73â81]. In addition, there have been many retrieval-augmented methods for code generation [82â 88]. Most of them retrieve from a corpus not directly related to the current file, e.g., GitHub or Stack Overflow. In contrast, our retrieval corpus consists of premises accessible to the current file, which is determined by program analysis using LeanDojo. This is similar to what CoCoMIC [88] does for Python. However, their retrieval is based on heuristics, whereas ours is learned.
# 3 Background: Theorem Proving in Lean | 2306.15626#18 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 19 | # 5 Breeding Shikra
This section introduces the birth of Shikra, encompassing its structure design, position representation, training data construction, and training strategies.
# 5.1 Architecture
We selected the pre-trained ViT-L/14 of CLIP as the visual encoder and Vicuna-7/13B as our LLM.
We use one fully connected layer to map the ViT's 16 × 16 output embedding V ∈ R^{16×16×1024} to V′ ∈ R^{16×16×D} for modal alignment and the correct input dimension of the LLM (a minimal sketch of this alignment layer follows this record). D is 4,096 for Vicuna-7B and 5,120 for Vicuna-13B. Visual embeddings can be inserted anywhere in the input sequence. During training, both the fully connected layer and the entire language model are involved. We do not introduce any extra vocabulary or special encoder for encoding position information. We have not introduced additional pre-/post-detectors for points or bounding boxes. The model using Vicuna-7B is called Shikra-7B, and the other, using Vicuna-13B, is named Shikra-13B.
# 5.2 Numerical representation of position | 2306.15195#19 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
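Editorial sketch of the alignment layer described in the record above, assuming the CLIP ViT-L/14 16 × 16 patch grid with hidden size 1024 and the Vicuna-7B hidden size D = 4096 stated in the text; the class name AlignmentLayer is illustrative, not the authors' code.

import torch
import torch.nn as nn

class AlignmentLayer(nn.Module):
    """Single fully connected layer mapping 16x16x1024 ViT features to the LLM width D."""
    def __init__(self, vit_dim=1024, llm_dim=4096):
        super().__init__()
        self.proj = nn.Linear(vit_dim, llm_dim)

    def forward(self, vit_features):
        # vit_features: [batch, 16, 16, 1024] -> visual tokens [batch, 256, D]
        b, h, w, c = vit_features.shape
        return self.proj(vit_features.view(b, h * w, c))

align = AlignmentLayer()
visual_tokens = align(torch.randn(1, 16, 16, 1024))
print(visual_tokens.shape)  # torch.Size([1, 256, 4096]); insertable anywhere in the LLM input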
2306.15222 | 19 | Table 2 values present in this chunk (MSMARCO; method order as listed with the Table 2 caption):
Methods                      R@20   R@100
BM25                         47.5   66.2
SEAL                         35.3   57.2
MINDER                       53.5   78.7
NCI                          -      -
DSI (scaling up, T5-Base)    -      -
DSI (scaling up, T5-Large)   -      -
LTRGR                        64.5   85.2
(The M@10 column and the % improve row are not contained in this chunk.)
Table 2: Retrieval performance on the MSMARCO dataset. R and M denote Recall and MRR, respectively. "-" means the result is not reported in the published work. The best results in each group are marked in bold. % improve represents the relative improvement achieved by LTRGR over the previously best generative retrieval method.
that the gradients could be propagated to the autoregressive model AM via the language model score sip , which is the logits of the neural network.
In practice, we take two rank losses based on different sam- pling strategies for positive and negative passages. In Lrank1, the positive and negative passages are the ones with the high- est rank scores, respectively. In Lrank2, both the positive and negative passages are randomly sampled from the passage rank list. While the rank loss optimizes the autoregressive model toward passage ranking, the generation of identifiers is also crucial for successful passage ranking. Therefore, we also incorporate the generation loss into the learning-to-rank stage. The final loss is formulated as a multi-task format:
# Experiments | 2306.15222#19 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 19 | Other ways to reduce interpolation/extrapolation bound. From the expression of the interpola- tion (Eqn. 5) and extrapolation bound (Eqn. 8), a common term is maxj |hj|, which is the maximal magnitude of query/key products. If we enforce a regularization on |hj| during LLM training, it is possible that the catastrophic extrapolation error can be mitigated or even resolved. In fact, if we apply ridge regression with proper regularization to ï¬t a curve in Fig. 2, the magnitude of extrapo- lated a(s) when s > L can be comparable to that within [0, L]. To our knowledge, we are not aware of existing LLM pre-training techniques that leverage this regularization and will leave it for future work.
# 3 EXPERIMENTS
We show Position Interpolation can effectively extend the context window up to 32 times the original size, and such extension can be done with only several hundred training steps. We show the resulting models are strong LLMs with fully effective long context windows. We demonstrate their performance on a number of tasks including language modeling, passkey retrieval, and long document summarization. We also present benchmark results of the extended models on the original LLaMA evaluation benchmarks.
5
3.1 SETUP | 2306.15595#19 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 19 | # 3 Background: Theorem Proving in Lean
At a high level, Lean is a programming language that allows you to write not only conventional programs but also theorems and proofs. To that end, it provides two pieces of machinery: First, it provides a unified language for defining programs, mathematical objects, theorems, and proofs, based on functional programming with dependent types [89]. Second, it provides a tactic system for constructing machine-checkable proofs semi-automatically.
4 | 2306.15626#19 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 20 | # 5.2 Numerical representation of position
We represent positions using numerical values in natural language in a highly intuitive manner. We use [xmin, ymin, xmax, ymax] to denote a bounding box and [xcenter, ycenter] to denote a region's center point. x and y are normalized according to the size of the image, and we keep 3 decimal places for each number by default (a small formatting sketch follows this record). These coordinates can appear anywhere in the input and output sequence of the model. For example, User Question: "How many other clothes in the <image> are of the same color as the jacket [0.268, 0.372]?". Shikra reply: "The jacket [0.268, 0.372] is green. We can find a T-shirt [0.653, 0.532] and cropped pants [0.569, 0.101] with the same green color. So the answer is two." The square brackets that record coordinates appear naturally in sentences and can serve as any sentence component; they are tokenized like regular text, without any special treatment.
# Instruction data construction
We utilize two types of data to train Shikra: the reorganized public datasets, and the high-quality RD data built from Flickr30K Entities (Plummer et al., 2015) using GPT-4 (OpenAI, 2023). | 2306.15195#20 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
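Editorial sketch of the coordinate-to-text convention described in the record above (normalized values, 3 decimal places, placed directly in the prompt string). The helper names box_to_text/point_to_text and the example image size are hypothetical.

def box_to_text(box, img_w, img_h, decimals=3):
    """Normalize a pixel-space box to [0, 1] and render it as plain text."""
    x_min, y_min, x_max, y_max = box
    vals = [x_min / img_w, y_min / img_h, x_max / img_w, y_max / img_h]
    return "[" + ", ".join(f"{v:.{decimals}f}" for v in vals) + "]"

def point_to_text(point, img_w, img_h, decimals=3):
    x, y = point
    return f"[{x / img_w:.{decimals}f}, {y / img_h:.{decimals}f}]"

# A 640x480 image with an object box; the string is placed directly in the prompt.
box_str = box_to_text((128, 96, 320, 360), 640, 480)
print(box_str)  # -> [0.200, 0.200, 0.500, 0.750]
question = f"How many other clothes in the <image> are of the same color as the jacket {box_str}?"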
2306.15222 | 20 | # Experiments
Datasets We conducted experiments using the DPR (Karpukhin et al. 2020) setting on two widely-used open-domain QA datasets: NQ (Kwiatkowski et al. 2019) and TriviaQA (Joshi et al. 2017). In both datasets, the queries are natural language ques- tions and the passages are sourced from Wikipedia. Addi- tionally, we evaluated generative retrieval methods on the MSMARCO dataset (Nguyen et al. 2016), which is sourced from the Web search scenario where queries are web search queries and passages are from web pages. Importantly, we evaluated models on the full corpus set rather than a small sample, and we used widely-used metrics for these bench- marks.
L = Lrank1 + Lrank2 + λLgen, (5)
where λ is the weight to balance the rank losses and genera- tion loss.
We continue training the autoregressive model AM via Eq. 5. After training, AM can be used to retrieve passages as introduced in the learning-to-generate section. Therefore, our learning-to-rank framework does not add any additional burden to the original inference stage (a small sketch of Eq. 5 follows this record). | 2306.15222#20 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
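Editorial sketch of the multi-task loss in Eq. (5) from the record above, combining the two rank losses (hardest and randomly sampled positives/negatives) with the generation loss. Function and variable names are hypothetical; margin 500 and λ = 1000 follow the implementation details quoted later in this excerpt, and the toy scores are illustrative.

import torch

def ltrgr_loss(rank_scores, labels, gen_loss, margin=500.0, lam=1000.0):
    """L = L_rank1 + L_rank2 + lambda * L_gen (Eq. 5), sketched for one query.

    rank_scores: 1-D tensor of passage scores for the current rank list.
    labels:      1-D bool tensor, True where the passage is relevant.
    gen_loss:    the usual sequence-to-sequence loss on identifier generation.
    """
    pos, neg = rank_scores[labels], rank_scores[~labels]

    # L_rank1: positive and negative with the highest rank scores in the list.
    rank1 = torch.clamp(neg.max() - pos.max() + margin, min=0.0)

    # L_rank2: randomly sampled positive and negative passages.
    rank2 = torch.clamp(
        neg[torch.randint(len(neg), (1,))] - pos[torch.randint(len(pos), (1,))] + margin,
        min=0.0,
    ).squeeze()

    return rank1 + rank2 + lam * gen_loss

scores = torch.tensor([1250.0, 900.0, 1100.0, 700.0], requires_grad=True)
labels = torch.tensor([True, False, True, False])
print(ltrgr_loss(scores, labels, gen_loss=torch.tensor(2.3)))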
2306.15626 | 20 | 4
data/nat/lemmas.lean (math library):
theorem mod_self (n : nat) : n % n = 0 :=
begin
  rw [mod_eq_sub_mod (le_refl _), nat.sub_self, zero_mod]
end

data/nat/gcd.lean (imports data/nat/lemmas.lean):
def gcd : nat → nat → nat
| 0 y := y                                -- Case 1: x = 0
| (x + 1) y := gcd (y % (x + 1)) (x + 1)  -- Case 2: x > 0

theorem gcd_zero_left (x : nat) : gcd 0 x = x :=
begin
  simp [gcd]
end

theorem gcd_self (n : nat) : gcd n n = n :=
begin
  cases n, { unfold gcd }, unfold gcd, rewrite mod_self, apply gcd_zero_left
end
Figure 2: Definition of greatest common divisor (gcd) in Lean and two related theorems. The proof of gcd_self (between âbeginâ and âendâ) relies on a premise mod_self imported from another file in the math library. Lean can run this proof to produce the proof tree in Fig.1 (Top left). | 2306.15626#20 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 21 | # 5.3.1 Reorganization of public data
We collect training data from public VQA and Image Captioning datasets, and from several datasets already containing positional annotations, such as RefCOCO (Kazemzadeh et al., 2014) for REC/REG, Visual Genome (Krishna et al., 2017) for grounding captions, and Visual-7W (Mani et al., 2020) for PointQA. We also define new task forms, such as Spotting Captioning, which requires the model to describe the image and spot the mentioned objects or regions using points or boxes. We use Flickr30K Entities for this task. All the data used and the corresponding tasks can be found in Appendix A. Note that all the data used were included in the reported model results, unless stated otherwise for specific comparative experiments. Additionally, it should be mentioned that we have excluded images present in the test and validation data from the training data to prevent potential data leakage, despite their distinction in terms of image-text pairs.
# 5.3.2 Generated data | 2306.15195#21 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
2306.15222 | 21 | Baselines We compared LTRGR with several generative retrieval including DSI (Tay et al. 2022), DSI (scal- methods, ing up) (Pradeep et al. 2023), NCI (Wang et al. 2022b), SEAL (Bevilacqua et al. 2022), and MINDER (Li et al. 2023b). Additionally, we included the term-based method BM25, as well as DPR (Karpukhin et al. 2020) and GAR (Mao et al. 2021). All baseline results were obtained from their respective papers.
Implementation Details To ensure a fair comparison with previous work, we utilized BART-large as our backbone. In practice, we loaded the trained autoregressive model, MINDER (Li et al. 2023b), and continued training it using our proposed learning-to- rank framework. In the learning to rank phase, we used the Adam optimizer with a learning rate of 1e-5, trained with a batch size of 4, and conducted training for three epochs. For each query in the training set, we retrieved the top 200 passages and selected positive and negative passages from them. During training, we kept 40 predicted identifiers for each passage and removed any exceeding ones. The margin m and weight λ are set as 500 and 1000, respectively. Our main experiments were conducted on a single NVIDIA A100 GPU with 80 GB of memory. | 2306.15222#21 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 21 | Training Procedure. We fine-tune all model variants using the next token prediction objective. We use AdamW (Loshchilov & Hutter, 2019) with β1 = 0.9 and β2 = 0.95. We use a linear learning rate warmup of 20 steps starting from 10% of the maximum learning rate. For 7B and 13B models, we set the learning rate to 2 × 10^-5 and for 33B and 65B models we set the learning rate to 10^-5. We set the weight decay to zero. For extending 7B, 13B and 33B models to the 8192 context window size, we use 32 A100 GPUs and 64 global batch size. For all other cases we use 128 A100 GPUs and 128 global batch size. We note that the main need for using more GPUs is memory limitation during fine-tuning, and it is possible to use fewer GPUs in certain cases. We train all models using PyTorch (Paszke et al., 2019) with Fully Sharded Data Parallel (Zhao et al., 2023) and Flash Attention (Dao et al., 2022) (an optimizer configuration sketch follows this record). | 2306.15595#21 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserve quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain its original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
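Editorial sketch of the optimizer setup stated in the record above, assuming PyTorch's AdamW and LambdaLR; only the linear warmup behavior is modeled, and the helper name make_optimizer is illustrative.

import torch

def make_optimizer(model, model_size="7B"):
    # Hyperparameters stated in the training procedure above.
    lr = 2e-5 if model_size in ("7B", "13B") else 1e-5
    opt = torch.optim.AdamW(model.parameters(), lr=lr, betas=(0.9, 0.95), weight_decay=0.0)

    # Linear warmup over 20 steps, starting from 10% of the maximum learning rate.
    def warmup(step):
        return 0.1 + 0.9 * min(step, 20) / 20

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=warmup)
    return opt, sched

opt, sched = make_optimizer(torch.nn.Linear(8, 8), model_size="7B")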
2306.15626 | 21 | We use a simple example in Fig. 2 to illustrate how theorems are formalized and proved in Lean. Here we want to formalize the greatest common divisor (gcd) of two natural numbers. First, we define gcd as a recursive function, taking two natural numbers as parameters and returning their gcd via the Euclidean algorithm. Then, we state a lemma named gcd_zero_left that ∀x ∈ N, gcd 0 x = x, which can be proved simply by the definition of gcd. Finally, we state our main theorem gcd_self that ∀n ∈ N, gcd n n = n, followed by its proof consisting of five tactics. In theorem proving, we are only concerned with generating the proof, i.e., the part between "begin" and "end"; everything before "begin" is known, including other imported files. The syntax of tactics is quite expressive. They can take arguments and can be combined into compound tactics. You can think of tactics as programs in a domain-specific language (DSL). Users can extend the DSL by defining new tactics. This discrete, combinatorial, and unbounded action space makes theorem proving challenging for machine learning. | 2306.15626#21 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features a challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
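For readers unfamiliar with Lean, the example described in the record above has roughly the shape below. This is a hedged reconstruction from the prose, not a verbatim copy of the paper's Fig. 2: the termination `have` clause and the exact five-tactic script follow common Lean 3 conventions and should be treated as assumptions.

```lean
-- Hedged reconstruction (Lean 3 style) of the gcd example described above.
def gcd : nat → nat → nat
| 0       y := y
| (x + 1) y :=
    have y % (x + 1) < x + 1, from nat.mod_lt _ (nat.succ_pos _),
    gcd (y % (x + 1)) (x + 1)

@[simp] lemma gcd_zero_left (x : ℕ) : gcd 0 x = x :=
by simp [gcd]          -- follows from the first defining equation

theorem gcd_self (n : ℕ) : gcd n n = n :=
begin
  cases n,             -- split into n = 0 and n = k + 1
  { unfold gcd },      -- base case: gcd 0 0 = 0
  unfold gcd,          -- unfold one step of the recursion
  rw nat.mod_self,     -- (k + 1) % (k + 1) = 0
  apply gcd_zero_left, -- gcd 0 (k + 1) = k + 1
end
```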
2306.15195 | 22 | # 5.3.2 Generated data
The existing publicly available data is not sufficient to train an MLLM skilled in RD, as it lacks CoT data with positional annotations, natural communication data with positional annotations, etc. We resort to GPT-4 to obtain high-quality RD annotations from Flickr30K Entities. Flickr30K Entities has five descriptions for each image, and the mentioned objects appearing in the image are labeled with bounding boxes. Although the GPT-4 API currently cannot see images, we explained the format of the bounding boxes to GPT-4 and asked it to understand the image through these five sentences and boxes. Next, we asked GPT-4 to design Q&A pairs; each question must be answerable from the known information. In this way, we generated 5,922 QA pairs, where coordinate information may appear in both questions and answers. The dataset will continue to expand in the future. You can refer to it as Shikra-RD. (A sketch of the prompt construction follows this record.)
# 5.3.3 Task prompts | 2306.15195#22 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
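The prompt-construction step described in the record above amounts to serializing the five captions and the entity boxes into text. The sketch below is a hedged illustration: the caption/box values, the helper name build_rd_prompt, and the exact prompt wording are assumptions; only the overall idea (convey the image to GPT-4 via captions plus [x0, y0, x1, y1] boxes and ask for grounded Q&A pairs) comes from the text.

```python
# Hedged sketch: conveying an image to a text-only model through its five
# Flickr30K Entities captions and entity bounding boxes, then asking for
# grounded Q&A pairs. Values and wording below are illustrative assumptions.

def build_rd_prompt(captions, boxes):
    """captions: list[str]; boxes: dict mapping entity phrase -> [x0, y0, x1, y1]."""
    caption_lines = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(captions))
    box_lines = "\n".join(f"- {name}: {coords}" for name, coords in boxes.items())
    return (
        "You cannot see the image directly. It is described by five captions and "
        "by bounding boxes in [x0, y0, x1, y1] format (top-left and bottom-right corners).\n\n"
        f"Captions:\n{caption_lines}\n\n"
        f"Boxes:\n{box_lines}\n\n"
        "Design question-answer pairs about this image. Every question must be "
        "answerable from the information above, and coordinates may appear in "
        "both questions and answers."
    )

example = build_rd_prompt(
    captions=["A man in a red jacket walks a dog in the park."] * 5,
    boxes={"man in a red jacket": [120, 40, 260, 330], "dog": [270, 210, 380, 330]},
)
print(example)
```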
2306.15222 | 22 | Retrieval Results on QA. Table 1 summarizes the retrieval performance on NQ and TriviaQA. Analyzing the results, we made the following findings:
(1) Among the generative retrieval methods, we found that SEAL and MINDER, which use semantic identifiers, outperform DSI, which relies on numeric identifiers. This is because numeric identifiers lack semantic information, and DSI requires the model to memorize the mapping from passages to their numeric IDs. As a result, DSI struggles with datasets like NQ and TriviaQA, which contain over 20 million passages. MINDER surpasses SEAL by using multi-view identifiers to represent a passage more comprehensively. Despite MINDER's superiority, LTRGR still outperforms it. Specifically, LTRGR improves hits@5 by 3.0 and 1.8 points on NQ and TriviaQA, respectively. LTRGR is based on MINDER and only requires an additional learning-to-rank phase, which verifies the effectiveness of learning to rank in generative retrieval. (A minimal hits@k sketch follows this record.) | 2306.15222#22 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
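The hits@k metric referenced above is the standard top-k retrieval accuracy used on NQ and TriviaQA: the fraction of questions for which at least one of the top-k retrieved passages contains a gold answer. A minimal sketch follows; the case-insensitive substring match is a simplification of the usual answer-normalization rules.

```python
# Hedged sketch of hits@k (top-k retrieval accuracy); substring matching is a
# simplification of the usual answer normalization.
def hits_at_k(ranked_passages, answers, k):
    """1 if any of the top-k passages contains any gold answer string, else 0."""
    return int(any(a.lower() in p.lower() for p in ranked_passages[:k] for a in answers))

def average_hits_at_k(all_ranked, all_answers, k):
    """Mean hits@k over a set of questions, reported as a percentage."""
    scores = [hits_at_k(r, a, k) for r, a in zip(all_ranked, all_answers)]
    return 100.0 * sum(scores) / len(scores)

# Toy usage with two questions
ranked = [["Paris is the capital of France.", "Berlin is in Germany."],
          ["The Nile flows through Egypt."]]
answers = [["Paris"], ["Amazon"]]
print(average_hits_at_k(ranked, answers, k=1))  # 50.0
```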
2306.15595 | 22 | If not specified otherwise, for the Position Interpolation method, we fine-tune the models for 1000 steps. For the direct fine-tuning method, we use 10000 steps. We primarily fine-tune using the Pile training dataset (Gao et al., 2020). In Section 3.4 we also compared fine-tuning performance on the RedPajama dataset (Computer, 2023).
3.2 LONG SEQUENCE LANGUAGE MODELING
We evaluate the long sequence language modeling performance of our extended models and baselines on two datasets: the PG-19 book corpus (Rae et al., 2020) and the cleaned ArXiv Math proof-pile dataset (Azerbayev et al., 2022). | 2306.15595#22 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain their original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
2306.15626 | 22 | Another challenge is premise selection. Premises are existing lemmas or definitions useful for proving a theorem. They are used as arguments in tactics. For example, in Fig. 2 and Fig. 1 (Top left), the tactic “rewrite mod_self” rewrites the goal using the premise mod_self, which is defined in another file imported by the current file. Proofs cannot use premises that haven't been defined. For example, gcd_self cannot be used to prove gcd_zero_left. In addition, they cannot use premises not imported to the current file. Still, premises come from a large math library containing hundreds of thousands of existing definitions and theorems, making it hard, for humans and machines alike, to select the right premises when generating a tactic. This is a key bottleneck in theorem proving and is what we aim to address through retrieval-augmented LLMs. | 2306.15626#22 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
# 4 LeanDojo: Toolkit and Benchmark
LeanDojo serves two essential needs of learning-based theorem proving in Lean. First, it extracts training data from Lean, and we use this capability to construct a challenging theorem proving benchmark. Second, it enables the model to interact with Lean programmatically. | 2306.15626#22 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features a challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
2306.15195 | 23 | # 5.3.3 Task prompts
We construct varied task templates for different tasks. For instance, for the spotting caption task, we can use “Can you provide a description of the image <image> and include the coordinates [x0,y0,x1,y1] for each mentioned object?”, where <image> represents the visual tokens. For PointQA, we can use “Referring to point <objs> in image <image>, give a direct answer to ‘<question>’”, where <objs> denotes the coordinates of the region and <question> represents the question from the source dataset. For REC, we can use “In <image>, I need the bounding box coordinates of <expr>.”, where <expr> is the expression. More templates for different tasks can be found in the Appendix. (A sketch of filling these templates follows this record.)
It should be noted that we cannot use a single invariant task template for a specific type of task; otherwise, the model cannot flexibly accept user instructions. To solve this problem, we first describe the purpose of specific tasks, write a sample template, and then | 2306.15195#23 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic | In human conversations, individuals can indicate relevant regions within a
scene while addressing others. In turn, the other person can then respond by
referring to specific regions if necessary. This natural referential ability in
dialogue remains absent in current Multimodal Large Language Models (MLLMs). To
fill this gap, this paper proposes an MLLM called Shikra, which can handle
spatial coordinate inputs and outputs in natural language. Its architecture
consists of a vision encoder, an alignment layer, and a LLM. It is designed to
be straightforward and simple, without the need for extra vocabularies,
position encoder, pre-/post-detection modules, or external plug-in models. All
inputs and outputs are in natural language form. Referential dialogue is a
superset of various vision-language (VL) tasks. Shikra can naturally handle
location-related tasks like REC and PointQA, as well as conventional VL tasks
such as Image Captioning and VQA. Experimental results showcase Shikra's
promising performance. Furthermore, it enables numerous exciting applications,
like providing mentioned objects' coordinates in chains of thoughts and
comparing user-pointed regions similarities. Our code, model and dataset are
accessed at https://github.com/shikras/shikra. | http://arxiv.org/pdf/2306.15195 | Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao | cs.CV | null | null | cs.CV | 20230627 | 20230703 | [
{
"id": "2304.02643"
},
{
"id": "2109.10852"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2305.10355"
},
{
"id": "1504.00325"
},
{
"id": "2303.03378"
},
{
"id": "2305.16355"
},
{
"id": "2305.04790"
},
{
"id": "2303.05499"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.15021"
},
{
"id": "2305.19108"
},
{
"id": "2301.12597"
},
{
"id": "2305.11175"
},
{
"id": "2304.10592"
},
{
"id": "2011.13681"
},
{
"id": "2305.01278"
},
{
"id": "2306.09265"
},
{
"id": "2302.00923"
},
{
"id": "2301.13823"
},
{
"id": "2305.03726"
},
{
"id": "2206.08916"
},
{
"id": "2305.04160"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
}
] |
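The placeholder templates quoted above reduce to simple string substitution. The sketch below is a hedged illustration: the template strings are taken from the record, while the helper name fill_template, the point format, and the example values are assumptions.

```python
# Hedged sketch of filling task templates like the ones quoted above.
# Template strings come from the record; helper names and values are illustrative.

REC_TEMPLATE = "In <image>, I need the bounding box coordinates of <expr>."
POINTQA_TEMPLATE = (
    "Referring to point <objs> in image <image>, give a direct answer to '<question>'"
)

def fill_template(template: str, **slots: str) -> str:
    """Replace <slot> placeholders (e.g., <expr>, <question>) with provided values."""
    out = template
    for name, value in slots.items():
        out = out.replace(f"<{name}>", value)
    return out

print(fill_template(
    REC_TEMPLATE,
    image="<image>",                 # visual tokens are kept as a special placeholder
    expr="the man in the red jacket",
))

print(fill_template(
    POINTQA_TEMPLATE,
    objs="[0.32, 0.58]",             # assumed point format, for illustration only
    image="<image>",
    question="What is he holding?",
))
```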
2306.15222 | 23 | (2) On the NQ dataset, LTRGR outperforms the classical DPR and achieves the best performance across all metrics, including hits@5, 20, and 100. This is particularly noteworthy as it marks the first time that generative retrieval has surpassed DPR on all metrics under the full-corpus setting. Turning to TriviaQA, our results show that LTRGR outperforms DPR in hits@100 but falls behind in hits@5 and hits@20. The reason is that MINDER, upon which LTRGR is based, performs significantly worse than DPR on TriviaQA. It's worth noting that generative retrieval methods rely on identifiers and cannot "see" the content of a passage, which may explain the performance gap between MINDER and DPR on TriviaQA. Additionally, generative retrieval methods suffer from error accumulation due to their autoregressive generation.
Retrieval Results on Web Search. To further investigate generative retrieval, we conducted experiments on the MSMARCO dataset and present our findings in Table 2. It's worth noting that we labeled the model sizes to ensure a fair comparison, as larger models typically perform better.
Our analysis of the results in Table 2 revealed several key findings. Firstly, we observed that generative retrieval | 2306.15222#23 | Learning to Rank in Generative Retrieval | Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR. | http://arxiv.org/pdf/2306.15222 | Yongqi Li, Nan Yang, Liang Wang, Furu Wei, Wenjie Li | cs.CL, cs.AI, cs.IR | AAAI 2024 | null | cs.CL | 20230627 | 20231216 | [
{
"id": "2207.02578"
},
{
"id": "2202.06991"
},
{
"id": "2305.11841"
},
{
"id": "2305.11161"
},
{
"id": "2305.16675"
},
{
"id": "2204.10628"
},
{
"id": "1901.04085"
}
] |
2306.15595 | 23 | We use the test splits of PG19 (Rae et al., 2020) and proof-pile (Azerbayev et al., 2022). For PG19, we use the whole test split consisting of 100 documents. For the proof-pile dataset, we use a random subsample of 128 documents with at least 32768 SentencePiece (Kudo & Richardson, 2018) tokens and truncate each test document to its first 32768 tokens. We evaluate perplexity at various context window sizes using a sliding window approach following Press et al. (2022) with stride S = 256. (A sliding-window perplexity sketch follows this record.)
In Table 1 and Table 2, we report the perplexity results for our models and baselines on the datasets. From the results, we found that models extended with our method enjoy a significantly improved perplexity from longer context window sizes. By increasing the context window size from 2048 to 16384, we observed -0.28 and -0.5 reductions of perplexity for extending LLaMA 7B models on both datasets, -0.27 and -0.48 reductions for extending LLaMA 13B models, and -0.14 and -0.42 reductions for extending LLaMA 33B models. For LLaMA 65B models, we observed -0.12 and -0.3 reductions of perplexity by extending to the 8192 context window size. | 2306.15595#23 | Extending Context Window of Large Language Models via Positional Interpolation | We present Position Interpolation (PI) that extends the context window sizes
of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal
fine-tuning (within 1000 steps), while demonstrating strong empirical results
on various tasks that require long context, including passkey retrieval,
language modeling, and long document summarization from LLaMA 7B to 65B.
Meanwhile, the extended model by Position Interpolation preserves quality
relatively well on tasks within its original context window. To achieve this
goal, Position Interpolation linearly down-scales the input position indices to
match the original context window size, rather than extrapolating beyond the
trained context length which may lead to catastrophically high attention scores
that completely ruin the self-attention mechanism. Our theoretical study shows
that the upper bound of interpolation is at least $\sim 600 \times$ smaller
than that of extrapolation, further demonstrating its stability. Models
extended via Position Interpolation retain their original architecture and can
reuse most pre-existing optimization and infrastructure. | http://arxiv.org/pdf/2306.15595 | Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian | cs.CL, cs.AI, cs.LG | Fix template issues | null | cs.CL | 20230627 | 20230628 | [
{
"id": "2101.00027"
},
{
"id": "2106.09685"
},
{
"id": "2305.16300"
}
] |
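The sliding-window evaluation with a fixed stride referenced above is commonly implemented as follows. This is a hedged sketch in the style of the Hugging Face perplexity guide, not the authors' evaluation code; the model name and window length are placeholders.

```python
# Hedged sketch: sliding-window perplexity with a fixed stride (here 256).
# Model name and max_length are placeholders, not the extended LLaMA models.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder causal LM
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(model_name).to(device).eval()
tokenizer = AutoTokenizer.from_pretrained(model_name)

def sliding_window_perplexity(text: str, max_length: int = 1024, stride: int = 256) -> float:
    input_ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    seq_len = input_ids.size(1)
    nll_sum, n_tokens, prev_end = 0.0, 0, 0
    for begin in range(0, seq_len, stride):
        end = min(begin + max_length, seq_len)
        target_len = end - prev_end          # score only the tokens new to this window
        ids = input_ids[:, begin:end]
        labels = ids.clone()
        labels[:, :-target_len] = -100       # ignore positions already scored
        with torch.no_grad():
            loss = model(ids, labels=labels).loss
        nll_sum += loss.item() * target_len
        n_tokens += target_len
        prev_end = end
        if end == seq_len:
            break
    return math.exp(nll_sum / n_tokens)
```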
2306.15626 | 23 | Data Extraction. Lean repos (e.g., mathlib or lean-liquid) contain source code of human-written theorems/proofs. However, the raw code is unsuitable for training the prover. It lacks runtime information that humans can access when using Lean, such as intermediate states between proof steps. Therefore, LeanDojo extracts the following information not directly visible in the code:
³The process is similar in many other proof assistants, though they may have different logical foundations.
⢠File dependencies and abstract syntax trees (ASTs): LeanDojo processes the repo to produce a directed acyclic graph whose nodes are files and edges are import relations between files. In addition, LeanDojo produces the AST of each file. File dependencies and ASTs are useful for program analysis, e.g., collecting theorems defined in a file or premises accessible to a theorem.
⢠States and tactics: LeanDojo extracts all tactics in proofs. For each tactic, it also extracts the states before/after the tactic, which allows us to reconstruct the proof tree in Fig. 1 (Top left). | 2306.15626#23 | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | Large language models (LLMs) have shown promise in proving formal theorems
using proof assistants such as Lean. However, existing methods are difficult to
reproduce or build on, due to private code, data, and large compute
requirements. This has created substantial barriers to research on machine
learning methods for theorem proving. This paper removes these barriers by
introducing LeanDojo: an open-source Lean playground consisting of toolkits,
data, models, and benchmarks. LeanDojo extracts data from Lean and enables
interaction with the proof environment programmatically. It contains
fine-grained annotations of premises in proofs, providing valuable data for
premise selection: a key bottleneck in theorem proving. Using this data, we
develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented
with retrieval for selecting premises from a vast math library. It is
inexpensive and needs only one GPU week of training. Our retriever leverages
LeanDojo's program analysis capability to identify accessible premises and hard
negative examples, which makes retrieval much more effective. Furthermore, we
construct a new benchmark consisting of 98,734 theorems and proofs extracted
from Lean's math library. It features a challenging data split requiring the
prover to generalize to theorems relying on novel premises that are never used
in training. We use this benchmark for training and evaluation, and
experimental results demonstrate the effectiveness of ReProver over
non-retrieval baselines and GPT-4. We thus provide the first set of open-source
LLM-based theorem provers without any proprietary datasets and release it under
a permissive MIT license to facilitate further research. | http://arxiv.org/pdf/2306.15626 | Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan Prenger, Anima Anandkumar | cs.LG, cs.AI, cs.LO, stat.ML | Accepted to NeurIPS 2023 (Datasets and Benchmarks Track) as an oral
presentation. Data, code, and models available at https://leandojo.org/ | null | cs.LG | 20230627 | 20231027 | [
{
"id": "2302.13971"
},
{
"id": "2302.12433"
},
{
"id": "2302.04761"
},
{
"id": "2303.12570"
},
{
"id": "2303.04488"
},
{
"id": "2205.15231"
},
{
"id": "1505.04324"
},
{
"id": "2305.10601"
},
{
"id": "2303.17568"
},
{
"id": "2206.01962"
},
{
"id": "2107.03374"
},
{
"id": "2009.03393"
},
{
"id": "2303.08774"
},
{
"id": "2301.02195"
},
{
"id": "2203.13474"
},
{
"id": "2212.10007"
},
{
"id": "2305.07766"
},
{
"id": "2208.03299"
},
{
"id": "2303.04910"
},
{
"id": "2305.06161"
},
{
"id": "2305.11841"
},
{
"id": "2206.12839"
},
{
"id": "1606.01540"
},
{
"id": "2305.16366"
},
{
"id": "2212.10535"
},
{
"id": "2303.04864"
},
{
"id": "1701.06972"
},
{
"id": "2304.10486"
},
{
"id": "2305.07185"
},
{
"id": "1905.10501"
}
] |
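The file-dependency graph described in the record above can be represented with ordinary graph tooling. Below is a hedged sketch: the file names and the helper accessible_files are illustrative assumptions; it only shows the idea of a DAG of import relations and of computing which files a given file may draw premises from.

```python
# Hedged sketch: a DAG of import relations between files, plus the transitive
# closure of imports (the files a theorem may draw premises from).
# File names are illustrative placeholders.
from graphlib import TopologicalSorter

# file -> set of files it imports directly
imports = {
    "gcd.lean": {"nat/basic.lean"},
    "nat/basic.lean": {"core/init.lean"},
    "core/init.lean": set(),
}

# Order files so that every file appears after all of its imports.
order = list(TopologicalSorter(imports).static_order())
print(order)  # ['core/init.lean', 'nat/basic.lean', 'gcd.lean']

def accessible_files(target: str) -> set:
    """All files transitively imported by `target`."""
    seen, stack = set(), [target]
    while stack:
        current = stack.pop()
        for dep in imports.get(current, ()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(accessible_files("gcd.lean"))  # {'nat/basic.lean', 'core/init.lean'}
```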