Commit e297747 (verified) · Parent: 491f6df
Committed by Muennighoff

Scheduled Commit
data/clustering_individual-94033b9b-bac2-448a-b329-27772f2eb5f7.jsonl CHANGED
@@ -1,2 +1,4 @@
  {"tstamp": 1728443750.5336, "task_type": "clustering", "type": "chat", "model": "BAAI/bge-large-en-v1.5", "gen_params": {}, "start": 1728443742.4847, "finish": 1728443750.5336, "ip": "", "conv_id": "f48515bb9de742128ab7b7c7d29cf8ff", "model_name": "BAAI/bge-large-en-v1.5", "prompt": ["hello"], "ncluster": 1, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
  {"tstamp": 1728443750.5336, "task_type": "clustering", "type": "chat", "model": "Salesforce/SFR-Embedding-2_R", "gen_params": {}, "start": 1728443742.4847, "finish": 1728443750.5336, "ip": "", "conv_id": "d1323707518642c0a969b00025447515", "model_name": "Salesforce/SFR-Embedding-2_R", "prompt": ["hello"], "ncluster": 1, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1728463878.1568, "task_type": "clustering", "type": "chat", "model": "jinaai/jina-embeddings-v2-base-en", "gen_params": {}, "start": 1728463878.106, "finish": 1728463878.1568, "ip": "", "conv_id": "19bce6ee0b784517bd11745d0fe27601", "model_name": "jinaai/jina-embeddings-v2-base-en", "prompt": ["AI Development"], "ncluster": 1, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
+ {"tstamp": 1728463878.1568, "task_type": "clustering", "type": "chat", "model": "voyage-multilingual-2", "gen_params": {}, "start": 1728463878.106, "finish": 1728463878.1568, "ip": "", "conv_id": "f911ae1105e340bcab425aa5cc3d1447", "model_name": "voyage-multilingual-2", "prompt": ["AI Development"], "ncluster": 1, "output": "", "ndim": "3D (press for 2D)", "dim_method": "PCA", "clustering_method": "KMeans"}
data/retrieval_battle-94033b9b-bac2-448a-b329-27772f2eb5f7.jsonl CHANGED
@@ -1,3 +1,5 @@
  {"tstamp": 1728459702.6534, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "7d775b2b7a2a4c499974b2deafc5e91c", "0_model_name": "jinaai/jina-embeddings-v2-base-en", "0_prompt": "President of United States", "0_output": [["President of United States", "President of the United States\n\nPresident's Guest House, located next to the Eisenhower Executive Office Building at the White House Complex and Lafayette Park, serves as the president's official guest house and as a secondary residence for the president if needed. Four interconnected, 19th-century houses—Blair House, Lee House, and 700 and 704 Jackson Place—with a combined floor space exceeding comprise the property.\nTravel\nThe primary means of long-distance air travel for the president is one of two identical Boeing VC-25 aircraft, which are extensively modified Boeing 747 airliners and are referred to as Air Force One while the president is on board (although any U.S. Air Force aircraft the president is aboard is designated as \"Air Force One\" for the duration of the flight). In-country trips are typically handled with just one of the two planes, while overseas trips are handled with both, one primary and one backup. The president also has access to smaller Air Force aircraft, most notably the Boeing C-32, which are used when the president must travel to airports that cannot support a jumbo jet. Any civilian aircraft the president is aboard is designated Executive One for the flight.\nFor short-distance air travel, the president has access to a fleet of U.S. Marine Corps helicopters of varying models, designated Marine One when the president is aboard any particular one in the fleet. Flights are typically handled with as many as five helicopters all flying together and frequently swapping positions as to disguise which helicopter the president is actually aboard to any would-be threats."]], "0_corpus": "wikipedia", "1_conv_id": "04b7b682bb7449adaa37ccc82578f948", "1_model_name": "BM25", "1_prompt": "President of United States", "1_output": [["President of United States", "Lists of fictional presidents of the United States\n\nLists of fictional presidents of the United States are alphabetical lists of people who have been portrayed in fiction as the president of the United States. Media include novels and short stories, comics, plays, movies and television shows. 
The roles include fictional presidents, and real historical figures who did not in fact become president, typically in works of alternate history or comedy.\nNamed fictional presidents\nList of fictional presidents of the United States (A–B)\nList of fictional presidents of the United States (C–D)\nList of fictional presidents of the United States (E–F)\nList of fictional presidents of the United States (G–H)\nList of fictional presidents of the United States (I–J)\nList of fictional presidents of the United States (K–M)\nList of fictional presidents of the United States (N–R)\nList of fictional presidents of the United States (S–T)\nList of fictional presidents of the United States (U–Z)\nHistorical figures\nList of fictional United States presidencies of historical figures (A–B)\nList of fictional United States presidencies of historical figures (C–D)\nList of fictional United States presidencies of historical figures (E–G)\nList of fictional United States presidencies of historical figures (H–J)\nList of fictional United States presidencies of historical figures (K–L)\nList of fictional United States presidencies of historical figures (M–O)\nList of fictional United States presidencies of historical figures (P–R)\nList of fictional United States presidencies of historical figures (S–U)\nList of fictional United States presidencies of historical figures (V–Z)"]], "1_corpus": "wikipedia"}
  {"tstamp": 1728459795.0576, "task_type": "retrieval", "type": "leftvote", "models": ["", ""], "ip": "", "0_conv_id": "7b67656990f3434ebe8b091da11795b8", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "text embedding models", "0_output": [["text embedding models", "Latent space\n\nWord2Vec: Word2Vec is a popular embedding model used in natural language processing (NLP). It learns word embeddings by training a neural network on a large corpus of text. Word2Vec captures semantic and syntactic relationships between words, allowing for meaningful computations like word analogies.\nGloVe: GloVe (Global Vectors for Word Representation) is another widely used embedding model for NLP. It combines global statistical information from a corpus with local context information to learn word embeddings. GloVe embeddings are known for capturing both semantic and relational similarities between words.\nSiamese Networks: Siamese networks are a type of neural network architecture commonly used for similarity-based embedding. They consist of two identical subnetworks that process two input samples and produce their respective embeddings. Siamese networks are often used for tasks like image similarity, recommendation systems, and face recognition.\nVariational Autoencoders (VAEs): VAEs are generative models that simultaneously learn to encode and decode data. The latent space in VAEs acts as an embedding space. By training VAEs on high-dimensional data, such as images or audio, the model learns to encode the data into a compact latent representation. VAEs are known for their ability to generate new data samples from the learned latent space.\nMultimodality\nMultimodality refers to the integration and analysis of multiple modes or types of data within a single model or framework. Embedding multimodal data involves capturing relationships and interactions between different data types, such as images, text, audio, and structured data."]], "0_corpus": "wikipedia", "1_conv_id": "3d1260384edc42c8a2dcf2d78c0976b1", "1_model_name": "text-embedding-004", "1_prompt": "text embedding models", "1_output": [["text embedding models", "Word embedding\n\nThe notion of a semantic space with lexical items (words or multi-word terms) represented as vectors or embeddings is based on the computational challenges of capturing distributional characteristics and using them for practical application to measure similarity between words, phrases, or entire documents. The first generation of semantic space models is the vector space model for information retrieval. Such vector space models for words and their distributional data implemented in their simplest form results in a very sparse vector space of high dimensionality (cf. curse of dimensionality). Reducing the number of dimensions using linear algebraic methods such as singular value decomposition then led to the introduction of latent semantic analysis in the late 1980s and the random indexing approach for collecting word co-occurrence contexts. In 2000, Bengio et al. 
provided in a series of papers titled \"Neural probabilistic language models\" to reduce the high dimensionality of word representations in contexts by \"learning a distributed representation for words\".\nA study published in NeurIPS (NIPS) 2002 introduced the use of both word and document embeddings applying the method of kernel CCA to bilingual (and multi-lingual) corpora, also providing an early example of self-supervised learning of word embeddings\nWord embeddings come in two different styles, one in which words are expressed as vectors of co-occurring words, and another in which words are expressed as vectors of linguistic contexts in which the words occur; these different styles are studied in Lavelli et al., 2004. Roweis and Saul published in Science how to use \"locally linear embedding\" (LLE) to discover representations of high dimensional data structures. Most new word embedding techniques after about 2005 rely on a neural network architecture instead of more probabilistic and algebraic models, after foundational work done by Yoshua Bengio and colleagues."]], "1_corpus": "wikipedia"}
  {"tstamp": 1728463349.9313, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "2aad6bd7fba548c288632fc3d2d41b06", "0_model_name": "embed-english-v3.0", "0_prompt": "Explain whether it will be advantageous for a woman who is battling a divorce proceeding in California versus in Texas?", "0_output": [["Explain whether it will be advantageous for a woman who is battling a divorce proceeding in California versus in Texas?", "Divorce in the United States\n\nComparative rectitude is a doctrine used to determine which spouse is more at fault when both spouses are guilty of breaches.\nJurisdiction\nIn the United States, the federal government does not have the authority to issue a divorce. The state has the only authority over issuing accepting a marriage, and issuing a divorce. This creates the question of which state can one get divorced in. All states have rules for jurisdiction, which is typically a time frame the person filing the divorce has lived in the state. Most states require the person filing for a divorce to be a physical resident of the state for six months. Some states require twelve months and some states, like Nevada, require only six weeks. Without proper jurisdiction, a state cannot issue a divorce.\nProperty division\nStates vary in their rules for division of assets in a divorce. The major difference is between states that use a community property system and states that do not. In community property states, community property belongs to the spouses equally. The following states use community property regimes: Arizona, California, Idaho, Louisiana, Nevada, New Mexico, Texas, Washington, and Wisconsin. Alaskan law gives couples the option to create community property by written agreement."]], "0_corpus": "wikipedia", "1_conv_id": "09a178037df246beb0c715a7e7a8f43f", "1_model_name": "text-embedding-004", "1_prompt": "Explain whether it will be advantageous for a woman who is battling a divorce proceeding in California versus in Texas?", "1_output": [["Explain whether it will be advantageous for a woman who is battling a divorce proceeding in California versus in Texas?", "Divorce in the United States\n\nComparative rectitude is a doctrine used to determine which spouse is more at fault when both spouses are guilty of breaches.\nJurisdiction\nIn the United States, the federal government does not have the authority to issue a divorce. The state has the only authority over issuing accepting a marriage, and issuing a divorce. This creates the question of which state can one get divorced in. All states have rules for jurisdiction, which is typically a time frame the person filing the divorce has lived in the state. Most states require the person filing for a divorce to be a physical resident of the state for six months. Some states require twelve months and some states, like Nevada, require only six weeks. Without proper jurisdiction, a state cannot issue a divorce.\nProperty division\nStates vary in their rules for division of assets in a divorce. The major difference is between states that use a community property system and states that do not. In community property states, community property belongs to the spouses equally. The following states use community property regimes: Arizona, California, Idaho, Louisiana, Nevada, New Mexico, Texas, Washington, and Wisconsin. Alaskan law gives couples the option to create community property by written agreement."]], "1_corpus": "wikipedia"}
+ {"tstamp": 1728463735.9028, "task_type": "retrieval", "type": "bothbadvote", "models": ["", ""], "ip": "", "0_conv_id": "8595902b68344bacbe6c3b784410046d", "0_model_name": "embed-english-v3.0", "0_prompt": "Hey who are you? and tell me who am I", "0_output": [["Hey who are you? and tell me who am I", "Who Am I? (1998 film)\n\nWho Am I? (, also known as Jackie Chan's Who Am I?) is a 1998 Hong Kong action comedy film directed by Benny Chan and Jackie Chan, who also starred in the lead role. The film was released in Hong Kong on 17 January 1998. It is also Chan's second film to be scripted and shot in English, the first one being Mr. Nice Guy.\nPlot\nSomewhere in the jungles of South Africa on Thanksgiving 1996, a multinational military unit named Special Force Unit ambushes a convoy and kidnaps several scientists working on a highly-volatile compound extracted from a recently discovered meteorite. Among the operatives is a Hong Kong national identified as \"Jackie Chan\". The CIA assigns Agent Morgan to investigate the incident, unaware that he and newly retired General Sherman orchestrated the abduction for their personal profit. At the same time, the CIA assigns another operative in South Africa for a more covert operation."]], "0_corpus": "wikipedia", "1_conv_id": "69b0634bf9d144848dda2e041e86cc90", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "Hey who are you? and tell me who am I", "1_output": [["Hey who are you? and tell me who am I", "Tell Me Who I Am\n\nTell Me Who I Am is a 2019 documentary film directed and produced by the British filmmaker Ed Perkins. It focuses on twin brothers Alex and Marcus Lewis. Alex lost his memory in a motorcycle accident at age 18, and his twin brother helped him recreate his lost memories of his childhood. However, Marcus omits that the twins were sexually abused by their mother and also sexually abused by friends of hers in a child abuse network until the age of 14. The film follows Alex and Marcus in telling their lives' stories from the accident at age 18 to age 32, when the sexual abuse is revealed after their mother's death, to both of them coming to terms with the abuse at age 54. The documentary is based on a 2013 book written by the twins together with Joanna Hodgkin.\nThe film was commissioned by and aired on Netflix. It received acclaim from critics after its release and was described as \"harrowing\" and \"involving but upsetting\".\nSynopsis\nThe documentary is split into three parts. In the first part, the viewer follows Alex trying to solve the mystery of his past and trying to figure out who he is after losing his memory in a motorcycle accident at age 18 in 1982. His twin brother, Marcus, is the only person he remembers after emerging from a coma – including himself. Marcus helps him to reintegrate into life. At first, Alex functions like a child, asking basic questions like, \"what is this?\" to nearly everything and re-learning how to ride a bike. As he rapidly \"matures\", Alex begins to ask questions about their childhood. Marcus paints a picture of a happy, wealthy, well-connected family for Alex."]], "1_corpus": "wikipedia"}
+ {"tstamp": 1728463772.5675, "task_type": "retrieval", "type": "tievote", "models": ["", ""], "ip": "", "0_conv_id": "9a23e3502ea0453c93e4e11e3fa6b47b", "0_model_name": "intfloat/multilingual-e5-large-instruct", "0_prompt": "Paper discussing the effectiveness of different text sampling strategies like top-k, nucleus sampling, and their impact on fooling humans versus machines.", "0_output": [["Paper discussing the effectiveness of different text sampling strategies like top-k, nucleus sampling, and their impact on fooling humans versus machines.", "Title: Automatic Detection of Generated Text is Easiest when Humans are Fooled\n\nAbstract: Recent advancements in neural language modelling make it possible to rapidly generate vast amounts of human-sounding text. The capabilities of humans and automatic discriminators to detect machine-generated text have been a large source of research interest, but humans and machines rely on different cues to make their decisions. Here, we perform careful benchmarking and analysis of three popular sampling-based decoding strategies---top-$k$, nucleus sampling, and untruncated random sampling---and show that improvements in decoding methods have primarily optimized for fooling humans. This comes at the expense of introducing statistical abnormalities that make detection easy for automatic systems. We also show that though both human and automatic detector performance improve with longer excerpt length, even multi-sentence excerpts can fool expert human raters over 30% of the time. Our findings reveal the importance of using both human and automatic detectors to assess the humanness of text generation systems."]], "0_corpus": "arxiv", "1_conv_id": "a3bb05cec28348039b6e793f56a3fe93", "1_model_name": "text-embedding-004", "1_prompt": "Paper discussing the effectiveness of different text sampling strategies like top-k, nucleus sampling, and their impact on fooling humans versus machines.", "1_output": [["Paper discussing the effectiveness of different text sampling strategies like top-k, nucleus sampling, and their impact on fooling humans versus machines.", "Title: Automatic Detection of Generated Text is Easiest when Humans are Fooled\n\nAbstract: Recent advancements in neural language modelling make it possible to rapidly generate vast amounts of human-sounding text. The capabilities of humans and automatic discriminators to detect machine-generated text have been a large source of research interest, but humans and machines rely on different cues to make their decisions. Here, we perform careful benchmarking and analysis of three popular sampling-based decoding strategies---top-$k$, nucleus sampling, and untruncated random sampling---and show that improvements in decoding methods have primarily optimized for fooling humans. This comes at the expense of introducing statistical abnormalities that make detection easy for automatic systems. We also show that though both human and automatic detector performance improve with longer excerpt length, even multi-sentence excerpts can fool expert human raters over 30% of the time. Our findings reveal the importance of using both human and automatic detectors to assess the humanness of text generation systems."]], "1_corpus": "arxiv"}
data/retrieval_individual-94033b9b-bac2-448a-b329-27772f2eb5f7.jsonl CHANGED
@@ -22,3 +22,9 @@
  {"tstamp": 1728459762.9585, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1728459762.3064, "finish": 1728459762.9585, "ip": "", "conv_id": "3d1260384edc42c8a2dcf2d78c0976b1", "model_name": "text-embedding-004", "prompt": "text embedding models", "output": [["text embedding models", "Word embedding\n\nThe notion of a semantic space with lexical items (words or multi-word terms) represented as vectors or embeddings is based on the computational challenges of capturing distributional characteristics and using them for practical application to measure similarity between words, phrases, or entire documents. The first generation of semantic space models is the vector space model for information retrieval. Such vector space models for words and their distributional data implemented in their simplest form results in a very sparse vector space of high dimensionality (cf. curse of dimensionality). Reducing the number of dimensions using linear algebraic methods such as singular value decomposition then led to the introduction of latent semantic analysis in the late 1980s and the random indexing approach for collecting word co-occurrence contexts. In 2000, Bengio et al. provided in a series of papers titled \"Neural probabilistic language models\" to reduce the high dimensionality of word representations in contexts by \"learning a distributed representation for words\".\nA study published in NeurIPS (NIPS) 2002 introduced the use of both word and document embeddings applying the method of kernel CCA to bilingual (and multi-lingual) corpora, also providing an early example of self-supervised learning of word embeddings\nWord embeddings come in two different styles, one in which words are expressed as vectors of co-occurring words, and another in which words are expressed as vectors of linguistic contexts in which the words occur; these different styles are studied in Lavelli et al., 2004. Roweis and Saul published in Science how to use \"locally linear embedding\" (LLE) to discover representations of high dimensional data structures. Most new word embedding techniques after about 2005 rely on a neural network architecture instead of more probabilistic and algebraic models, after foundational work done by Yoshua Bengio and colleagues."]], "corpus": "wikipedia"}
  {"tstamp": 1728463324.9269, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1728463323.9793, "finish": 1728463324.9269, "ip": "", "conv_id": "2aad6bd7fba548c288632fc3d2d41b06", "model_name": "embed-english-v3.0", "prompt": "Explain whether it will be advantageous for a woman who is battling a divorce proceeding in California versus in Texas?", "output": [["Explain whether it will be advantageous for a woman who is battling a divorce proceeding in California versus in Texas?", "Divorce in the United States\n\nComparative rectitude is a doctrine used to determine which spouse is more at fault when both spouses are guilty of breaches.\nJurisdiction\nIn the United States, the federal government does not have the authority to issue a divorce. The state has the only authority over issuing accepting a marriage, and issuing a divorce. This creates the question of which state can one get divorced in. All states have rules for jurisdiction, which is typically a time frame the person filing the divorce has lived in the state. Most states require the person filing for a divorce to be a physical resident of the state for six months. Some states require twelve months and some states, like Nevada, require only six weeks. Without proper jurisdiction, a state cannot issue a divorce.\nProperty division\nStates vary in their rules for division of assets in a divorce. The major difference is between states that use a community property system and states that do not. In community property states, community property belongs to the spouses equally. The following states use community property regimes: Arizona, California, Idaho, Louisiana, Nevada, New Mexico, Texas, Washington, and Wisconsin. Alaskan law gives couples the option to create community property by written agreement."]], "corpus": "wikipedia"}
  {"tstamp": 1728463324.9269, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1728463323.9793, "finish": 1728463324.9269, "ip": "", "conv_id": "09a178037df246beb0c715a7e7a8f43f", "model_name": "text-embedding-004", "prompt": "Explain whether it will be advantageous for a woman who is battling a divorce proceeding in California versus in Texas?", "output": [["Explain whether it will be advantageous for a woman who is battling a divorce proceeding in California versus in Texas?", "Divorce in the United States\n\nComparative rectitude is a doctrine used to determine which spouse is more at fault when both spouses are guilty of breaches.\nJurisdiction\nIn the United States, the federal government does not have the authority to issue a divorce. The state has the only authority over issuing accepting a marriage, and issuing a divorce. This creates the question of which state can one get divorced in. All states have rules for jurisdiction, which is typically a time frame the person filing the divorce has lived in the state. Most states require the person filing for a divorce to be a physical resident of the state for six months. Some states require twelve months and some states, like Nevada, require only six weeks. Without proper jurisdiction, a state cannot issue a divorce.\nProperty division\nStates vary in their rules for division of assets in a divorce. The major difference is between states that use a community property system and states that do not. In community property states, community property belongs to the spouses equally. The following states use community property regimes: Arizona, California, Idaho, Louisiana, Nevada, New Mexico, Texas, Washington, and Wisconsin. Alaskan law gives couples the option to create community property by written agreement."]], "corpus": "wikipedia"}
+ {"tstamp": 1728463716.743, "task_type": "retrieval", "type": "chat", "model": "embed-english-v3.0", "gen_params": {}, "start": 1728463716.3207, "finish": 1728463716.743, "ip": "", "conv_id": "8595902b68344bacbe6c3b784410046d", "model_name": "embed-english-v3.0", "prompt": "Hey who are you? and tell me who am I", "output": [["Hey who are you? and tell me who am I", "Who Am I? (1998 film)\n\nWho Am I? (, also known as Jackie Chan's Who Am I?) is a 1998 Hong Kong action comedy film directed by Benny Chan and Jackie Chan, who also starred in the lead role. The film was released in Hong Kong on 17 January 1998. It is also Chan's second film to be scripted and shot in English, the first one being Mr. Nice Guy.\nPlot\nSomewhere in the jungles of South Africa on Thanksgiving 1996, a multinational military unit named Special Force Unit ambushes a convoy and kidnaps several scientists working on a highly-volatile compound extracted from a recently discovered meteorite. Among the operatives is a Hong Kong national identified as \"Jackie Chan\". The CIA assigns Agent Morgan to investigate the incident, unaware that he and newly retired General Sherman orchestrated the abduction for their personal profit. At the same time, the CIA assigns another operative in South Africa for a more covert operation."]], "corpus": "wikipedia"}
+ {"tstamp": 1728463716.743, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1728463716.3207, "finish": 1728463716.743, "ip": "", "conv_id": "69b0634bf9d144848dda2e041e86cc90", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "Hey who are you? and tell me who am I", "output": [["Hey who are you? and tell me who am I", "Tell Me Who I Am\n\nTell Me Who I Am is a 2019 documentary film directed and produced by the British filmmaker Ed Perkins. It focuses on twin brothers Alex and Marcus Lewis. Alex lost his memory in a motorcycle accident at age 18, and his twin brother helped him recreate his lost memories of his childhood. However, Marcus omits that the twins were sexually abused by their mother and also sexually abused by friends of hers in a child abuse network until the age of 14. The film follows Alex and Marcus in telling their lives' stories from the accident at age 18 to age 32, when the sexual abuse is revealed after their mother's death, to both of them coming to terms with the abuse at age 54. The documentary is based on a 2013 book written by the twins together with Joanna Hodgkin.\nThe film was commissioned by and aired on Netflix. It received acclaim from critics after its release and was described as \"harrowing\" and \"involving but upsetting\".\nSynopsis\nThe documentary is split into three parts. In the first part, the viewer follows Alex trying to solve the mystery of his past and trying to figure out who he is after losing his memory in a motorcycle accident at age 18 in 1982. His twin brother, Marcus, is the only person he remembers after emerging from a coma – including himself. Marcus helps him to reintegrate into life. At first, Alex functions like a child, asking basic questions like, \"what is this?\" to nearly everything and re-learning how to ride a bike. As he rapidly \"matures\", Alex begins to ask questions about their childhood. Marcus paints a picture of a happy, wealthy, well-connected family for Alex."]], "corpus": "wikipedia"}
+ {"tstamp": 1728463763.6847, "task_type": "retrieval", "type": "chat", "model": "intfloat/multilingual-e5-large-instruct", "gen_params": {}, "start": 1728463763.0957, "finish": 1728463763.6847, "ip": "", "conv_id": "9a23e3502ea0453c93e4e11e3fa6b47b", "model_name": "intfloat/multilingual-e5-large-instruct", "prompt": "Paper discussing the effectiveness of different text sampling strategies like top-k, nucleus sampling, and their impact on fooling humans versus machines.", "output": [["Paper discussing the effectiveness of different text sampling strategies like top-k, nucleus sampling, and their impact on fooling humans versus machines.", "Title: Automatic Detection of Generated Text is Easiest when Humans are Fooled\n\nAbstract: Recent advancements in neural language modelling make it possible to rapidly generate vast amounts of human-sounding text. The capabilities of humans and automatic discriminators to detect machine-generated text have been a large source of research interest, but humans and machines rely on different cues to make their decisions. Here, we perform careful benchmarking and analysis of three popular sampling-based decoding strategies---top-$k$, nucleus sampling, and untruncated random sampling---and show that improvements in decoding methods have primarily optimized for fooling humans. This comes at the expense of introducing statistical abnormalities that make detection easy for automatic systems. We also show that though both human and automatic detector performance improve with longer excerpt length, even multi-sentence excerpts can fool expert human raters over 30% of the time. Our findings reveal the importance of using both human and automatic detectors to assess the humanness of text generation systems."]], "corpus": "arxiv"}
+ {"tstamp": 1728463763.6847, "task_type": "retrieval", "type": "chat", "model": "text-embedding-004", "gen_params": {}, "start": 1728463763.0957, "finish": 1728463763.6847, "ip": "", "conv_id": "a3bb05cec28348039b6e793f56a3fe93", "model_name": "text-embedding-004", "prompt": "Paper discussing the effectiveness of different text sampling strategies like top-k, nucleus sampling, and their impact on fooling humans versus machines.", "output": [["Paper discussing the effectiveness of different text sampling strategies like top-k, nucleus sampling, and their impact on fooling humans versus machines.", "Title: Automatic Detection of Generated Text is Easiest when Humans are Fooled\n\nAbstract: Recent advancements in neural language modelling make it possible to rapidly generate vast amounts of human-sounding text. The capabilities of humans and automatic discriminators to detect machine-generated text have been a large source of research interest, but humans and machines rely on different cues to make their decisions. Here, we perform careful benchmarking and analysis of three popular sampling-based decoding strategies---top-$k$, nucleus sampling, and untruncated random sampling---and show that improvements in decoding methods have primarily optimized for fooling humans. This comes at the expense of introducing statistical abnormalities that make detection easy for automatic systems. We also show that though both human and automatic detector performance improve with longer excerpt length, even multi-sentence excerpts can fool expert human raters over 30% of the time. Our findings reveal the importance of using both human and automatic detectors to assess the humanness of text generation systems."]], "corpus": "arxiv"}
+ {"tstamp": 1728463877.37, "task_type": "retrieval", "type": "chat", "model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "gen_params": {}, "start": 1728463876.6055, "finish": 1728463877.37, "ip": "", "conv_id": "c5821e8b75d64aa98d1f58ae7edff2de", "model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "prompt": "show me all famous french actors", "output": [["show me all famous french actors", "List of French people\n\nFrench people of note include:\nActors\nA–C\nCarole Achache\nIsabelle Adjani\nRenée Adorée\nAnouk Aimée\nFlo Ankah\nArletty\nAntonin Artaud\nFanny Ardant\nJeanne Aubert\nJean-Louis Aubert\nJean-Pierre Aumont\nClaude Autant-Lara\nDaniel Auteuil\nCharles Aznavour\nBrigitte Bardot\nEmmanuelle Béart\nLoleh Bellon\nJean-Paul Belmondo\nFrançois Berléand\nCharles Berling\nAdam Bessa\nSuzanne Bianchetti\nJuliette Binoche\nBernard Blier\nSandrine Bonnaire\nÉlodie Bouchez\nBourvil\nDany Boon\nAngelique Boyer\nCharles Boyer\nGuillaume Canet\nCapucine\nMartine Carol\nLeslie Caron\nIsabelle Carré\nVincent Cassel\nJean-Pierre Cassel\nLaetitia Casta\nRobert Clary\nGrégoire Colin\nMarion Cotillard\nClotilde Courau\nDarry Cowl\nD–L\nBéatrice Dalle\nLili Damita\nDanielle Darrieux\nAlain Delon\nDanièle Delorme\nJulie Delpy\nCatherine Deneuve\nÉlisabeth Depardieu\nGérard Depardieu\nGuillaume Depardieu"]], "corpus": "wikipedia"}
+ {"tstamp": 1728463877.37, "task_type": "retrieval", "type": "chat", "model": "nomic-ai/nomic-embed-text-v1.5", "gen_params": {}, "start": 1728463876.6055, "finish": 1728463877.37, "ip": "", "conv_id": "16fa922e04c14de7b8b6c44864444dbf", "model_name": "nomic-ai/nomic-embed-text-v1.5", "prompt": "show me all famous french actors", "output": [["show me all famous french actors", "Gérard Depardieu\n\nGérard Xavier Marcel Depardieu (, , ; born 27 December 1948) is a French actor, known to be one of the most prolific in film history. He has completed over 250 films since 1967, almost exclusively as a lead. Depardieu has worked with over 150 film directors whose most notable collaborations include Jean-Luc Godard, François Truffaut, Maurice Pialat, Alain Resnais, Claude Chabrol, Ridley Scott, and Bernardo Bertolucci. He is the second highest-grossing actor in the history of French cinema behind Louis de Funès. As of January 2022, his body of work also includes countless television productions, 18 stage plays, 16 records and 9 books. He is known for having portrayed numerous leading historical and fictitious figures of the Western world including Georges Danton, Joseph Stalin, Honoré de Balzac, Alexandre Dumas, Auguste Rodin, Cyrano de Bergerac, Jean Valjean, Edmond Dantès, Christopher Columbus, Obélix, and Dominique Strauss-Kahn."]], "corpus": "wikipedia"}
data/retrieval_side_by_side-94033b9b-bac2-448a-b329-27772f2eb5f7.jsonl ADDED
@@ -0,0 +1 @@
+ {"tstamp": 1728463915.1756, "task_type": "retrieval", "type": "leftvote", "models": ["Alibaba-NLP/gte-Qwen2-7B-instruct", "nomic-ai/nomic-embed-text-v1.5"], "ip": "", "0_conv_id": "c5821e8b75d64aa98d1f58ae7edff2de", "0_model_name": "Alibaba-NLP/gte-Qwen2-7B-instruct", "0_prompt": "show me all famous french actors", "0_output": [["show me all famous french actors", "List of French people\n\nFrench people of note include:\nActors\nA–C\nCarole Achache\nIsabelle Adjani\nRenée Adorée\nAnouk Aimée\nFlo Ankah\nArletty\nAntonin Artaud\nFanny Ardant\nJeanne Aubert\nJean-Louis Aubert\nJean-Pierre Aumont\nClaude Autant-Lara\nDaniel Auteuil\nCharles Aznavour\nBrigitte Bardot\nEmmanuelle Béart\nLoleh Bellon\nJean-Paul Belmondo\nFrançois Berléand\nCharles Berling\nAdam Bessa\nSuzanne Bianchetti\nJuliette Binoche\nBernard Blier\nSandrine Bonnaire\nÉlodie Bouchez\nBourvil\nDany Boon\nAngelique Boyer\nCharles Boyer\nGuillaume Canet\nCapucine\nMartine Carol\nLeslie Caron\nIsabelle Carré\nVincent Cassel\nJean-Pierre Cassel\nLaetitia Casta\nRobert Clary\nGrégoire Colin\nMarion Cotillard\nClotilde Courau\nDarry Cowl\nD–L\nBéatrice Dalle\nLili Damita\nDanielle Darrieux\nAlain Delon\nDanièle Delorme\nJulie Delpy\nCatherine Deneuve\nÉlisabeth Depardieu\nGérard Depardieu\nGuillaume Depardieu"]], "0_corpus": "wikipedia", "1_conv_id": "16fa922e04c14de7b8b6c44864444dbf", "1_model_name": "nomic-ai/nomic-embed-text-v1.5", "1_prompt": "show me all famous french actors", "1_output": [["show me all famous french actors", "Gérard Depardieu\n\nGérard Xavier Marcel Depardieu (, , ; born 27 December 1948) is a French actor, known to be one of the most prolific in film history. He has completed over 250 films since 1967, almost exclusively as a lead. Depardieu has worked with over 150 film directors whose most notable collaborations include Jean-Luc Godard, François Truffaut, Maurice Pialat, Alain Resnais, Claude Chabrol, Ridley Scott, and Bernardo Bertolucci. He is the second highest-grossing actor in the history of French cinema behind Louis de Funès. As of January 2022, his body of work also includes countless television productions, 18 stage plays, 16 records and 9 books. He is known for having portrayed numerous leading historical and fictitious figures of the Western world including Georges Danton, Joseph Stalin, Honoré de Balzac, Alexandre Dumas, Auguste Rodin, Cyrano de Bergerac, Jean Valjean, Edmond Dantès, Christopher Columbus, Obélix, and Dominique Strauss-Kahn."]], "1_corpus": "wikipedia"}