machineuser committed · Commit 9a69175 · Parent: 9358240

Sync widgets demo

packages/tasks/src/text-generation/about.md CHANGED
@@ -26,11 +26,11 @@ A popular variant of Text Generation models predicts the next word given a bunch
 - Continue a story given the first sentences.
 - Provided a code description, generate the code.
 
-The most popular models for this task are GPT-based models or [Llama series](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf). These models are trained on data that has no labels, so you just need plain text to train your own model. You can train text generation models to generate a wide variety of documents, from code to stories.
+The most popular models for this task are GPT-based models, [Mistral](mistralai/Mistral-7B-v0.1) or [Llama series](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf). These models are trained on data that has no labels, so you just need plain text to train your own model. You can train text generation models to generate a wide variety of documents, from code to stories.
 
 ### Text-to-Text Generation Models
 
-These models are trained to learn the mapping between a pair of texts (e.g. translation from one language to another). The most popular variants of these models are [FLAN-T5](https://huggingface.co/google/flan-t5-xxl), and [BART](https://huggingface.co/docs/transformers/model_doc/bart). Text-to-Text models are trained with multi-tasking capabilities, they can accomplish a wide range of tasks, including summarization, translation, and text classification.
+These models are trained to learn the mapping between a pair of texts (e.g. translation from one language to another). The most popular variants of these models are [NLLB](facebook/nllb-200-distilled-600M), [FLAN-T5](https://huggingface.co/google/flan-t5-xxl), and [BART](https://huggingface.co/docs/transformers/model_doc/bart). Text-to-Text models are trained with multi-tasking capabilities, they can accomplish a wide range of tasks, including summarization, translation, and text classification.
 
 ## Inference
 
@@ -38,7 +38,7 @@ You can use the 🤗 Transformers library `text-generation` pipeline to do infer
 
 ```python
 from transformers import pipeline
-generator = pipeline('text-generation', model = 'gpt2')
+generator = pipeline('text-generation', model = 'HuggingFaceH4/zephyr-7b-beta')
 generator("Hello, I'm a language model", max_length = 30, num_return_sequences=3)
 ## [{'generated_text': "Hello, I'm a language modeler. So while writing this, when I went out to meet my wife or come home she told me that my"},
 ## {'generated_text': "Hello, I'm a language modeler. I write and maintain software in Python. I love to code, and that includes coding things that require writing"}, ...
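
The new example model is a 7B-parameter instruction-tuned checkpoint, considerably heavier than the `gpt2` it replaces. A minimal, runnable sketch of the updated snippet (not part of the commit): it assumes hardware with enough memory for the 7B model, swaps `max_length` for the more precise `max_new_tokens`, and sets `do_sample=True` so that `num_return_sequences=3` actually yields distinct continuations.

```python
# Sketch of the updated about.md example, not part of this commit.
# Loading HuggingFaceH4/zephyr-7b-beta needs roughly 15 GB in half precision.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype="auto",   # use float16/bfloat16 when the hardware supports it
    device_map="auto",    # place weights on available GPU(s)
)

outputs = generator(
    "Hello, I'm a language model",
    max_new_tokens=30,        # bound the continuation, not the total length
    num_return_sequences=3,
    do_sample=True,           # required for multiple distinct sequences
)
for out in outputs:
    print(out["generated_text"])
```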
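For the Text-to-Text Generation paragraph, the corresponding pipeline is `text2text-generation`. A hedged sketch, using the smaller `google/flan-t5-base` checkpoint as a lightweight stand-in for the FLAN-T5 XXL variant linked in the doc:

```python
# Sketch: text-to-text generation via the `text2text-generation` pipeline.
# google/flan-t5-base stands in here for the much larger flan-t5-xxl.
from transformers import pipeline

text2text = pipeline("text2text-generation", model="google/flan-t5-base")
result = text2text("translate English to German: The weather is nice today.")
print(result[0]["generated_text"])
```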
packages/tasks/src/text-generation/data.ts CHANGED
@@ -119,7 +119,7 @@ const taskData: TaskDataCustom = {
 	],
 	summary:
 		"Generating text is the task of producing new text. These models can, for example, fill in incomplete text or paraphrase.",
-	widgetModels: ["tiiuae/falcon-7b-instruct"],
+	widgetModels: ["HuggingFaceH4/zephyr-7b-beta"],
 	youtubeId: "Vpjb1lu0MDk",
 };
 
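
`widgetModels` determines which checkpoint the task page's inference widget loads, so the main thing to verify after a change like this is that the new id resolves on the Hub and carries the right pipeline tag. A quick sanity check with `huggingface_hub` (a sketch, not part of the commit):

```python
# Sketch: confirm the widget model id referenced in data.ts exists on the Hub
# and is tagged for the text-generation task.
from huggingface_hub import model_info

info = model_info("HuggingFaceH4/zephyr-7b-beta")
print(info.pipeline_tag)  # expected: "text-generation"
```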