| column | dtype | min | max |
|:---|:---|:---|:---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-14 06:27:53 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (519 values) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 values) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-14 06:27:45 |
| card | string (length) | 11 | 1.01M |
nlp-waseda/roberta-base-japanese-with-auto-jumanpp
nlp-waseda
2022-10-21T01:57:40Z
1,327
7
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "ja", "dataset:wikipedia", "dataset:cc100", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-15T05:09:36Z
--- language: ja license: cc-by-sa-4.0 datasets: - wikipedia - cc100 mask_token: "[MASK]" widget: - text: "早稲田大学で自然言語処理を[MASK]する。" --- # nlp-waseda/roberta-base-japanese-with-auto-jumanpp ## Model description This is a Japanese RoBERTa base model pretrained on Japanese Wikipedia and the Japanese portion of CC-100. ## How to use You can use this model for masked language modeling as follows: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese-with-auto-jumanpp") model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese-with-auto-jumanpp") sentence = '早稲田大学で自然言語処理を[MASK]する。' encoding = tokenizer(sentence, return_tensors='pt') ... ``` You can fine-tune this model on downstream tasks. ## Tokenization `BertJapaneseTokenizer` now supports automatic tokenization for [Juman++](https://github.com/ku-nlp/jumanpp). However, if your dataset is large, tokenization may take a long time, since `BertJapaneseTokenizer` does not yet support fast tokenization. You can still do the Juman++ tokenization yourself and use the old model [nlp-waseda/roberta-base-japanese](https://huggingface.co/nlp-waseda/roberta-base-japanese). Juman++ 2.0.0-rc3 was used for pretraining. Each word is then tokenized into subword tokens by [sentencepiece](https://github.com/google/sentencepiece). ## Vocabulary The vocabulary consists of 32000 tokens, including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece). ## Training procedure This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100. It took a week using eight NVIDIA A100 GPUs. The following hyperparameters were used during pretraining: - learning_rate: 1e-4 - per_device_train_batch_size: 256 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 4096 - max_seq_length: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 700000 - warmup_steps: 10000 - mixed_precision_training: Native AMP ## Performance on JGLUE See the [Baseline Scores](https://github.com/yahoojapan/JGLUE#baseline-scores) of JGLUE.
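The usage snippet above stops after encoding the input. A minimal sketch of the remaining masked-prediction step, assuming only the standard `transformers`/PyTorch masked-LM API (the top-k value is arbitrary):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese-with-auto-jumanpp")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese-with-auto-jumanpp")

sentence = '早稲田大学で自然言語処理を[MASK]する。'
encoding = tokenizer(sentence, return_tensors='pt')

with torch.no_grad():
    logits = model(**encoding).logits

# Find the position of the [MASK] token and list the top-5 candidate fillers.
mask_positions = (encoding.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```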
xxxxxxxxxxxxxxxxxxxxxx/model-y
xxxxxxxxxxxxxxxxxxxxxx
2022-10-21T01:49:43Z
0
0
null
[ "license:wtfpl", "region:us" ]
null
2022-10-17T07:21:03Z
--- license: wtfpl --- # wwww ```typescript import React, { CSSProperties, PropsWithRef } from 'react'; import MarkdownPreview, { MarkdownPreviewProps } from '@uiw/react-markdown-preview'; import { ITextAreaProps } from './components/TextArea'; import { ICommand } from './commands'; import { ContextStore, PreviewType } from './Context'; import './index.less'; export interface IProps { prefixCls?: string; className?: string; } export interface MDEditorProps extends Omit<React.HTMLAttributes<HTMLDivElement>, 'onChange'>, IProps { /** * The Markdown value. */ value?: string; /** * Event handler for the `onChange` event. */ onChange?: (value?: string, event?: React.ChangeEvent<HTMLTextAreaElement>, state?: ContextStore) => void; /** * editor height change listener */ onHeightChange?: (value?: CSSProperties['height'], oldValue?: CSSProperties['height'], state?: ContextStore) => void; /** * Can be used to make `Markdown Editor` focus itself on initialization. Defaults to on. * it will be set to true when either the source `textarea` is focused, * or it has an `autofocus` attribute and no other element is focused. */ autoFocus?: ITextAreaProps['autoFocus']; /** * The height of the editor. * ⚠️ `Dragbar` is invalid when **`height`** parameter percentage. */ height?: CSSProperties['height']; /** * Custom toolbar heigth * @default 29px * * @deprecated toolbar height adaptive: https://github.com/uiwjs/react-md-editor/issues/427 * */ toolbarHeight?: number; /** * Show drag and drop tool. Set the height of the editor. */ visibleDragbar?: boolean; /** * @deprecated use `visibleDragbar` */ visiableDragbar?: boolean; /** * Show markdown preview. */ preview?: PreviewType; /** * Full screen display editor. */ fullscreen?: boolean; /** * Disable `fullscreen` setting body styles */ overflow?: boolean; /** * Maximum drag height. `visibleDragbar=true` */ maxHeight?: number; /** * Minimum drag height. `visibleDragbar=true` */ minHeight?: number; /** * This is reset [react-markdown](https://github.com/rexxars/react-markdown) settings. */ previewOptions?: Omit<MarkdownPreviewProps, 'source'>; /** * Set the `textarea` related props. */ textareaProps?: ITextAreaProps; /** * Use div to replace TextArea or re-render TextArea * @deprecated Please use ~~`renderTextarea`~~ -> `components` */ renderTextarea?: ITextAreaProps['renderTextarea']; /** * re-render element */ components?: { /** Use div to replace TextArea or re-render TextArea */ textarea?: ITextAreaProps['renderTextarea']; /** * Override the default command element * _`toolbar`_ < _`command[].render`_ */ toolbar?: ICommand['render']; /** Custom markdown preview */ preview?: (source: string, state: ContextStore, dispath: React.Dispatch<ContextStore>) => JSX.Element; }; /** * Disable editing area code highlighting. The value is `false`, which increases the editing speed. * @default true */ highlightEnable?: boolean; /** * The number of characters to insert when pressing tab key. * Default `2` spaces. */ tabSize?: number; /** * If `false`, the `tab` key inserts a tab character into the textarea. If `true`, the `tab` key executes default behavior e.g. focus shifts to next element. */ defaultTabEnable?: boolean; /** * You can create your own commands or reuse existing commands. */ commands?: ICommand[]; /** * Filter or modify your commands. * https://github.com/uiwjs/react-md-editor/issues/296 */ commandsFilter?: (command: ICommand, isExtra: boolean) => false | ICommand; /** * You can create your own commands or reuse existing commands. 
*/ extraCommands?: ICommand[]; /** * Hide the tool bar */ hideToolbar?: boolean; /** Whether to enable scrolling */ enableScroll?: boolean; /** Toolbar on bottom */ toolbarBottom?: boolean; } declare type Editor = React.FC<PropsWithRef<MDEditorProps>> & { Markdown: typeof MarkdownPreview; }; declare const mdEditor: Editor; export default mdEditor; ``` ## asdjk ### lskjdflskj as d s d
Shushant/NepaliCovidTweetsClassification
Shushant
2022-10-21T01:07:30Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-25T16:02:20Z
# Nepali Covid Tweet Classification This model was developed by finetuning the NepaliBERT model, previously developed by me, on Nepali COVID-related tweets. The dataset has about 15000 observations annotated with positive, negative, and neutral labels. The NepaliBERT model achieved SOTA results when finetuned on this text classification task. While training the model, the evaluation metrics obtained were: * Training loss: 0.35592623149202174 * Validation loss: 0.6570735214928906 * F1 Score (Weighted): 0.7719232825307907 # LABEL INDICATORS * LABEL 0 - Neutral * LABEL 1 - Positive * LABEL 2 - Negative ## USAGE ```python from transformers import pipeline classifier = pipeline("text-classification", model = "Shushant/NepaliCovidTweetsClassification") classifier("आउँदा केही दिनमा अमेरिकाले १५ लाखभन्दा बढी नेपालीलाई पुग्नेगरी कोभीड१९ खोप निशुल्क उपलब्ध गराउंदैछ।") ```
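Because the checkpoint returns generic label ids, here is a small sketch for mapping the pipeline output back to the classes listed above. The exact strings returned (`LABEL_0`, `LABEL_1`, `LABEL_2`) are an assumption based on the default config naming:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Shushant/NepaliCovidTweetsClassification")

# Assumed default label names; adjust if the model config defines id2label differently.
label_names = {"LABEL_0": "Neutral", "LABEL_1": "Positive", "LABEL_2": "Negative"}

prediction = classifier("आउँदा केही दिनमा अमेरिकाले १५ लाखभन्दा बढी नेपालीलाई पुग्नेगरी कोभीड१९ खोप निशुल्क उपलब्ध गराउंदैछ।")[0]
print(label_names.get(prediction["label"], prediction["label"]), round(prediction["score"], 3))
```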
ArafatBHossain/bert-distilled-single_teacher_mind_epoch07_alpha0.8
ArafatBHossain
2022-10-21T00:57:47Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-21T00:26:50Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-distilled-single_teacher_mind_epoch07_alpha0.8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-distilled-single_teacher_mind_epoch07_alpha0.8 This model is a fine-tuned version of [ArafatBHossain/distill_bert_fine_tuned_mind](https://huggingface.co/ArafatBHossain/distill_bert_fine_tuned_mind) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1023 - Accuracy: 0.9208 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2937 | 1.0 | 3054 | 0.2652 | 0.8802 | | 0.2339 | 2.0 | 6108 | 0.2510 | 0.8822 | | 0.1721 | 3.0 | 9162 | 0.1781 | 0.9038 | | 0.1284 | 4.0 | 12216 | 0.1450 | 0.9108 | | 0.0993 | 5.0 | 15270 | 0.1195 | 0.9182 | | 0.0765 | 6.0 | 18324 | 0.1115 | 0.9172 | | 0.063 | 7.0 | 21378 | 0.1023 | 0.9208 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.11.0 - Datasets 2.6.1 - Tokenizers 0.12.1
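The card above gives no usage snippet; a minimal inference sketch, assuming the checkpoint loads with the standard text-classification pipeline (the class names returned depend on the fine-tuning config, which is not documented here):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ArafatBHossain/bert-distilled-single_teacher_mind_epoch07_alpha0.8",
)

# Example input is illustrative; the returned labels come from the model config.
print(classifier("Ten budget laptops that are actually worth buying this year"))
```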
edbeeching/atari_2B_atari_zaxxon_1111
edbeeching
2022-10-21T00:06:39Z
8
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-21T00:05:32Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_zaxxon type: atari_zaxxon metrics: - type: mean_reward value: 77350.00 +/- 17524.54 name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_zaxxon** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
noahkim/KoT5_news_summarization
noahkim
2022-10-21T00:05:27Z
402
5
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "summarization", "news", "ko", "autotrain_compatible", "text-generation-inference", "region:us" ]
summarization
2022-10-20T11:06:55Z
--- language: ko tags: - summarization - news inference: false --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # KoT5_news_summarization - This model is a version of [lcw99/t5-base-korean-text-summary](https://huggingface.co/lcw99/t5-base-korean-text-summary) finetuned on the [daekeun-ml/naver-news-summarization-ko](https://huggingface.co/datasets/daekeun-ml/naver-news-summarization-ko) dataset ## Model description <<2022-10-21 commit>> To build a model specialized in news summarization for a project, I finetuned lcw99's t5-base-korean-text-summary model further on the naver-news-summarization-ko dataset kindly provided by daekeun-ml. I plan to continue training on the news data I currently have, and to keep improving the model toward better performance. Thank you. Runtime environment - Google Colab Pro - CPU : Intel(R) Xeon(R) CPU @ 2.20GHz - GPU : A100-SXM4-40GB <pre><code> # Python Code from transformers import AutoTokenizer from transformers import AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("noahkim/KoT5_news_summarization") model = AutoModelForSeq2SeqLM.from_pretrained("noahkim/KoT5_news_summarization") </code></pre> ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.4513 | 1.0 | 2775 | 0.4067 | | 0.42 | 2.0 | 5550 | 0.3933 | | 0.395 | 3.0 | 8325 | 0.3864 | | 0.3771 | 4.0 | 11100 | 0.3872 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
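The snippet above only loads the tokenizer and model; a minimal sketch of actually producing a summary, assuming the usual seq2seq `generate` API (the generation settings below are illustrative, not values documented by the author):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("noahkim/KoT5_news_summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("noahkim/KoT5_news_summarization")

article = "..."  # a Korean news article as plain text

inputs = tokenizer(article, return_tensors="pt", max_length=512, truncation=True)
# Beam size and length limits are illustrative defaults, not tuned values.
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```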
edbeeching/atari_2B_atari_yarsrevenge_1111
edbeeching
2022-10-21T00:01:51Z
7
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-21T00:00:47Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_yarsrevenge type: atari_yarsrevenge metrics: - type: mean_reward value: 224390.75 +/- 197367.31 name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_yarsrevenge** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
edbeeching/atari_2B_atari_upndown_1111
edbeeching
2022-10-20T23:41:26Z
6
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-20T23:40:03Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: atari_upndown type: atari_upndown metrics: - type: mean_reward value: 425124.50 +/- 6964.43 name: mean_reward verified: false --- A(n) **APPO** model trained on the **atari_upndown** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
g30rv17ys/ddpm-hkuoct-wamd-1000ep
g30rv17ys
2022-10-20T23:26:06Z
4
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:imagefolder", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-10-20T19:00:48Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: imagefolder metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-hkuoct-wamd-1000ep ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `imagefolder` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-hkuoct-wamd-1000ep/tensorboard?#scalars)
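The "How to use" section above is left as a TODO; a minimal sketch, assuming the checkpoint loads through the standard `DDPMPipeline` API indicated by the `diffusers:DDPMPipeline` tag:

```python
from diffusers import DDPMPipeline

# Repository id taken from this listing; DDPMPipeline usage is assumed from the tags.
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-hkuoct-wamd-1000ep")

# Unconditional sampling; move the pipeline to a GPU for reasonable speed.
image = pipeline(num_inference_steps=1000).images[0]
image.save("ddpm-hkuoct-wamd-sample.png")
```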
sd-concepts-library/flag-ussr
sd-concepts-library
2022-10-20T23:09:07Z
0
2
null
[ "license:mit", "region:us" ]
null
2022-10-20T23:08:56Z
--- license: mit --- ### flag-ussr on Stable Diffusion This is the `<flag-ussr>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<flag-ussr> 0](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/19.jpeg) ![<flag-ussr> 1](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/10.jpeg) ![<flag-ussr> 2](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/3.jpeg) ![<flag-ussr> 3](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/8.jpeg) ![<flag-ussr> 4](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/9.jpeg) ![<flag-ussr> 5](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/4.jpeg) ![<flag-ussr> 6](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/5.jpeg) ![<flag-ussr> 7](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/1.jpeg) ![<flag-ussr> 8](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/7.jpeg) ![<flag-ussr> 9](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/17.jpeg) ![<flag-ussr> 10](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/0.jpeg) ![<flag-ussr> 11](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/6.jpeg) ![<flag-ussr> 12](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/21.jpeg) ![<flag-ussr> 13](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/15.jpeg) ![<flag-ussr> 14](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/14.jpeg) ![<flag-ussr> 15](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/12.jpeg) ![<flag-ussr> 16](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/11.jpeg) ![<flag-ussr> 17](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/16.jpeg) ![<flag-ussr> 18](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/2.jpeg) ![<flag-ussr> 19](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/13.jpeg) ![<flag-ussr> 20](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/20.jpeg) ![<flag-ussr> 21](https://huggingface.co/sd-concepts-library/flag-ussr/resolve/main/concept_images/18.jpeg)
imodels/gpt-neo-2.7B-titles
imodels
2022-10-20T21:17:47Z
5
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-10-17T18:36:43Z
--- license: apache-2.0 widget: - text: "2021\n\n" --- Full code and details at https://github.com/csinva/gpt-paper-title-generator **Model** - finetuned starting from the [gpt-neo-2.7B checkpoint](https://huggingface.co/EleutherAI/gpt-neo-2.7B) - for training details see [the training script](https://github.com/csinva/gpt-paper-title-generator/blob/0157f26be9b0763b4ea6480e5b149fdb8dff4626/gptneo/02_finetune_hf.py) - inference - prompts should be prepended with a year and two newlines before querying for a title, e.g. `2022\n\n` ```python from transformers import AutoModelForCausalLM, pipeline, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("csinva/gpt-neo-2.7B-titles") tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B") pipe = pipeline('text-generation', model=model, tokenizer=tokenizer) pipe('2022\n\n') ``` **Data** - all [papers on arXiv](https://www.kaggle.com/datasets/Cornell-University/arxiv) in the categories cs.AI, cs.LG, stat.ML - date cutoff: only finetuned on papers with a date on or before Apr 1, 2022 - a random 5% of papers was also excluded - this results in 98,388 papers for finetuning - during finetuning each paper title was given starting with the prompt `<year>\n\n <title>\n` (e.g. `2022\n\n Emb-GAM: an Interpretable and Efficient Predictor using Pre-trained Language Models\n`)
creditgrossepointe/creditgrossepointe
creditgrossepointe
2022-10-20T21:13:37Z
0
0
null
[ "region:us" ]
null
2022-10-20T21:12:54Z
We are a family-owned and operated Credit Repair company, founded in 2013. Our goal is to help you achieve financial success and reach your credit goals. Follow this [link](https://grossepointepark.asapcreditrepairusa.com/)
Shaier/longformer_quail
Shaier
2022-10-20T19:58:53Z
4
0
transformers
[ "transformers", "pytorch", "longformer", "multiple-choice", "generated_from_trainer", "dataset:quail", "endpoints_compatible", "region:us" ]
multiple-choice
2022-10-20T15:42:17Z
--- tags: - generated_from_trainer datasets: - quail model-index: - name: longformer_quail results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # longformer_quail This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the quail dataset. It achieves the following results on the evaluation set: - eval_loss: 1.9568 - eval_accuracy: 0.5791 - eval_runtime: 44.254 - eval_samples_per_second: 12.564 - eval_steps_per_second: 6.282 - epoch: 4.0 - step: 816 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 25 - total_train_batch_size: 50 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1 - Datasets 2.5.1 - Tokenizers 0.11.0
ashleychen/bart-finetuning
ashleychen
2022-10-20T18:46:57Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-20T17:44:22Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bart-finetuning results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-finetuning This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.0141 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.3918 | 0.51 | 1000 | 3.1022 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
allenai/drug_combinations_lm_pubmedbert
allenai
2022-10-20T18:25:13Z
39
2
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "biomedical", "bioNLP", "en", "arxiv:2205.02289", "arxiv:2007.15779", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-19T11:25:49Z
--- language: - en tags: - biomedical - bioNLP --- This is a version of [PubmedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext?text=%5BMASK%5D+is+a+tumor+suppressor+gene.) which has been domain-adapted (via additional pretraining) to a set of PubMed abstracts that likely discuss multiple-drug therapies. This model was the strongest contextualized encoder in the experiments in the paper ["A Dataset for N-ary Relation Extraction of Drug Combinations"](https://arxiv.org/abs/2205.02289), when used as a component of a larger relation classification model (also hosted [here on Huggingface](https://huggingface.co/allenai/drug-combo-classifier-pubmedbert-dapt)). If you use this model, cite both ```latex @misc{pubmedbert, author = {Yu Gu and Robert Tinn and Hao Cheng and Michael Lucas and Naoto Usuyama and Xiaodong Liu and Tristan Naumann and Jianfeng Gao and Hoifung Poon}, title = {Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing}, year = {2020}, eprint = {arXiv:2007.15779}, } ``` and ```latex @inproceedings{Tiktinsky2022ADF, title = "A Dataset for N-ary Relation Extraction of Drug Combinations", author = "Tiktinsky, Aryeh and Viswanathan, Vijay and Niezni, Danna and Meron Azagury, Dana and Shamay, Yosi and Taub-Tabib, Hillel and Hope, Tom and Goldberg, Yoav", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.233", doi = "10.18653/v1/2022.naacl-main.233", pages = "3190--3203", } ```
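Since this checkpoint is released as a masked-LM encoder, here is a minimal usage sketch, assuming the standard fill-mask pipeline (the example sentence is illustrative only):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="allenai/drug_combinations_lm_pubmedbert")

# Illustrative prompt; the model was domain-adapted to abstracts about multi-drug therapies.
for prediction in fill_mask("The combination of vemurafenib and [MASK] is being evaluated for melanoma."):
    print(prediction["token_str"], round(prediction["score"], 3))
```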
jxm/u-PMLM-R
jxm
2022-10-20T18:05:26Z
5
2
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2004.11579", "endpoints_compatible", "region:us" ]
feature-extraction
2022-06-01T16:08:29Z
PMLM is the language model described in [Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order](https://arxiv.org/abs/2004.11579), which is trained with probabilistic masking. This is the "PMLM-R" variant, adapted from [the authors' original implementation](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/PMLM).
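The card gives no usage snippet; a minimal sketch for extracting contextual features, assuming the checkpoint loads through the standard `AutoModel`/`AutoTokenizer` classes (as the `bert` and `feature-extraction` tags suggest):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jxm/u-PMLM-R")
model = AutoModel.from_pretrained("jxm/u-PMLM-R")

inputs = tokenizer("Probabilistic masking allows generation in arbitrary word order.", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
print(hidden_states.shape)
```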
jxm/u-PMLM-A
jxm
2022-10-20T18:05:03Z
5
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2004.11579", "endpoints_compatible", "region:us" ]
feature-extraction
2022-06-01T17:37:45Z
PMLM is the language model described in [Probabilistically Masked Language Model Capable of Autoregressive Generation in Arbitrary Word Order](https://arxiv.org/abs/2004.11579), which is trained with probabilistic masking. This is the "PMLM-A" variant, adapted from [the authors' original implementation](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/PMLM).
theojolliffe/T5-model-1-feedback-2010-e4
theojolliffe
2022-10-20T17:31:20Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-20T16:46:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: T5-model-1-feedback-2010-e4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # T5-model-1-feedback-2010-e4 This model is a fine-tuned version of [theojolliffe/T5-model-1-feedback-1109](https://huggingface.co/theojolliffe/T5-model-1-feedback-1109) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2075 - Rouge1: 92.2165 - Rouge2: 86.2314 - Rougel: 91.5975 - Rougelsum: 91.509 - Gen Len: 15.2586 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.2922 | 1.0 | 1646 | 0.2603 | 91.6366 | 84.8657 | 90.8246 | 90.9026 | 15.092 | | 0.2264 | 2.0 | 3292 | 0.2311 | 92.5522 | 86.8008 | 91.9435 | 91.88 | 15.2586 | | 0.187 | 3.0 | 4938 | 0.2085 | 91.982 | 86.0585 | 91.3852 | 91.3091 | 15.3161 | | 0.1879 | 4.0 | 6584 | 0.2075 | 92.2165 | 86.2314 | 91.5975 | 91.509 | 15.2586 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
rbroc/contrastive-user-encoder-singlepost
rbroc
2022-10-20T16:56:21Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "feature-extraction", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-10-19T08:56:38Z
--- language: - en license: apache-2.0 library_name: transformers --- ### Contrastive user encoder (single post) This model is a `DistilBertModel` trained by fine-tuning `distilbert-base-uncased` on author-based triplet loss. #### Details Training and evaluation details are provided in our EMNLP Findings paper: - Rocca, R., & Yarkoni, T. (2022), Language as a fingerprint: Self-supervised learning of user encodings using transformers, to appear in *Findings of the Association for Computational Linguistics: EMNLP 2022* #### Training We fine-tuned DistilBERT on triplets consisting of: - a single Reddit submission from a given user (the "anchor") - see ```rbroc/contrastive-user-encoder-multipost``` for a model trained on aggregated embeddings of multiple anchors; - an additional post from the same user (a "positive example"); - a post from a different, randomly selected user (the "negative example") To compute the loss, we use the [CLS] encoding of the anchor, positive example and negative example from the last layer of the DistilBERT encoder. We optimize for \\(max(||f(a) - f(n)|| - ||f(a) - f(p)|| + \alpha,0)\\) where: - \\( f(a)\\) is the [CLS] encoding of the anchor; - \\( f(n) \\) is the [CLS] encoding of the negative example; - \\( f(p) \\) is the [CLS] encoding of the positive example; - \\( \alpha \\) is a tunable parameter called margin. Here, we tuned this to \\( \alpha = 1.0\\) #### Evaluation and usage The model yields performance advantages on downstream user-based classification tasks. We encourage usage and benchmarking on tasks involving: - prediction of user traits (e.g., personality); - extraction of user-aware text encodings (e.g., style modeling); - contextualized text modeling, where standard text representations are complemented with compact user representations #### Limitations Being exclusively trained on Reddit data, our models probably overfit to linguistic markers and traits which are relevant to characterizing the Reddit user population, but less salient in the general population. Domain-specific fine-tuning may be required before deployment. Furthermore, our self-supervised approach enforces little or no control over biases, which models may actively use as part of their heuristics in contrastive and downstream tasks.
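A minimal sketch of extracting the [CLS]-based user encoding described above, assuming only the standard `transformers` API (the example post is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("rbroc/contrastive-user-encoder-singlepost")
model = AutoModel.from_pretrained("rbroc/contrastive-user-encoder-singlepost")

post = "Just finished my first marathon; sharing the training schedule that worked for me."
inputs = tokenizer(post, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# The user representation is the last-layer [CLS] encoding (the first token position).
user_encoding = outputs.last_hidden_state[:, 0]
print(user_encoding.shape)  # (1, hidden_size)
```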
tringuyexn/ppo-LunarLander-v2
tringuyexn
2022-10-20T16:55:51Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-20T16:55:27Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 237.09 +/- 23.08 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
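The usage section above is left as a TODO by the template. A minimal loading sketch, assuming the standard `huggingface_sb3` helper; the checkpoint filename inside the repository is an assumption and should be checked against the repo file list:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; verify it in the repository's "Files" tab.
checkpoint = load_from_hub(repo_id="tringuyexn/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```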
north/t5_base_scand3M
north
2022-10-20T16:16:52Z
4
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "t5", "text2text-generation", "no", "nn", "sv", "da", "is", "en", "dataset:nbailab/NCC", "dataset:mc4", "dataset:wikipedia", "arxiv:2104.09617", "arxiv:1910.10683", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-13T09:02:03Z
--- language: - no - nn - sv - da - is - en datasets: - nbailab/NCC - mc4 - wikipedia widget: - text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede. - text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den. license: other --- The North-T5 models are a set of Norwegian and Scandinavian sequence-to-sequence models. They build upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation. | |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_| |:-----------|:------------:|:------------:|:------------:|:------------:|:------------:| |North-T5&#8209;NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)|| |North-T5&#8209;NCC&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)|| |North-T5&#8209;NCC&#8209;modern|[🤗](https://huggingface.co/north/t5_small_NCC_modern)|[🤗](https://huggingface.co/north/t5_base_NCC_modern)|[🤗](https://huggingface.co/north/t5_large_NCC_modern)|[🤗](https://huggingface.co/north/t5_xl_NCC_modern)|| |North-T5&#8209;NCC&#8209;modern&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_modern_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_modern_lm)|| |North-T5&#8209;NCC&#8209;scand|[🤗](https://huggingface.co/north/t5_small_NCC_scand)|[🤗](https://huggingface.co/north/t5_base_NCC_scand)|[🤗](https://huggingface.co/north/t5_large_NCC_scand)|[🤗](https://huggingface.co/north/t5_xl_NCC_scand)|| |North-T5&#8209;scand|[🤗](https://huggingface.co/north/t5_small_scand)|[🤗](https://huggingface.co/north/t5_base_scand)|[🤗](https://huggingface.co/north/t5_large_scand)|| |North-byT5&#8209;NCC|[🤗](https://huggingface.co/north/byt5_small_NCC)|[🤗](https://huggingface.co/north/byt5_base_NCC)|[🤗](https://huggingface.co/north/byt5_large_NCC)|| |North-T5&#8209;scand3M|✔|[🤗](https://huggingface.co/north/t5_large_scand3M)|[🤗](https://huggingface.co/north/t5_xl_scand3M)|| ## T5X Checkpoint The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/base/scandinavian3k_t5x_base/). ## Performance A thorough evaluation of the North-T5 models is planned, and I strongly recommend that external researchers make their own evaluations. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** | |:-----------|:------------| |mT5-base|73.2 | |mBERT-base|78.4 | |NorBERT-base|78.2 | |North-T5-small|80.5 | |nb-bert-base|81.8 | |North-T5-base|85.3 | |North-T5-large|86.7 | |North-T5-xl|88.7 | |North-T5-xxl|91.8| These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test results from the best model after 10 runs with early stopping and a decaying learning rate. The T5 results are the average of five runs on the evaluation set. The small model was trained for 10.000 steps, while the rest for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Nor was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal. ## Sub-versions of North-T5 The following sub-versions are available. More versions will be available shortly. |**Model** | **Description** | |:-----------|:-------| |**North&#8209;T5&#8209;NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.| |**North&#8209;T5&#8209;NCC&#8209;lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. When doing, for instance, translation and NLI, it is well documented that there is a clear benefit from doing a step of unsupervised LM training before starting the finetuning.| |**North&#8209;T5&#8209;NCC&#8209;modern**| The model is pretrained for an additional 200k steps on a balanced Bokmål and Nynorsk corpus. While this was originally done for doing translation between Bokmål and Nynorsk, it might also give improved results on tasks where you know that the input/output is modern "standard" text. A significant part of the training corpus is newspapers and reports.| |**North&#8209;T5&#8209;NCC&#8209;modern&#8209;lm**| Trained as above but with an additional 100k "language model" pretraining.| |**North&#8209;T5&#8209;NCC&#8209;scand**|The model is pretrained for an additional 200k steps on a Scandinavian corpus (Bokmål, Nynorsk, Danish, Swedish and Icelandic (+ a tiny bit of Faroese)). The model was trained to increase the understanding of what effect such training has on various languages.| |**North&#8209;T5&#8209;scand**|Pretrained for 1,700,000 steps starting with the mT5 checkpoint. The purpose of the model is to study the effect of different training regimes for Scandinavian language models.| |**North&#8209;byT5&#8209;base**| This is a vocabulary-free version of T5. It is trained exactly like North-T5, but instead of the 250,112-token vocabulary, this model operates directly on the raw text. The model architecture might be of particular interest for tasks involving, for instance, spelling correction, OCR cleaning, handwriting recognition etc. However, it will - by design - have a much shorter maximum sequence length.| ## Fine-tuned versions As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used.
Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base models in a Google Colab. Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used. * Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base) * DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base) ## Training details All models are built using the Flax-based T5X codebase, and all models are initialised with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is however usually not very compute intensive, and in most cases it can be performed even with free online training resources. All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks. While the huge models almost always will give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base model. The base models can easily be finetuned on a standard graphics card or a free TPU through Google Colab. All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning. ## Formats All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format. ## Future I will continue to train and release additional models in this set.
Which models are added will depend on feedback from users. ## Thanks This release would not have been possible without the support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running. Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition, he has been a discussion partner in the creation of these models. Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X. ## Warranty Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases. ## Contact/About These models were trained by Per E Kummervold. Please contact me at [email protected].
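As noted above, the checkpoint is only trained on the unsupervised masking objective; a minimal sketch of running one of the widget examples through the Transformers conversion (standard seq2seq API; generation settings are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("north/t5_base_scand3M")
model = AutoModelForSeq2SeqLM.from_pretrained("north/t5_base_scand3M")

text = "På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den."
inputs = tokenizer(text, return_tensors="pt")

# The pretrained model only predicts the masked spans (the <extra_id_*> sentinels);
# for concrete downstream tasks it should first be fine-tuned, as described above.
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```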
fortworthasapcreditrepair/Credit-Repair-in-Fort-Worth
fortworthasapcreditrepair
2022-10-20T15:34:10Z
0
0
null
[ "region:us" ]
null
2022-10-20T15:32:48Z
We offer FREE consultations, evaluations, and credit education. Our process only takes 30-60 days and we offer a 100% MONEY-BACK GUARANTEE on almost all our services. Don’t let bad credit and financial concerns hold you back anymore. Ask about our FREE [Credit Repair Fort Worth](https://fortworth.asapcreditrepairusa.com) Referral Services TODAY!
IShallRiseAgain/StudioGhibli
IShallRiseAgain
2022-10-20T14:47:06Z
0
86
null
[ "region:us" ]
null
2022-10-11T18:56:13Z
Prompt is studio_ghibli_anime_style style I know people will ignore this, but please don't use this to make NFTs.
Mattbrenr/What
Mattbrenr
2022-10-20T14:07:37Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-10-20T14:07:37Z
--- license: creativeml-openrail-m ---
auriolar/Reinformce-Pong-PLE-v0
auriolar
2022-10-20T14:07:27Z
0
0
null
[ "Pong-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-10-20T14:07:14Z
--- tags: - Pong-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinformce-Pong-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pong-PLE-v0 type: Pong-PLE-v0 metrics: - type: mean_reward value: -16.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pong-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
moro23/wav2vec2-large-xlsr-53-ha-colab_1
moro23
2022-10-20T14:05:09Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_10_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-20T11:29:16Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice_10_0 model-index: - name: wav2vec2-large-xlsr-53-ha-colab_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-ha-colab_1 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_10_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.7843 - Wer: 0.4827 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.2849 | 5.19 | 400 | 2.8140 | 1.0 | | 1.4323 | 10.39 | 800 | 0.6695 | 0.5772 | | 0.2833 | 15.58 | 1200 | 0.6866 | 0.5036 | | 0.1798 | 20.77 | 1600 | 0.7698 | 0.4950 | | 0.1369 | 25.97 | 2000 | 0.7843 | 0.4827 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 2.3.2 - Tokenizers 0.10.3
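A minimal inference sketch for this checkpoint, assuming the standard automatic-speech-recognition pipeline (the audio path is a placeholder; wav2vec2 models expect 16 kHz audio):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="moro23/wav2vec2-large-xlsr-53-ha-colab_1")

# Placeholder path to a Hausa speech clip; resample to 16 kHz mono if needed.
print(asr("hausa_sample.wav"))
```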
Sandipan1987/Test-finetuned-imdb
Sandipan1987
2022-10-20T13:43:25Z
3
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-20T11:22:04Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Sandipan1987/Test-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Sandipan1987/Test-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8505 - Validation Loss: 2.5715 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -687, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.8505 | 2.5715 | 0 | ### Framework versions - Transformers 4.23.1 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.1
jayanta/mit-b2-fv-finetuned-memes
jayanta
2022-10-20T13:21:30Z
46
0
transformers
[ "transformers", "pytorch", "tensorboard", "segformer", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-20T11:38:15Z
--- license: other tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy - precision - recall - f1 model-index: - name: mit-b2-fv-finetuned-memes results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8323029366306027 - name: Precision type: precision value: 0.831217385971583 - name: Recall type: recall value: 0.8323029366306027 - name: F1 type: f1 value: 0.831492653119617 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mit-b2-fv-finetuned-memes This model is a fine-tuned version of [nvidia/mit-b2](https://huggingface.co/nvidia/mit-b2) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5984 - Accuracy: 0.8323 - Precision: 0.8312 - Recall: 0.8323 - F1: 0.8315 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00012 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.3683 | 0.99 | 20 | 1.1798 | 0.5703 | 0.4914 | 0.5703 | 0.4915 | | 1.0113 | 1.99 | 40 | 1.0384 | 0.6159 | 0.6813 | 0.6159 | 0.6274 | | 0.7581 | 2.99 | 60 | 0.8348 | 0.6808 | 0.7377 | 0.6808 | 0.6840 | | 0.6241 | 3.99 | 80 | 0.6034 | 0.7713 | 0.7864 | 0.7713 | 0.7735 | | 0.4999 | 4.99 | 100 | 0.5481 | 0.7944 | 0.8000 | 0.7944 | 0.7909 | | 0.3981 | 5.99 | 120 | 0.5253 | 0.8022 | 0.8091 | 0.8022 | 0.8000 | | 0.3484 | 6.99 | 140 | 0.4688 | 0.8238 | 0.8147 | 0.8238 | 0.8146 | | 0.3142 | 7.99 | 160 | 0.6245 | 0.7867 | 0.8209 | 0.7867 | 0.7920 | | 0.2339 | 8.99 | 180 | 0.5053 | 0.8362 | 0.8426 | 0.8362 | 0.8355 | | 0.2284 | 9.99 | 200 | 0.5070 | 0.8230 | 0.8220 | 0.8230 | 0.8187 | | 0.1824 | 10.99 | 220 | 0.5780 | 0.8006 | 0.8138 | 0.8006 | 0.8035 | | 0.1561 | 11.99 | 240 | 0.5429 | 0.8253 | 0.8197 | 0.8253 | 0.8218 | | 0.1229 | 12.99 | 260 | 0.5325 | 0.8331 | 0.8296 | 0.8331 | 0.8303 | | 0.1232 | 13.99 | 280 | 0.5595 | 0.8277 | 0.8290 | 0.8277 | 0.8273 | | 0.118 | 14.99 | 300 | 0.5974 | 0.8292 | 0.8345 | 0.8292 | 0.8299 | | 0.11 | 15.99 | 320 | 0.5796 | 0.8253 | 0.8228 | 0.8253 | 0.8231 | | 0.0948 | 16.99 | 340 | 0.5581 | 0.8346 | 0.8358 | 0.8346 | 0.8349 | | 0.0985 | 17.99 | 360 | 0.5700 | 0.8338 | 0.8301 | 0.8338 | 0.8318 | | 0.0821 | 18.99 | 380 | 0.5756 | 0.8331 | 0.8343 | 0.8331 | 0.8335 | | 0.0813 | 19.99 | 400 | 0.5984 | 0.8323 | 0.8312 | 0.8323 | 0.8315 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.6.1.dev0 - Tokenizers 0.13.1
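A minimal inference sketch, assuming the checkpoint works with the standard image-classification pipeline (the image path is a placeholder; the class names come from the undocumented `imagefolder` dataset):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jayanta/mit-b2-fv-finetuned-memes")

# Placeholder path; labels are whatever the fine-tuning imagefolder dataset defined.
for prediction in classifier("meme.png"):
    print(prediction["label"], round(prediction["score"], 3))
```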
lewtun/quantized-distilbert-banking77
lewtun
2022-10-20T12:47:39Z
13
0
transformers
[ "transformers", "onnx", "text-classification", "optimum", "dataset:banking77", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-08T09:42:56Z
--- tags: - optimum datasets: - banking77 metrics: - accuracy model-index: - name: quantized-distilbert-banking77 results: - task: name: Text Classification type: text-classification dataset: name: banking77 type: banking77 metrics: - name: Accuracy type: accuracy value: 0.9244 --- # Quantized-distilbert-banking77 This model is a dynamically quantized version of [optimum/distilbert-base-uncased-finetuned-banking77](https://huggingface.co/optimum/distilbert-base-uncased-finetuned-banking77) on the `banking77` dataset. The model was created using the [dynamic-quantization](https://github.com/huggingface/workshops/tree/main/mlops-world) notebook from a workshop presented at MLOps World 2022. It achieves the following results on the evaluation set: **Accuracy** - Vanilla model: 92.5% - Quantized model: 92.44% > The quantized model achieves 99.93% accuracy of the FP32 model **Latency** Payload sequence length: 128 Instance type: AWS c6i.xlarge | latency | vanilla transformers | quantized optimum model | improvement | |---------|----------------------|-------------------------|-------------| | p95 | 63.24ms | 37.06ms | 1.71x | | avg | 62.87ms | 37.93ms | 1.66x | ## How to use ```python from optimum.onnxruntime import ORTModelForSequenceClassification from transformers import pipeline, AutoTokenizer model = ORTModelForSequenceClassification.from_pretrained("lewtun/quantized-distilbert-banking77") tokenizer = AutoTokenizer.from_pretrained("lewtun/quantized-distilbert-banking77") classifier = pipeline("text-classification", model=model, tokenizer=tokenizer) classifier("What is the exchange rate like on this app?") ```
felixrosberg/RetinaFace
felixrosberg
2022-10-20T12:30:36Z
0
2
null
[ "license:mit", "region:us" ]
null
2022-10-20T12:29:11Z
--- license: mit --- Pretrained RetinaFace .h5 model (TensorFlow/Keras)
ChaosW/autohome-deberta-v2-xlarge-base
ChaosW
2022-10-20T12:21:06Z
3
0
transformers
[ "transformers", "pytorch", "deberta-v2", "fill-mask", "bert", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-20T12:19:19Z
--- language: - zh license: apache-2.0 tags: - bert inference: true widget: - text: "生活的真谛是[MASK]。" --- # Erlangshen-Deberta-97M-Chinese, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). This is a 97-million-parameter DeBERTa-v2 base model with an encoder-only transformer structure, trained on 180G of Chinese data for 7 days on 24 A100 (40G) GPUs, consuming 1B samples in total. ## Task Description Erlangshen-Deberta-97M-Chinese is pre-trained with a BERT-like masking task, following the DeBERTa [paper](https://readpaper.com/paper/3033187248). ## Usage ```python from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline import torch tokenizer=AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-Chinese', use_fast=False) model=AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-97M-Chinese') text = '生活的真谛是[MASK]。' fillmask_pipe = FillMaskPipeline(model, tokenizer, device=7) print(fillmask_pipe(text, top_k=10)) ``` ## Finetune We present the dev results on some tasks. | Model | OCNLI | CMNLI | | ---------------------------------- | ----- | ------ | | RoBERTa-base | 0.743 | 0.7973 | | **Erlangshen-Deberta-97M-Chinese** | 0.752 | 0.807 | ## Citation If you find this resource useful, please cite the following website in your paper. ``` @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2022}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
bthomas/article2keyword2.1b_barthez-orangesum-title_finetuned16_for_mlm
bthomas
2022-10-20T12:04:52Z
7
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "mlm", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-20T09:46:19Z
--- license: apache-2.0 tags: - mlm - generated_from_trainer model-index: - name: article2keyword2.1b_barthez-orangesum-title_finetuned16_for_mlm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # article2keyword2.1b_barthez-orangesum-title_finetuned16_for_mlm This model is a fine-tuned version of [moussaKam/barthez-orangesum-title](https://huggingface.co/moussaKam/barthez-orangesum-title) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.2976 | 1.0 | 1353 | 0.0543 | | 0.0566 | 2.0 | 2706 | 0.0509 | | 0.0487 | 3.0 | 4059 | 0.0458 | | 0.0433 | 4.0 | 5412 | 0.0456 | | 0.04 | 5.0 | 6765 | 0.0460 | | 0.0373 | 6.0 | 8118 | 0.0454 | | 0.0355 | 7.0 | 9471 | 0.0465 | | 0.0328 | 8.0 | 10824 | 0.0474 | | 0.0317 | 9.0 | 12177 | 0.0470 | | 0.03 | 10.0 | 13530 | 0.0488 | | 0.0285 | 11.0 | 14883 | 0.0489 | | 0.0272 | 12.0 | 16236 | 0.0500 | | 0.0262 | 13.0 | 17589 | 0.0510 | | 0.0258 | 14.0 | 18942 | 0.0511 | | 0.0245 | 15.0 | 20295 | 0.0522 | | 0.0239 | 16.0 | 21648 | 0.0525 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.11.0 - Datasets 2.3.2 - Tokenizers 0.12.1
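The card above carries no usage example; below is a minimal sketch assuming the standard 🤗 Transformers text2text pipeline. The French sample sentence is invented, and the exact output format of the keyword model is not documented here.

```python
from transformers import pipeline

# The checkpoint is an mBART-based seq2seq model, so the text2text-generation pipeline applies.
keyworder = pipeline(
    "text2text-generation",
    model="bthomas/article2keyword2.1b_barthez-orangesum-title_finetuned16_for_mlm",
)
print(keyworder("Le réchauffement climatique accélère la fonte des glaciers alpins.", max_length=32))
```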
royam0820/distilbert-base-uncased-finetuned-emotion
royam0820
2022-10-20T11:50:05Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-30T14:56:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9265 - name: F1 type: f1 value: 0.9266805971687471 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2157 - Accuracy: 0.9265 - F1: 0.9267 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8322 | 1.0 | 250 | 0.3176 | 0.905 | 0.9015 | | 0.2481 | 2.0 | 500 | 0.2157 | 0.9265 | 0.9267 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
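Since the card above stops at "More information needed", here is a minimal inference sketch assuming the standard text-classification pipeline; the example sentence is invented and the label names come from the emotion dataset, not from this card.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="royam0820/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you this weekend!"))  # returns the top emotion label with its score
```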
amanneo/mail-generator-mini
amanneo
2022-10-20T11:02:15Z
11
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-10-20T07:56:09Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: amanneo/mail-generator-mini results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # amanneo/mail-generator-mini This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.4613 - Train Accuracy: 0.1611 - Validation Loss: 5.2617 - Validation Accuracy: 0.1386 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -925, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 10.0053 | 0.1068 | 8.5247 | 0.1394 | 0 | | 8.7772 | 0.1505 | 7.9685 | 0.1656 | 1 | | 8.2057 | 0.1663 | 7.4436 | 0.1655 | 2 | | 7.5786 | 0.1611 | 6.8572 | 0.1654 | 3 | | 6.9698 | 0.1679 | 6.3646 | 0.1735 | 4 | | 6.4911 | 0.1763 | 6.0124 | 0.1787 | 5 | | 6.1632 | 0.1834 | 5.7751 | 0.1826 | 6 | | 5.9057 | 0.1840 | 5.5786 | 0.1749 | 7 | | 5.6874 | 0.1758 | 5.4023 | 0.1616 | 8 | | 5.4613 | 0.1611 | 5.2617 | 0.1386 | 9 | ### Framework versions - Transformers 4.23.1 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.1
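The card above documents training only; a generation sketch is shown below, assuming the checkpoint loads with the standard TF auto classes. The prompt text and sampling settings are invented, since the fine-tuning prompt format is not described.

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("amanneo/mail-generator-mini")
model = TFAutoModelForCausalLM.from_pretrained("amanneo/mail-generator-mini")

inputs = tokenizer("Dear team,", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```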
auriolar/Reinforce-Pixelcopter-PLE-v0
auriolar
2022-10-20T10:44:48Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-20T09:47:27Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 29.80 +/- 36.27 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
readerbench/RoSummary-large
readerbench
2022-10-20T10:00:37Z
10
1
transformers
[ "transformers", "pytorch", "tf", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-19T06:42:50Z
---
language:
- ro
---

# RoSummary-large

This is a version of the RoGPT2 model trained on the [AlephNews](https://huggingface.co/datasets/readerbench/AlephNews) dataset for the summarization task. There are 3 trained versions; they are available on the HuggingFace Hub:
* [base](https://huggingface.co/readerbench/RoSummary-base)
* [medium](https://huggingface.co/readerbench/RoSummary-medium)
* [large](https://huggingface.co/readerbench/RoSummary-large)

## Evaluation on [AlephNews](https://huggingface.co/datasets/readerbench/AlephNews)

| Model | Decode Method | | BERTScore | | | ROUGE | |
|:------:|:--------------:|:---------:|:---------:|:--------:|:--------:|:--------:|:--------:|
| | | Precision | Recall | F1-Score | ROUGE-1 | ROUGE-2 | ROUGE-L |
| | Greedy | 0.7335 | 0.7399 | 0.7358 | 0.3360 | 0.1862 | 0.3333 |
| Base | Beam Search | 0.7354 | 0.7468 | 0.7404 | 0.3480 | 0.1991 | 0.3416 |
| | Top-p Sampling | 0.7296 | 0.7299 | 0.7292 | 0.3058 | 0.1452 | 0.2951 |
| | Greedy | 0.7378 | 0.7401 | 0.7380 | 0.3422 | 0.1922 | 0.3394 |
| Medium | Beam Search | 0.7390 | **0.7493**|**0.7434**|**0.3546**|**0.2061**|**0.3467**|
| | Top-p Sampling | 0.7315 | 0.7285 | 0.7294 | 0.3042 | 0.1400 | 0.2921 |
| | Greedy | 0.7376 | 0.7424 | 0.7391 | 0.3414 | 0.1895 | 0.3355 |
| Large | Beam Search | **0.7394**| 0.7470 | 0.7424 | 0.3492 | 0.1995 | 0.3384 |
| | Top-p Sampling | 0.7311 | 0.7301 | 0.7299 | 0.3051 | 0.1418 | 0.2931 |

## Acknowledgments
---
Research supported with [Cloud TPUs](https://cloud.google.com/tpu/) from Google's [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc)
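The card above reports scores but no inference snippet. The sketch below only shows how to load the checkpoint and decode with beam search; the prompt/separator format the model expects between article and summary is not documented here, so treat it as a skeleton.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("readerbench/RoSummary-large")
model = AutoModelForCausalLM.from_pretrained("readerbench/RoSummary-large")

article = "Textul articolului de rezumat ..."  # placeholder Romanian news text
inputs = tokenizer(article, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```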
bthomas/article2keyword2.1b_paraphrase-multilingual-MiniLM-L12-v2_finetuned_for_mlm
bthomas
2022-10-20T09:36:12Z
5
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "mlm", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-20T08:33:40Z
--- license: apache-2.0 tags: - mlm - generated_from_trainer model-index: - name: article2keyword2.1b_paraphrase-multilingual-MiniLM-L12-v2_finetuned_for_mlm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # article2keyword2.1b_paraphrase-multilingual-MiniLM-L12-v2_finetuned_for_mlm This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0673 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.3777 | 1.0 | 1353 | 0.3168 | | 0.2358 | 2.0 | 2706 | 0.1564 | | 0.1372 | 3.0 | 4059 | 0.1149 | | 0.1046 | 4.0 | 5412 | 0.0956 | | 0.086 | 5.0 | 6765 | 0.0853 | | 0.0741 | 6.0 | 8118 | 0.0786 | | 0.0653 | 7.0 | 9471 | 0.0750 | | 0.0594 | 8.0 | 10824 | 0.0726 | | 0.0542 | 9.0 | 12177 | 0.0699 | | 0.0504 | 10.0 | 13530 | 0.0692 | | 0.047 | 11.0 | 14883 | 0.0684 | | 0.0444 | 12.0 | 16236 | 0.0675 | | 0.0423 | 13.0 | 17589 | 0.0674 | | 0.0404 | 14.0 | 18942 | 0.0673 | | 0.0392 | 15.0 | 20295 | 0.0672 | | 0.0379 | 16.0 | 21648 | 0.0673 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.11.0 - Datasets 2.3.2 - Tokenizers 0.12.1
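No usage snippet accompanies the card above; a minimal fill-mask sketch follows, using the tokenizer's own mask token rather than assuming a specific one. The example sentence is invented.

```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="bthomas/article2keyword2.1b_paraphrase-multilingual-MiniLM-L12-v2_finetuned_for_mlm",
)
print(fill(f"Paris is the {fill.tokenizer.mask_token} of France."))
```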
model-attribution-challenge/gpt2
model-attribution-challenge
2022-10-20T09:34:54Z
8
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tflite", "rust", "safetensors", "gpt2", "text-generation", "exbert", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-08T21:30:42Z
--- language: en tags: - exbert license: mit --- # GPT-2 Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."}, {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"}, {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"}, {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"}, {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = TFGPT2Model.from_pretrained('gpt2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes. Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2') >>> set_seed(42) >>> generator("The White man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The White man worked as a mannequin for'}, {'generated_text': 'The White man worked as a maniser of the'}, {'generated_text': 'The White man worked as a bus conductor by day'}, {'generated_text': 'The White man worked as a plumber at the'}, {'generated_text': 'The White man worked as a journalist. He had'}] >>> set_seed(42) >>> generator("The Black man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The Black man worked as a man at a restaurant'}, {'generated_text': 'The Black man worked as a car salesman in a'}, {'generated_text': 'The Black man worked as a police sergeant at the'}, {'generated_text': 'The Black man worked as a man-eating monster'}, {'generated_text': 'The Black man worked as a slave, and was'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). ## Training procedure ### Preprocessing The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training. 
## Evaluation results

The model achieves the following results without any fine-tuning (zero-shot):

| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |

### BibTeX entry and citation info

```bibtex
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}
```

<a href="https://huggingface.co/exbert/?model=gpt2">
    <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
mprzibilla/super_large_finetune_CM01
mprzibilla
2022-10-20T09:04:35Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-19T23:12:30Z
--- tags: - generated_from_trainer model-index: - name: super_large_finetune_CM01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # super_large_finetune_CM01 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.2285 - Wer: 0.7714 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 15 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 857 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.0031 | 5.0 | 1715 | 1.9766 | 0.7857 | | 0.2107 | 10.0 | 3430 | 3.8748 | 0.8238 | | 0.1393 | 15.0 | 5145 | 4.7403 | 0.7952 | | 0.0931 | 20.0 | 6860 | 3.5077 | 0.6667 | | 0.0649 | 25.0 | 8575 | 7.7419 | 0.9333 | | 0.0592 | 30.0 | 10290 | 5.6440 | 0.7762 | | 0.0396 | 35.0 | 12005 | 6.9629 | 0.6810 | | 0.03 | 40.0 | 13720 | 7.8282 | 0.7524 | | 0.0191 | 45.0 | 15435 | 6.4626 | 0.7429 | | 0.0121 | 50.0 | 17150 | 7.2285 | 0.7714 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
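The card above gives no inference example; the sketch below assumes the standard ASR pipeline with a placeholder audio file (16 kHz mono is the usual wav2vec2 expectation).

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mprzibilla/super_large_finetune_CM01")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```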
jayanta/vit-base-patch16-224-FV-20epochs-finetuned-memes
jayanta
2022-10-20T08:21:22Z
43
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-20T07:39:54Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy - precision - recall - f1 model-index: - name: vit-base-patch16-224-FV-20epochs-finetuned-memes results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.8632148377125193 - name: Precision type: precision value: 0.8617373130509159 - name: Recall type: recall value: 0.8632148377125193 - name: F1 type: f1 value: 0.8621436376894498 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-FV-20epochs-finetuned-memes This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6532 - Accuracy: 0.8632 - Precision: 0.8617 - Recall: 0.8632 - F1: 0.8621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00012 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.1709 | 0.99 | 20 | 0.9393 | 0.6971 | 0.6896 | 0.6971 | 0.6890 | | 0.5295 | 1.99 | 40 | 0.5024 | 0.8091 | 0.8210 | 0.8091 | 0.8133 | | 0.2909 | 2.99 | 60 | 0.4070 | 0.8539 | 0.8529 | 0.8539 | 0.8529 | | 0.1435 | 3.99 | 80 | 0.4136 | 0.8539 | 0.8522 | 0.8539 | 0.8522 | | 0.0928 | 4.99 | 100 | 0.4495 | 0.8478 | 0.8548 | 0.8478 | 0.8507 | | 0.0643 | 5.99 | 120 | 0.4897 | 0.8594 | 0.8572 | 0.8594 | 0.8573 | | 0.061 | 6.99 | 140 | 0.5040 | 0.8423 | 0.8490 | 0.8423 | 0.8453 | | 0.0519 | 7.99 | 160 | 0.5266 | 0.8524 | 0.8502 | 0.8524 | 0.8510 | | 0.0546 | 8.99 | 180 | 0.5200 | 0.8586 | 0.8632 | 0.8586 | 0.8605 | | 0.0478 | 9.99 | 200 | 0.5654 | 0.8555 | 0.8548 | 0.8555 | 0.8548 | | 0.0509 | 10.99 | 220 | 0.5774 | 0.8609 | 0.8626 | 0.8609 | 0.8616 | | 0.0467 | 11.99 | 240 | 0.5847 | 0.8594 | 0.8602 | 0.8594 | 0.8594 | | 0.0468 | 12.99 | 260 | 0.5909 | 0.8601 | 0.8597 | 0.8601 | 0.8596 | | 0.0469 | 13.99 | 280 | 0.5970 | 0.8563 | 0.8560 | 0.8563 | 0.8561 | | 0.0438 | 14.99 | 300 | 0.6234 | 0.8594 | 0.8583 | 0.8594 | 0.8586 | | 0.0441 | 15.99 | 320 | 0.6190 | 0.8563 | 0.8582 | 0.8563 | 0.8570 | | 0.0431 | 16.99 | 340 | 0.6419 | 0.8570 | 0.8584 | 0.8570 | 0.8574 | | 0.0454 | 17.99 | 360 | 0.6528 | 0.8563 | 0.8556 | 0.8563 | 0.8558 | | 0.0417 | 18.99 | 380 | 0.6688 | 0.8578 | 0.8575 | 0.8578 | 0.8574 | | 0.0432 | 19.99 | 400 | 0.6532 | 0.8632 | 0.8617 | 0.8632 | 0.8621 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.6.1.dev0 - Tokenizers 0.13.1
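For the image classifier above, here is a minimal sketch using the image-classification pipeline; `meme.jpg` is a placeholder and the label set comes from the fine-tuned config, not from this card.

```python
from transformers import pipeline

classify = pipeline("image-classification", model="jayanta/vit-base-patch16-224-FV-20epochs-finetuned-memes")
print(classify("meme.jpg"))  # placeholder image path or URL
```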
Ddff/Edee
Ddff
2022-10-20T07:54:35Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2022-10-20T07:54:35Z
--- license: bigscience-openrail-m ---
ArafatBHossain/bert-distilled-model-flip_mind_epoch12_alpha0.8
ArafatBHossain
2022-10-20T06:26:44Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-20T05:35:15Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-distilled-model-flip_mind_epoch12_alpha0.8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-distilled-model-flip_mind_epoch12_alpha0.8 This model is a fine-tuned version of [ArafatBHossain/distill_bert_fine_tuned_mind](https://huggingface.co/ArafatBHossain/distill_bert_fine_tuned_mind) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7953 - Accuracy: 0.914 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.8595 | 1.0 | 3054 | 1.8311 | 0.854 | | 1.7769 | 2.0 | 6108 | 1.7204 | 0.847 | | 1.7614 | 3.0 | 9162 | 1.7666 | 0.8666 | | 1.7212 | 4.0 | 12216 | 1.8134 | 0.8716 | | 1.7255 | 5.0 | 15270 | 1.7368 | 0.8812 | | 1.6845 | 6.0 | 18324 | 1.7368 | 0.8898 | | 1.7346 | 7.0 | 21378 | 1.6621 | 0.8936 | | 1.7436 | 8.0 | 24432 | 1.7180 | 0.9008 | | 1.7333 | 9.0 | 27486 | 1.7523 | 0.9048 | | 1.7805 | 10.0 | 30540 | 1.7820 | 0.9078 | | 1.792 | 11.0 | 33594 | 1.7329 | 0.9096 | | 1.7463 | 12.0 | 36648 | 1.7953 | 0.914 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.11.0 - Datasets 2.6.1 - Tokenizers 0.12.1
debbiesoon/summarise_v6
debbiesoon
2022-10-20T04:32:42Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "led", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-16T20:04:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: summarise_v6 results: [] --- # summarise_v6 This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0497 - Rouge2 Precision: 0.3109 - Rouge2 Recall: 0.406 - Rouge2 Fmeasure: 0.3375 ## Model description More information needed ## Intended uses & limitations max_input_length = 3072 max_output_length = 1000 led.config.max_length = 1000 led.config.min_length = 100 ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:| | 1.7163 | 0.22 | 10 | 1.2307 | 0.1428 | 0.5118 | 0.2089 | | 1.632 | 0.44 | 20 | 1.1337 | 0.36 | 0.3393 | 0.3181 | | 1.0916 | 0.67 | 30 | 1.0738 | 0.2693 | 0.3487 | 0.2731 | | 1.573 | 0.89 | 40 | 1.0497 | 0.3109 | 0.406 | 0.3375 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 1.2.1 - Tokenizers 0.12.1
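The intended-use section above quotes `max_input_length = 3072` and generation bounds of 100 to 1000 tokens; the sketch below wires those numbers into a standard LED seq2seq call. The input document is a placeholder, and global attention settings are left at the library defaults.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("debbiesoon/summarise_v6")
model = AutoModelForSeq2SeqLM.from_pretrained("debbiesoon/summarise_v6")

document = "..."  # placeholder long input text
inputs = tokenizer(document, max_length=3072, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, min_length=100, max_length=1000, num_beams=2)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```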
Kevin961312/LunarLander
Kevin961312
2022-10-20T04:12:32Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-20T03:56:57Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -104.91 +/- 121.92
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename below is an assumption; check the files in this repository):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "LunarLander.zip" is assumed; replace it with the actual checkpoint filename in the repo.
checkpoint = load_from_hub(repo_id="Kevin961312/LunarLander", filename="LunarLander.zip")
model = PPO.load(checkpoint)
```
tomjam/bert-finetuned-ner
tomjam
2022-10-20T01:48:18Z
16
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-20T00:48:15Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: train args: conll2003 metrics: - name: Precision type: precision value: 0.9352911896465903 - name: Recall type: recall value: 0.9486704813194211 - name: F1 type: f1 value: 0.9419333277633887 - name: Accuracy type: accuracy value: 0.9864455171601814 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0610 - Precision: 0.9353 - Recall: 0.9487 - F1: 0.9419 - Accuracy: 0.9864 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0907 | 1.0 | 1756 | 0.0732 | 0.9188 | 0.9337 | 0.9262 | 0.9818 | | 0.035 | 2.0 | 3512 | 0.0607 | 0.9280 | 0.9480 | 0.9379 | 0.9859 | | 0.0169 | 3.0 | 5268 | 0.0610 | 0.9353 | 0.9487 | 0.9419 | 0.9864 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
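A short inference sketch for the NER model above, assuming the standard token-classification pipeline; the sentence is invented and entity grouping is enabled for readability.

```python
from transformers import pipeline

ner = pipeline("token-classification", model="tomjam/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("My name is Wolfgang and I live in Berlin."))
```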
vwxyzjn/BreakoutNoFrameskip-v4-dqn_atari-seed1
vwxyzjn
2022-10-20T00:34:56Z
0
0
null
[ "tensorboard", "BreakoutNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-10-20T00:34:52Z
--- tags: - BreakoutNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - custom-implementation model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: BreakoutNoFrameskip-v4 type: BreakoutNoFrameskip-v4 metrics: - type: mean_reward value: 2.70 +/- 4.12 name: mean_reward verified: false --- # (CleanRL) **DQN** Agent Playing **BreakoutNoFrameskip-v4** This is a trained model of a DQN agent playing BreakoutNoFrameskip-v4. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari.py). # Hyperparameters ```python {'batch_size': 32, 'buffer_size': 1000000, 'capture_video': False, 'cuda': True, 'end_e': 0.01, 'env_id': 'BreakoutNoFrameskip-v4', 'exp_name': 'dqn_atari', 'exploration_fraction': 0.1, 'gamma': 0.99, 'hf_entity': '', 'learning_rate': 0.0001, 'learning_starts': 80000, 'save_model': True, 'seed': 1, 'start_e': 1, 'target_network_frequency': 1000, 'torch_deterministic': True, 'total_timesteps': 10000, 'track': False, 'train_frequency': 4, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
giusepperusso/distilbert-base-uncased-finetuned-imdb
giusepperusso
2022-10-19T23:59:29Z
3
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-19T16:03:13Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: giusepperusso/distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # giusepperusso/distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.5679 - Validation Loss: 2.3517 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 92750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.5679 | 2.3517 | 0 | ### Framework versions - Transformers 4.23.1 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.1
mariolinml/deberta-v3-base_mnli_uf_ner_1019_v1
mariolinml
2022-10-19T23:53:22Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-19T22:55:55Z
--- license: mit tags: - generated_from_trainer model-index: - name: deberta-v3-base_mnli_uf_ner_1019_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-base_mnli_uf_ner_1019_v1 This model is a fine-tuned version of [mariolinml/deberta-v3-base_MNLI_10_19_v0](https://huggingface.co/mariolinml/deberta-v3-base_MNLI_10_19_v0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
CavenLen/ddpm-Kaga-128
CavenLen
2022-10-19T22:03:31Z
19
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:CavenLen/Kaga", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-10-17T12:48:44Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: CavenLen/Kaga metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-Kaga-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `CavenLen/Kaga` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/CavenLen/ddpm-Kaga-128/tensorboard?#scalars)
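The "How to use" section above is still a TODO; a minimal unconditional sampling sketch, assuming the standard `DDPMPipeline` API for this checkpoint, would look like this (the output filename is arbitrary):

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("CavenLen/ddpm-Kaga-128")
image = pipeline().images[0]  # one unconditional sample
image.save("ddpm_kaga_sample.png")
```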
thucdangvan020999/marian-finetuned-kde4-en-to-fr
thucdangvan020999
2022-10-19T21:12:47Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-10-19T19:27:37Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 52.83113187001415 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8560 - Bleu: 52.8311 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
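A minimal translation sketch for the checkpoint above, using the standard pipeline API; the English sentence is just an example.

```python
from transformers import pipeline

translator = pipeline("translation", model="thucdangvan020999/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```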
mathislucka/tat-model
mathislucka
2022-10-19T20:44:53Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-19T20:44:45Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # mathislucka/tat-model This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('mathislucka/tat-model') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mathislucka/tat-model) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 39 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MarginMSELoss.MarginMSELoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 3, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 250, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
rajesh426/distilbert-base-uncased_finetuned_SPEECH_TEXT_CH_2_DISPLAY
rajesh426
2022-10-19T19:38:11Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-19T19:31:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased_finetuned_SPEECH_TEXT_CH_2_DISPLAY results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_finetuned_SPEECH_TEXT_CH_2_DISPLAY This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0863 - Accuracy: 0.7368 - F1: 0.7114 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.0362 | 1.0 | 19 | 0.9281 | 0.5789 | 0.4964 | | 0.9725 | 2.0 | 38 | 0.8906 | 0.6316 | 0.5707 | | 0.8712 | 3.0 | 57 | 0.8080 | 0.6316 | 0.5889 | | 0.6402 | 4.0 | 76 | 0.6386 | 0.7895 | 0.7474 | | 0.4453 | 5.0 | 95 | 0.5401 | 0.7895 | 0.7485 | | 0.2658 | 6.0 | 114 | 0.4999 | 0.8421 | 0.7990 | | 0.1695 | 7.0 | 133 | 0.6248 | 0.7895 | 0.7427 | | 0.0822 | 8.0 | 152 | 0.7391 | 0.7368 | 0.7114 | | 0.0303 | 9.0 | 171 | 0.6665 | 0.7895 | 0.7485 | | 0.016 | 10.0 | 190 | 0.8217 | 0.7368 | 0.7114 | | 0.0103 | 11.0 | 209 | 0.8090 | 0.7368 | 0.7114 | | 0.0083 | 12.0 | 228 | 0.8646 | 0.7368 | 0.7114 | | 0.0068 | 13.0 | 247 | 0.9091 | 0.7368 | 0.7114 | | 0.0059 | 14.0 | 266 | 0.8731 | 0.7368 | 0.7114 | | 0.0049 | 15.0 | 285 | 0.9512 | 0.7368 | 0.7114 | | 0.0048 | 16.0 | 304 | 0.9376 | 0.7368 | 0.7114 | | 0.004 | 17.0 | 323 | 0.9507 | 0.7368 | 0.7114 | | 0.0037 | 18.0 | 342 | 0.9868 | 0.7368 | 0.7114 | | 0.0033 | 19.0 | 361 | 0.9862 | 0.7368 | 0.7114 | | 0.0029 | 20.0 | 380 | 0.9733 | 0.7368 | 0.7114 | | 0.0029 | 21.0 | 399 | 0.9747 | 0.7368 | 0.7114 | | 0.0027 | 22.0 | 418 | 0.9998 | 0.7368 | 0.7114 | | 0.0024 | 23.0 | 437 | 0.9984 | 0.7368 | 0.7114 | | 0.0024 | 24.0 | 456 | 1.0270 | 0.7368 | 0.7114 | | 0.0024 | 25.0 | 475 | 1.0083 | 0.7368 | 0.7114 | | 0.0022 | 26.0 | 494 | 1.0167 | 0.7368 | 0.7114 | | 0.0021 | 27.0 | 513 | 1.0273 | 0.7368 | 0.7114 | | 0.002 | 28.0 | 532 | 1.0340 | 0.7368 | 0.7114 | | 0.0021 | 29.0 | 551 | 1.0282 | 0.7368 | 0.7114 | | 0.002 | 30.0 | 570 | 1.0372 | 0.7368 | 0.7114 | | 0.0019 | 31.0 | 589 | 1.0593 | 0.7368 | 0.7114 | | 0.0017 | 32.0 | 608 | 1.0841 | 0.7368 | 0.7114 | | 0.0018 | 33.0 | 627 | 1.0920 | 0.7368 | 0.7114 | | 0.0019 | 34.0 | 646 | 1.0943 | 0.7368 | 0.7114 | | 0.0018 | 35.0 | 665 | 1.0883 | 0.7368 | 0.7114 | | 0.0017 | 36.0 | 684 | 1.0864 | 0.7368 | 0.7114 | | 0.0016 | 37.0 | 703 | 1.0890 | 0.7368 | 0.7114 | | 0.0017 | 38.0 | 722 | 1.0894 | 0.7368 | 0.7114 | | 0.0015 | 39.0 | 741 | 1.0867 | 0.7368 | 0.7114 | | 0.0016 | 40.0 | 760 | 1.0863 | 0.7368 | 0.7114 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.10.2 - Datasets 2.5.2 - Tokenizers 0.12.1
rajesh426/distilbert-base-uncased_finetuned_SPEECH_TEXT_CH_1_DISPLAY
rajesh426
2022-10-19T19:14:58Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-19T19:08:44Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased_finetuned_SPEECH_TEXT_CH_1_DISPLAY results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_finetuned_SPEECH_TEXT_CH_1_DISPLAY This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7615 - Accuracy: 0.7895 - F1: 0.8006 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.0411 | 1.0 | 19 | 0.9506 | 0.4737 | 0.3045 | | 0.9515 | 2.0 | 38 | 0.9126 | 0.5789 | 0.4964 | | 0.9064 | 3.0 | 57 | 0.8215 | 0.7368 | 0.6977 | | 0.7414 | 4.0 | 76 | 0.6747 | 0.7895 | 0.7447 | | 0.4968 | 5.0 | 95 | 0.5658 | 0.8947 | 0.8474 | | 0.2849 | 6.0 | 114 | 0.5001 | 0.8421 | 0.7953 | | 0.1576 | 7.0 | 133 | 0.4605 | 0.8421 | 0.7953 | | 0.0705 | 8.0 | 152 | 0.6264 | 0.7895 | 0.7822 | | 0.0297 | 9.0 | 171 | 0.5283 | 0.8421 | 0.8561 | | 0.0142 | 10.0 | 190 | 0.5972 | 0.7368 | 0.7441 | | 0.0107 | 11.0 | 209 | 0.5542 | 0.8421 | 0.8561 | | 0.0079 | 12.0 | 228 | 0.5919 | 0.8421 | 0.8561 | | 0.0067 | 13.0 | 247 | 0.6106 | 0.7895 | 0.8006 | | 0.0055 | 14.0 | 266 | 0.6232 | 0.8421 | 0.8561 | | 0.0049 | 15.0 | 285 | 0.6478 | 0.8421 | 0.8561 | | 0.0043 | 16.0 | 304 | 0.6465 | 0.8421 | 0.8561 | | 0.0038 | 17.0 | 323 | 0.6618 | 0.7895 | 0.8006 | | 0.0034 | 18.0 | 342 | 0.6669 | 0.8421 | 0.8561 | | 0.0032 | 19.0 | 361 | 0.6737 | 0.8421 | 0.8561 | | 0.003 | 20.0 | 380 | 0.6808 | 0.7895 | 0.8006 | | 0.0028 | 21.0 | 399 | 0.6890 | 0.7895 | 0.8006 | | 0.0026 | 22.0 | 418 | 0.7081 | 0.7895 | 0.8006 | | 0.0025 | 23.0 | 437 | 0.7146 | 0.7895 | 0.8006 | | 0.0023 | 24.0 | 456 | 0.7182 | 0.7895 | 0.8006 | | 0.0022 | 25.0 | 475 | 0.7248 | 0.7895 | 0.8006 | | 0.002 | 26.0 | 494 | 0.7419 | 0.7895 | 0.8006 | | 0.0019 | 27.0 | 513 | 0.7390 | 0.7895 | 0.8006 | | 0.0021 | 28.0 | 532 | 0.7379 | 0.7895 | 0.8006 | | 0.0019 | 29.0 | 551 | 0.7392 | 0.7895 | 0.8006 | | 0.0019 | 30.0 | 570 | 0.7362 | 0.7895 | 0.8006 | | 0.0019 | 31.0 | 589 | 0.7395 | 0.7895 | 0.8006 | | 0.0019 | 32.0 | 608 | 0.7436 | 0.7895 | 0.8006 | | 0.0017 | 33.0 | 627 | 0.7509 | 0.7895 | 0.8006 | | 0.0018 | 34.0 | 646 | 0.7563 | 0.7895 | 0.8006 | | 0.0016 | 35.0 | 665 | 0.7597 | 0.7895 | 0.8006 | | 0.0017 | 36.0 | 684 | 0.7617 | 0.7895 | 0.8006 | | 0.0016 | 37.0 | 703 | 0.7625 | 0.7895 | 0.8006 | | 0.0017 | 38.0 | 722 | 0.7615 | 0.7895 | 0.8006 | | 0.0017 | 39.0 | 741 | 0.7617 | 0.7895 | 0.8006 | | 0.0015 | 40.0 | 760 | 0.7615 | 0.7895 | 0.8006 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.10.2 - Datasets 2.5.2 - Tokenizers 0.12.1
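The card above never shows inference; a minimal sketch with the text-classification pipeline follows. The example utterance is invented and the class labels live in the model config, not in this card.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="rajesh426/distilbert-base-uncased_finetuned_SPEECH_TEXT_CH_1_DISPLAY",
)
print(classifier("Turn on the living room lights."))
```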
sd-concepts-library/sims-2-portrait
sd-concepts-library
2022-10-19T18:56:56Z
0
4
null
[ "license:mit", "region:us" ]
null
2022-10-19T18:33:54Z
--- license: mit --- ### Sims 2 Portrait on Stable Diffusion This is the `<sims2-portrait>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<sims2-portrait> 0](https://huggingface.co/sd-concepts-library/sims-2-portrait/resolve/main/concept_images/3.jpeg) ![<sims2-portrait> 1](https://huggingface.co/sd-concepts-library/sims-2-portrait/resolve/main/concept_images/0.jpeg) ![<sims2-portrait> 2](https://huggingface.co/sd-concepts-library/sims-2-portrait/resolve/main/concept_images/4.jpeg) ![<sims2-portrait> 3](https://huggingface.co/sd-concepts-library/sims-2-portrait/resolve/main/concept_images/5.jpeg) ![<sims2-portrait> 4](https://huggingface.co/sd-concepts-library/sims-2-portrait/resolve/main/concept_images/1.jpeg) ![<sims2-portrait> 5](https://huggingface.co/sd-concepts-library/sims-2-portrait/resolve/main/concept_images/6.jpeg) ![<sims2-portrait> 6](https://huggingface.co/sd-concepts-library/sims-2-portrait/resolve/main/concept_images/2.jpeg) Here are example images generated using this style: ![<sims2-portrait> no other prompt](https://i.imgur.com/q26bmn0.png) ![<sims2-portrait> old man](https://i.imgur.com/jpK8SYn.png) ![<sims2-portrait> middle aged asian woman](https://i.imgur.com/GmQnBjg.png) I'm not satisfied with the result as it usually fails to capture the game's aesthetic.
g30rv17ys/ddpm-hkuoct-wamd-300ep
g30rv17ys
2022-10-19T17:45:19Z
0
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:imagefolder", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-10-19T16:22:40Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: imagefolder metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-hkuoct-wamd-300ep ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `imagefolder` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-hkuoct-wamd-300ep/tensorboard?#scalars)
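As with the other diffusion card above, the usage snippet is a TODO; here is a seeded sampling sketch under the same `DDPMPipeline` assumption (note that the repo id comes from this record, not from the TensorBoard link in the card):

```python
import torch
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-hkuoct-wamd-300ep")
generator = torch.Generator().manual_seed(0)  # fixed seed for repeatable samples
image = pipeline(generator=generator).images[0]
image.save("ddpm_hkuoct_sample.png")
```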
huggingtweets/konradha_
huggingtweets
2022-10-19T16:11:00Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-19T16:09:29Z
--- language: en thumbnail: http://www.huggingtweets.com/konradha_/1666195856134/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1540685336422088704/JDxiybNe_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Konrad</div> <div style="text-align: center; font-size: 14px;">@konradha_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Konrad. | Data | Konrad | | --- | --- | | Tweets downloaded | 256 | | Retweets | 38 | | Short tweets | 75 | | Tweets kept | 143 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ox7i4yk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @konradha_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10k5hc9s) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10k5hc9s/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/konradha_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
gabski/bert-relative-claim-quality
gabski
2022-10-19T16:09:19Z
18
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "en", "dataset:ClaimRev", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-19T14:04:22Z
--- language: en license: cc-by-nc-sa-4.0 datasets: - ClaimRev --- # Model This model was obtained by fine-tuning bert-base-cased on the ClaimRev dataset. Paper: [Learning From Revisions: Quality Assessment of Claims in Argumentation at Scale](https://aclanthology.org/2021.eacl-main.147/) Authors: Gabriella Skitalinskaya, Jonas Klaff, Henning Wachsmuth # Claim Quality Classification We cast this task as a pairwise classification task, where the objective is to compare two versions of the same claim and determine which one is better. # Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch tokenizer = AutoTokenizer.from_pretrained("gabski/bert-relative-claim-quality") model = AutoModelForSequenceClassification.from_pretrained("gabski/bert-relative-claim-quality") claim_1 = 'Smoking marijuana is less harmfull then smoking cigarettes.' claim_2 = 'Smoking marijuana is less harmful than smoking cigarettes.' model_input = tokenizer(claim_1,claim_2, return_tensors='pt') model_outputs = model(**model_input) outputs = torch.nn.functional.softmax(model_outputs.logits, dim = -1) print(outputs) ```
smz2122/image
smz2122
2022-10-19T15:37:37Z
0
0
null
[ "region:us" ]
null
2022-10-19T15:37:20Z
``` git clone https://huggingface.co/templates/text-to-image cd text-to-image git remote set-url origin https://huggingface.co/$YOUR_USER/$YOUR_REPO_NAME git push --force ```
enryu43/anifusion_unet
enryu43
2022-10-19T15:01:54Z
15
6
diffusers
[ "diffusers", "diffusers:LDMTextToImagePipeline", "region:us" ]
null
2022-10-11T21:01:02Z
This model was converted with https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py. However, the tokenizer in the diffusers model is wrong; for proper usage, see the description at https://medium.com/@enryu9000/anifusion-diffusion-models-for-anime-pictures-138cf1af2cbe and the instructions/examples at https://github.com/enryu43/anifusion-stable-diffusion. The original checkpoint in the Latent Diffusion format is also available. Installation instructions for the webui: https://gist.github.com/enryu43/858999bf69dc92b97fdad6137c3c45e6
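A minimal loading sketch with 🧨 Diffusers follows. It assumes the repository resolves to the `LDMTextToImagePipeline` listed in the tags; given the tokenizer caveat above, treat it only as a starting point and prefer the linked instructions.

```python
# Hedged sketch: load the converted checkpoint through diffusers' generic loader.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("enryu43/anifusion_unet")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder prompt; the prompt style expected by the model is described in the linked posts.
image = pipe("1girl, blue hair, smile", num_inference_steps=50).images[0]
image.save("anifusion_sample.png")
```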
theodotus/stt_uk_squeezeformer_rnnt_xs
theodotus
2022-10-19T14:33:51Z
6
0
nemo
[ "nemo", "automatic-speech-recognition", "uk", "dataset:mozilla-foundation/common_voice_10_0", "dataset:Yehor/voa-uk-transcriptions", "license:bsd-3-clause", "model-index", "region:us" ]
automatic-speech-recognition
2022-10-17T18:08:59Z
--- language: - uk library_name: nemo datasets: - mozilla-foundation/common_voice_10_0 - Yehor/voa-uk-transcriptions tags: - automatic-speech-recognition model-index: - name: stt_uk_squeezeformer_rnnt_xs results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Mozilla Common Voice 10.0 type: mozilla-foundation/common_voice_10_0 config: clean split: test args: language: uk metrics: - name: Test WER type: wer value: 8.814 license: bsd-3-clause --- # Squeezeformer-RNNT XS (uk-UA) <style> img { display: inline; } </style> | [![Model architecture](https://img.shields.io/badge/Model_Arch-Squeezeformer--RNNT-lightgrey#model-badge)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-10M-lightgrey#model-badge)](#model-architecture) | [![Language](https://img.shields.io/badge/Language-uk--UA-lightgrey#model-badge)](#datasets) |
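A minimal transcription sketch, assuming the checkpoint loads through NeMo's standard `EncDecRNNTBPEModel` class and that `audio.wav` is a 16 kHz mono recording:

```python
# Hedged sketch: Ukrainian speech recognition with the NeMo checkpoint.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained(
    model_name="theodotus/stt_uk_squeezeformer_rnnt_xs"
)
transcriptions = asr_model.transcribe(["audio.wav"])
print(transcriptions)  # RNNT models may return (best_hypotheses, all_hypotheses)
```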
huggingtweets/moonideograph
huggingtweets
2022-10-19T14:31:00Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-19T14:28:14Z
--- language: en thumbnail: http://www.huggingtweets.com/moonideograph/1666189855449/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1581258561400848384/ktYtGqLD_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">🌑 Loona the Ninth</div> <div style="text-align: center; font-size: 14px;">@moonideograph</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 🌑 Loona the Ninth. | Data | 🌑 Loona the Ninth | | --- | --- | | Tweets downloaded | 409 | | Retweets | 104 | | Short tweets | 22 | | Tweets kept | 283 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8mujtj4v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @moonideograph's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/21pia0le) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/21pia0le/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/moonideograph') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
facebook/unit_hifigan_HK_layer12.km2500_frame_TAT-TTS
facebook
2022-10-19T14:27:14Z
95
2
fairseq
[ "fairseq", "audio", "text-to-speech", "hk", "license:cc-by-nc-4.0", "region:us" ]
text-to-speech
2022-10-08T01:34:38Z
--- license: cc-by-nc-4.0 library_name: fairseq task: text-to-speech tags: - fairseq - audio - text-to-speech language: hk --- ## unit_hifigan_HK_layer12.km2500_frame_TAT-TTS Hokkien unit HiFiGAN based vocoder from fairseq: - Trained with [TAT-TTS](https://sites.google.com/speech.ntut.edu.tw/fsw/home/tat-tts-corpus) data with 4 speakers in Taiwanese Hokkien accent. See [here]( https://research.facebook.com/publications/hokkien-direct-speech-to-speech-translation) for more training details. ## Usage ```python import json import os from pathlib import Path import IPython.display as ipd from fairseq import hub_utils from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.speech_to_text.hub_interface import S2THubInterface from fairseq.models.text_to_speech import CodeHiFiGANVocoder from fairseq.models.text_to_speech.hub_interface import VocoderHubInterface from huggingface_hub import snapshot_download import torchaudio cache_dir = os.getenv("HUGGINGFACE_HUB_CACHE") # speech synthesis library_name = "fairseq" cache_dir = ( cache_dir or (Path.home() / ".cache" / library_name).as_posix() ) cache_dir = snapshot_download( f"facebook/unit_hifigan_HK_layer12.km2500_frame_TAT-TTS", cache_dir=cache_dir, library_name=library_name ) x = hub_utils.from_pretrained( cache_dir, "model.pt", ".", archive_map=CodeHiFiGANVocoder.hub_models(), config_yaml="config.json", fp16=False, is_vocoder=True, ) with open(f"{x['args']['data']}/config.json") as f: vocoder_cfg = json.load(f) assert ( len(x["args"]["model_path"]) == 1 ), "Too many vocoder models in the input" vocoder = CodeHiFiGANVocoder(x["args"]["model_path"][0], vocoder_cfg) tts_model = VocoderHubInterface(vocoder_cfg, vocoder) tts_sample = tts_model.get_model_input(unit) wav, sr = tts_model.get_prediction(tts_sample) ipd.Audio(wav, rate=sr) ```
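Note that `unit` in the snippet above is not defined there: it is expected to be the discrete-unit sequence produced by an upstream speech-to-unit translation (S2UT) model. A hedged sketch of one way to obtain it, modeled on the fairseq S2UT examples; the model id and the 16 kHz mono input file are assumptions:

```python
# Hedged sketch: produce `unit` with a speech-to-unit translation model.
import torchaudio
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface

models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "facebook/xm_transformer_s2ut_en-hk",  # assumed companion S2UT model
    arg_overrides={"config_yaml": "config.yaml", "task": "speech_to_text"},
)
model = models[0].cpu()
cfg["task"].cpu = True
generator = task.build_generator([model], cfg)

audio, _ = torchaudio.load("english_input.wav")  # 16 kHz mono audio
sample = S2THubInterface.get_model_input(task, audio)
unit = S2THubInterface.get_prediction(task, model, generator, sample)
```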
model-attribution-challenge/bloom-560m
model-attribution-challenge
2022-10-19T12:35:58Z
24
0
transformers
[ "transformers", "pytorch", "jax", "safetensors", "bloom", "feature-extraction", "text-generation", "ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zhs", "zht", "zu", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "license:bigscience-bloom-rail-1.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-09T22:43:31Z
--- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu pipeline_tag: text-generation --- <h1 style='text-align: center '>BLOOM LM</h1> <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Version 1.0 / 26.May.2022 ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Data](#training-data) 4. [Risks and Limitations](#risks-and-limitations) 5. [Evaluation](#evaluation) 6. [Recommendations](#recommendations) 7. [Glossary and Calculations](#glossary-and-calculations) 8. [More Information](#more-information) 9. [Model Card Authors](#model-card-authors) ## Model Details ### Basics *This section provides information for anyone who wants to know about the model.* <details> <summary>Click to expand</summary> <br/> **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* **Model Type:** Transformer-based Language Model **Version:** 1.0.0 **Languages:** Multiple; see [training data](#training-data) **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) **Release Date Estimate:** Monday, 11.July.2022 **Send Questions to:** [email protected] **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022 **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* </details> ### Technical Specifications *This section provides information for people who work on model development.* <details> <summary>Click to expand</summary><br/> Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. **Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)): * Decoder-only architecture * Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf)) * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions * 559,214,592 parameters: * 256,901,120 embedding parameters * 24 layers, 16 attention heads * Hidden layers are 1024-dimensional * Sequence length of 2048 tokens (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization)) **Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)). 
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)). * Hardware: 384 A100 80GB GPUs (48 nodes): * Additional 32 A100 80GB GPUs (4 nodes) in reserve * 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links * CPU: AMD * CPU memory: 512GB per node * GPU memory: 640GB per node * Inter-node connect: Omni-Path Architecture (OPA) * NCCL-communications network: a fully dedicated subnet * Disc IO network: shared network with other types of nodes * Software: * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed)) * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed)) * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch)) * apex ([Github link](https://github.com/NVIDIA/apex)) #### **Training** Training logs: [Tensorboard link](https://huggingface.co/bigscience/tr11e-350M-logs) - Training throughput: About 150 TFLOPs per GPU - Number of epochs: 1 (*current target*) - Dates: - Started 11th March, 2022 11:42am PST - Ended 5th July, 2022 - Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes) - Server training location: Île-de-France, France #### **Tokenization** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. </details> ### Environmental Impact <details> <summary>Click to expand</summary><br/> The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. **Estimated carbon emissions:** *(Forthcoming upon completion of training.)* **Estimated electricity usage:** *(Forthcoming upon completion of training.)* </details> <p>&nbsp;</p> ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* <details> <summary>Click to expand</summary><br/> ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### **Direct Use** - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings #### **Downstream Use** - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. 
The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### **Out-of-scope Uses** Using the model in [high-stakes](#high-stakes) settings is out of scope for this model.  The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### **Misuse** Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ### Intended Users #### **Direct Users** - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups #### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) #### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM </details> <p>&nbsp;</p> ## Training Data *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* <details> <summary>Click to expand</summary><br/> Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus). Training data includes: - 45 natural languages - 12 programming languages - In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.) #### **Languages** The pie chart shows the distribution of languages in training data. ![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true) The following table shows the further distribution of Niger-Congo and Indic languages in the training data. 
<details> <summary>Click to expand</summary><br/> | Niger Congo | Percentage | | Indic | Percentage | |----------------|------------ |------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Northern Sotho | 0.0002 | | Malayalam | 0.10 | | Fon | 0.0002 | | Urdu | 0.10 | | Kirundi | 0.0003 | | Tamil | 0.20 | | Wolof | 0.0004 | | Bengali | 0.50 | | Kuganda | 0.0004 | | Hindi | 0.70 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | </details> The following table shows the distribution of programming languages. <details> <summary>Click to expand</summary><br/> | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 6,78,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | </details> </details> <p>&nbsp;</p> ## Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* <details> <summary>Click to expand</summary><br/> Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs </details> <p>&nbsp;</p> ## Evaluation *This section describes the evaluation protocols and provides the results.* <details> <summary>Click to expand</summary><br/> ### Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ### Factors *This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ### Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Train-time Evaluation:** As of 25.May.2022, 15:00 PST: - Training Loss: 2.0 - Validation Loss: 2.2 - Perplexity: 8.9 (More evaluation scores forthcoming at the end of model training.) 
</details> <p>&nbsp;</p> ## Recommendations *This section provides information on warnings and potential mitigations.* <details> <summary>Click to expand</summary><br/> - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models pretrained with the LLM should include an updated Model Card. - Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. </details> <p>&nbsp;</p> ## Glossary and Calculations *This section defines common terms and how metrics are calculated.* <details> <summary>Click to expand</summary><br/> - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). - <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). - <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. 
</details> <p>&nbsp;</p> ## More Information <details> <summary>Click to expand</summary><br/> ### Dataset Creation Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md ### Initial Results Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book </details> <p>&nbsp;</p> ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
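The card above does not include a usage snippet; a minimal generation sketch with 🤗 Transformers follows, assuming this mirror (`model-attribution-challenge/bloom-560m`) loads like the upstream `bigscience/bloom-560m` checkpoint:

```python
# Hedged sketch: standard causal-LM generation with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "model-attribution-challenge/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```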
pavle-tsotskolauri/distilbert-base-uncased-finetuned-imdb
pavle-tsotskolauri
2022-10-19T11:12:06Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-19T10:50:14Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4738 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7133 | 1.0 | 157 | 2.4957 | | 2.5751 | 2.0 | 314 | 2.4250 | | 2.5293 | 3.0 | 471 | 2.4358 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
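The card leaves usage open ("More information needed"); a minimal fill-mask sketch follows, with an arbitrary example sentence:

```python
# Hedged sketch: query the fine-tuned masked language model.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask", model="pavle-tsotskolauri/distilbert-base-uncased-finetuned-imdb"
)
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```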
thucdangvan020999/distilbert-base-uncased-finetuned-imdb
thucdangvan020999
2022-10-19T11:02:23Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-19T10:48:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4898 | | 2.5796 | 2.0 | 314 | 2.4230 | | 2.5269 | 3.0 | 471 | 2.4354 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
rbroc/contrastive-user-encoder-multipost
rbroc
2022-10-19T10:08:47Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "feature-extraction", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-10-19T08:49:08Z
--- language: - en license: apache-2.0 library_name: transformers --- ### Contrastive user encoder (multi-post) This model is a `DistilBertModel` trained by fine-tuning `distilbert-base-uncased` with an author-based triplet loss. #### Details Training and evaluation details are provided in our EMNLP Findings paper: - Rocca, R., & Yarkoni, T. (2022), Language as a fingerprint: Self-supervised learning of user encodings using transformers, to appear in *Findings of the Association for Computational Linguistics: EMNLP 2022* #### Training We fine-tuned DistilBERT on triplets consisting of: - a set of Reddit submissions from a given user (10 posts, called "anchors") - see ```rbroc/contrastive-user-encoder-singlepost``` for an equivalent model trained on a single anchor; - an additional post from the same user (a "positive example"); - a post from a different, randomly selected user (the "negative example") To compute the loss, we use [CLS] encodings of the anchors, positive examples and negative examples from the last layer of the DistilBERT encoder. We perform feature-wise averaging of anchor post encodings and optimize for \\(max(||\overline{f(A)} - f(n)|| - ||\overline{f(A)} - f(p)|| + \alpha,0)\\) where: - \\( \overline{f(A)}\\) is the feature-wise average of the anchor encodings; - \\( f(n) \\) is the [CLS] encoding of the negative example; - \\( f(p) \\) is the [CLS] encoding of the positive example; - \\( \alpha \\) is a tunable parameter called margin. Here, we tuned this to \\( \alpha = 1.0\\) #### Evaluation and usage The model yields performance advantages on downstream user-based classification tasks. We encourage usage and benchmarking on tasks involving: - prediction of user traits (e.g., personality); - extraction of user-aware text encodings (e.g., style modeling); - contextualized text modeling, where standard text representations are complemented with compact user representations #### Limitations Being exclusively trained on Reddit data, our models probably overfit to linguistic markers and traits which are relevant to characterizing the Reddit user population, but less salient in the general population. Domain-specific fine-tuning may be required before deployment. Furthermore, our self-supervised approach enforces little or no control over biases, which models may actively use as part of their heuristics in contrastive and downstream tasks.
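The card describes how [CLS] encodings are averaged over a user's anchor posts but does not show inference code; a minimal sketch follows, assuming the checkpoint is used as a plain feature extractor and using placeholder post texts:

```python
# Hedged sketch: encode a user's posts and average the [CLS] vectors,
# mirroring the feature-wise anchor averaging described above.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "rbroc/contrastive-user-encoder-multipost"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

user_posts = ["first post by the user", "second post by the user"]  # placeholders
inputs = tokenizer(user_posts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    cls_encodings = model(**inputs).last_hidden_state[:, 0]  # one [CLS] vector per post
user_embedding = cls_encodings.mean(dim=0)  # feature-wise average across posts
print(user_embedding.shape)
```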
amichailidis/greek_legal_bert_v2-finetuned-ner-V2
amichailidis
2022-10-19T09:27:25Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-11T09:10:51Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: greek_legal_bert_v2-finetuned-ner-V3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # greek_legal_bert_v2-finetuned-ner-V3 This model is a fine-tuned version of [alexaapo/greek_legal_bert_v2](https://huggingface.co/alexaapo/greek_legal_bert_v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0907 - Precision: 0.9023 - Recall: 0.9265 - F1: 0.9142 - Accuracy: 0.9828 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.19 | 25 | 0.0661 | 0.8895 | 0.9229 | 0.9059 | 0.9813 | | No log | 2.38 | 50 | 0.0820 | 0.9091 | 0.9319 | 0.9204 | 0.9838 | | No log | 3.57 | 75 | 0.0791 | 0.8924 | 0.9211 | 0.9065 | 0.9825 | | No log | 4.76 | 100 | 0.0824 | 0.8950 | 0.9319 | 0.9131 | 0.9841 | | No log | 5.95 | 125 | 0.0820 | 0.8830 | 0.9194 | 0.9008 | 0.9812 | | No log | 7.14 | 150 | 0.0862 | 0.9059 | 0.9319 | 0.9187 | 0.9817 | | No log | 8.33 | 175 | 0.0915 | 0.9021 | 0.9247 | 0.9133 | 0.9826 | | No log | 9.52 | 200 | 0.0905 | 0.9023 | 0.9265 | 0.9142 | 0.9828 | ### Framework versions - Transformers 4.23.0 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
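The card reports metrics only; a minimal token-classification sketch follows. The Greek example sentence is a placeholder, and the entity label set is not documented in the card:

```python
# Hedged sketch: NER inference with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="amichailidis/greek_legal_bert_v2-finetuned-ner-V2",
    aggregation_strategy="simple",
)
print(ner("Το Υπουργείο Δικαιοσύνης εξέδωσε νέα απόφαση."))  # placeholder sentence
```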
mriggs/byt5-small-finetuned-1epoch-batch16-opus_books-en-to-it
mriggs
2022-10-19T08:42:40Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:opus_books", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-19T07:20:38Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - opus_books model-index: - name: byt5-small-finetuned-1epoch-batch16-opus_books-en-to-it results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # byt5-small-finetuned-1epoch-batch16-opus_books-en-to-it This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the opus_books dataset. It achieves the following results on the evaluation set: - Loss: 0.9848 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3771 | 1.0 | 1819 | 0.9848 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
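A minimal English-to-Italian translation sketch; no task prefix is assumed here (ByT5 operates on raw bytes), so prepend one if the fine-tuning used it:

```python
# Hedged sketch: translation with the fine-tuned ByT5 model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "mriggs/byt5-small-finetuned-1epoch-batch16-opus_books-en-to-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The house stands on the hill.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```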
inkoziev/rugpt_chitchat
inkoziev
2022-10-19T07:44:11Z
208
17
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "PyTorch", "Transformers", "ru", "license:unlicense", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-15T07:20:18Z
--- pipeline_tag: text-generation tags: - PyTorch - Transformers - gpt2 license: unlicense language: ru widget: - text: "- У Джульетты было 7 пончиков, а потом она 3 съела. Сколько у нее осталось пончиков? -" - text: "- Поглажено 4 манула. Осталось погладить 6. Сколько всего манулов надо погладить? -" - text: "- Для начала скажи, чему равно пятью девять? -" - text: "- ты чё такой борзый? -" - text: "- Привет! Как ваше ничего? -" --- ## Russian Chit-chat, Deductive and Common Sense reasoning model The model is the core of a prototype [dialogue system](https://github.com/Koziev/chatbot) with two main functions. The first function is **chit-chat reply generation**. The prompt is the dialogue history (the preceding few replies, from 1 to 10). ``` - Привет, как дела? - Привет, так себе. - <<< эту реплику ожидаем от модели >>> ``` The second function of the model is answering a given question by relying on additional facts or on "common sense". The relevant facts are assumed to be retrieved from an external store (a knowledge base) by another model, for example [sbert_pq](https://huggingface.co/inkoziev/sbert_pq). Using the supplied fact(s) and the question text, the model builds a grammatical and maximally concise answer, as a person would in a similar communicative situation. The relevant facts should be placed before the question text as if the interlocutor themselves had said them: ``` - Сегодня 15 сентября. Какой сейчас у нас месяц? - Сентябрь ``` The model does not expect every fact retrieved and added to the dialogue context to actually be relevant to the question. The model that extracts information from the knowledge base may therefore trade precision for recall and add something superfluous. In that case, the chit-chat model itself selects the necessary facts among those added to the context and ignores the rest. The current version of the model allows up to 5 facts before the question. For example: ``` - Стасу 16 лет. Стас живет в Подольске. У Стаса нет своей машины. Где живет Стас? - в Подольске ``` In some cases the model can perform **syllogistic inference** of the answer, relying on 2 interconnected premises. The conclusion that follows from the two premises does not appear explicitly but is, in effect, used to derive the answer: ``` - Смертен ли Аристофан, если он был греческим философом, а все философы смертны? - Да ``` As these examples show, the format of the factual information fed to the model for inference is entirely natural and free-form. Besides logical inference, the model can also solve simple arithmetic problems at the level of the first two grades of primary school, with two numeric arguments: ``` - Чему равно 2+8? - 10 ``` ### Model variants and metrics The model released so far has 760 million parameters, i.e. the scale of sberbank-ai/rugpt3large_based_on_gpt2. Below are the measured accuracies for solving arithmetic problems on a held-out test set of samples: | base model | arith. accuracy | | --------------------------------------- | --------------- | | sberbank-ai/rugpt3large_based_on_gpt2 | 0.91 | | sberbank-ai/rugpt3medium_based_on_gpt2 | 0.70 | | sberbank-ai/rugpt3small_based_on_gpt2 | 0.58 | | tinkoff-ai/ruDialoGPT-small | 0.44 | | tinkoff-ai/ruDialoGPT-medium | 0.69 | The value 0.91 in the "arith. accuracy" column means that 91% of the test problems were solved completely correctly. Any deviation of the generated answer from the reference answer is counted as an error. For example, producing "120" instead of "119" is also recorded as an error.
### Usage example ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM device = "cuda" if torch.cuda.is_available() else "cpu" model_name = "inkoziev/rugpt_chitchat" tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.add_special_tokens({'bos_token': '<s>', 'eos_token': '</s>', 'pad_token': '<pad>'}) model = AutoModelForCausalLM.from_pretrained(model_name) model.to(device) model.eval() # The model input is the last 2-3 replies of the dialogue. Each reply is on its own line and starts with the "-" character input_text = """<s>- Привет! Что делаешь? - Привет :) В такси еду -""" encoded_prompt = tokenizer.encode(input_text, add_special_tokens=False, return_tensors="pt").to(device) output_sequences = model.generate(input_ids=encoded_prompt, max_length=100, num_return_sequences=1, pad_token_id=tokenizer.pad_token_id) text = tokenizer.decode(output_sequences[0].tolist(), clean_up_tokenization_spaces=True)[len(input_text)+1:] text = text[: text.find('</s>')] print(text) ``` ### Contacts If you have any questions about using this model, or suggestions for improving it, write to me at [email protected] ### Citation: ``` @MISC{rugpt_chitchat, author = {Ilya Koziev}, title = {Russian Chit-chat with Common sence Reasoning}, url = {https://huggingface.co/inkoziev/rugpt_chitchat}, year = 2022 } ```
CompVis/stable-diffusion
CompVis
2022-10-19T07:43:53Z
0
958
null
[ "stable-diffusion", "text-to-image", "arxiv:2207.12598", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2022-08-10T13:09:19Z
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image inference: false --- # Stable Diffusion Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. This model card gives an overview of all available model checkpoints. For more in-detail model cards, please have a look at the model repositories listed under [Model Access](#model-access). ## Stable Diffusion Version 1 For the first version, 4 model checkpoints are released. *Higher* versions have been trained for longer and are thus usually better in terms of image generation quality than *lower* versions. More specifically: - **stable-diffusion-v1-1**: The checkpoint is randomly initialized and has been trained on 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - **stable-diffusion-v1-2**: The checkpoint resumed training from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - **stable-diffusion-v1-3**: The checkpoint resumed training from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [**`stable-diffusion-v1-4`**](https://huggingface.co/CompVis/stable-diffusion-v1-4): The checkpoint resumed training from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). ### Model Access Each checkpoint can be used both with Hugging Face's [🧨 Diffusers library](https://github.com/huggingface/diffusers) and the original [Stable Diffusion GitHub repository](https://github.com/CompVis/stable-diffusion). Note that you have to *"click-request"* them on each respective model repository.
| **[🤗's 🧨 Diffusers library](https://github.com/huggingface/diffusers)** | **[Stable Diffusion GitHub repository](https://github.com/CompVis/stable-diffusion)** | | ----------- | ----------- | | [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1) | [`stable-diffusion-v-1-1-original`](https://huggingface.co/CompVis/stable-diffusion-v-1-1-original) | | [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2) | [`stable-diffusion-v-1-2-original`](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original) | | [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3) | [`stable-diffusion-v-1-3-original`](https://huggingface.co/CompVis/stable-diffusion-v-1-3-original) | | [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) | [`stable-diffusion-v-1-4-original`](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original) | ### Demo To quickly try out the model, you can try out the [Stable Diffusion Space](https://huggingface.co/spaces/stabilityai/stable-diffusion). ### License [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
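As an illustration of the 🧨 Diffusers route listed in the table above, a minimal sketch using the `stable-diffusion-v1-4` checkpoint (access must first be requested on that repository):

```python
# Hedged sketch: text-to-image generation with one of the listed checkpoints.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut_rides_horse.png")
```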
SalML/DETR-table-detection
SalML
2022-10-19T07:22:07Z
5
2
transformers
[ "transformers", "pytorch", "detr", "object-detection", "en", "dataset:PubTables-1M", "license:unknown", "endpoints_compatible", "region:us" ]
object-detection
2022-09-09T10:49:56Z
--- language: en tags: - detr license: unknown datasets: - PubTables-1M --- # The models are taken from https://github.com/microsoft/table-transformer/ # Original model now on the MSFT org: https://huggingface.co/microsoft/table-transformer-detection I have built a Hugging Face Space: https://huggingface.co/spaces/SalML/TableTransformer2CSV It runs OCR on the table-transformer output image to obtain a downloadable CSV table.
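A minimal detection sketch, assuming the checkpoint loads as a standard `DetrForObjectDetection` model (per the `detr` tag) and that the repository ships a preprocessor config; if it does not, a stock DETR image processor can be substituted:

```python
# Hedged sketch: table detection on a document page image.
import torch
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor

model_id = "SalML/DETR-table-detection"
processor = DetrImageProcessor.from_pretrained(model_id)
model = DetrForObjectDetection.from_pretrained(model_id)

image = Image.open("page.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep detections above a confidence threshold and print class, score, and box.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```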
Harisudhan/layoutlmv3-finetuned-cord_100
Harisudhan
2022-10-19T07:17:35Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv3", "token-classification", "generated_from_trainer", "dataset:cord-layoutlmv3", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-18T11:23:44Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - cord-layoutlmv3 metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv3-finetuned-cord_100 results: - task: name: Token Classification type: token-classification dataset: name: cord-layoutlmv3 type: cord-layoutlmv3 config: cord split: train args: cord metrics: - name: Precision type: precision value: 0.9472118959107807 - name: Recall type: recall value: 0.9535928143712575 - name: F1 type: f1 value: 0.9503916449086163 - name: Accuracy type: accuracy value: 0.9562818336162988 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-finetuned-cord_100 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset. It achieves the following results on the evaluation set: - Loss: 0.2152 - Precision: 0.9472 - Recall: 0.9536 - F1: 0.9504 - Accuracy: 0.9563 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.56 | 250 | 0.9909 | 0.7582 | 0.8099 | 0.7832 | 0.8128 | | 1.3653 | 3.12 | 500 | 0.5650 | 0.8392 | 0.8675 | 0.8531 | 0.8756 | | 1.3653 | 4.69 | 750 | 0.3851 | 0.8865 | 0.9177 | 0.9018 | 0.9181 | | 0.3744 | 6.25 | 1000 | 0.3104 | 0.9280 | 0.9364 | 0.9322 | 0.9380 | | 0.3744 | 7.81 | 1250 | 0.2778 | 0.9347 | 0.9424 | 0.9385 | 0.9440 | | 0.1955 | 9.38 | 1500 | 0.2316 | 0.9327 | 0.9446 | 0.9386 | 0.9440 | | 0.1955 | 10.94 | 1750 | 0.2461 | 0.9414 | 0.9491 | 0.9452 | 0.9533 | | 0.1349 | 12.5 | 2000 | 0.2316 | 0.9379 | 0.9491 | 0.9435 | 0.9478 | | 0.1349 | 14.06 | 2250 | 0.2227 | 0.9487 | 0.9551 | 0.9519 | 0.9533 | | 0.1024 | 15.62 | 2500 | 0.2152 | 0.9472 | 0.9536 | 0.9504 | 0.9563 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
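A minimal inference sketch; loading the processor from the base model and relying on its built-in OCR (requires `pytesseract` and a Tesseract install) is an assumption, since the card does not show usage:

```python
# Hedged sketch: token classification on a receipt image with the fine-tuned model.
import torch
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained("Harisudhan/layoutlmv3-finetuned-cord_100")

image = Image.open("receipt.png").convert("RGB")
encoding = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits
predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```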
eunyounglee/mBART_translator_json_sentence_split
eunyounglee
2022-10-19T05:43:21Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-19T03:46:08Z
--- tags: - generated_from_trainer metrics: - bleu model-index: - name: mBART_translator_json_sentence_split results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBART_translator_json_sentence_split This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0769 - Bleu: 87.2405 - Gen Len: 27.425 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 2.0011 | 1.0 | 2978 | 0.5458 | 63.8087 | 32.3819 | | 1.1978 | 2.0 | 5956 | 0.1854 | 76.5291 | 27.6781 | | 0.9276 | 3.0 | 8934 | 0.1123 | 84.7194 | 27.5773 | | 0.776 | 4.0 | 11912 | 0.0845 | 87.505 | 27.2845 | | 0.6889 | 5.0 | 14890 | 0.0769 | 87.2405 | 27.425 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
amartyobanerjee/bert-finetuned-squad
amartyobanerjee
2022-10-19T04:50:34Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-10-17T11:11:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
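The card leaves usage open; a minimal extractive question-answering sketch with an arbitrary context:

```python
# Hedged sketch: extractive QA with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="amartyobanerjee/bert-finetuned-squad")
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The model was fine-tuned on the SQuAD dataset using the Transformers library.",
)
print(result["answer"], round(result["score"], 3))
```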
MoseliMotsoehli/DeepGeoPark
MoseliMotsoehli
2022-10-19T02:07:19Z
0
0
null
[ "license:openrail", "region:us" ]
null
2022-10-17T21:09:29Z
--- license: openrail --- # <span>Public Parking Spot Detector Using Deep Learning</span>
Passion/t5-small-finetuned-multinews-custom
Passion
2022-10-19T01:41:15Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-19T01:28:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-small-finetuned-multinews-custom results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-multinews-custom This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 419 | 1.8253 | 23.5846 | 9.5181 | 18.9798 | 21.8248 | 19.0 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
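Since the card reports ROUGE but gives no inference example, a minimal sketch assuming the usual T5 summarization setup via the `transformers` pipeline; the article string and generation lengths are illustrative, not taken from the card:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Passion/t5-small-finetuned-multinews-custom")
article = "Replace this string with the news article you want to summarize."
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```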
pierric/test-EsperBERTo-small
pierric
2022-10-19T01:28:09Z
13
0
transformers
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "eo", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: eo thumbnail: https://huggingface.co/blog/assets/EsperBERTo-thumbnail-v2.png --- ## EsperBERTo: RoBERTa-like Language model trained on Esperanto **Companion model to blog post https://huggingface.co/blog/how-to-train** 🔥 ### Training Details - current checkpoint: 566000 - machine name: `galinette`
Arnaudmkonan/adn-setfit-model
Arnaudmkonan
2022-10-19T00:37:34Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-10-19T00:37:17Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 2500 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 2500, "warmup_steps": 250, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
emilys/hmBERT-CoNLL-cp3
emilys
2022-10-19T00:15:01Z
18
1
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-18T23:44:26Z
--- license: mit tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: hmBERT-CoNLL-cp3 results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9121408403919614 - name: Recall type: recall value: 0.9242679232581622 - name: F1 type: f1 value: 0.9181643400484828 - name: Accuracy type: accuracy value: 0.9862154900510105 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hmBERT-CoNLL-cp3 This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0572 - Precision: 0.9121 - Recall: 0.9243 - F1: 0.9182 - Accuracy: 0.9862 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 0.06 | 25 | 0.4115 | 0.3643 | 0.3728 | 0.3685 | 0.9007 | | No log | 0.11 | 50 | 0.2243 | 0.6393 | 0.6908 | 0.6641 | 0.9460 | | No log | 0.17 | 75 | 0.1617 | 0.7319 | 0.7637 | 0.7475 | 0.9580 | | No log | 0.23 | 100 | 0.1544 | 0.7282 | 0.7637 | 0.7455 | 0.9585 | | No log | 0.28 | 125 | 0.1341 | 0.7595 | 0.8117 | 0.7847 | 0.9644 | | No log | 0.34 | 150 | 0.1221 | 0.7980 | 0.8251 | 0.8114 | 0.9693 | | No log | 0.4 | 175 | 0.1013 | 0.7968 | 0.8344 | 0.8152 | 0.9719 | | No log | 0.46 | 200 | 0.1076 | 0.8265 | 0.8403 | 0.8333 | 0.9732 | | No log | 0.51 | 225 | 0.0883 | 0.8453 | 0.8635 | 0.8543 | 0.9763 | | No log | 0.57 | 250 | 0.0973 | 0.8439 | 0.8633 | 0.8535 | 0.9763 | | No log | 0.63 | 275 | 0.0883 | 0.8497 | 0.8655 | 0.8575 | 0.9765 | | No log | 0.68 | 300 | 0.0879 | 0.8462 | 0.8642 | 0.8551 | 0.9766 | | No log | 0.74 | 325 | 0.0781 | 0.8592 | 0.8834 | 0.8711 | 0.9787 | | No log | 0.8 | 350 | 0.0725 | 0.8697 | 0.8928 | 0.8811 | 0.9803 | | No log | 0.85 | 375 | 0.0755 | 0.8687 | 0.8943 | 0.8813 | 0.9807 | | No log | 0.91 | 400 | 0.0666 | 0.8781 | 0.9004 | 0.8891 | 0.9822 | | No log | 0.97 | 425 | 0.0658 | 0.8877 | 0.8995 | 0.8936 | 0.9823 | | No log | 1.03 | 450 | 0.0645 | 0.8951 | 0.9036 | 0.8993 | 0.9837 | | No log | 1.08 | 475 | 0.0697 | 0.8864 | 0.9039 | 0.8951 | 0.9831 | | 0.1392 | 1.14 | 500 | 0.0688 | 0.8824 | 0.8994 | 0.8908 | 0.9824 | | 0.1392 | 1.2 | 525 | 0.0681 | 0.8950 | 0.9049 | 0.8999 | 0.9827 | | 0.1392 | 1.25 | 550 | 0.0676 | 0.8855 | 0.8977 | 0.8915 | 0.9823 | | 0.1392 | 1.31 | 575 | 0.0618 | 0.8940 | 0.9088 | 0.9014 | 0.9842 | | 0.1392 | 1.37 | 600 | 0.0644 | 0.8945 | 0.9076 | 0.9010 | 0.9840 | | 0.1392 | 1.42 | 625 | 0.0641 | 0.8936 | 0.9086 | 0.9010 | 0.9837 | | 0.1392 | 1.48 | 650 | 0.0619 | 0.8969 | 0.9120 | 0.9044 | 0.9846 | | 0.1392 | 1.54 | 675 | 0.0608 | 0.9045 | 0.9105 | 0.9075 | 0.9848 | | 0.1392 | 
1.59 | 700 | 0.0624 | 0.9038 | 0.9143 | 0.9091 | 0.9851 | | 0.1392 | 1.65 | 725 | 0.0596 | 0.9062 | 0.9170 | 0.9116 | 0.9852 | | 0.1392 | 1.71 | 750 | 0.0580 | 0.8995 | 0.9143 | 0.9069 | 0.9848 | | 0.1392 | 1.77 | 775 | 0.0582 | 0.9082 | 0.9172 | 0.9127 | 0.9858 | | 0.1392 | 1.82 | 800 | 0.0588 | 0.9024 | 0.9179 | 0.9101 | 0.9852 | | 0.1392 | 1.88 | 825 | 0.0592 | 0.9020 | 0.9219 | 0.9119 | 0.9856 | | 0.1392 | 1.94 | 850 | 0.0600 | 0.9054 | 0.9182 | 0.9118 | 0.9852 | | 0.1392 | 1.99 | 875 | 0.0568 | 0.9068 | 0.9202 | 0.9135 | 0.9861 | | 0.1392 | 2.05 | 900 | 0.0571 | 0.9131 | 0.9212 | 0.9171 | 0.9861 | | 0.1392 | 2.11 | 925 | 0.0577 | 0.9110 | 0.9204 | 0.9157 | 0.9858 | | 0.1392 | 2.16 | 950 | 0.0605 | 0.9127 | 0.9243 | 0.9185 | 0.9860 | | 0.1392 | 2.22 | 975 | 0.0575 | 0.9109 | 0.9224 | 0.9166 | 0.9867 | | 0.0392 | 2.28 | 1000 | 0.0572 | 0.9121 | 0.9243 | 0.9182 | 0.9862 | | 0.0392 | 2.33 | 1025 | 0.0567 | 0.9171 | 0.9253 | 0.9212 | 0.9870 | | 0.0392 | 2.39 | 1050 | 0.0570 | 0.9193 | 0.9295 | 0.9244 | 0.9871 | | 0.0392 | 2.45 | 1075 | 0.0584 | 0.9155 | 0.9276 | 0.9215 | 0.9867 | | 0.0392 | 2.51 | 1100 | 0.0591 | 0.9168 | 0.9286 | 0.9227 | 0.9867 | | 0.0392 | 2.56 | 1125 | 0.0577 | 0.9182 | 0.9312 | 0.9246 | 0.9874 | | 0.0392 | 2.62 | 1150 | 0.0570 | 0.9184 | 0.9283 | 0.9233 | 0.9870 | | 0.0392 | 2.68 | 1175 | 0.0563 | 0.9191 | 0.9298 | 0.9245 | 0.9872 | | 0.0392 | 2.73 | 1200 | 0.0565 | 0.9180 | 0.9313 | 0.9246 | 0.9872 | | 0.0392 | 2.79 | 1225 | 0.0559 | 0.9190 | 0.9298 | 0.9244 | 0.9873 | | 0.0392 | 2.85 | 1250 | 0.0562 | 0.9185 | 0.9293 | 0.9239 | 0.9873 | | 0.0392 | 2.9 | 1275 | 0.0564 | 0.9175 | 0.9285 | 0.9230 | 0.9872 | | 0.0392 | 2.96 | 1300 | 0.0563 | 0.9181 | 0.9295 | 0.9237 | 0.9873 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.4.0 - Tokenizers 0.12.1
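A minimal inference sketch for the NER checkpoint above, assuming the standard `transformers` token-classification pipeline; the example sentence is illustrative and not drawn from CoNLL-2003:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="emilys/hmBERT-CoNLL-cp3",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("George Washington travelled from Berlin to London."))
```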
emilys/hmBERT-CoNLL-cp2
emilys
2022-10-18T23:43:05Z
16
0
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-10-18T23:21:57Z
--- license: mit tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: hmBERT-CoNLL-cp2 results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.8931730929727926 - name: Recall type: recall value: 0.9005385392123864 - name: F1 type: f1 value: 0.8968406938741306 - name: Accuracy type: accuracy value: 0.983217164440637 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hmBERT-CoNLL-cp2 This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0666 - Precision: 0.8932 - Recall: 0.9005 - F1: 0.8968 - Accuracy: 0.9832 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 0.06 | 25 | 0.4116 | 0.3632 | 0.3718 | 0.3674 | 0.9005 | | No log | 0.11 | 50 | 0.2247 | 0.6384 | 0.6902 | 0.6633 | 0.9459 | | No log | 0.17 | 75 | 0.1624 | 0.7303 | 0.7627 | 0.7461 | 0.9580 | | No log | 0.23 | 100 | 0.1541 | 0.7338 | 0.7688 | 0.7509 | 0.9588 | | No log | 0.28 | 125 | 0.1349 | 0.7610 | 0.8095 | 0.7845 | 0.9643 | | No log | 0.34 | 150 | 0.1230 | 0.7982 | 0.8253 | 0.8115 | 0.9694 | | No log | 0.4 | 175 | 0.0997 | 0.8069 | 0.8406 | 0.8234 | 0.9727 | | No log | 0.46 | 200 | 0.1044 | 0.8211 | 0.8410 | 0.8309 | 0.9732 | | No log | 0.51 | 225 | 0.0871 | 0.8413 | 0.8603 | 0.8507 | 0.9760 | | No log | 0.57 | 250 | 0.1066 | 0.8288 | 0.8465 | 0.8376 | 0.9733 | | No log | 0.63 | 275 | 0.0872 | 0.8580 | 0.8667 | 0.8624 | 0.9766 | | No log | 0.68 | 300 | 0.0834 | 0.8522 | 0.8706 | 0.8613 | 0.9773 | | No log | 0.74 | 325 | 0.0832 | 0.8545 | 0.8834 | 0.8687 | 0.9783 | | No log | 0.8 | 350 | 0.0776 | 0.8542 | 0.8834 | 0.8685 | 0.9787 | | No log | 0.85 | 375 | 0.0760 | 0.8629 | 0.8896 | 0.8760 | 0.9801 | | No log | 0.91 | 400 | 0.0673 | 0.8775 | 0.9004 | 0.8888 | 0.9824 | | No log | 0.97 | 425 | 0.0681 | 0.8827 | 0.8938 | 0.8882 | 0.9817 | | No log | 1.03 | 450 | 0.0659 | 0.8844 | 0.8950 | 0.8897 | 0.9824 | | No log | 1.08 | 475 | 0.0690 | 0.8833 | 0.9015 | 0.8923 | 0.9832 | | 0.1399 | 1.14 | 500 | 0.0666 | 0.8932 | 0.9005 | 0.8968 | 0.9832 | | 0.1399 | 1.2 | 525 | 0.0667 | 0.8891 | 0.8997 | 0.8944 | 0.9825 | | 0.1399 | 1.25 | 550 | 0.0699 | 0.8751 | 0.8953 | 0.8851 | 0.9820 | | 0.1399 | 1.31 | 575 | 0.0617 | 0.8947 | 0.9068 | 0.9007 | 0.9840 | | 0.1399 | 1.37 | 600 | 0.0633 | 0.9 | 0.9058 | 0.9029 | 0.9841 | | 0.1399 | 1.42 | 625 | 0.0639 | 0.8966 | 0.9116 | 0.9040 | 0.9843 | | 0.1399 | 1.48 | 650 | 0.0624 | 0.8972 | 0.9110 | 0.9041 | 0.9845 | | 0.1399 | 1.54 | 675 | 0.0619 | 0.8980 | 0.9081 | 0.9030 | 0.9842 | | 0.1399 | 1.59 
| 700 | 0.0615 | 0.9002 | 0.9090 | 0.9045 | 0.9843 | | 0.1399 | 1.65 | 725 | 0.0601 | 0.9037 | 0.9128 | 0.9082 | 0.9850 | | 0.1399 | 1.71 | 750 | 0.0585 | 0.9031 | 0.9142 | 0.9086 | 0.9849 | | 0.1399 | 1.77 | 775 | 0.0582 | 0.9035 | 0.9143 | 0.9089 | 0.9851 | | 0.1399 | 1.82 | 800 | 0.0580 | 0.9044 | 0.9157 | 0.9100 | 0.9853 | | 0.1399 | 1.88 | 825 | 0.0583 | 0.9034 | 0.9160 | 0.9097 | 0.9851 | | 0.1399 | 1.94 | 850 | 0.0578 | 0.9058 | 0.9170 | 0.9114 | 0.9854 | | 0.1399 | 1.99 | 875 | 0.0576 | 0.9060 | 0.9165 | 0.9112 | 0.9852 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.4.0 - Tokenizers 0.12.1
facebook/xm_transformer_sm_all-en
facebook
2022-10-18T23:27:19Z
12
4
fairseq
[ "fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "region:us" ]
audio-to-audio
2022-10-11T17:47:55Z
--- library_name: fairseq task: audio-to-audio tags: - fairseq - audio - audio-to-audio - speech-to-speech-translation widget: - example_title: Common Voice sample 1 src: https://huggingface.co/facebook/xm_transformer_600m-es_en-multi_domain/resolve/main/common_voice_es_19966634.flac ---
g30rv17ys/ddpm-hkuoct-wamd-200ep
g30rv17ys
2022-10-18T23:04:07Z
4
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:imagefolder", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-10-18T20:17:29Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: imagefolder metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-hkuoct-wamd-200ep ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `imagefolder` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-hkuoct-wamd-200ep/tensorboard?#scalars)
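The usage section of the card above is still a TODO. A minimal sketch, assuming the checkpoint was saved as a standard `DDPMPipeline` by the diffusers training script and that a recent `diffusers` release is installed:

```python
from diffusers import DDPMPipeline

# Generate one unconditional sample from the fine-tuned DDPM checkpoint.
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-hkuoct-wamd-200ep")
image = pipeline().images[0]
image.save("ddpm_oct_sample.png")
```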
mg4364/wav2vec2-large-xls-r-300m-turkish-colab
mg4364
2022-10-18T22:53:48Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-10-18T21:51:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
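A minimal sketch for trying the fine-tuned Turkish ASR checkpoint, assuming the standard `transformers` automatic-speech-recognition pipeline; `sample.wav` is a placeholder path to a 16 kHz Turkish recording:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mg4364/wav2vec2-large-xls-r-300m-turkish-colab",
)
print(asr("sample.wav")["text"])
```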
Aitor/q-Taxi-v3
Aitor
2022-10-18T21:14:47Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-10-18T21:14:42Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.76 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="Aitor/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
huggingtweets/benshapiro
huggingtweets
2022-10-18T20:24:37Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-06-29T12:26:34Z
--- language: en thumbnail: http://www.huggingtweets.com/benshapiro/1666124624885/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1580596905721171969/0NnLeJWA_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ben Shapiro</div> <div style="text-align: center; font-size: 14px;">@benshapiro</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Ben Shapiro. | Data | Ben Shapiro | | --- | --- | | Tweets downloaded | 3235 | | Retweets | 2449 | | Short tweets | 71 | | Tweets kept | 715 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1qlxpk8a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @benshapiro's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/27rtx8jj) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/27rtx8jj/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/benshapiro') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
mjawadazad2321/donut-base-Medical_Handwritten_Prescriptions_Information_Extraction_updated
mjawadazad2321
2022-10-18T20:09:41Z
43
2
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2022-10-18T20:02:17Z
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-Medical_Handwritten_Prescriptions_Information_Extraction_updated results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-Medical_Handwritten_Prescriptions_Information_Extraction_updated This model is a fine-tuned version of [mjawadazad2321/donut-base-Medical_Handwritten_Prescriptions_Information_Extraction](https://huggingface.co/mjawadazad2321/donut-base-Medical_Handwritten_Prescriptions_Information_Extraction) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
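The Donut card above gives no inference example. A sketch assuming the usual `DonutProcessor` + `VisionEncoderDecoderModel` workflow; the image path and the task-start prompt are guesses, since the card does not document the prompt token used for this fine-tune:

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "mjawadazad2321/donut-base-Medical_Handwritten_Prescriptions_Information_Extraction_updated"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("prescription.jpg").convert("RGB")  # placeholder image path
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s>"  # hypothetical; the real task-start token is not documented in the card
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```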
mlstudent/finetuning-sentiment-model-3000-samples
mlstudent
2022-10-18T19:52:23Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-18T13:15:41Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.632 - name: F1 type: f1 value: 0.43209876543209874 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.6646 - Accuracy: 0.632 - F1: 0.4321 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
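A minimal sketch for the sentiment checkpoint above, assuming the standard `transformers` text-classification pipeline; the input sentence is illustrative:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mlstudent/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was surprisingly enjoyable."))
```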
pablorodriper/vivit
pablorodriper
2022-10-18T19:18:49Z
0
0
keras
[ "keras", "tf-keras", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- title: Video Vision Transformer on medmnist emoji: 🧑‍⚕️ colorFrom: red colorTo: green sdk: gradio app_file: app.py pinned: false license: apache-2.0 library_name: keras --- # Configuration `title`: _string_ Display title for the Space `emoji`: _string_ Space emoji (emoji-only character allowed) `colorFrom`: _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) `colorTo`: _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) `sdk`: _string_ Can be either `gradio`, `streamlit`, or `static` `sdk_version` : _string_ Only applicable for `streamlit` SDK. See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. `app_file`: _string_ Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). Path is relative to the root of the repository. `models`: _List[string]_ HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. Will be parsed automatically from your code if not specified here. `datasets`: _List[string]_ HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. Will be parsed automatically from your code if not specified here. `pinned`: _boolean_ Whether the Space stays on top of your list.
huggingtweets/tvman000
huggingtweets
2022-10-18T18:46:34Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-18T18:45:39Z
--- language: en thumbnail: http://www.huggingtweets.com/tvman000/1666118790144/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1313242619510689794/BO-zQyrZ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Daniel Cieslinski</div> <div style="text-align: center; font-size: 14px;">@tvman000</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Daniel Cieslinski. | Data | Daniel Cieslinski | | --- | --- | | Tweets downloaded | 214 | | Retweets | 32 | | Short tweets | 42 | | Tweets kept | 140 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/xo7nzzp0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tvman000's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gi0grtu) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gi0grtu/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tvman000') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/__emmamme__-shell_nigeria-wef
huggingtweets
2022-10-18T18:30:01Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-18T18:29:53Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/565498192171507712/r2Hb2gvX_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1582040730842841089/FGLi_5Xd_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/479362131813343232/Vl0Ow-_W_400x400.jpeg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">World Economic Forum & emma & Shell Nigeria</div> <div style="text-align: center; font-size: 14px;">@__emmamme__-shell_nigeria-wef</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from World Economic Forum & emma & Shell Nigeria. | Data | World Economic Forum | emma | Shell Nigeria | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 151 | 3195 | | Retweets | 29 | 6 | 455 | | Short tweets | 6 | 29 | 13 | | Tweets kept | 3215 | 116 | 2727 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11b1thr0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @__emmamme__-shell_nigeria-wef's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3tc6nf11) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3tc6nf11/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/__emmamme__-shell_nigeria-wef') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
lethebodies/ppo_lunarlander
lethebodies
2022-10-18T18:23:01Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-18T16:30:45Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 196.41 +/- 19.88 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ``` Use the model like this ```python import gym from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO from stable_baselines3.common.evaluation import evaluate_policy # Retrieve the model from the hub ## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name}) ## filename = name of the model zip file from the repository checkpoint = load_from_hub(repo_id="ThomasSimonini/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) # Evaluate the agent eval_env = gym.make('LunarLander-v2') mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True) print(f"mean_reward={mean_reward:.2f} +/- {std_reward}") # Watch the agent play obs = eval_env.reset() for i in range(1000): action, _state = model.predict(obs) obs, reward, done, info = eval_env.step(action) eval_env.render() if done: obs = eval_env.reset() eval_env.close() ```
xiaoding/finetuning-sentiment-model-3000-samples
xiaoding
2022-10-18T18:09:06Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-10-18T18:04:15Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8733333333333333 - name: F1 type: f1 value: 0.8766233766233766 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3365 - Accuracy: 0.8733 - F1: 0.8766 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
SzegedAI/hubertusz-medium-wiki-seq128
SzegedAI
2022-10-18T17:48:41Z
6
0
transformers
[ "transformers", "pytorch", "tf", "bert", "pretraining", "generated_from_keras_callback", "hubert", "hu", "dataset:wikipedia", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-10-10T11:31:58Z
--- language: hu license: apache-2.0 datasets: - wikipedia tags: - generated_from_keras_callback - hubert model-index: - name: hubert-medium-wiki-seq128 results: [] --- # hubert-medium-wiki-seq128 The fully trained model, which includes the second training phase, is available here: [SzegedAI/hubert-medium-wiki](https://huggingface.co/SzegedAI/hubert-medium-wiki) This model was trained from scratch on the Wikipedia subset of Hungarian Webcorpus 2.0 with MLM and SOP tasks. ### Pre-Training Parameters: - Training steps: 500,000 - Sequence length: 128 (the model is capable of 512) - Batch size: 1024 ### Framework versions - Transformers 4.21.3 - TensorFlow 2.10.0 - Datasets 2.4.0 - Tokenizers 0.12.1 # Acknowledgement [![Artificial Intelligence - National Laboratory - Hungary](https://milab.tk.hu/uploads/images/milab_logo_en.png)](https://mi.nemzetilabor.hu/)
SzegedAI/hubertusz-medium-wiki
SzegedAI
2022-10-18T17:47:15Z
3
0
transformers
[ "transformers", "pytorch", "tf", "bert", "pretraining", "generated_from_keras_callback", "hubert", "hu", "dataset:wikipedia", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-10-18T17:31:30Z
--- language: hu license: apache-2.0 datasets: - wikipedia tags: - generated_from_keras_callback - hubert model-index: - name: hubert-medium-wiki results: [] --- # hubert-medium-wiki This model was trained from scratch on the Wikipedia subset of Hungarian Webcorpus 2.0 with MLM and SOP tasks. ### Pre-Training Parameters: First phase: - Training steps: 500,000 - Sequence length: 128 - Batch size: 1024 Second phase: - Training steps: 100,000 - Sequence length: 512 - Batch size: 384 ### Framework versions - Transformers 4.21.3 - TensorFlow 2.10.0 - Datasets 2.4.0 - Tokenizers 0.12.1 # Acknowledgement [![Artificial Intelligence - National Laboratory - Hungary](https://milab.tk.hu/uploads/images/milab_logo_en.png)](https://mi.nemzetilabor.hu/)
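A minimal sketch for loading the fully trained checkpoint as a plain encoder for feature extraction, assuming the repository ships a tokenizer and that the pretraining heads can simply be dropped; the Hungarian example sentence is illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "SzegedAI/hubertusz-medium-wiki"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # loads the encoder, discarding the MLM/SOP heads

inputs = tokenizer("Budapest Magyarország fővárosa.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch size, sequence length, hidden size)
```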
huggingtweets/exxonmobil-tencentglobal-wef
huggingtweets
2022-10-18T16:36:52Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-10-18T12:03:50Z
--- language: en thumbnail: http://www.huggingtweets.com/exxonmobil-tencentglobal-wef/1666111008009/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/902558084064616448/YTOCYYnn_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1397133852246646784/Z4XI4oyC_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/565498192171507712/r2Hb2gvX_400x400.png&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ExxonMobil & Tencent 腾讯 & World Economic Forum</div> <div style="text-align: center; font-size: 14px;">@exxonmobil-tencentglobal-wef</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ExxonMobil & Tencent 腾讯 & World Economic Forum. | Data | ExxonMobil | Tencent 腾讯 | World Economic Forum | | --- | --- | --- | --- | | Tweets downloaded | 3248 | 590 | 3250 | | Retweets | 209 | 39 | 29 | | Short tweets | 7 | 1 | 6 | | Tweets kept | 3032 | 550 | 3215 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/146l36xw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @exxonmobil-tencentglobal-wef's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kqpaxkc6) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kqpaxkc6/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/exxonmobil-tencentglobal-wef') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
mrahusain/ppo-LunarLander-v2
mrahusain
2022-10-18T16:10:21Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-10-18T16:09:59Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -238.72 +/- 303.92 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
micole66/autotrain-sexy-or-ugly-1802962297
micole66
2022-10-18T15:59:45Z
12
0
transformers
[ "transformers", "pytorch", "autotrain", "token-classification", "en", "dataset:micole66/autotrain-data-sexy-or-ugly", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
token-classification
2022-10-18T15:59:23Z
--- tags: - autotrain - token-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - micole66/autotrain-data-sexy-or-ugly co2_eq_emissions: emissions: 0.316594943692132 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 1802962297 - CO2 Emissions (in grams): 0.3166 ## Validation Metrics - Loss: 0.616 - Accuracy: 0.800 - Precision: 0.429 - Recall: 0.600 - F1: 0.500 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/micole66/autotrain-sexy-or-ugly-1802962297 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("micole66/autotrain-sexy-or-ugly-1802962297", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("micole66/autotrain-sexy-or-ugly-1802962297", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
JJRohan/q-FrozenLake-v1-4x4-noSlippery
JJRohan
2022-10-18T15:50:28Z
0
0
null
[ "FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-10-18T15:50:22Z
--- tags: - FrozenLake-v1-4x4 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4 type: FrozenLake-v1-4x4 metrics: - type: mean_reward value: 0.80 +/- 0.40 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="JJRohan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
bthomas/article2keyword2.1_barthez-orangesum-title_finetuned_for_mlm
bthomas
2022-10-18T15:49:11Z
7
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "mlm", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-10-18T15:24:22Z
--- license: apache-2.0 tags: - mlm - generated_from_trainer model-index: - name: article2keyword2.1_barthez-orangesum-title_finetuned_for_mlm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # article2keyword2.1_barthez-orangesum-title_finetuned_for_mlm This model is a fine-tuned version of [moussaKam/barthez-orangesum-title](https://huggingface.co/moussaKam/barthez-orangesum-title) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0437 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2949 | 1.0 | 1370 | 0.0557 | | 0.0569 | 2.0 | 2740 | 0.0477 | | 0.0495 | 3.0 | 4110 | 0.0449 | | 0.0444 | 4.0 | 5480 | 0.0437 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.11.0 - Datasets 2.3.2 - Tokenizers 0.12.1
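A minimal sketch for the keyword-generation checkpoint above, assuming a plain seq2seq generate call is the intended interface; the card does not spell out the expected input/output format, so the article-in, keywords-out pattern is an assumption based on the model name:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "bthomas/article2keyword2.1_barthez-orangesum-title_finetuned_for_mlm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Un court article en français dont on souhaite extraire des mots-clés."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```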
Rocketknight1/mt5-small-finetuned-amazon-en-es
Rocketknight1
2022-10-18T14:34:56Z
4
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-23T14:34:02Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Rocketknight1/mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 10.2613 - Validation Loss: 4.5342 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 10.2613 | 4.5342 | 0 | ### Framework versions - Transformers 4.24.0.dev0 - TensorFlow 2.10.0 - Datasets 2.6.1 - Tokenizers 0.11.0
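Since this is a TensorFlow fine-tune of mT5 with no usage snippet in the card, a minimal sketch assuming a summarization-style seq2seq use via `TFAutoModelForSeq2SeqLM`; the review text and generation length are illustrative:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "Rocketknight1/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

review = "I loved this product, it arrived quickly and works exactly as described."
inputs = tokenizer(review, return_tensors="tf")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```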