| Column | Type | Range / classes |
| --- | --- | --- |
| modelId | string | 4–112 chars |
| sha | string | 40 chars |
| lastModified | string | 24 chars |
| tags | list | — |
| pipeline_tag | string | 29 classes |
| private | bool | 1 class |
| author | string | 2–38 chars |
| config | null | — |
| id | string | 4–112 chars |
| downloads | float64 | 0–36.8M |
| likes | float64 | 0–712 |
| library_name | string | 17 classes |
| __index_level_0__ | int64 | 0–38.5k |
| readme | string | 0–186k chars |
huggingtweets/thucydiplease
814c1f43af471868f6840fb17c1e113ee22c2f6f
2021-05-23T02:15:35.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/thucydiplease
17
null
transformers
9,000
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1324921465385279488/JoqDiFxH_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Samantha Pritchard 🤖 AI Bot </div> <div style="font-size: 15px">@thucydiplease bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@thucydiplease's tweets](https://twitter.com/thucydiplease). | Data | Quantity | | --- | --- | | Tweets downloaded | 3216 | | Retweets | 663 | | Short tweets | 590 | | Tweets kept | 1963 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/aht8pe1a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thucydiplease's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/k2mweitd) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/k2mweitd/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/thucydiplease') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/youronlinedad
3306b07ac62e97e3c06ed01f6ec02d3b35d7a9b0
2021-05-23T05:04:15.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/youronlinedad
17
null
transformers
9,001
--- language: en thumbnail: https://www.huggingtweets.com/youronlinedad/1614100614383/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1184826580910125057/gqE8fCKg_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Internet Dad 🤖 AI Bot </div> <div style="font-size: 15px">@youronlinedad bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@youronlinedad's tweets](https://twitter.com/youronlinedad). | Data | Quantity | | --- | --- | | Tweets downloaded | 3201 | | Retweets | 41 | | Short tweets | 508 | | Tweets kept | 2652 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g7jg14o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @youronlinedad's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2t2wy77n) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2t2wy77n/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/youronlinedad') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
icelab/spacescibert_CR
5dff26c775e35fe80e50e0d31bb09aff1e5eff95
2021-10-25T14:38:27.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
icelab
null
icelab/spacescibert_CR
17
null
transformers
9,002
--- widget: - text: "The CubeSat RF design shall either have one RF inhibit and a RF power output no greater than 1.5W at the transmitter antenna's RF input OR the CubeSat shall have a minimum of two independent RF inhibits (CDS 3.3.9) (ISO 5.5.6)." --- # spacescibert_CR ## Model description This is a further fine-tuned SpaceSciBERT model from the SpaceTransformers model family presented in SpaceTransformers: Language Modeling for Space Systems. The original Git repo is strath-ace/smart-nlp. The [fine-tuning](https://github.com/strath-ace/smart-nlp/blob/master/SpaceTransformers/CR/CR_ECSS_dataset.json) dataset is available for download and consists of 874 unique manually annotated ECSS requirements. The notebook for fine-tuning can be accessed in Google Colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1EGh9bdxq6RqIzbvKuptAWvmIBG2EQJzJ?usp=sharing) ### BibTeX entry and citation info ``` @ARTICLE{ 9548078, author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa}, journal={IEEE Access}, title={SpaceTransformers: Language Modeling for Space Systems}, year={2021}, volume={9}, number={}, pages={133111-133122}, doi={10.1109/ACCESS.2021.3115659} } ```
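The card ships no inference snippet. A minimal sketch with the standard token-classification pipeline, assuming the checkpoint stores its concern-type label map in the config (untested):

```python
from transformers import pipeline

# Hedged sketch: assumes the fine-tuned head and its label map load
# with the generic token-classification pipeline.
ner = pipeline("token-classification", model="icelab/spacescibert_CR")

requirement = ("The CubeSat RF design shall either have one RF inhibit and a RF power "
               "output no greater than 1.5W at the transmitter antenna's RF input OR "
               "the CubeSat shall have a minimum of two independent RF inhibits.")
for ent in ner(requirement):
    print(ent["word"], ent["entity"], round(ent["score"], 3))
```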
indonesian-nlp/wav2vec2-luganda
67d044bc4b54f96cb75915dcf6cc7bbcf9cfb288
2022-01-19T16:19:45.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "lg", "dataset:common_voice", "transformers", "audio", "speech", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
indonesian-nlp
null
indonesian-nlp/wav2vec2-luganda
17
1
transformers
9,003
--- language: lg datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech license: apache-2.0 model-index: - name: Wav2Vec2 Luganda by Indonesian-NLP results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice lg type: common_voice args: lg metrics: - name: Test WER type: wer value: 7.53 --- # Automatic Speech Recognition for Luganda This is the model built for the [Mozilla Luganda Automatic Speech Recognition competition](https://zindi.africa/competitions/mozilla-luganda-automatic-speech-recognition). It is a [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) model fine-tuned on the [Luganda Common Voice dataset](https://huggingface.co/datasets/common_voice) version 7.0. We also provide a [live demo](https://huggingface.co/spaces/indonesian-nlp/luganda-asr) to test the model. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "lg", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda") model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): if "audio" in batch: speech_array = torch.tensor(batch["audio"]["array"]) else: speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ``` ## Evaluation The model can be evaluated as follows on the Luganda test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "lg", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda") model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda") model.to("cuda") chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "‘", "’", "’"] chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() if "audio" in batch: speech_array = torch.tensor(batch["audio"]["array"]) else: speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` WER without KenLM: 15.38 % WER with KenLM: **Test Result**: 7.53 % ## Training The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO The script used for training can be found [here](https://github.com/indonesian-nlp/luganda-asr)
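The 7.53 % figure relies on KenLM rescoring, but the card does not show the decoding setup. A hedged sketch with [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode), reusing `processor`, `model`, and `inputs` from the usage snippet above and assuming you supply your own Luganda KenLM binary at the hypothetical path `lm.binary`:

```python
import torch
from pyctcdecode import build_ctcdecoder

# Build the decoder from the CTC vocabulary, sorted by token id.
vocab = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]
decoder = build_ctcdecoder(labels, kenlm_model_path="lm.binary")  # hypothetical LM path

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
print(decoder.decode(logits[0].cpu().numpy()))  # LM-rescored transcription
```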
it5/it5-base-ilgiornale-to-repubblica
db91c86fc720499db2fb99a361076182169d2b96
2022-03-09T08:04:46.000Z
[ "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "transformers", "italian", "sequence-to-sequence", "newspaper", "ilgiornale", "repubblica", "style-transfer", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible" ]
text2text-generation
false
it5
null
it5/it5-base-ilgiornale-to-repubblica
17
null
transformers
9,004
--- language: - it license: apache-2.0 datasets: - gsarti/change_it tags: - italian - sequence-to-sequence - newspaper - ilgiornale - repubblica - style-transfer widget: - text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. 
Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre." - text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990." - text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. 
Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione." - text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"." 
metrics: - rouge - bertscore - headline-headline-consistency-classifier - headline-article-consistency-classifier model-index: - name: it5-base-ilgiornale-to-repubblica results: - task: type: headline-style-transfer-ilgiornale-to-repubblica name: "Headline style transfer (Il Giornale to Repubblica)" dataset: type: gsarti/change_it name: "CHANGE-IT" metrics: - type: rouge1 value: 0.297 name: "Test Rouge1" - type: rouge2 value: 0.104 name: "Test Rouge2" - type: rougeL value: 0.259 name: "Test RougeL" - type: bertscore value: 0.425 name: "Test BERTScore" args: - model_type: "dbmdz/bert-base-italian-xxl-uncased" - lang: "it" - num_layers: 10 - rescale_with_baseline: True - baseline_path: "bertscore_baseline_ita.tsv" - type: headline-headline-consistency-classifier value: 0.925 name: "Test Headline-Headline Consistency Accuracy" - type: headline-article-consistency-classifier value: 0.852 name: "Test Headline-Article Consistency Accuracy" co2_eq_emissions: emissions: "17g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" thumbnail: https://gsarti.com/publication/it5/featured.png --- # IT5 Base for News Headline Style Transfer (Il Giornale to Repubblica) 🗞️➡️🗞️ 🇮🇹 This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on news headline style transfer in the Il Giornale to Repubblica direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model The model is trained to generate a headline in the style of Repubblica from the full body of an article written in the style of Il Giornale. Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipeline g2r = pipeline("text2text-generation", model='it5/it5-base-ilgiornale-to-repubblica') g2r("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. 
\"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".") >>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-ilgiornale-to-repubblica") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-ilgiornale-to-repubblica") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
it5/it5-base-wiki-summarization
d9c0b97204dec4fb765905d1ac50831a8029c557
2022-03-09T08:06:40.000Z
[ "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "it", "dataset:wits", "arxiv:2203.03759", "transformers", "italian", "sequence-to-sequence", "wikipedia", "summarization", "wits", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible" ]
summarization
false
it5
null
it5/it5-base-wiki-summarization
17
null
transformers
9,005
--- language: - it license: apache-2.0 datasets: - wits tags: - italian - sequence-to-sequence - wikipedia - summarization - wits widget: - text: "La 5ª Commissione ha competenza per i disegni di legge riguardanti le specifiche materie del bilancio, del personale e dei servizi del Ministero dell'economia, nonché per i disegni di legge riguardanti la materia finanziaria. La Commissione è composta da 26 senatori (di cui 2 segretari, 2 vicepresidenti di cui 1 componente esterno, e un presidente) scelti in modo omogeneo tra i componenti di quel ramo del Parlamento, in modo da rispecchiarne le forze politiche presenti. Essi sono scelti dai gruppi parlamentari (e non dal Presidente, come invece accade per l'organismo della Giunta parlamentare): per la nomina dei membri ciascun Gruppo, entro cinque giorni dalla propria costituzione, procede, dandone comunicazione alla Presidenza del Senato, alla designazione dei propri rappresentanti nelle singole Commissioni permanenti. Ogni senatore chiamato a far parte del governo o eletto presidente della Commissione è, per la durata della carica, sostituito dal suo gruppo nella Commissione con un altro senatore, che continuerà ad appartenere anche alla Commissione di provenienza. Tranne in rari casi nessun Senatore può essere assegnato a più di una Commissione permanente. Le Commissioni permanenti sono rinnovate dopo il primo biennio della legislatura ed i loro componenti possono essere confermati." - text: "Interni della chiesa Si pensa che già ai tempi di Gediminas vi fosse una piccola chiesa, probabilmente in legno. Nel 1408 circa Vitoldo costruì la chiesa dello Spirito Santo che andò in seguito ampliata. Nel 1501 Alessandro Jagellone lo donò al monastero domenicano, il più antico della Lituania, che nel 1679-88 fu ampliato e ricostruito. Di quel periodo sopravvivono le mura della chiesa, mentre l'arredamento interno fu realizzato nel 1749-1770 e la cupola affrontò dei lavori di restauro nel 1752-1760. Nel 1844 le autorità zariste chiusero il monastero e la chiesa divenne parrocchiale. Oggi serve la comunità polacca di Vilnius. Su via Šv. Ignoto fu fondato un monastero domenicano nel 1501. Come molti altri edifici, questo monastero fu convertito in una prigione dalle autorità zariste nel 1807. Costituì un luogo di prigionia per molti patrioti lituani, nello specifico i Filareti, i quali parteciparono alle rivolte del 1831 e del 1863. Organo La chiesa si trova lateralmente rispetto alla strada e non ha una facciata principale ben disegnata. L'altezza, inclusa la cupola, è di 51 m. La parte inferiore della facciata (con piccole torri gemelle) è ricoperta da edifici conventuali e l'esterno presenta caratteristiche architettoniche tipiche del tardo barocco. Celebre per i fantasiosi ornamenti rococò, l'interno della chiesa è tra i più celebri della Lituania per via dei cartigli con vari stemmi e affreschi lungo la navata: vi sono 16 altari nella chiesa. Gli altari e il pulpito sono assai decorati con sculture e ornamenti rotondi e in rilievo. Tra gli affreschi barocchi, si pensi alla composizione multi-figurale intitolata ''Apoteosi dello Spirito Santo'' (neobarocco, XIX secolo) nella cupola, 45 dipinti nella chiesa (tra cui un'immagine di Santa Barbara con un'ambientazione del XVII o XVIII secolo, una di Santa Caterina da Siena in stile rococò di Szymon Czechowicz, un ritratto di Alessandro Jagellone di un artista sconosciuto della seconda metà del XVIII secolo). 
Un ingresso sotto l'altare conduce alle grandi volte, labirintiche, con molte stanze e cripte: i sotterranei ospitano i resti di centinaia di residenti di Vilnius, alcuni dei quali mummificatisi naturalmente, e sono circondati da leggende metropolitane. Sebbene l'esistenza dei sotterranei fosse nota, i primi sforzi per esplorare e mappare le cripte furono abbandonate nonostante lo sforzo degli studenti dell'Università di Vilnius negli anni '30. Tuttavia, questi ultimi non avevano osservato le corrette procedure archeologiche e causarono infatti molti danni: il modus operandi prevedeva lo smistamento delle ossa ponendo tutti i teschi sugli scaffali e rimuovendoli le tombe. Da allora, i resti sono stati spostati molte volte lasciandoli in uno stato casuale e disorganizzato. Stando alle leggende che aleggiano sul luogo, i resti sarebbero di soldati francesi recatisi in città nel corso della campagna di Russia del 1812 avviata da Napoleone Bonaparte, di vittime dell'Inquisizione o della peste nera. Più romantiche risultano le affermazioni di chi sostiene che i corridoi sotterranei facevano parte di una rete di passaggi più ampia che consentiva agli amanti leggendari Barbara Radziwiłł e Sigismondo II Augusto di incontrarsi in segreto. Nel 2011, gli antropologi dell'Università di Vilnius, guidati da Rimantas Jankauskas, avviarono uno studio sui corpi mummificati, stimando settimane dopo che le volte conservassero i resti di circa 600 persone, tra cui molte donne e bambini dalla metà del XVIII secolo all'inizio del XIX secolo. Il team ha selezionato i cadaveri meglio conservati e ha eseguito la loro tomografia. I risultati mostrano che molte persone erano in sovrappeso e avevano l'alluce valgo, il che ha portato alla conclusione che si trattava di alti borghesi o comunque di cittadini abbienti. " - text: "Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico. " - text: "Vanni ha la sua prima mostra personale nel 1948, alla Galleria Margherita di Roma. Nel 1949 vince una borsa di studio che lo porterà a studiare ad Amsterdam sotto la guida del pittore neoplastico Friedrich Vordemberge-Gildewart. Nel 1952 vince una Fulbright Scholarship che lo porterà a studiare in America, alla Yale University, sotto la guida di Josef Albers. Dal 1953 al 1960 si stabilisce a Parigi, dove illustra alcuni libri per bambini che in seguito vinceranno il premio del Club des Editeurs. Nel 1954 lavora come consulente del colore per il documentario su Picasso di Luciano Emmer, e nel 1955 comincia la sua lunga collaborazione con la Galleria Schneider, affiancando artisti come Corrado Cagli. 
Dal 1969 al 1974 lavora su dei bassorilievi in vetro resina sui quali vengono proiettati dei film astratti da lui creati, per creare dei quadri che si trasformino continuamente nel tempo. Nel 1979 lascia Roma per stabilirsi a New York, dove alla carriera di pittore affiancherà quella di professore per la prestigiosa Cooper Union School of Art, dove insegnerà ininterrottamente dal 1984 al 2014. L'opera pittorica di Vanni è segnata da una visione estremamente personale, lontana dalle correnti e dai movimenti che hanno caratterizzato la seconda metà del XX secolo. Memore delle lunghe conversazioni avute da Vanni nella sua primissima gioventù, con il filosofo e pittore futurista Alberto Bragaglia, le sue opere sono contrassegnate da un “eclettismo” formale programmatico, alla base del quale resta costante una conoscenza profonda delle molteplici tecniche artistiche utilizzate (tra cui il mosaico, l’affresco e la tempera ad uovo). Pur esprimendosi per lo più in cicli di opere dove l’astrazione formale è la principale componente figurativa, sono da sottolineare alcune opere dove Vanni ha dato prova di una importante padronanza dell’arte figurativa. Importanti e numerose sono le sue realizzazioni anche nel campo dell’illustrazione. Sue sono le illustrazioni per la novella ''Agostino'' di Alberto Moravia, per il libro ''Love'' di Lowell A. Siff e delle ''Contes de Cristal'' di Alice Coléno. Ha tenuto mostre personali in Italia e all’estero ed esposto in mostre collettive di rappresentanza italiana nei musei e nelle gallerie di ogni parte del mondo. " metrics: - rouge - bertscore model-index: - name: it5-base-wiki-summarization results: - task: type: wiki-summarization name: "Wikipedia Summarization" dataset: type: wits name: "WITS" metrics: - type: rouge1 value: 0.369 name: "Test Rouge1" - type: rouge2 value: 0.217 name: "Test Rouge2" - type: rougeL value: 0.333 name: "Test RougeL" - type: bertscore value: 0.530 name: "Test BERTScore" args: - model_type: "dbmdz/bert-base-italian-xxl-uncased" - lang: "it" - num_layers: 10 - rescale_with_baseline: True - baseline_path: "bertscore_baseline_ita.tsv" co2_eq_emissions: emissions: "17g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" thumbnail: https://gsarti.com/publication/it5/featured.png --- # IT5 Base for Wikipedia Summarization 📑 🇮🇹 This repository contains the checkpoint for the [IT5 Base](https://huggingface.co/gsarti/it5-base) model fine-tuned on Wikipedia summarization on the [WITS](https://www.semanticscholar.org/paper/WITS%3A-Wikipedia-for-Italian-Text-Summarization-Casola-Lavelli/ad6c83122e721c7c0db4a40727dac3b4762cd2b1) dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. 
They can be used directly with pipelines as: ```python from transformers import pipeline hg = pipeline("text2text-generation", model='it5/it5-base-wiki-summarization') hg("Le dimensioni dell'isola sono di 8 km di lunghezza e di 3,2 km di larghezza. Si trova a 1,6 km a sud-est dell'isola di Renaud, dalla quale è separata dal passaggio Rodman. La sua altezza è di 100 m. Fu scoperta dall'esploratore e baleniere britannico John Biscoe nel 1832 e venne mappata durante una spedizione antartica francese realizzata nel primo decennio del XX secolo. Al comando della spedizione era Jean-Baptiste Charcot e il nome fu scelto per onorare l'esploratore e geografo francese Charles Rabot. === Rivendicazioni territoriali === * Secondo l'Argentina appartiene al dipartimento dell'Antartide Argentina nella provincia della Terra del Fuoco. * Secondo il Cile appartiene al comune antartico della provincia cilena antartica nella regione di Magallanes e dell'Antartico cileno. * Secondo il Regno Unito fa parte del territorio antartico britannico. Per il Trattato Antartico tali rivendicazioni sono sospese. Sull'isola è presente il rifugio Guillochon, sito storico antartico.") >>> [{"generated_text": "L' '''isola di Rabot''' si trova in prossimità dell'isola di Renaud, a sud dell'Argentina."}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-base-wiki-summarization") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-base-wiki-summarization") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
julien-c/EsperBERTo-small-pos
1183bc1ab394cc09d9c631c07b076cdcedd77954
2021-05-20T17:28:42.000Z
[ "pytorch", "jax", "roberta", "token-classification", "eo", "transformers", "autotrain_compatible" ]
token-classification
false
julien-c
null
julien-c/EsperBERTo-small-pos
17
1
transformers
9,006
--- language: eo thumbnail: https://huggingface.co/blog/assets/01_how-to-train/EsperBERTo-thumbnail-v2.png widget: - text: "Mi estas viro kej estas tago varma." --- # EsperBERTo: RoBERTa-like Language model trained on Esperanto **Companion model to blog post https://huggingface.co/blog/how-to-train** 🔥 ## Training Details - current checkpoint: 566000 - machine name: `galinette` ![](https://huggingface.co/blog/assets/01_how-to-train/EsperBERTo-thumbnail-v2.png) ## Example pipeline ```python from transformers import TokenClassificationPipeline, pipeline MODEL_PATH = "./models/EsperBERTo-small-pos/" nlp = pipeline( "ner", model=MODEL_PATH, tokenizer=MODEL_PATH, ) # or instantiate a TokenClassificationPipeline directly. nlp("Mi estas viro kej estas tago varma.") # {'entity': 'PRON', 'score': 0.9979867339134216, 'word': ' Mi'} # {'entity': 'VERB', 'score': 0.9683094620704651, 'word': ' estas'} # {'entity': 'VERB', 'score': 0.9797462821006775, 'word': ' estas'} # {'entity': 'NOUN', 'score': 0.8509314060211182, 'word': ' tago'} # {'entity': 'ADJ', 'score': 0.9996201395988464, 'word': ' varma'} ```
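The snippet above points at a local checkout; the same pipeline should also load straight from the Hub (a short sketch, untested):

```python
from transformers import pipeline

# Load directly from the Hugging Face Hub instead of a local path.
nlp = pipeline("ner", model="julien-c/EsperBERTo-small-pos")
nlp("Mi estas viro kej estas tago varma.")
```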
juliensimon/autonlp-imdb-demo-hf-16622775
f679643e1e113864071f50a08815be4652aded48
2021-10-11T12:46:02.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:juliensimon/autonlp-data-imdb-demo-hf", "transformers", "autonlp" ]
text-classification
false
juliensimon
null
juliensimon/autonlp-imdb-demo-hf-16622775
17
1
transformers
9,007
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - juliensimon/autonlp-data-imdb-demo-hf --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 16622775 ## Validation Metrics - Loss: 0.18653589487075806 - Accuracy: 0.9408 - Precision: 0.9537643207855974 - Recall: 0.9272076372315036 - AUC: 0.985847396174344 - F1: 0.9402985074626865 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/juliensimon/autonlp-imdb-demo-hf-16622775 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622775", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("juliensimon/autonlp-imdb-demo-hf-16622775", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
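The Python snippet stops at raw logits; a short hedged follow-up for turning them into a label and probability, assuming AutoNLP stored the usual `id2label` mapping in the model config:

```
import torch

probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs.max()))
```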
junnyu/roformer_small_discriminator
411682a5b9cb673344ebd2ebf6482612c1c6006f
2021-09-22T08:54:23.000Z
[ "pytorch", "roformer", "feature-extraction", "en", "dataset:openwebtext", "transformers", "electra", "rotary position embedding", "license:mit" ]
feature-extraction
false
junnyu
null
junnyu/roformer_small_discriminator
17
null
transformers
9,008
--- language: en thumbnail: https://github.com/junnyu tags: - pytorch - electra - roformer - rotary position embedding license: mit datasets: - openwebtext --- # 1. An ELECTRA-small model with rotary position embeddings, trained on the openwebtext dataset # 2. Reproduced results (dev dataset) |Model|CoLA|SST|MRPC|STS|QQP|MNLI|QNLI|RTE|Avg.| |---|---|---|---|---|---|---|---|---|---| |ELECTRA-Small-OWT(original)|56.8|88.3|87.4|86.8|88.3|78.9|87.9|68.5|80.36| |**ELECTRA-RoFormer-Small-OWT (this)**|55.76|90.45|87.3|86.64|89.61|81.17|88.85|62.71|80.31| # 3. Training details - dataset: openwebtext - batch size: 256 - learning rate: 5e-4 - max sequence length: 128 - total training steps: 500k - GPU: RTX 3090 - total training time: 55 h # 4. Weights & Biases logs - [**Pre-training log**](https://wandb.ai/junyu/electra_rotary_small_pretrain?workspace=user-junyu) - [**GLUE fine-tuning log**](https://wandb.ai/junyu/electra_rotary_glue_100?workspace=user-junyu) # 5. Usage ```python import torch from transformers import ElectraTokenizer, RoFormerModel tokenizer = ElectraTokenizer.from_pretrained("junnyu/roformer_small_discriminator") model = RoFormerModel.from_pretrained("junnyu/roformer_small_discriminator") inputs = tokenizer("Beijing is the capital of China.", return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) print(outputs[0].shape) ```
kaesve/BioBERT_patent_reference_extraction
a1e26ee5926ce7bf00ccc2a08d11d099cf24da91
2021-05-19T20:58:49.000Z
[ "pytorch", "jax", "bert", "fill-mask", "arxiv:2101.01039", "transformers", "autotrain_compatible" ]
fill-mask
false
kaesve
null
kaesve/BioBERT_patent_reference_extraction
17
null
transformers
9,009
# Reference extraction in patents This repository contains a finetuned BioBERT model that can extract references to scientific literature from patents. See https://github.com/kaesve/patent-citation-extraction and https://arxiv.org/abs/2101.01039 for more information.
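The card gives no loading code. Since the checkpoint is tagged `fill-mask`, a minimal loading sketch (the actual reference-extraction pipeline lives in the linked GitHub repo):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Loading sketch only; see https://github.com/kaesve/patent-citation-extraction
# for the full reference-extraction pipeline built on top of this model.
tokenizer = AutoTokenizer.from_pretrained("kaesve/BioBERT_patent_reference_extraction")
model = AutoModelForMaskedLM.from_pretrained("kaesve/BioBERT_patent_reference_extraction")
```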
kuppuluri/telugu_bertu_ner
c1649a30768be0256c2d4375cc45baf64f1c1199
2021-12-02T18:15:04.000Z
[ "pytorch", "jax", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
kuppuluri
null
kuppuluri/telugu_bertu_ner
17
null
transformers
9,010
# Named Entity Recognition Model for Telugu #### How to use Use the script below from your Python terminal, as the web inference interface has a few encoding issues with Telugu. PS: If you find my model useful, I would appreciate a note from you, as it would encourage me to continue improving it and to add new models. ```python from simpletransformers.ner import NERModel model = NERModel('bert', 'kuppuluri/telugu_bertu_ner', labels=[ 'B-PERSON', 'I-ORG', 'B-ORG', 'I-LOC', 'B-MISC', 'I-MISC', 'I-PERSON', 'B-LOC', 'O' ], use_cuda=False, args={"use_multiprocessing": False}) text = "విరాట్ కోహ్లీ కూడా అదే నిర్లక్ష్యాన్ని ప్రదర్శించి కేవలం ఒక పరుగుకే రనౌటై పెవిలియన్ చేరాడు ." results = model.predict([text]) ``` ## Training data Training data is from https://github.com/anikethjr/NER_Telugu ## Eval results On the test set my results were eval_loss = 0.0004407190410447974 f1_score = 0.999519076627124 precision = 0.9994389677005691 recall = 0.9995991983967936
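If you prefer plain `transformers` over `simpletransformers`, the same checkpoint should also work with the standard pipeline, assuming the label list above is stored in the model config (untested sketch):

```python
from transformers import pipeline

ner = pipeline("ner", model="kuppuluri/telugu_bertu_ner")
print(ner("విరాట్ కోహ్లీ కూడా అదే నిర్లక్ష్యాన్ని ప్రదర్శించి కేవలం ఒక పరుగుకే రనౌటై పెవిలియన్ చేరాడు ."))
```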
ltrctelugu/gpt2_ltrc_telugu
c31edbe619c6cce20bfecaab8b843095c0dd2738
2021-05-23T08:35:13.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
ltrctelugu
null
ltrctelugu/gpt2_ltrc_telugu
17
null
transformers
9,011
Entry not found
ltrctelugu/ltrc-distilbert
0fc61ff343b7d8a916b8245293148084c66f25f0
2021-11-22T11:34:05.000Z
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
ltrctelugu
null
ltrctelugu/ltrc-distilbert
17
null
transformers
9,012
hello
m3hrdadfi/albert-fa-base-v2-ner-peyma
e6f7d8a4e274f0a26de0e2c704c38ad2d7145c73
2020-12-26T08:36:20.000Z
[ "pytorch", "tf", "albert", "token-classification", "fa", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
m3hrdadfi
null
m3hrdadfi/albert-fa-base-v2-ner-peyma
17
1
transformers
9,013
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > You can call it برت_کوچولو (Little BERT) [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian NER [ARMAN, PEYMA] This task aims to extract named entities in the text, such as names, and label them with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked in `IOB` format. In this format, tokens that are not part of an entity are tagged as `”O”`, the `”B”` tag corresponds to the first word of an entity, and the `”I”` tag corresponds to the rest of the terms of the same entity. Both `”B”` and `”I”` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN`, and `PEYMA`. ### PEYMA PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes. 1. Organization 2. Money 3. Location 4. Date 5. Time 6. Person 7. Percent | Label | # | |:------------:|:-----:| | Organization | 16964 | | Money | 2037 | | Location | 8782 | | Date | 4259 | | Time | 732 | | Person | 7675 | | Percent | 699 | **Download** You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/) ## Results The following table summarizes the F1 score obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF | |:-------:|:-----------------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:| | PEYMA | 88.99 | 93.10 | 86.64 | - | 90.59 | - | 84.00 | - | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
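The card reports scores but no inference code; a minimal hedged sketch with the token-classification pipeline, assuming the PEYMA label map ships in the config (the example sentence is ours):

```python
from transformers import pipeline

ner = pipeline("ner", model="m3hrdadfi/albert-fa-base-v2-ner-peyma")
# Example sentence (ours): "Tehran is the capital of Iran."
print(ner("تهران پایتخت ایران است."))
```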
madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1
151170c410fad18bd5890fa53cba8a3c06d56805
2021-06-16T15:02:14.000Z
[ "pytorch", "tf", "bert", "question-answering", "en", "dataset:squad", "transformers", "license:mit", "autotrain_compatible" ]
question-answering
false
madlag
null
madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1
17
null
transformers
9,014
--- language: en thumbnail: license: mit tags: - question-answering datasets: - squad metrics: - squad widget: - text: "Where is the Eiffel Tower located?" context: "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower." - text: "Who is Frederic Chopin?" context: "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano." --- ## BERT-base uncased model fine-tuned on SQuAD v1 This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contain 30.0%** of the original weights. This model **CANNOT be used without the nn_pruning `optimize_model`** function, as it uses NoNorms instead of LayerNorms and this is not currently supported by the Transformers library. It uses ReLUs instead of GeLUs as in the initial BERT network, to speed up inference. This does not need special handling, as it is supported by the Transformers library, and flagged in the model config by the ```"hidden_act": "relu"``` entry. The model contains **45.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method). With a simple resizing of the linear matrices it ran **2.01x as fast as bert-base-uncased** on the evaluation. This is possible because the pruning method led to structured matrices: to visualize them, hover below on the plot to see the non-zero/zero parts of each matrix. <div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1/raw/main/model_card/density_info.js" id="c3b978cc-6d18-4fd0-a24b-e4369569d64d"></script></div> In terms of accuracy, its **F1 is 89.19**, compared with 88.5 for bert-base-uncased, a **F1 gain of 0.69**. ## Fine-Pruning details This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad). This model is case-insensitive: it does not make a difference between english and English. A side-effect of the block pruning is that some of the attention heads are completely removed: 55 heads were removed out of a total of 144 (38.2%). Here is a detailed view on how the remaining heads are distributed in the network after pruning. 
<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1/raw/main/model_card/pruning_info.js" id="7de38b6d-774c-4313-a5a4-8e32f554d9ec"></script></div> ## Details of the SQuAD1.1 dataset | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD1.1 | train | 90.6K | | SQuAD1.1 | eval | 11.1k | ### Fine-tuning - Python: `3.8.5` - Machine specs: ```CPU: Intel(R) Core(TM) i7-6700K CPU Memory: 64 GiB GPUs: 1 GeForce GTX 3090, with 24GiB memory GPU driver: 455.23.05, CUDA: 11.1 ``` ### Results **Pytorch model file size**: `374MB` (original BERT: `420MB`) | Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation | | ------ | --------- | --------- | --------- | | **EM** | **82.21** | **80.8** | **+1.41**| | **F1** | **89.19** | **88.5** | **+0.69**| ## Example Usage Install nn_pruning: it contains the optimization script, which just pack the linear layers into smaller ones by removing empty rows/columns. `pip install nn_pruning` Then you can use the `transformers library` almost as usual: you just have to call `optimize_model` when the pipeline has loaded. ```python from transformers import pipeline from nn_pruning.inference_model_patcher import optimize_model qa_pipeline = pipeline( "question-answering", model="madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1", tokenizer="madlag/bert-base-uncased-squadv1-x2.01-f89.2-d30-hybrid-rewind-opt-v1" ) print("bert-base-uncased parameters: 200.0M") print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M") qa_pipeline.model = optimize_model(qa_pipeline.model, "dense") print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M") predictions = qa_pipeline({ 'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.", 'question': "Who is Frederic Chopin?", }) print("Predictions", predictions) ```
maroo93/squad1.1
250b75f3eed58a84c3094d8deb08270287ed5bf2
2021-05-19T23:07:37.000Z
[ "pytorch", "jax", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
maroo93
null
maroo93/squad1.1
17
null
transformers
9,015
Entry not found
mrm8488/bert-small-finetuned-typo-detection
c78290d7b75061bf5bedab66e589def2cec7372e
2021-05-25T20:20:35.000Z
[ "pytorch", "jax", "bert", "token-classification", "en", "transformers", "autotrain_compatible" ]
token-classification
false
mrm8488
null
mrm8488/bert-small-finetuned-typo-detection
17
null
transformers
9,016
--- language: en thumbnail: widget: - text: "here there is an error in coment" --- # BERT SMALL + Typo Detection ✍❌✍✔ [BERT SMALL](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) fine-tuned on [GitHub Typo Corpus](https://github.com/mhagiwara/github-typo-corpus) for **typo detection** (using *NER* style) ## Details of the downstream task (Typo detection as NER) - Dataset: [GitHub Typo Corpus](https://github.com/mhagiwara/github-typo-corpus) 📚 - [Fine-tune script on NER dataset provided by Huggingface](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner_old.py) 🏋️‍♂️ ## Metrics on test set 📋 | Metric | # score | | :-------: | :-------: | | F1 | **89.12** | | Precision | **93.82** | | Recall | **84.87** | ## Model in action 🔨 Fast usage with **pipelines** 🧪 ```python from transformers import pipeline typo_checker = pipeline( "ner", model="mrm8488/bert-small-finetuned-typo-detection", tokenizer="mrm8488/bert-small-finetuned-typo-detection" ) result = typo_checker("here there is an error in coment") result[1:-1] # Output: [{'entity': 'ok', 'score': 0.9021041989326477, 'word': 'here'}, {'entity': 'ok', 'score': 0.7975626587867737, 'word': 'there'}, {'entity': 'ok', 'score': 0.8596242070198059, 'word': 'is'}, {'entity': 'ok', 'score': 0.7071516513824463, 'word': 'an'}, {'entity': 'ok', 'score': 0.943381130695343, 'word': 'error'}, {'entity': 'ok', 'score': 0.8047608733177185, 'word': 'in'}, {'entity': 'ok', 'score': 0.8240702152252197, 'word': 'come'}, {'entity': 'typo', 'score': 0.5004884004592896, 'word': '##nt'}] ``` It works 🎉! We typed ```coment``` instead of ```comment``` Let's try another example ```python result = typo_checker("Adddd validation midelware") result[1:-1] # Output: [{'entity': 'ok', 'score': 0.7128152847290039, 'word': 'add'}, {'entity': 'typo', 'score': 0.5388424396514893, 'word': '##dd'}, {'entity': 'ok', 'score': 0.94792640209198, 'word': 'validation'}, {'entity': 'typo', 'score': 0.5839331746101379, 'word': 'mid'}, {'entity': 'ok', 'score': 0.5195121765136719, 'word': '##el'}, {'entity': 'ok', 'score': 0.7222476601600647, 'word': '##ware'}] ``` Yeah! We mistyped ```Add``` and ```middleware``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/electricidad-base-discriminator
1353d86e74c5ae322590dcda6e216259b1f72b67
2022-03-30T20:42:47.000Z
[ "pytorch", "electra", "pretraining", "es", "dataset:-large_spanish_corpus", "transformers", "Spanish", "Electra" ]
null
false
mrm8488
null
mrm8488/electricidad-base-discriminator
17
2
transformers
9,017
--- language: es thumbnail: https://i.imgur.com/uxAvBfh.png tags: - Spanish - Electra datasets: - large_spanish_corpus --- ## ELECTRICIDAD: The Spanish Electra [Imgur](https://imgur.com/uxAvBfh) **Electricidad-base-discriminator** (uncased) is a ```base``` Electra-like model (discriminator in this case) trained on a [Large Spanish Corpus](https://github.com/josecannete/spanish-corpora) (aka BETO's corpus). As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB): **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. For a detailed description and experimental results, please refer to the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB). ## Model details ⚙ |Name| # Value| |-----|--------| |Layers| 12 | |Hidden | 768 | |Params| 110M | ## Evaluation metrics (for discriminator) 🧾 |Metric | # Score | |-------|---------| |Accuracy| 0.985| |Precision| 0.726| |AUC | 0.922| ## Fast example of usage 🚀 ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("mrm8488/electricidad-base-discriminator") tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/electricidad-base-discriminator") sentence = "El rápido zorro marrón salta sobre el perro perezoso" fake_sentence = "El rápido zorro marrón amar sobre el perro perezoso" fake_tokens = tokenizer.tokenize(fake_sentence) fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") discriminator_outputs = discriminator(fake_inputs) predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) [print("%7s" % token, end="") for token in fake_tokens] [print("%7s" % prediction, end="") for prediction in predictions.tolist()] # Output: ''' el rapido zorro marro ##n amar sobre el perro pere ##zoso 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0[None, None, None, None, None, None, None, None, None, None, None, None, None ''' ``` As you can see there are **1s** in the places where the model detected a fake token. So, it works!
🎉 ### Some models fine-tuned on a downstream task 🛠️ [Question Answering](https://huggingface.co/mrm8488/electricidad-base-finetuned-squadv1-es) [POS](https://huggingface.co/mrm8488/electricidad-base-finetuned-pos) [NER](https://huggingface.co/mrm8488/electricidad-base-finetuned-ner) ### Spanish LM model comparison 📊 | Dataset | Metric | RoBERTa-b | RoBERTa-l | BETO | mBERT | BERTIN | Electricidad-b | |-------------|----------|-----------|-----------|--------|--------|--------|---------| | UD-POS | F1 | 0.9907 | 0.9901 | 0.9900 | 0.9886 | 0.9904 | 0.9818 | | Conll-NER | F1 | 0.8851 | 0.8772 | 0.8759 | 0.8691 | 0.8627 | 0.7954 | | Capitel-POS | F1 | 0.9846 | 0.9851 | 0.9836 | 0.9839 | 0.9826 | 0.9816 | | Capitel-NER | F1 | 0.8959 | 0.8998 | 0.8771 | 0.8810 | 0.8741 | 0.8035 | | STS | Combined | 0.8423 | 0.8420 | 0.8216 | 0.8249 | 0.7822 | 0.8065 | | MLDoc | Accuracy | 0.9595 | 0.9600 | 0.9650 | 0.9560 | 0.9673 | 0.9490 | | PAWS-X | F1 | 0.9035 | 0.9000 | 0.8915 | 0.9020 | 0.8820 | **0.9045** | | XNLI | Accuracy | 0.8016 | 0.7958 | 0.8130 | 0.7876 | 0.7864 | 0.7878 | ## Acknowledgments I thank the [🤗/transformers team](https://github.com/huggingface/transformers) for allowing me to train the model (especially [Julien Chaumond](https://twitter.com/julien_c)). ## Citation If you want to cite this model you can use this: ```bibtex @misc{mromero2020electricidad-base-discriminator, title={Spanish Electra by Manuel Romero}, author={Romero, Manuel}, publisher={Hugging Face}, journal={Hugging Face Hub}, howpublished={\url{https://huggingface.co/mrm8488/electricidad-base-discriminator/}}, year={2020} } ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
mrm8488/longformer-base-4096-spanish
38c75a848ba74f488916841566f57f5ce2c57b60
2022-03-30T20:36:36.000Z
[ "pytorch", "tensorboard", "roberta", "fill-mask", "es", "dataset:spanish_large_corpus", "arxiv:2004.05150", "transformers", "Long documents", "longformer", "bertin", "spanish", "license:mit", "autotrain_compatible" ]
fill-mask
false
mrm8488
null
mrm8488/longformer-base-4096-spanish
17
7
transformers
9,018
--- language: - es license: mit widget: - text: "Manuel Romero ha creado con el equipo de BERTIN un modelo que procesa documentos <mask> largos." tags: - Long documents - longformer - bertin - spanish datasets: - spanish_large_corpus --- # longformer-base-4096-spanish ## [Longformer](https://arxiv.org/abs/2004.05150) is a Transformer model for long documents. `longformer-base-4096` is a BERT-like model started from the RoBERTa checkpoint (**BERTIN** in this case) and pre-trained for *MLM* on long documents (from BETO's `all_wikis`). It supports sequences of length up to 4,096! **Longformer** uses a combination of a sliding window (*local*) attention and *global* attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations. This model was made following the research done by [Iz Beltagy and Matthew E. Peters and Arman Cohan](https://arxiv.org/abs/2004.05150). ## Citation If you want to cite this model you can use this: ```bibtex @misc{mromero2022longformer-base-4096-spanish, title={Spanish LongFormer by Manuel Romero}, author={Romero, Manuel}, publisher={Hugging Face}, journal={Hugging Face Hub}, howpublished={\url{https://huggingface.co/mrm8488/longformer-base-4096-spanish}}, year={2022} } ```
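## Example usage (sketch) Since this is an MLM-pretrained checkpoint, the natural smoke test is mask filling. A minimal sketch follows; the example sentence is the widget text from this card, and everything else is a generic Transformers pipeline call:

```python
from transformers import pipeline

# Minimal fill-mask sketch; "<mask>" is the RoBERTa-style mask token this checkpoint uses.
fill_mask = pipeline("fill-mask", model="mrm8488/longformer-base-4096-spanish")

text = ("Manuel Romero ha creado con el equipo de BERTIN un modelo "
        "que procesa documentos <mask> largos.")
for pred in fill_mask(text):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```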
munggok/mt5-large-id-qgen-qa
b2cc736d866cfb585b1096140080532b3ce3cc66
2021-01-27T12:55:12.000Z
[ "pytorch", "t5", "text2text-generation", "id", "dataset:Squad", "dataset:XQuad", "dataset:Tydiqa", "transformers", "license:mit", "autotrain_compatible" ]
text2text-generation
false
munggok
null
munggok/mt5-large-id-qgen-qa
17
null
transformers
9,019
--- language: "id" license: "mit" datasets: - Squad - XQuad - Tydiqa widget: - text: "I love you" --- ## Prefix use Use the prefix "question: {question} context: {context}" on the input to generate an answer, e.g. "question: siapa nama saya ? context: nama saya andi. saya tinggal di jakarta. istri saya bernama raisa". To generate questions instead, use the prefix "generate questions: " followed by the context, e.g. "generate questions: nama saya andi. saya tinggal di jakarta. istri saya bernama raisa". A usage sketch is shown below. ## Training data Squad, XQuad and Tydiqa
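## Example usage (sketch) A minimal sketch of the two prefixes described above. The generation settings such as `max_length` are illustrative choices, not values documented on this card:

```python
from transformers import MT5ForConditionalGeneration, T5Tokenizer

model_name = "munggok/mt5-large-id-qgen-qa"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

def run(prompt: str) -> str:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    # max_length=64 is an illustrative setting, not a value from this card.
    output_ids = model.generate(input_ids, max_length=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Question answering: "question: ... context: ..."
print(run("question: siapa nama saya ? context: nama saya andi. "
          "saya tinggal di jakarta. istri saya bernama raisa"))
# Question generation: "generate questions: ..."
print(run("generate questions: nama saya andi. saya tinggal di jakarta. "
          "istri saya bernama raisa"))
```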
nlpconnect/dpr-nq-reader-roberta-base-v2
6e4e658ff4feec24464ee048f89b26b3b8ff4d05
2022-01-03T04:35:47.000Z
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
nlpconnect
null
nlpconnect/dpr-nq-reader-roberta-base-v2
17
null
transformers
9,020
Entry not found
patrickvonplaten/sew-d-mid-400k-librispeech-clean-100h-ft
4ae68de4dad4afdcf26b02ea022e528ef7ab4278
2021-10-27T23:44:33.000Z
[ "pytorch", "tensorboard", "sew-d", "automatic-speech-recognition", "transformers", "librispeech_asr", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
patrickvonplaten
null
patrickvonplaten/sew-d-mid-400k-librispeech-clean-100h-ft
17
1
transformers
9,021
--- license: apache-2.0 tags: - automatic-speech-recognition - librispeech_asr - generated_from_trainer model-index: - name: sew-d-mid-400k-librispeech-clean-100h-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sew-d-mid-400k-librispeech-clean-100h-ft This model is a fine-tuned version of [asapp/sew-d-mid-400k](https://huggingface.co/asapp/sew-d-mid-400k) on the LIBRISPEECH_ASR - CLEAN dataset. It achieves the following results on the evaluation set: - Loss: 2.3540 - Wer: 1.0536 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 32 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 7.319 | 0.11 | 100 | 11.0572 | 1.0 | | 3.6726 | 0.22 | 200 | 4.2003 | 1.0 | | 2.981 | 0.34 | 300 | 3.5742 | 0.9919 | | 2.9411 | 0.45 | 400 | 3.2599 | 1.0 | | 2.903 | 0.56 | 500 | 2.9350 | 1.0 | | 2.8597 | 0.67 | 600 | 2.9514 | 1.0 | | 2.7771 | 0.78 | 700 | 2.8521 | 1.0 | | 2.7926 | 0.9 | 800 | 2.7821 | 1.0120 | | 2.6623 | 1.01 | 900 | 2.7027 | 0.9924 | | 2.5893 | 1.12 | 1000 | 2.6667 | 1.0240 | | 2.5733 | 1.23 | 1100 | 2.6341 | 1.0368 | | 2.5455 | 1.35 | 1200 | 2.5928 | 1.0411 | | 2.4919 | 1.46 | 1300 | 2.5695 | 1.0817 | | 2.5182 | 1.57 | 1400 | 2.5559 | 1.1072 | | 2.4766 | 1.68 | 1500 | 2.5229 | 1.1257 | | 2.4267 | 1.79 | 1600 | 2.4991 | 1.1151 | | 2.3919 | 1.91 | 1700 | 2.4768 | 1.1139 | | 2.3883 | 2.02 | 1800 | 2.4452 | 1.0636 | | 2.3737 | 2.13 | 1900 | 2.4304 | 1.0594 | | 2.3569 | 2.24 | 2000 | 2.4095 | 1.0539 | | 2.3641 | 2.35 | 2100 | 2.3997 | 1.0511 | | 2.3281 | 2.47 | 2200 | 2.3856 | 1.0414 | | 2.2912 | 2.58 | 2300 | 2.3750 | 1.0696 | | 2.3028 | 2.69 | 2400 | 2.3684 | 1.0436 | | 2.2906 | 2.8 | 2500 | 2.3613 | 1.0538 | | 2.2822 | 2.91 | 2600 | 2.3558 | 1.0506 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.13.4.dev0 - Tokenizers 0.10.3
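## Example usage (sketch) For completeness, a minimal inference sketch. The audio path is a placeholder, and note that with the reported WER above 1.0 the transcripts from this checkpoint are unlikely to be usable:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/sew-d-mid-400k-librispeech-clean-100h-ft",
)

# "sample.wav" is a placeholder; a 16 kHz mono recording is the usual input.
print(asr("sample.wav")["text"])
```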
patrickvonplaten/wav2vec2-base-timit-fine-tuned
fbe294145f692fa52eccc285e5927b9c7927f8f6
2021-10-27T10:49:08.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:timit_asr", "transformers", "timit_asr", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
patrickvonplaten
null
patrickvonplaten/wav2vec2-base-timit-fine-tuned
17
null
transformers
9,022
--- license: apache-2.0 tags: - automatic-speech-recognition - timit_asr - generated_from_trainer datasets: - timit_asr model-index: - name: wav2vec2-base-timit-fine-tuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-fine-tuned This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.3457 - Wer: 0.2151 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.1621 | 0.69 | 100 | 3.1102 | 1.0 | | 2.9592 | 1.38 | 200 | 2.9603 | 1.0 | | 2.9116 | 2.07 | 300 | 2.8921 | 1.0 | | 2.1332 | 2.76 | 400 | 1.9718 | 0.9958 | | 0.8477 | 3.45 | 500 | 0.7813 | 0.5237 | | 0.4251 | 4.14 | 600 | 0.5166 | 0.3982 | | 0.3743 | 4.83 | 700 | 0.4400 | 0.3578 | | 0.4194 | 5.52 | 800 | 0.4077 | 0.3370 | | 0.23 | 6.21 | 900 | 0.4018 | 0.3142 | | 0.1554 | 6.9 | 1000 | 0.3623 | 0.2995 | | 0.1511 | 7.59 | 1100 | 0.3433 | 0.2697 | | 0.1983 | 8.28 | 1200 | 0.3539 | 0.2715 | | 0.1443 | 8.97 | 1300 | 0.3622 | 0.2551 | | 0.0971 | 9.66 | 1400 | 0.3580 | 0.2519 | | 0.0764 | 10.34 | 1500 | 0.3529 | 0.2437 | | 0.1203 | 11.03 | 1600 | 0.3455 | 0.2431 | | 0.0881 | 11.72 | 1700 | 0.3648 | 0.2415 | | 0.0521 | 12.41 | 1800 | 0.3564 | 0.2320 | | 0.0434 | 13.1 | 1900 | 0.3485 | 0.2270 | | 0.0864 | 13.79 | 2000 | 0.3517 | 0.2228 | | 0.0651 | 14.48 | 2100 | 0.3506 | 0.2285 | | 0.0423 | 15.17 | 2200 | 0.3428 | 0.2247 | | 0.0302 | 15.86 | 2300 | 0.3372 | 0.2198 | | 0.0548 | 16.55 | 2400 | 0.3496 | 0.2196 | | 0.0674 | 17.24 | 2500 | 0.3407 | 0.2166 | | 0.0291 | 17.93 | 2600 | 0.3512 | 0.2171 | | 0.0298 | 18.62 | 2700 | 0.3363 | 0.2158 | | 0.0419 | 19.31 | 2800 | 0.3493 | 0.2145 | | 0.046 | 20.0 | 2900 | 0.3457 | 0.2151 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
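## Metric note (sketch) The Wer column above is word error rate. As a rough sketch of how such a score is computed, using the generic `wer` metric shipped with `datasets` (not the exact evaluation script behind this card; newer releases expose the same metric through the `evaluate` package):

```python
from datasets import load_metric  # the "wer" metric also requires `pip install jiwer`

wer_metric = load_metric("wer")

# Hypothetical reference/prediction pair; the reference is a classic TIMIT prompt.
references = ["she had your dark suit in greasy wash water all year"]
predictions = ["she had your dark suit in greasy wash water all here"]

# One substituted word out of eleven, so this prints roughly 0.0909.
print(wer_metric.compute(predictions=predictions, references=references))
```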
peterhsu/marian-finetuned-kde4-en-to-zh_TW
1bb82729445285143405f711752f692a65448848
2022-02-28T11:26:43.000Z
[ "pytorch", "tensorboard", "marian", "text2text-generation", "dataset:kde4", "transformers", "translation", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
translation
false
peterhsu
null
peterhsu/marian-finetuned-kde4-en-to-zh_TW
17
null
transformers
9,023
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-zh_TW results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 args: en-zh_TW metrics: - name: Bleu type: bleu value: 39.086345838465 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-zh_TW This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 1.0047 - Bleu: 39.0863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
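## Example usage (sketch) A minimal inference sketch; the input string is an arbitrary KDE-style UI message chosen only for illustration, not an example from the dataset card:

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="peterhsu/marian-finetuned-kde4-en-to-zh_TW",
)

# Arbitrary KDE-flavoured input, used purely for illustration.
result = translator("Unable to import the file into the database.")
print(result[0]["translation_text"])
```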
philschmid/distilroberta-base-ner-wikiann
595c043f2d236eda3c67a5fc6ed52f79b3958cf7
2022-06-24T11:21:38.000Z
[ "pytorch", "roberta", "token-classification", "dataset:wikiann", "transformers", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
philschmid
null
philschmid/distilroberta-base-ner-wikiann
17
null
transformers
9,024
--- license: apache-2.0 tags: - token-classification datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: distilroberta-base-ner-wikiann results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann metrics: - name: Precision type: precision value: 0.8331921416757433 - name: Recall type: recall value: 0.84243586083126 - name: F1 type: f1 value: 0.8377885044416501 - name: Accuracy type: accuracy value: 0.91930707459758 - task: type: token-classification name: Token Classification dataset: name: wikiann type: wikiann config: en split: test metrics: - name: Accuracy type: accuracy value: 0.9200373733433721 verified: true - name: Precision type: precision value: 0.9258482820953792 verified: true - name: Recall type: recall value: 0.9347545055892119 verified: true - name: F1 type: f1 value: 0.9302800779500893 verified: true - name: loss type: loss value: 0.3007512390613556 verified: true --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-ner-wikiann This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the wikiann dataset. eval F1-Score: **83,78** test F1-Score: **83,76** ## Model Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-wikiann") model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-wikiann") nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True) example = "My name is Philipp and live in Germany" nlp(example) ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.9086903597787154e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results It achieves the following results on the evaluation set: - Loss: 0.3156 - Precision: 0.8332 - Recall: 0.8424 - F1: 0.8378 - Accuracy: 0.9193 It achieves the following results on the test set: - Loss: 0.3023 - Precision: 0.8301 - Recall: 0.8452 - F1: 0.8376 - Accuracy: 0.92 ### Framework versions - Transformers 4.6.1 - Pytorch 1.8.1+cu101 - Datasets 1.6.2 - Tokenizers 0.10.2
ricardo-filho/bert-base-portuguese-cased-finetuned-ner
f79cdaa48bdfd404f576c8f2f1a27ec0e5d99da4
2021-11-23T13:48:05.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "dataset:harem", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
ricardo-filho
null
ricardo-filho/bert-base-portuguese-cased-finetuned-ner
17
null
transformers
9,025
--- license: mit tags: - generated_from_trainer datasets: - harem metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-portuguese-cased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: harem type: harem args: default metrics: - name: Precision type: precision value: 0.0 - name: Recall type: recall value: 0.0 - name: F1 type: f1 value: 0.0 - name: Accuracy type: accuracy value: 0.7333736396614269 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-portuguese-cased-finetuned-ner This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the harem dataset. It achieves the following results on the evaluation set: - Loss: 1.2948 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.7334 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:| | No log | 1.0 | 8 | 1.7381 | 0.0 | 0.0 | 0.0 | 0.7334 | | No log | 2.0 | 16 | 1.3301 | 0.0 | 0.0 | 0.0 | 0.7334 | | No log | 3.0 | 24 | 1.2948 | 0.0 | 0.0 | 0.0 | 0.7334 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
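## Example usage (sketch) A minimal inference sketch. The sentence is an arbitrary Portuguese example, and given the precision/recall/F1 of 0.0 reported above, the pipeline may well return an empty entity list:

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="ricardo-filho/bert-base-portuguese-cased-finetuned-ner",
    aggregation_strategy="simple",  # group sub-word tokens into whole entities
)

# Arbitrary example sentence, not taken from the HAREM dataset.
print(ner("Maria mora em Lisboa e trabalha na Universidade do Porto."))
```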
sammy786/wav2vec2-xlsr-tatar
c6b788c09ae0d195e8ee66bf2ae119f80470bc71
2022-03-23T18:32:40.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "tt", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
sammy786
null
sammy786/wav2vec2-xlsr-tatar
17
null
transformers
9,026
--- language: - tt license: apache-2.0 tags: - automatic-speech-recognition - generated_from_trainer - hf-asr-leaderboard - model_for_talk - mozilla-foundation/common_voice_8_0 - robust-speech-event - tt datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: sammy786/wav2vec2-xlsr-tatar results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: tt metrics: - name: Test WER type: wer value: 16.87 - name: Test CER type: cer value: 3.64 --- # sammy786/wav2vec2-xlsr-tatar This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - tt dataset. It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged with the other and dev datasets): - Loss: 7.66 - Wer: 7.08 ## Model description "facebook/wav2vec2-xls-r-1b" was fine-tuned. ## Intended uses & limitations More information needed ## Training and evaluation data Training data - Common Voice Tatar train.tsv, dev.tsv and other.tsv ## Training procedure For creating the train dataset, all possible datasets were appended and a 90-10 split was used. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000045637994662983496 - train_batch_size: 16 - eval_batch_size: 16 - seed: 13 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Step | Training Loss | Validation Loss | Wer | |-------|---------------|-----------------|----------| | 200 | 4.849400 | 1.874908 | 0.995232 | | 400 | 1.105700 | 0.257292 | 0.367658 | | 600 | 0.723000 | 0.181150 | 0.250513 | | 800 | 0.660600 | 0.167009 | 0.226078 | | 1000 | 0.568000 | 0.135090 | 0.177339 | | 1200 | 0.721200 | 0.117469 | 0.166413 | | 1400 | 0.416300 | 0.115142 | 0.153765 | | 1600 | 0.346000 | 0.105782 | 0.153963 | | 1800 | 0.279700 | 0.102452 | 0.146149 | | 2000 | 0.273800 | 0.095818 | 0.128468 | | 2200 | 0.252900 | 0.102302 | 0.133766 | | 2400 | 0.255100 | 0.096592 | 0.121316 | | 2600 | 0.229600 | 0.091263 | 0.124561 | | 2800 | 0.213900 | 0.097748 | 0.125687 | | 3000 | 0.210700 | 0.091244 | 0.125422 | | 3200 | 0.202600 | 0.084076 | 0.106284 | | 3400 | 0.200900 | 0.093809 | 0.113238 | | 3600 | 0.192700 | 0.082918 | 0.108139 | | 3800 | 0.182000 | 0.084487 | 0.103371 | | 4000 | 0.167700 | 0.091847 | 0.104960 | | 4200 | 0.183700 | 0.085223 | 0.103040 | | 4400 | 0.174400 | 0.083862 | 0.100589 | | 4600 | 0.163100 | 0.086493 | 0.099728 | | 4800 | 0.162000 | 0.081734 | 0.097543 | | 5000 | 0.153600 | 0.077223 | 0.092974 | | 5200 | 0.153700 | 0.086217 | 0.090789 | | 5400 | 0.140200 | 0.093256 | 0.100457 | | 5600 | 0.142900 | 0.086903 | 0.097742 | | 5800 | 0.131400 | 0.083068 | 0.095225 | | 6000 | 0.126000 | 0.086642 | 0.091252 | | 6200 | 0.135300 | 0.083387 | 0.091186 | | 6400 | 0.126100 | 0.076479 | 0.086352 | | 6600 | 0.127100 | 0.077868 | 0.086153 | | 6800 | 0.118000 | 0.083878 | 0.087676 | | 7000 | 0.117600 | 0.085779 | 0.091054 | | 7200 | 0.113600 | 0.084197 | 0.084233 | | 7400 | 0.112000 | 0.078688 | 0.081319 | | 7600 | 0.110200 | 0.082534 | 0.086087 | | 7800 | 0.106400 | 0.077245 | 0.080988 | | 8000 | 0.102300 | 0.077497 | 0.079332 | | 8200 | 0.109500 | 0.079083 | 0.088339 | | 8400 | 0.095900
| 0.079721 | 0.077809 | | 8600 | 0.094700 | 0.079078 | 0.079730 | | 8800 | 0.097400 | 0.078785 | 0.079200 | | 9000 | 0.093200 | 0.077445 | 0.077015 | | 9200 | 0.088700 | 0.078207 | 0.076617 | | 9400 | 0.087200 | 0.078982 | 0.076485 | | 9600 | 0.089900 | 0.081209 | 0.076021 | | 9800 | 0.081900 | 0.078158 | 0.075757 | | 10000 | 0.080200 | 0.078074 | 0.074498 | | 10200 | 0.085000 | 0.078830 | 0.073373 | | 10400 | 0.080400 | 0.078144 | 0.073373 | | 10600 | 0.078200 | 0.077163 | 0.073902 | | 10800 | 0.080900 | 0.076394 | 0.072446 | | 11000 | 0.080700 | 0.075955 | 0.071585 | | 11200 | 0.076800 | 0.077031 | 0.072313 | | 11400 | 0.076300 | 0.077401 | 0.072777 | | 11600 | 0.076700 | 0.076613 | 0.071916 | | 11800 | 0.076000 | 0.076672 | 0.071916 | | 12000 | 0.077200 | 0.076490 | 0.070989 | | 12200 | 0.076200 | 0.076688 | 0.070856 | | 12400 | 0.074400 | 0.076780 | 0.071055 | | 12600 | 0.076300 | 0.076768 | 0.071320 | | 12800 | 0.077600 | 0.076727 | 0.071055 | | 13000 | 0.077700 | 0.076714 | 0.071254 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.10.3 #### Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id sammy786/wav2vec2-xlsr-tatar --dataset mozilla-foundation/common_voice_8_0 --config tt --split test ```
sebastian-hofstaetter/distilbert-dot-margin_mse-T2-msmarco
f094fd09201e305431b52570d2a9727edf64b394
2021-03-16T17:03:58.000Z
[ "pytorch", "distilbert", "feature-extraction", "en", "dataset:ms_marco", "arxiv:2010.02666", "transformers", "dpr", "dense-passage-retrieval", "knowledge-distillation" ]
feature-extraction
false
sebastian-hofstaetter
null
sebastian-hofstaetter/distilbert-dot-margin_mse-T2-msmarco
17
1
transformers
9,027
--- language: "en" tags: - dpr - dense-passage-retrieval - knowledge-distillation datasets: - ms_marco --- # Margin-MSE Trained DistilBert for Dense Passage Retrieval We provide a retrieval trained DistilBert-based model (we call the architecture BERT_Dot). Our model is trained with Margin-MSE using a 3 teacher BERT_Cat (concatenated BERT scoring) ensemble on MSMARCO-Passage. This instance can be used to **re-rank a candidate set** or **directly for a vector index based dense retrieval**. The architecture is a 6-layer DistilBERT, without architecture additions or modifications (we only change the weights during training) - to receive a query/passage representation we pool the CLS vector. We use the same BERT layers for both query and passage encoding (yields better results, and lowers memory requirements). If you want to know more about our simple, yet effective knowledge distillation method for efficient information retrieval models for a variety of student architectures that is used for this model instance check out our paper: https://arxiv.org/abs/2010.02666 🎉 For more information, training data, source code, and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/neural-ranking-kd ## Effectiveness on MSMARCO Passage & TREC-DL'19 We trained our model on the MSMARCO standard ("small"-400K query) training triples with knowledge distillation with a batch size of 32 on a single consumer-grade GPU (11GB memory). For re-ranking we used the top-1000 BM25 results. ### MSMARCO-DEV | | MRR@10 | NDCG@10 | Recall@1K | |----------------------------------|--------|---------|-----------------------------| | BM25 | .194 | .241 | .868 | | **Margin-MSE BERT_Dot** (Re-ranking) | .332 | .391 | .868 (from BM25 candidates) | | **Margin-MSE BERT_Dot** (Retrieval) | .323 | .381 | .957 | ### TREC-DL'19 For MRR and Recall we use the recommended binarization point of the graded relevance of 2. This might skew the results when compared to other binarization point numbers. | | MRR@10 | NDCG@10 | Recall@1K | |----------------------------------|--------|---------|-----------------------------| | BM25 | .689 | .501 | .739 | | **Margin-MSE BERT_Dot** (Re-ranking) | .862 | .712 | .739 (from BM25 candidates) | | **Margin-MSE BERT_Dot** (Retrieval) | .868 | .697 | .769 | For more baselines, info and analysis, please see the paper: https://arxiv.org/abs/2010.02666 ## Limitations & Bias - The model inherits social biases from both DistilBERT and MSMARCO. - The model is only trained on relatively short passages of MSMARCO (avg. 60 words length), so it might struggle with longer text. ## Citation If you use our model checkpoint please cite our work as: ``` @misc{hofstaetter2020_crossarchitecture_kd, title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation}, author={Sebastian Hofst{\"a}tter and Sophia Althammer and Michael Schr{\"o}der and Mete Sertkan and Allan Hanbury}, year={2020}, eprint={2010.02666}, archivePrefix={arXiv}, primaryClass={cs.IR} } ```
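## Example usage (sketch) A minimal sketch of the CLS-pooling, dot-product scoring described above; the query and passages are made-up examples, and the reference implementation lives in the linked GitHub repository:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "sebastian-hofstaetter/distilbert-dot-margin_mse-T2-msmarco"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

def encode(texts):
    # Same shared encoder for queries and passages; pool the CLS vector, as the card describes.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    return hidden[:, 0, :]  # CLS vector

query = encode(["what is dense passage retrieval"])
passages = encode([
    "Dense passage retrieval encodes queries and passages into one vector each.",  # made-up passage
    "The weather in Vienna is mild in spring.",                                    # made-up passage
])
print(query @ passages.T)  # dot-product relevance scores; higher means more relevant
```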
seongju/kor-3i4k-bert-base-cased
12c1152e20a9d293985fa077c90a723bd3257ff4
2021-07-20T07:58:11.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
seongju
null
seongju/kor-3i4k-bert-base-cased
17
null
transformers
9,028
### Model information * language : Korean * fine-tuning data : [kor_3i4k](https://huggingface.co/datasets/kor_3i4k) * License : CC-BY-SA 4.0 * Base model : [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) * input : sentence * output : intent ---- ### Train information * train_runtime: 2376.638 * train_steps_per_second: 2.175 * train_loss: 0.356829648599977 * epoch: 3.0 ---- ### How to use ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("seongju/kor-3i4k-bert-base-cased") model = AutoModelForSequenceClassification.from_pretrained("seongju/kor-3i4k-bert-base-cased") inputs = tokenizer( "너는 지금 무엇을 하고 있니?", padding=True, truncation=True, max_length=128, return_tensors="pt" ) outputs = model(**inputs) probs = outputs[0].softmax(1) # class probabilities output = probs.argmax().item() # predicted intent id ```
shamikbose89/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full
7a8420078c15eb05d48aa4a5cbb095c09a11779a
2021-11-19T17:54:25.000Z
[ "pytorch", "tensorboard", "mt5", "text2text-generation", "transformers", "generated_from_trainer", "summarization", "license:apache-2.0", "model-index", "autotrain_compatible" ]
summarization
false
shamikbose89
null
shamikbose89/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full
17
5
transformers
9,029
--- license: apache-2.0 tags: - generated_from_trainer - summarization metrics: - rouge model-index: - name: mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full This model is a fine-tuned version of [shamikbose89/mt5-small-finetuned-arxiv-cs](https://huggingface.co/shamikbose89/mt5-small-finetuned-arxiv-cs) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4037 - Rouge1: 39.8923 - Rouge2: 20.9831 - Rougel: 35.8642 - Rougelsum: 35.8511 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 1.9675 | 1.0 | 500 | 1.5573 | 36.4989 | 18.4839 | 33.2984 | 33.2917 | | 1.7523 | 2.0 | 1000 | 1.4972 | 37.7911 | 19.0357 | 33.5725 | 33.6058 | | 1.6611 | 3.0 | 1500 | 1.4593 | 38.5822 | 19.4928 | 34.215 | 34.2531 | | 1.6187 | 4.0 | 2000 | 1.4492 | 39.1219 | 20.8705 | 35.1969 | 35.2255 | | 1.5864 | 5.0 | 2500 | 1.4289 | 39.7304 | 21.0654 | 35.6602 | 35.6667 | | 1.5553 | 6.0 | 3000 | 1.4184 | 40.0696 | 21.0883 | 35.9536 | 35.9132 | | 1.5215 | 7.0 | 3500 | 1.4163 | 39.1956 | 20.6757 | 35.5016 | 35.5196 | | 1.5038 | 8.0 | 4000 | 1.4148 | 39.2373 | 20.3114 | 35.1676 | 35.1532 | | 1.4929 | 9.0 | 4500 | 1.4064 | 39.9249 | 21.0155 | 35.8247 | 35.7937 | | 1.4791 | 10.0 | 5000 | 1.4037 | 39.8923 | 20.9831 | 35.8642 | 35.8511 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
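## Example usage (sketch) A minimal inference sketch; the input abstract is a made-up placeholder and the generation lengths are illustrative choices:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="shamikbose89/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full",
)

# Placeholder abstract; substitute a real arXiv cs abstract.
abstract = (
    "We study the problem of training large language models under limited "
    "compute budgets and propose a simple scheduling heuristic that improves "
    "downstream accuracy across several benchmarks."
)
print(summarizer(abstract, max_length=64, min_length=10)[0]["summary_text"])
```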
shuqi/seed-encoder
f3ca2a12f7ac5921d0d9793ec9e9fb03e6f19aba
2021-09-18T11:24:50.000Z
[ "pytorch", "seed_encoder", "transformers" ]
null
false
shuqi
null
shuqi/seed-encoder
17
null
transformers
9,030
# Less is More: Pre-train a Strong Text Encoder for Dense Retrieval Using a Weak Decoder Please check the [official repository](https://github.com/microsoft/SEED-Encoder) for more details and updates. # Fine-tuning on Marco passage/doc ranking tasks and NQ tasks | MSMARCO Dev Passage Retrieval | MRR@10 | Recall@1k | |------------------------------|---------------|--------------------- | | BM25 warmup checkpoint | 0.329 | 0.953 | | ANCE Passage checkpoint | 0.334 | 0.961 | | MSMARCO Document Retrieval | MRR@10 (Dev) | MRR@10 (Eval) | |---------------- | -------------- | -------------- | | ANCE Document (FirstP) checkpoint | 0.394 | 0.362 | | NQ Task | Top-1 | Top-5 | Top-20 | Top-100 | MRR@20 | P@20 | |---------------- | -------------- | -------------- |-------------- | -------------- | -------------- |-------------- | | DPR checkpoint | 46.1 | 68.8 | 80.4 | 87.1 | 56.2 | 20.1 | | ANCE NQ checkpoint | 52.5 | 73.1 | 83.1 | 88.7 | 61.5 | 22.5 # Citation If you find SEED-Encoder useful for your work, please cite the following paper: ``` @article{lu2021less, title={Less is More: Pre-training a Strong Siamese Encoder Using a Weak Decoder}, author={Lu, Shuqi and Xiong, Chenyan and He, Di and Ke, Guolin and Malik, Waleed and Dou, Zhicheng and Bennett, Paul and Liu, Tieyan and Overwijk, Arnold}, journal={arXiv preprint arXiv:2102.09206}, year={2021} } ```
sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rureviews
4fedb6d31035d2017f4cb8e2758032035e93ffc1
2021-02-25T23:49:57.000Z
[ "pytorch", "mbart", "text-classification", "ru", "transformers", "sentiment analysis", "Russian" ]
text-classification
false
sismetanin
null
sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rureviews
17
null
transformers
9,031
--- language: - ru tags: - sentiment analysis - Russian --- ## MBARTRuSumGazeta-ru-sentiment-RuReviews MBARTRuSumGazeta-ru-sentiment-RuReviews is an [MBARTRuSumGazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) model fine-tuned on the [RuReviews dataset](https://github.com/sismetanin/rureviews) of Russian-language reviews from the ”Women’s Clothes and Accessories” product category on the primary e-commerce site in Russia. <table> <thead> <tr> <th rowspan="4">Model</th> <th rowspan="4">Score<br></th> <th rowspan="4">Rank</th> <th colspan="12">Dataset</th> </tr> <tr> <td colspan="6">SentiRuEval-2016<br></td> <td colspan="2" rowspan="2">RuSentiment</td> <td rowspan="2">KRND</td> <td rowspan="2">LINIS Crowd</td> <td rowspan="2">RuTweetCorp</td> <td rowspan="2">RuReviews</td> </tr> <tr> <td colspan="3">TC</td> <td colspan="3">Banks</td> </tr> <tr> <td>micro F1</td> <td>macro F1</td> <td>F1</td> <td>micro F1</td> <td>macro F1</td> <td>F1</td> <td>weighted</td> <td>F1</td> <td>F1</td> <td>F1</td> <td>F1</td> <td>F1</td> </tr> </thead> <tbody> <tr> <td>SOTA</td> <td>n/s</td> <td></td> <td>76.71</td> <td>66.40</td> <td>70.68</td> <td>67.51</td> <td>69.53</td> <td>74.06</td> <td>78.50</td> <td>n/s</td> <td>73.63</td> <td>60.51</td> <td>83.68</td> <td>77.44</td> </tr> <tr> <td>XLM-RoBERTa-Large</td> <td>76.37</td> <td>1</td> <td>82.26</td> <td>76.36</td> <td>79.42</td> <td>76.35</td> <td>76.08</td> <td>80.89</td> <td>78.31</td> <td>75.27</td> <td>75.17</td> <td>60.03</td> <td>88.91</td> <td>78.81</td> </tr> <tr> <td>SBERT-Large</td> <td>75.43</td> <td>2</td> <td>78.40</td> <td>71.36</td> <td>75.14</td> <td>72.39</td> <td>71.87</td> <td>77.72</td> <td>78.58</td> <td>75.85</td> <td>74.20</td> <td>60.64</td> <td>88.66</td> <td>77.41</td> </tr> <tr> <td>MBARTRuSumGazeta</td> <td>74.70</td> <td>3</td> <td>76.06</td> <td>68.95</td> <td>73.04</td> <td>72.34</td> <td>71.93</td> <td>77.83</td> <td>76.71</td> <td>73.56</td> <td>74.18</td> <td>60.54</td> <td>87.22</td> <td>77.51</td> </tr> <tr> <td>Conversational RuBERT</td> <td>74.44</td> <td>4</td> <td>76.69</td> <td>69.09</td> <td>73.11</td> <td>69.44</td> <td>68.68</td> <td>75.56</td> <td>77.31</td> <td>74.40</td> <td>73.10</td> <td>59.95</td> <td>87.86</td> <td>77.78</td> </tr> <tr> <td>LaBSE</td> <td>74.11</td> <td>5</td> <td>77.00</td> <td>69.19</td> <td>73.55</td> <td>70.34</td> <td>69.83</td> <td>76.38</td> <td>74.94</td> <td>70.84</td> <td>73.20</td> <td>59.52</td> <td>87.89</td> <td>78.47</td> </tr> <tr> <td>XLM-RoBERTa-Base</td> <td>73.60</td> <td>6</td> <td>76.35</td> <td>69.37</td> <td>73.42</td> <td>68.45</td> <td>67.45</td> <td>74.05</td> <td>74.26</td> <td>70.44</td> <td>71.40</td> <td>60.19</td> <td>87.90</td> <td>78.28</td> </tr> <tr> <td>RuBERT</td> <td>73.45</td> <td>7</td> <td>74.03</td> <td>66.14</td> <td>70.75</td> <td>66.46</td> <td>66.40</td> <td>73.37</td> <td>75.49</td> <td>71.86</td> <td>72.15</td> <td>60.55</td> <td>86.99</td> <td>77.41</td> </tr> <tr> <td>MBART-50-Large-Many-to-Many</td> <td>73.15</td> <td>8</td> <td>75.38</td> <td>67.81</td> <td>72.26</td> <td>67.13</td> <td>66.97</td> <td>73.85</td> <td>74.78</td> <td>70.98</td> <td>71.98</td> <td>59.20</td> <td>87.05</td> <td>77.24</td> </tr> <tr> <td>SlavicBERT</td> <td>71.96</td> <td>9</td> <td>71.45</td> <td>63.03</td> <td>68.44</td> <td>64.32</td> <td>63.99</td> <td>71.31</td> <td>72.13</td> <td>67.57</td> <td>72.54</td> <td>58.70</td> <td>86.43</td> <td>77.16</td> </tr> <tr> <td>EnRuDR-BERT</td> <td>71.51</td> <td>10</td> <td>72.56</td> <td>64.74</td> <td>69.07</td>
<td>61.44</td> <td>60.21</td> <td>68.34</td> <td>74.19</td> <td>69.94</td> <td>69.33</td> <td>56.55</td> <td>87.12</td> <td>77.95</td> </tr> <tr> <td>RuDR-BERT</td> <td>71.14</td> <td>11</td> <td>72.79</td> <td>64.23</td> <td>68.36</td> <td>61.86</td> <td>60.92</td> <td>68.48</td> <td>74.65</td> <td>70.63</td> <td>68.74</td> <td>54.45</td> <td>87.04</td> <td>77.91</td> </tr> <tr> <td>MBART-50-Large</td> <td>69.46</td> <td>12</td> <td>70.91</td> <td>62.67</td> <td>67.24</td> <td>61.12</td> <td>60.25</td> <td>68.41</td> <td>72.88</td> <td>68.63</td> <td>70.52</td> <td>46.39</td> <td>86.48</td> <td>77.52</td> </tr> </tbody> </table> The table shows per-task scores and a macro-average of those scores to determine a model’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark. ## Citation If you find this repository helpful, feel free to cite our publication: ``` @article{Smetanin2021Deep, author = {Sergey Smetanin and Mikhail Komarov}, title = {Deep transfer learning baselines for sentiment analysis in Russian}, journal = {Information Processing & Management}, volume = {58}, number = {3}, pages = {102484}, year = {2021}, issn = {0306-4573}, doi = {10.1016/j.ipm.2020.102484} } ``` Dataset: ``` @INPROCEEDINGS{Smetanin2019Sentiment, author={Sergey Smetanin and Michail Komarov}, booktitle={2019 IEEE 21st Conference on Business Informatics (CBI)}, title={Sentiment Analysis of Product Reviews in Russian using Convolutional Neural Networks}, year={2019}, volume={01}, pages={482-486}, doi={10.1109/CBI.2019.00062}, ISSN={2378-1963}, month={July} } ```
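## Example usage (sketch) A minimal inference sketch; the review is a made-up example, and since the mapping from label ids to sentiment classes is not documented on this card, treat the printed label as an id to be resolved against the model config:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rureviews",
)

# Made-up product review ("Great quality, fast delivery!").
print(classifier("Отличное качество, быстрая доставка!"))
```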
stanford-crfm/battlestar-gpt2-small-x49
2f4e2079c9ac92c2b5c6fecc19fae645bcef56fa
2022-06-20T09:04:32.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
stanford-crfm
null
stanford-crfm/battlestar-gpt2-small-x49
17
null
transformers
9,032
Entry not found
subbareddyiiit/TeElectra
5ec4c5d8a5fa681713005efc391e26e05726f0e6
2020-06-21T06:59:39.000Z
[ "pytorch", "electra", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
subbareddyiiit
null
subbareddyiiit/TeElectra
17
null
transformers
9,033
Entry not found
tals/albert-base-vitaminc_flagging
1e5f38d76c4d9402bf0c7d73e1aab6eaafca0ea8
2022-06-22T23:56:43.000Z
[ "pytorch", "albert", "text-classification", "python", "dataset:fever", "dataset:glue", "dataset:tals/vitaminc", "transformers" ]
text-classification
false
tals
null
tals/albert-base-vitaminc_flagging
17
null
transformers
9,034
--- language: en datasets: - fever - glue - tals/vitaminc --- # Details Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL '21). For more details see: https://github.com/TalSchuster/VitaminC When using this model, please cite the paper. # BibTeX entry and citation info ```bibtex @inproceedings{schuster-etal-2021-get, title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence", author = "Schuster, Tal and Fisch, Adam and Barzilay, Regina", booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jun, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.naacl-main.52", doi = "10.18653/v1/2021.naacl-main.52", pages = "624--643", abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.", } ```
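# Example usage (sketch) A minimal sentence-pair sketch of the revision-flagging setup; the before/after sentences are made-up, and the expected input format and label names are defined in the linked VitaminC repository rather than on this card:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tals/albert-base-vitaminc_flagging",
)

# Made-up before/after revision pair; see the VitaminC repo for the canonical format.
before = "The film grossed $100 million worldwide."
after = "The film grossed $150 million worldwide."
print(classifier({"text": before, "text_pair": after}))
```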
trnt/twitter_emotions
0fdc42320272eddfe43aa03670ac20c5028a7e9a
2021-11-20T04:31:53.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
trnt
null
trnt/twitter_emotions
17
1
transformers
9,035
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: twitter_emotions results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9375 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_emotions This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1647 - Accuracy: 0.9375 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2486 | 1.0 | 2000 | 0.2115 | 0.931 | | 0.135 | 2.0 | 4000 | 0.1725 | 0.936 | | 0.1041 | 3.0 | 6000 | 0.1647 | 0.9375 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
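## Example usage (sketch) A minimal inference sketch that scores all classes; the tweet is a made-up example, and the id-to-emotion mapping comes from the model config, which this card does not list:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "trnt/twitter_emotions"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("I can't believe I finally got the job!", return_tensors="pt")  # made-up tweet
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

for label_id, p in enumerate(probs.tolist()):
    # id2label is read from the model config; the card itself does not list the classes.
    print(model.config.id2label[label_id], round(p, 3))
```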
turtlesoupy/inverse-dictionary-model-v1
485568f794dce00946739bf86e31841623655087
2021-05-23T13:17:21.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
turtlesoupy
null
turtlesoupy/inverse-dictionary-model-v1
17
null
transformers
9,036
Entry not found
yhavinga/gpt-neo-1.3B-dutch
02db444ac45c0ed6dfebf10010eeab7fb3a1a0ae
2022-03-20T10:20:34.000Z
[ "pytorch", "jax", "tensorboard", "gpt_neo", "text-generation", "nl", "dataset:yhavinga/mc4_nl_cleaned", "transformers", "gpt-neo-1.3B", "gpt-neo" ]
text-generation
false
yhavinga
null
yhavinga/gpt-neo-1.3B-dutch
17
null
transformers
9,037
--- language: nl widget: - text: "In het jaar 2030 zullen we" - text: "Toen ik gisteren volledig in de ban was van" - text: "Studenten en leraren van de Bogazici Universiteit in de Turkse stad Istanbul" - text: "In Israël was een strenge lockdown" tags: - gpt-neo-1.3B - gpt-neo pipeline_tag: text-generation datasets: - yhavinga/mc4_nl_cleaned --- # GPT Neo 1.3B pre-trained on cleaned Dutch mC4 🇳🇱 A GPT-Neo model trained from scratch on Dutch, with perplexity 16.0 on cleaned Dutch mC4. ## How To Use You can use this GPT-Neo model directly with a pipeline for text generation. ```python MODEL_DIR='yhavinga/gpt-neo-1.3B-dutch' from transformers import pipeline, GPT2Tokenizer, GPTNeoForCausalLM tokenizer = GPT2Tokenizer.from_pretrained(MODEL_DIR) model = GPTNeoForCausalLM.from_pretrained(MODEL_DIR) generator = pipeline('text-generation', model, tokenizer=tokenizer) generated_text = generator('1 - geel. 2 - groen. 3 -', max_length=60, num_beams=4, no_repeat_ngram_size=3, repetition_penalty=2.0) ``` *"1 - geel. 2 - groen. 3 - rood. 4 - blauw. 5 - bruin. 6 - zwart. 7 - oranje. 8 - roze. 9 - paars. 10 - wit. 11 - grijs. 12 - magenta. 13 - lila. 14 - lichtgroen. 15"* ## Tokenizer * BPE tokenizer trained from scratch for Dutch on mC4 nl cleaned with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling). ## Dataset This model was trained on the `full` configuration (33B tokens) of [cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned), which is the original mC4, except * Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed * Sentences with less than 3 words are removed * Sentences with a word of more than 1000 characters are removed * Documents with less than 5 sentences are removed * Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies", "use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed. ## Models TL;DR: [yhavinga/gpt2-medium-dutch](https://huggingface.co/yhavinga/gpt2-medium-dutch) is the best model. * The models with `a`/`b` in the step-column have been trained to step `a` of a total of `b` steps.
| | model | params | train seq len | ppl | loss | batch size | epochs | steps | optim | lr | duration | config | |-----------------------------------------------------------------------------------|---------|--------|---------------|------|------|------------|--------|-----------------|-----------|--------|----------|-----------| | [yhavinga/gpt-neo-125M-dutch](https://huggingface.co/yhavinga/gpt-neo-125M-dutch) | gpt neo | 125M | 512 | 20.9 | 3.04 | 128 | 1 | 190000/558608 | adam | 2.4e-3 | 1d 12h | full | | [yhavinga/gpt2-medium-dutch](https://huggingface.co/yhavinga/gpt2-medium-dutch) | gpt2 | 345M | 512 | 15.1 | 2.71 | 128 | 1 | 320000/520502 | adam | 8e-4 | 7d 2h | full | | [yhavinga/gpt2-large-dutch](https://huggingface.co/yhavinga/gpt2-large-dutch) | gpt2 | 762M | 512 | 15.1 | 2.72 | 32 | 1 | 1100000/2082009 | adafactor | 3.3e-5 | 8d 15h | large | | [yhavinga/gpt-neo-1.3B-dutch](https://huggingface.co/yhavinga/gpt-neo-1.3B-dutch) | gpt neo | 1.3B | 512 | 16.0 | 2.77 | 16 | 1 | 960000/3049896 | adafactor | 5e-4 | 7d 11h | full | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was also instrumental in most, if not all, parts of the training. The following repositories were helpful in setting up the TPU-VM and training the models: * [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp) * [HuggingFace Flax MLM examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) * [gpt2-medium-persian](https://huggingface.co/flax-community/gpt2-medium-persian) * [gpt2-medium-indonesian](https://huggingface.co/flax-community/gpt2-medium-indonesian) Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
bookbot/distil-wav2vec2-adult-child-cls-37m
80548f793c175d52787f726302db721c6fd25bf8
2022-02-26T14:49:52.000Z
[ "pytorch", "tensorboard", "wav2vec2", "audio-classification", "en", "arxiv:2006.11477", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
audio-classification
false
bookbot
null
bookbot/distil-wav2vec2-adult-child-cls-37m
17
null
transformers
9,038
--- language: en license: apache-2.0 tags: - audio-classification - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distil-wav2vec2-adult-child-cls-37m results: [] --- # DistilWav2Vec2 Adult/Child Speech Classifier 37M DistilWav2Vec2 Adult/Child Speech Classifier is an audio classification model based on the [wav2vec 2.0](https://arxiv.org/abs/2006.11477) architecture. This model is a distilled version of [wav2vec2-adult-child-cls](https://huggingface.co/bookbot/wav2vec2-adult-child-cls) on a private adult/child speech classification dataset. This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard. ## Model | Model | #params | Arch. | Training/Validation data (text) | | ------------------------------------- | ------- | ----------- | ----------------------------------------- | | `distil-wav2vec2-adult-child-cls-37m` | 37M | wav2vec 2.0 | Adult/Child Speech Classification Dataset | ## Evaluation Results The model achieves the following results on evaluation: | Dataset | Loss | Accuracy | F1 | | --------------------------------- | ------ | -------- | ------ | | Adult/Child Speech Classification | 0.1431 | 95.89% | 0.9624 | ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - `learning_rate`: 3e-05 - `train_batch_size`: 32 - `eval_batch_size`: 32 - `seed`: 42 - `gradient_accumulation_steps`: 4 - `total_train_batch_size`: 128 - `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08` - `lr_scheduler_type`: linear - `lr_scheduler_warmup_ratio`: 0.1 - `num_epochs`: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | | :-----------: | :---: | :--: | :-------------: | :------: | :----: | | 0.2586 | 1.0 | 96 | 0.2257 | 0.9298 | 0.9363 | | 0.1917 | 2.0 | 192 | 0.1743 | 0.9460 | 0.9500 | | 0.1568 | 3.0 | 288 | 0.1701 | 0.9511 | 0.9545 | | 0.0965 | 4.0 | 384 | 0.1501 | 0.9548 | 0.9584 | | 0.1179 | 5.0 | 480 | 0.1431 | 0.9589 | 0.9624 | ## Disclaimer Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. ## Authors DistilWav2Vec2 Adult/Child Speech Classifier was trained and evaluated by [Ananto Joyoadikusumo](https://anantoj.github.io/). All computation and development are done on Kaggle. ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
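## Example usage (sketch) A minimal inference sketch; the audio path is a placeholder for a 16 kHz mono recording:

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="bookbot/distil-wav2vec2-adult-child-cls-37m",
)

# "speech.wav" is a placeholder path; wav2vec 2.0 models expect 16 kHz mono audio.
for pred in classifier("speech.wav"):
    print(pred["label"], round(pred["score"], 3))
```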
abdusahmbzuai/arabert-ner
d0903445372670c4402f3441ac3723c9dcfc5bc0
2022-03-01T15:53:14.000Z
[ "pytorch", "bert", "token-classification", "ar", "dataset:wikiann", "transformers", "ner", "classification", "autotrain_compatible" ]
token-classification
false
abdusahmbzuai
null
abdusahmbzuai/arabert-ner
17
1
transformers
9,039
--- pipeline_tag: token-classification language: ar datasets: - wikiann task_ids: - named-entity-recognition tags: - "ner" - "ar" - "classification" widget: - text: "كريستيانو رونالدو يلعب مع نادي يوفنتوس" example_title: "Sentence 1" - text: "تخرج أحمد من الجامعة الأمريكية في الشارقة الشهر الماضي" example_title: "Sentence 2" - text: "لا يزال ديبالا يلعب لفريق يوفنتوس" example_title: "Sentence 3" --- # Arabic NER
davanstrien/convnext_flyswot
ba93cdfc85a8cc69f491717f7f184a03cbca78d8
2022-03-01T20:47:35.000Z
[ "pytorch", "convnext", "image-classification", "dataset:image_folder", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
image-classification
false
davanstrien
null
davanstrien/convnext_flyswot
17
null
transformers
9,040
--- license: apache-2.0 tags: - generated_from_trainer datasets: - image_folder metrics: - f1 model-index: - name: convnext_flyswot results: - task: name: Image Classification type: image-classification dataset: name: image_folder type: image_folder args: default metrics: - name: F1 type: f1 value: 0.959245529738118 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext_flyswot This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext-base-224-22k) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.1441 - F1: 0.9592 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 666 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 52 | 0.6833 | 0.7484 | | No log | 2.0 | 104 | 0.3666 | 0.8750 | | No log | 3.0 | 156 | 0.2090 | 0.9321 | | No log | 4.0 | 208 | 0.1478 | 0.9449 | | No log | 5.0 | 260 | 0.1002 | 0.9518 | | No log | 6.0 | 312 | 0.1053 | 0.9506 | | No log | 7.0 | 364 | 0.1182 | 0.9616 | | No log | 8.0 | 416 | 0.1102 | 0.9592 | | No log | 9.0 | 468 | 0.1262 | 0.9616 | | 0.203 | 10.0 | 520 | 0.1286 | 0.9616 | | 0.203 | 11.0 | 572 | 0.1355 | 0.9592 | | 0.203 | 12.0 | 624 | 0.1299 | 0.9592 | | 0.203 | 13.0 | 676 | 0.1154 | 0.9592 | | 0.203 | 14.0 | 728 | 0.1385 | 0.9580 | | 0.203 | 15.0 | 780 | 0.1330 | 0.9592 | | 0.203 | 16.0 | 832 | 0.1390 | 0.9592 | | 0.203 | 17.0 | 884 | 0.1386 | 0.9592 | | 0.203 | 18.0 | 936 | 0.1390 | 0.9592 | | 0.203 | 19.0 | 988 | 0.1409 | 0.9592 | | 0.0006 | 20.0 | 1040 | 0.1411 | 0.9592 | | 0.0006 | 21.0 | 1092 | 0.1413 | 0.9592 | | 0.0006 | 22.0 | 1144 | 0.1415 | 0.9592 | | 0.0006 | 23.0 | 1196 | 0.1426 | 0.9592 | | 0.0006 | 24.0 | 1248 | 0.1435 | 0.9592 | | 0.0006 | 25.0 | 1300 | 0.1438 | 0.9592 | | 0.0006 | 26.0 | 1352 | 0.1434 | 0.9592 | | 0.0006 | 27.0 | 1404 | 0.1437 | 0.9592 | | 0.0006 | 28.0 | 1456 | 0.1441 | 0.9592 | | 0.0002 | 29.0 | 1508 | 0.1440 | 0.9592 | | 0.0002 | 30.0 | 1560 | 0.1441 | 0.9592 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
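## How to use No inference example is given on the card; a minimal sketch with the image-classification pipeline, where `page.jpg` stands in for any local image:

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="davanstrien/convnext_flyswot",
)

# "page.jpg" is a placeholder path; the pipeline resizes the image to the
# 224x224 input that this ConvNeXt checkpoint expects.
print(classifier("page.jpg"))
```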
davanstrien/flyswot_iiif
d8b0a089e42854c5c5f5129ecfc83a8285d45670
2022-03-02T07:59:30.000Z
[ "pytorch", "convnext", "image-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
image-classification
false
davanstrien
null
davanstrien/flyswot_iiif
17
null
transformers
9,041
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: flyswot_iiif results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flyswot_iiif This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext-base-224-22k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.1280 - F1: 0.0034 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 666 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 8.5184 | 0.26 | 500 | 7.9280 | 0.0005 | | 7.7409 | 0.52 | 1000 | 7.5824 | 0.0007 | | 7.4649 | 0.78 | 1500 | 7.3841 | 0.0010 | | 7.3285 | 1.04 | 2000 | 7.2652 | 0.0012 | | 7.1404 | 1.3 | 2500 | 7.1559 | 0.0014 | | 7.0322 | 1.56 | 3000 | 7.0551 | 0.0016 | | 6.9197 | 1.82 | 3500 | 6.9449 | 0.0019 | | 6.7822 | 2.09 | 4000 | 6.8773 | 0.0018 | | 6.6506 | 2.35 | 4500 | 6.7980 | 0.0020 | | 6.5811 | 2.61 | 5000 | 6.7382 | 0.0022 | | 6.538 | 2.87 | 5500 | 6.6582 | 0.0022 | | 6.4136 | 3.13 | 6000 | 6.6013 | 0.0024 | | 6.3325 | 3.39 | 6500 | 6.5369 | 0.0024 | | 6.2566 | 3.65 | 7000 | 6.4875 | 0.0025 | | 6.2285 | 3.91 | 7500 | 6.4342 | 0.0027 | | 6.1281 | 4.17 | 8000 | 6.4066 | 0.0027 | | 6.0762 | 4.43 | 8500 | 6.3674 | 0.0027 | | 6.0309 | 4.69 | 9000 | 6.3336 | 0.0027 | | 6.0123 | 4.95 | 9500 | 6.2932 | 0.0030 | | 5.9089 | 5.21 | 10000 | 6.2835 | 0.0029 | | 5.8901 | 5.47 | 10500 | 6.2481 | 0.0030 | | 5.86 | 5.74 | 11000 | 6.2295 | 0.0030 | | 5.8586 | 6.0 | 11500 | 6.2068 | 0.0033 | | 5.7768 | 6.26 | 12000 | 6.1937 | 0.0031 | | 5.7591 | 6.52 | 12500 | 6.1916 | 0.0032 | | 5.7443 | 6.78 | 13000 | 6.1579 | 0.0033 | | 5.7125 | 7.04 | 13500 | 6.1478 | 0.0033 | | 5.6751 | 7.3 | 14000 | 6.1379 | 0.0035 | | 5.6648 | 7.56 | 14500 | 6.1304 | 0.0035 | | 5.6644 | 7.82 | 15000 | 6.1280 | 0.0034 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch-tweak-lr-8-100-1
c0b4d0d486b0ffe8c8cf79ecf7001bb7a2090794
2022-03-08T16:43:01.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "transformers", "summarization", "generated_from_trainer", "model-index", "autotrain_compatible" ]
summarization
false
Ameer05
null
Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch-tweak-lr-8-100-1
17
null
transformers
9,042
--- tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch-tweak-lr-8-100-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch-tweak-lr-8-100-1 This model is a fine-tuned version of [Ameer05/model-token-repo](https://huggingface.co/Ameer05/model-token-repo) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.6315 - Rouge1: 61.441 - Rouge2: 52.9403 - Rougel: 58.3426 - Rougelsum: 60.8249 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | No log | 0.91 | 5 | 2.0139 | 53.4301 | 46.6698 | 50.644 | 53.3985 | | No log | 1.91 | 10 | 1.6309 | 61.4629 | 53.8884 | 59.0867 | 60.8823 | | No log | 2.91 | 15 | 1.5379 | 61.2938 | 53.7208 | 59.0644 | 60.7381 | | No log | 3.91 | 20 | 1.4470 | 63.2667 | 55.9273 | 60.5112 | 62.7538 | | 1.5454 | 4.91 | 25 | 1.4353 | 62.7166 | 54.8328 | 60.0101 | 62.1378 | | 1.5454 | 5.91 | 30 | 1.4411 | 59.7469 | 51.9068 | 57.036 | 58.9474 | | 1.5454 | 6.91 | 35 | 1.5195 | 64.152 | 57.1447 | 61.362 | 63.5951 | | 1.5454 | 7.91 | 40 | 1.6174 | 60.1464 | 51.5654 | 57.1676 | 59.4405 | | 0.5429 | 8.91 | 45 | 1.7451 | 61.9696 | 53.6421 | 58.5884 | 61.3286 | | 0.5429 | 9.91 | 50 | 1.9081 | 60.3296 | 52.3052 | 57.6518 | 59.7854 | | 0.5429 | 10.91 | 55 | 1.9721 | 61.5597 | 51.9027 | 57.1184 | 60.6717 | | 0.5429 | 11.91 | 60 | 2.0471 | 61.2222 | 53.9475 | 58.725 | 60.6668 | | 0.5429 | 12.91 | 65 | 2.1422 | 60.1915 | 52.0627 | 56.9955 | 59.438 | | 0.1506 | 13.91 | 70 | 2.1542 | 61.6915 | 53.045 | 58.1727 | 60.8765 | | 0.1506 | 14.91 | 75 | 2.1885 | 59.8069 | 51.6543 | 56.8112 | 59.2055 | | 0.1506 | 15.91 | 80 | 2.3146 | 61.695 | 53.2666 | 57.9003 | 61.1108 | | 0.1506 | 16.91 | 85 | 2.3147 | 60.4482 | 52.1694 | 57.0649 | 59.7882 | | 0.0452 | 17.91 | 90 | 2.1731 | 60.0259 | 51.5046 | 56.7399 | 59.2955 | | 0.0452 | 18.91 | 95 | 2.2690 | 60.0534 | 52.4819 | 57.1631 | 59.5056 | | 0.0452 | 19.91 | 100 | 2.2990 | 58.0737 | 48.8098 | 54.5684 | 57.3187 | | 0.0452 | 20.91 | 105 | 2.2704 | 61.8982 | 53.9077 | 58.6909 | 61.4252 | | 0.0267 | 21.91 | 110 | 2.3012 | 62.0174 | 53.5427 | 58.5278 | 61.1921 | | 0.0267 | 22.91 | 115 | 2.3569 | 61.6327 | 53.7387 | 58.8908 | 61.1623 | | 0.0267 | 23.91 | 120 | 2.3579 | 60.228 | 52.3747 | 58.1448 | 59.7322 | | 0.0267 | 24.91 | 125 | 2.3389 | 60.4902 | 51.7935 | 57.0689 | 59.7132 | | 0.0267 | 25.91 | 130 | 2.3168 | 58.8469 | 50.3181 | 55.7386 | 58.3598 | | 0.0211 | 26.91 | 135 | 2.4147 | 59.4225 | 50.8405 | 56.503 | 58.7221 | | 0.0211 | 27.91 | 140 | 2.3631 | 59.7489 | 51.2137 | 57.3204 | 59.3348 | | 0.0211 | 28.91 | 145 | 
2.3850 | 60.1718 | 51.4176 | 57.2152 | 59.5157 | | 0.0211 | 29.91 | 150 | 2.4610 | 60.1433 | 51.433 | 56.6256 | 59.3265 | | 0.0175 | 30.91 | 155 | 2.4400 | 58.8345 | 49.7031 | 55.3079 | 57.9236 | | 0.0175 | 31.91 | 160 | 2.4506 | 59.209 | 50.1626 | 55.6451 | 58.5791 | | 0.0175 | 32.91 | 165 | 2.4316 | 59.7713 | 50.8999 | 56.4235 | 58.9845 | | 0.0175 | 33.91 | 170 | 2.2781 | 60.1822 | 51.9435 | 57.4586 | 59.6766 | | 0.0175 | 34.91 | 175 | 2.3849 | 58.2328 | 49.2106 | 55.1516 | 57.5072 | | 0.0141 | 35.91 | 180 | 2.4872 | 58.4916 | 50.3345 | 55.5991 | 58.1131 | | 0.0141 | 36.91 | 185 | 2.4883 | 59.0957 | 49.76 | 55.3567 | 58.076 | | 0.0141 | 37.91 | 190 | 2.4327 | 58.091 | 48.8628 | 54.8678 | 57.5406 | | 0.0141 | 38.91 | 195 | 2.4998 | 57.7428 | 48.7366 | 54.2166 | 56.7643 | | 0.0089 | 39.91 | 200 | 2.4107 | 60.1662 | 51.9832 | 57.1372 | 59.6989 | | 0.0089 | 40.91 | 205 | 2.4700 | 58.2159 | 49.3934 | 54.9265 | 57.4126 | | 0.0089 | 41.91 | 210 | 2.4833 | 58.7434 | 49.6619 | 55.5239 | 57.9562 | | 0.0089 | 42.91 | 215 | 2.4703 | 60.2984 | 51.3168 | 56.9082 | 59.3958 | | 0.0062 | 43.91 | 220 | 2.5306 | 60.5455 | 52.1189 | 57.3213 | 60.0232 | | 0.0062 | 44.91 | 225 | 2.5181 | 60.2149 | 51.2187 | 56.1935 | 59.3471 | | 0.0062 | 45.91 | 230 | 2.4871 | 59.8013 | 51.6114 | 56.0911 | 59.0902 | | 0.0062 | 46.91 | 235 | 2.4811 | 58.0271 | 48.9441 | 54.3108 | 57.3647 | | 0.0062 | 47.91 | 240 | 2.5290 | 62.5087 | 54.6149 | 59.638 | 62.0455 | | 0.0072 | 48.91 | 245 | 2.5194 | 58.7193 | 49.9679 | 55.6517 | 58.1569 | | 0.0072 | 49.91 | 250 | 2.5708 | 58.4626 | 49.5257 | 54.5032 | 58.1413 | | 0.0072 | 50.91 | 255 | 2.6449 | 58.446 | 49.4625 | 55.1092 | 58.03 | | 0.0072 | 51.91 | 260 | 2.5592 | 58.859 | 49.4398 | 55.1503 | 57.9663 | | 0.0056 | 52.91 | 265 | 2.5086 | 59.7322 | 51.3051 | 56.5401 | 59.2726 | | 0.0056 | 53.91 | 270 | 2.4846 | 57.8603 | 48.2408 | 54.3847 | 57.115 | | 0.0056 | 54.91 | 275 | 2.5509 | 58.9506 | 50.045 | 55.6658 | 58.3618 | | 0.0056 | 55.91 | 280 | 2.5032 | 60.2524 | 51.8167 | 56.98 | 59.7506 | | 0.0056 | 56.91 | 285 | 2.5012 | 60.0596 | 51.4924 | 56.7181 | 59.5037 | | 0.0054 | 57.91 | 290 | 2.5176 | 61.0622 | 52.6235 | 57.9317 | 60.5036 | | 0.0054 | 58.91 | 295 | 2.5024 | 62.9246 | 54.8544 | 59.9824 | 62.5584 | | 0.0054 | 59.91 | 300 | 2.5687 | 62.2602 | 53.9673 | 58.9862 | 61.5837 | | 0.0054 | 60.91 | 305 | 2.5890 | 62.5706 | 54.227 | 59.2032 | 62.125 | | 0.0036 | 61.91 | 310 | 2.5454 | 62.1565 | 53.2585 | 58.7169 | 61.3943 | | 0.0036 | 62.91 | 315 | 2.5629 | 62.8292 | 54.6781 | 59.9889 | 62.254 | | 0.0036 | 63.91 | 320 | 2.5581 | 58.8394 | 50.4421 | 56.0742 | 58.1945 | | 0.0036 | 64.91 | 325 | 2.5532 | 59.5814 | 51.1335 | 56.5841 | 59.196 | | 0.0031 | 65.91 | 330 | 2.5826 | 59.0485 | 50.3992 | 55.5283 | 58.3757 | | 0.0031 | 66.91 | 335 | 2.5815 | 61.4832 | 52.7977 | 57.7351 | 60.9888 | | 0.0031 | 67.91 | 340 | 2.5865 | 61.7836 | 53.6797 | 58.6743 | 61.3765 | | 0.0031 | 68.91 | 345 | 2.6007 | 61.2253 | 52.8781 | 57.7006 | 60.717 | | 0.0031 | 69.91 | 350 | 2.6210 | 60.717 | 52.4933 | 57.5089 | 60.4196 | | 0.0035 | 70.91 | 355 | 2.6169 | 61.3491 | 53.3932 | 58.2288 | 60.8793 | | 0.0035 | 71.91 | 360 | 2.6025 | 62.0101 | 54.0289 | 59.0822 | 61.7202 | | 0.0035 | 72.91 | 365 | 2.5705 | 61.2227 | 52.9937 | 58.2493 | 60.6631 | | 0.0035 | 73.91 | 370 | 2.5623 | 59.1718 | 50.7827 | 56.1851 | 58.7118 | | 0.002 | 74.91 | 375 | 2.5536 | 58.4201 | 49.6923 | 55.0398 | 57.7707 | | 0.002 | 75.91 | 380 | 2.5478 | 60.2307 | 51.7503 | 57.3173 | 59.692 | | 0.002 | 76.91 | 385 | 2.6039 | 58.7637 | 49.741 | 
55.5341 | 58.0784 | | 0.002 | 77.91 | 390 | 2.6371 | 59.3929 | 50.6444 | 55.9887 | 58.813 | | 0.002 | 78.91 | 395 | 2.6238 | 59.0572 | 50.605 | 55.6631 | 58.4366 | | 0.0019 | 79.91 | 400 | 2.5783 | 57.9852 | 49.2588 | 54.822 | 57.4643 | | 0.0019 | 80.91 | 405 | 2.5982 | 58.0218 | 49.1651 | 54.9876 | 57.4066 | | 0.0019 | 81.91 | 410 | 2.6141 | 60.3133 | 51.5723 | 56.9476 | 59.715 | | 0.0019 | 82.91 | 415 | 2.5904 | 60.8199 | 51.8956 | 58.406 | 60.323 | | 0.0017 | 83.91 | 420 | 2.5718 | 60.3449 | 51.1433 | 57.6984 | 59.7513 | | 0.0017 | 84.91 | 425 | 2.5737 | 60.151 | 51.1986 | 57.3376 | 59.378 | | 0.0017 | 85.91 | 430 | 2.5807 | 60.9273 | 52.2469 | 58.2038 | 60.1642 | | 0.0017 | 86.91 | 435 | 2.5900 | 60.1846 | 51.6144 | 57.5407 | 59.5109 | | 0.0011 | 87.91 | 440 | 2.6066 | 62.0776 | 53.6022 | 59.157 | 61.6201 | | 0.0011 | 88.91 | 445 | 2.6231 | 61.8822 | 53.5232 | 58.965 | 61.401 | | 0.0011 | 89.91 | 450 | 2.6273 | 60.3358 | 51.9941 | 57.3823 | 59.7729 | | 0.0011 | 90.91 | 455 | 2.6194 | 60.0196 | 51.6134 | 57.1357 | 59.4594 | | 0.0011 | 91.91 | 460 | 2.6118 | 60.6898 | 52.1328 | 57.3076 | 60.0351 | | 0.0015 | 92.91 | 465 | 2.6032 | 61.2119 | 52.5034 | 57.8098 | 60.6634 | | 0.0015 | 93.91 | 470 | 2.6040 | 61.4812 | 52.8197 | 57.9668 | 60.8767 | | 0.0015 | 94.91 | 475 | 2.6158 | 61.4046 | 52.8905 | 57.8958 | 60.804 | | 0.0015 | 95.91 | 480 | 2.6280 | 62.1764 | 53.8521 | 58.8608 | 61.6138 | | 0.0012 | 96.91 | 485 | 2.6304 | 62.2028 | 53.8967 | 58.8976 | 61.6409 | | 0.0012 | 97.91 | 490 | 2.6328 | 61.7371 | 53.3908 | 58.4107 | 61.1382 | | 0.0012 | 98.91 | 495 | 2.6331 | 61.441 | 52.9403 | 58.3426 | 60.8249 | | 0.0012 | 99.91 | 500 | 2.6315 | 61.441 | 52.9403 | 58.3426 | 60.8249 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.10.3
AlekseyKorshuk/bert-finetuned-ner
58198745f8dd6219a7303702eaa3596570465bab
2022-03-08T14:27:56.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "dataset:wnut_17", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
AlekseyKorshuk
null
AlekseyKorshuk/bert-finetuned-ner
17
null
transformers
9,043
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wnut_17 model-index: - name: bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the wnut_17 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 425 | 0.3961 | 0.5707 | 0.2847 | 0.3799 | 0.9058 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
AmrSheta/Meme
ab6b8aaabee48905907041dc1595f954d9e17b02
2022-03-12T20:50:10.000Z
[ "pytorch", "bert", "feature-extraction", "transformers", "text-classification" ]
text-classification
false
AmrSheta
null
AmrSheta/Meme
17
null
transformers
9,044
--- tags: - text-classification --- # Meme description classification
facebook/m2m100-12B-avg-5-ckpt
a8f832018c8e51e3db1652e7ae9652664a1e4647
2022-05-26T22:26:32.000Z
[ "pytorch", "m2m_100", "text2text-generation", "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "arxiv:2010.11125", "transformers", "m2m100-12B", "license:mit", "autotrain_compatible" ]
text2text-generation
false
facebook
null
facebook/m2m100-12B-avg-5-ckpt
17
null
transformers
9,045
--- language: - multilingual - af - am - ar - ast - az - ba - be - bg - bn - br - bs - ca - ceb - cs - cy - da - de - el - en - es - et - fa - ff - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - ht - hu - hy - id - ig - ilo - is - it - ja - jv - ka - kk - km - kn - ko - lb - lg - ln - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - ns - oc - or - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - so - sq - sr - ss - su - sv - sw - ta - th - tl - tn - tr - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu license: mit tags: - m2m100-12B --- # M2M100 12B (average of last 5 checkpoints) M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository. The model can directly translate between the 9,900 directions of 100 languages. To translate into a target language, the target language id is forced as the first generated token; to do so, pass the `forced_bos_token_id` parameter to the `generate` method. *Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.* To install `sentencepiece`, run `pip install sentencepiece`. ```python from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।" chinese_text = "生活就像一盒巧克力。" model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100-12B-avg-5-ckpt") tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100-12B-avg-5-ckpt") # translate Hindi to French tokenizer.src_lang = "hi" encoded_hi = tokenizer(hi_text, return_tensors="pt") generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "La vie est comme une boîte de chocolat." # translate Chinese to English tokenizer.src_lang = "zh" encoded_zh = tokenizer(chinese_text, return_tensors="pt") generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "Life is like a box of chocolate." ``` See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions. 
## Languages covered Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu) ## BibTeX entry and citation info ``` @misc{fan2020englishcentric, title={Beyond English-Centric Multilingual Machine Translation}, author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin}, year={2020}, eprint={2010.11125}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
saattrupdan/job-listing-relevance-model
3751de206442b9b400d6660d7da787a74aba09c2
2022-03-22T19:51:07.000Z
[ "pytorch", "xlm-roberta", "text-classification", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
saattrupdan
null
saattrupdan/job-listing-relevance-model
17
null
transformers
9,046
--- license: mit tags: - generated_from_trainer model-index: - name: job-listing-relevance-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # job-listing-relevance-model This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on a private job-listing relevance dataset (listed as "None" by the Trainer). It achieves the following results on the evaluation set: - Loss: 0.1649 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.7435 | 0.43 | 50 | 0.6889 | | 0.3222 | 0.87 | 100 | 0.2906 | | 0.2573 | 1.3 | 150 | 0.1937 | | 0.1205 | 1.74 | 200 | 0.1411 | | 0.1586 | 2.17 | 250 | 0.2008 | | 0.0755 | 2.61 | 300 | 0.1926 | | 0.062 | 3.04 | 350 | 0.2257 | | 0.0644 | 3.48 | 400 | 0.1497 | | 0.1034 | 3.91 | 450 | 0.1561 | | 0.008 | 4.35 | 500 | 0.2067 | | 0.0616 | 4.78 | 550 | 0.2067 | | 0.0766 | 5.22 | 600 | 0.1494 | | 0.0029 | 5.65 | 650 | 0.2078 | | 0.1076 | 6.09 | 700 | 0.1669 | | 0.0025 | 6.52 | 750 | 0.1564 | | 0.0498 | 6.95 | 800 | 0.2355 | | 0.0011 | 7.39 | 850 | 0.1652 | | 0.0271 | 7.82 | 900 | 0.1731 | | 0.012 | 8.26 | 950 | 0.1590 | | 0.0257 | 8.69 | 1000 | 0.1638 | | 0.0009 | 9.13 | 1050 | 0.1851 | | 0.0013 | 9.56 | 1100 | 0.1613 | | 0.0015 | 10.0 | 1150 | 0.1649 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
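## How to use The card does not show inference code; a minimal sketch with the text-classification pipeline — the input sentence is invented, and the label names are not documented, so expect generic `LABEL_0`/`LABEL_1` ids in the output:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="saattrupdan/job-listing-relevance-model",
)

# Hypothetical job-listing snippet; the card does not document the label names.
print(classifier("We are hiring a senior data engineer for our Copenhagen office."))
```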
RomanEnikeev/distilbert-base-uncased-finetuned-cola
f8049e8669ceb20d8a2282e612b3229840074d7a
2022-03-25T09:13:46.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
RomanEnikeev
null
RomanEnikeev/distilbert-base-uncased-finetuned-cola
17
0
transformers
9,047
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5670814703238499 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8265 - Matthews Correlation: 0.5671 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5216 | 1.0 | 535 | 0.5536 | 0.4041 | | 0.3481 | 2.0 | 1070 | 0.5242 | 0.5206 | | 0.2372 | 3.0 | 1605 | 0.6162 | 0.5311 | | 0.1701 | 4.0 | 2140 | 0.7704 | 0.5461 | | 0.1304 | 5.0 | 2675 | 0.8265 | 0.5671 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
l3cube-pune/hing-mbert-mixed
865aa54a29dbb68d074172807e17dda68dc7ecde
2022-06-26T15:12:05.000Z
[ "pytorch", "bert", "fill-mask", "hi", "en", "dataset:L3Cube-HingCorpus", "arxiv:2204.08398", "transformers", "codemix", "license:cc-by-4.0", "autotrain_compatible" ]
fill-mask
false
l3cube-pune
null
l3cube-pune/hing-mbert-mixed
17
null
transformers
9,048
--- license: cc-by-4.0 language: - hi - en tags: - hi - en - codemix datasets: - L3Cube-HingCorpus --- ## HingBERT-Mixed HingBERT-Mixed is a Hindi-English code-mixed BERT model trained on roman + devanagari text. It is a base BERT model fine-tuned on the mixed-script L3Cube-HingCorpus. <br> [Dataset link](https://github.com/l3cube-pune/code-mixed-nlp) More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398). ``` @InProceedings{nayak-joshi:2022:WILDRE6, author = {Nayak, Ravindra and Joshi, Raviraj}, title = {L3Cube-HingCorpus and HingBERT: A Code Mixed Hindi-English Dataset and BERT Language Models}, booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference}, month = {June}, year = {2022}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {7--12} } ```
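## How to use A minimal fill-mask sketch; the romanized code-mixed prompt is invented for illustration, and `[MASK]` is assumed to be the tokenizer's mask token (standard for BERT-family models):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="l3cube-pune/hing-mbert-mixed")

# Hypothetical romanized Hindi-English prompt, roughly:
# "I found this movie very [MASK]".
print(fill_mask("mujhe yeh movie bahut [MASK] lagi"))
```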
Graphcore/lxmert-gqa-uncased
7827f5b7093dd9ef2119df8ab3a512526cdffe68
2022-05-25T18:28:12.000Z
[ "pytorch", "lxmert", "question-answering", "dataset:Graphcore/gqa-lxmert", "arxiv:1908.07490", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
Graphcore
null
Graphcore/lxmert-gqa-uncased
17
null
transformers
9,049
--- license: apache-2.0 tags: - generated_from_trainer datasets: - Graphcore/gqa-lxmert metrics: - accuracy model-index: - name: gqa results: - task: name: Question Answering type: question-answering dataset: name: Graphcore/gqa-lxmert type: Graphcore/gqa-lxmert args: gqa metrics: - name: Accuracy type: accuracy value: 0.5933514030612245 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Graphcore/lxmert-gqa-uncased BERT (Bidirectional Encoder Representations from Transformers) is a transformers model designed to pretrain bidirectional representations from unlabeled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and Masked LM. It was trained with two pretraining objectives: masked language modeling (MLM) and next sentence prediction (NSP). First, MLM is different from traditional LMs, which see the words one after another, in that BERT allows the model to learn a bidirectional representation. In addition to MLM, NSP is used to jointly pretrain text-pair representations. The pre-trained representation reduces the need for heavy engineering of task-specific architectures, and BERT achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks. ## Model description LXMERT is a transformer model for learning vision-and-language cross-modality representations. It has three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. It is pretrained via a combination of masked language modelling, visual-language text alignment, ROI-feature regression, masked visual-attribute modeling, masked visual-object modelling, and visual-question answering objectives. It achieves state-of-the-art results on VQA and GQA. Paper link: [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/pdf/1908.07490.pdf) ## Intended uses & limitations This model is a fine-tuned version of [unc-nlp/lxmert-base-uncased](https://huggingface.co/unc-nlp/lxmert-base-uncased) on the [Graphcore/gqa-lxmert](https://huggingface.co/datasets/Graphcore/gqa-lxmert) dataset. It achieves the following results on the evaluation set: - Loss: 1.9326 - Accuracy: 0.5934 ## Training and evaluation data - [Graphcore/gqa-lxmert](https://huggingface.co/datasets/Graphcore/gqa-lxmert) dataset ## Training procedure Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore). 
Command line: ``` python examples/question-answering/run_vqa.py \ --model_name_or_path unc-nlp/lxmert-base-uncased \ --ipu_config_name Graphcore/lxmert-base-ipu \ --dataset_name Graphcore/gqa-lxmert \ --do_train \ --do_eval \ --max_seq_length 512 \ --per_device_train_batch_size 1 \ --num_train_epochs 4 \ --dataloader_num_workers 64 \ --logging_steps 5 \ --learning_rate 1e-5 \ --lr_scheduler_type linear \ --loss_scaling 16384 \ --weight_decay 0.01 \ --warmup_ratio 0.1 \ --output_dir /tmp/gqa/ \ --dataloader_drop_last \ --replace_qa_head \ --pod_type pod16 ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: IPU - total_train_batch_size: 64 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4.0 - training precision: Mixed Precision ### Training results ``` ***** train metrics ***** "epoch": 4.0, "train_loss": 0.6123406731570221, "train_runtime": 29986.2288, "train_samples": 943000, "train_samples_per_second": 125.791, "train_steps_per_second": 1.965 ***** eval metrics ***** "eval_accuracy": 0.5933514030612245, "eval_loss": 1.9326171875, "eval_samples": 12576, ``` ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.0+cpu - Datasets 2.0.0 - Tokenizers 0.11.6
IIC/roberta-base-bne-ranker
8ee5133c03047e93559dfbfd6f2122045e91e8c3
2022-04-02T15:04:54.000Z
[ "pytorch", "roberta", "text-classification", "es", "dataset:IIC/msmarco_es", "transformers", "sentence similarity", "passage reranking", "model-index" ]
text-classification
false
IIC
null
IIC/roberta-base-bne-ranker
17
null
transformers
9,050
--- language: - es tags: - sentence similarity # Example: audio - passage reranking # Example: automatic-speech-recognition datasets: - IIC/msmarco_es metrics: - eval_MRR@10: 0.688 model-index: - name: roberta-base-bne-ranker results: - task: type: text similarity # Required. Example: automatic-speech-recognition name: text similarity # Optional. Example: Speech Recognition dataset: type: IIC/msmarco_es # Required. Example: common_voice. Use dataset id from https://hf.co/datasets name: IIC/msmarco_es # Required. Example: Common Voice zh-CN args: es # Optional. Example: zh-CN metrics: - type: MRR@10 value: 0.688 name: eval_MRR@10 --- This is a cross-encoder model that ranks candidate passages by their relevance to a query. It is trained on an [automatically translated version of MS Marco](https://huggingface.co/datasets/IIC/msmarco_es). After some experiments, the best configuration was to train for 2 epochs with a learning rate of 2e-5 and a batch size of 32. Example of use: ```python from sentence_transformers import CrossEncoder model = CrossEncoder("IIC/roberta-base-bne-ranker", device="cpu") question = "¿Cómo se llama el rey?" contexts = ["Me encanta la canción de el rey", "Cuando el rey fue a Sevilla, perdió su silla", "El rey se llama Juan Carlos y es conocido por sus escándalos"] similarity_scores = model.predict([[question, context] for context in contexts]) ``` ### Contributions Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model.
Meowren/MichaelScottBott
d48033535b3d403e3a55b76c3323f38588441195
2022-05-16T16:03:13.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Meowren
null
Meowren/MichaelScottBott
17
null
transformers
9,051
--- tags: - conversational --- # Michael Scott DialoGPT Model
nielsr/convnext-tiny-finetuned-eurostat
f836aee3c8bc4e7424702ed00d2b8343bd0dbf21
2022-04-04T19:25:58.000Z
[ "pytorch", "convnext", "image-classification", "dataset:eurosat", "transformers", "license:apache-2.0" ]
image-classification
false
nielsr
null
nielsr/convnext-tiny-finetuned-eurostat
17
null
transformers
9,052
--- license: apache-2.0 datasets: - eurosat widget: - src: forest.png example_title: Forest --- # ConvNext fine-tuned on Eurosat This model is a `facebook/convnext-tiny-224` model fine-tuned on the [Eurosat dataset](https://github.com/phelber/EuroSAT).
Intel/bert-base-uncased-mrpc-int8-qat
54b72a05d03d7085c951b861c3d546cfe5de354a
2022-06-10T02:43:22.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:mrpc", "transformers", "text-classfication", "int8", "Intel® Neural Compressor", "QuantizationAwareTraining", "license:apache-2.0" ]
text-classification
false
Intel
null
Intel/bert-base-uncased-mrpc-int8-qat
17
null
transformers
9,053
--- language: en license: apache-2.0 tags: - text-classfication - int8 - Intel® Neural Compressor - QuantizationAwareTraining datasets: - mrpc metrics: - f1 --- # INT8 BERT base uncased finetuned MRPC ### QuantizationAwareTraining This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor). The original fp32 model comes from the fine-tuned model [Intel/bert-base-uncased-mrpc](https://huggingface.co/Intel/bert-base-uncased-mrpc). ### Test result | |INT8|FP32| |---|:---:|:---:| | **Accuracy (eval-f1)** |0.9142|0.9042| | **Model size (MB)** |107|418| ### Load with Intel® Neural Compressor: ```python from neural_compressor.utils.load_huggingface import OptimizedModel int8_model = OptimizedModel.from_pretrained( 'Intel/bert-base-uncased-mrpc-int8-qat', ) ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - train_batch_size: 8 - eval_batch_size: 8 - eval_steps: 100 - load_best_model_at_end: True - metric_for_best_model: f1 - early_stopping_patience = 6 - early_stopping_threshold = 0.001
Stremie/bert-base-uncased-clickbait-keywords
a629da66d459d9ced721b258d5f5ca5f5cad1db1
2022-04-18T12:49:08.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
Stremie
null
Stremie/bert-base-uncased-clickbait-keywords
17
null
transformers
9,054
This model classifies whether a tweet is clickbait or not. It has been trained using [Webis-Clickbait-17](https://webis.de/data/webis-clickbait-17.html) dataset. Input is composed of 'postText' + '[SEP]' + 'targetKeywords'. Achieved ~0.7 F1-score on test data.
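A minimal sketch of that input composition (both the tweet text and the keywords below are hypothetical examples):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Stremie/bert-base-uncased-clickbait-keywords",
)

# Per the note above, inputs are postText + '[SEP]' + targetKeywords; the BERT
# tokenizer recognizes the literal [SEP] string as its separator token.
post_text = "You won't believe what happened next"       # hypothetical tweet
target_keywords = "celebrity news"                        # hypothetical keywords
print(classifier(post_text + "[SEP]" + target_keywords))
```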
Kuray107/librispeech-100h-supervised-meta
263762b247ca3d1590e5d0f257fac9ea3b7bb836
2022-04-11T14:24:58.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
Kuray107
null
Kuray107/librispeech-100h-supervised-meta
17
null
transformers
9,055
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: librispeech-100h-supervised-meta results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # librispeech-100h-supervised-meta This model is a fine-tuned version of [Kuray107/librispeech-5h-supervised](https://huggingface.co/Kuray107/librispeech-5h-supervised) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0965 - Wer: 0.0330 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.1131 | 1.12 | 1000 | 0.0755 | 0.0487 | | 0.0725 | 2.24 | 2000 | 0.0637 | 0.0404 | | 0.0539 | 3.36 | 3000 | 0.0661 | 0.0389 | | 0.0441 | 4.48 | 4000 | 0.0637 | 0.0371 | | 0.0379 | 5.61 | 5000 | 0.0675 | 0.0356 | | 0.0341 | 6.73 | 6000 | 0.0735 | 0.0360 | | 0.0295 | 7.85 | 7000 | 0.0737 | 0.0362 | | 0.0265 | 8.97 | 8000 | 0.0741 | 0.0350 | | 0.0244 | 10.09 | 9000 | 0.0779 | 0.0337 | | 0.0217 | 11.21 | 10000 | 0.0835 | 0.0343 | | 0.0203 | 12.33 | 11000 | 0.0785 | 0.0339 | | 0.0188 | 13.45 | 12000 | 0.0827 | 0.0344 | | 0.0179 | 14.57 | 13000 | 0.0875 | 0.0332 | | 0.0169 | 15.7 | 14000 | 0.0860 | 0.0330 | | 0.0158 | 16.82 | 15000 | 0.0954 | 0.0330 | | 0.0147 | 17.94 | 16000 | 0.0934 | 0.0329 | | 0.0148 | 19.06 | 17000 | 0.0965 | 0.0330 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.2 - Tokenizers 0.10.3
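## How to use A minimal sketch with the automatic-speech-recognition pipeline; `sample.flac` is a placeholder for any 16 kHz English recording:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Kuray107/librispeech-100h-supervised-meta",
)

# "sample.flac" is a placeholder; LibriSpeech-style input is 16 kHz English speech.
print(asr("sample.flac")["text"])
```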
Conrad747/lg-en
49f97fb3cc1b52693783027c6a3d44f14288d83e
2022-07-20T13:39:31.000Z
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
Conrad747
null
Conrad747/lg-en
17
null
transformers
9,056
--- tags: - generated_from_trainer metrics: - bleu model-index: - name: lg-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lg-en This model is a fine-tuned version of [AI-Lab-Makerere/lg_en](https://huggingface.co/AI-Lab-Makerere/lg_en) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0047 - Bleu: 31.3411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 1.0 | 178 | 1.0047 | 31.3411 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
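## How to use A minimal sketch with the translation pipeline, assuming the checkpoint translates Luganda to English as its name suggests; the input sentence is a hypothetical example:

```python
from transformers import pipeline

# Marian-style checkpoints expose a single translation direction.
translator = pipeline("translation", model="Conrad747/lg-en")

# Hypothetical Luganda input ("thank you very much").
print(translator("Webale nnyo"))
```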
amir36/distilbert-base-uncased-finetuned-emotion
d721f69df9829e53438617352c3f33e8e6313068
2022-07-14T02:52:28.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
amir36
null
amir36/distilbert-base-uncased-finetuned-emotion
17
null
transformers
9,057
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.921 - name: F1 type: f1 value: 0.920970510317642 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2180 - Accuracy: 0.921 - F1: 0.9210 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8133 | 1.0 | 250 | 0.3078 | 0.9095 | 0.9076 | | 0.2431 | 2.0 | 500 | 0.2180 | 0.921 | 0.9210 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.4 - Tokenizers 0.11.6
studio-ousia/luke-large-lite
367bdf0609d247be6ce1eb76f9f228d40d26d05a
2022-04-13T10:32:20.000Z
[ "pytorch", "luke", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
studio-ousia
null
studio-ousia/luke-large-lite
17
null
transformers
9,058
Entry not found
Toshifumi/distilbert-base-multilingual-cased-finetuned-emotion
c44daf307230625367378c08e353508ae3f29a16
2022-04-13T12:30:50.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
Toshifumi
null
Toshifumi/distilbert-base-multilingual-cased-finetuned-emotion
17
null
transformers
9,059
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-multilingual-cased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.8885 - name: F1 type: f1 value: 0.8888307905223247 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.3702 - Accuracy: 0.8885 - F1: 0.8888 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.1646 | 1.0 | 250 | 0.6190 | 0.8085 | 0.7992 | | 0.4536 | 2.0 | 500 | 0.3702 | 0.8885 | 0.8888 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
rmihaylov/bert-base-pos-theseus-bg
e85ab91f3bc5524d7e491d17883feb065203b2f8
2022-04-16T19:26:17.000Z
[ "pytorch", "bert", "token-classification", "bg", "dataset:oscar", "dataset:chitanka", "dataset:wikipedia", "arxiv:1810.04805", "arxiv:2002.02925", "transformers", "torch", "license:mit", "autotrain_compatible" ]
token-classification
false
rmihaylov
null
rmihaylov/bert-base-pos-theseus-bg
17
null
transformers
9,060
--- inference: false language: - bg license: mit datasets: - oscar - chitanka - wikipedia tags: - torch --- # BERT BASE (cased) finetuned on Bulgarian part-of-speech data Pretrained model on Bulgarian language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is cased: it does make a difference between bulgarian and Bulgarian. The training data is Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/). It was finetuned on public part-of-speech Bulgarian data. Then, it was compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925). ### How to use Here is how to use this model in PyTorch: ```python >>> from transformers import pipeline >>> >>> model = pipeline( >>> 'token-classification', >>> model='rmihaylov/bert-base-pos-theseus-bg', >>> tokenizer='rmihaylov/bert-base-pos-theseus-bg', >>> device=0, >>> revision=None) >>> output = model('Здравей, аз се казвам Иван.') >>> print(output) [{'end': 7, 'entity': 'INTJ', 'index': 1, 'score': 0.9640711, 'start': 0, 'word': '▁Здравей'}, {'end': 8, 'entity': 'PUNCT', 'index': 2, 'score': 0.9998927, 'start': 7, 'word': ','}, {'end': 11, 'entity': 'PRON', 'index': 3, 'score': 0.9998872, 'start': 8, 'word': '▁аз'}, {'end': 14, 'entity': 'PRON', 'index': 4, 'score': 0.99990034, 'start': 11, 'word': '▁се'}, {'end': 21, 'entity': 'VERB', 'index': 5, 'score': 0.99989736, 'start': 14, 'word': '▁казвам'}, {'end': 26, 'entity': 'PROPN', 'index': 6, 'score': 0.99990785, 'start': 21, 'word': '▁Иван'}, {'end': 27, 'entity': 'PUNCT', 'index': 7, 'score': 0.9999685, 'start': 26, 'word': '.'}] ```
ToToKr/kobigbird-bert-base-finetuned-klue
518fbcf145fdcc835d00a37a895bd7b0282b1cf5
2022-06-07T08:24:06.000Z
[ "pytorch", "tensorboard", "big_bird", "question-answering", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
ToToKr
null
ToToKr/kobigbird-bert-base-finetuned-klue
17
null
transformers
9,061
--- tags: - generated_from_trainer model-index: - name: kobigbird-bert-base-finetuned-klue results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # kobigbird-bert-base-finetuned-klue This model is a fine-tuned version of [monologg/kobigbird-bert-base](https://huggingface.co/monologg/kobigbird-bert-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8347 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 5.3957 | 0.13 | 500 | 3.7603 | | 3.2242 | 0.26 | 1000 | 2.3961 | | 2.0812 | 0.4 | 1500 | 1.5552 | | 1.6198 | 0.53 | 2000 | 1.3609 | | 1.447 | 0.66 | 2500 | 1.2270 | | 1.3438 | 0.79 | 3000 | 1.1321 | | 1.2399 | 0.93 | 3500 | 1.0973 | | 1.1976 | 1.06 | 4000 | 1.0418 | | 1.1177 | 1.19 | 4500 | 1.0301 | | 1.0811 | 1.32 | 5000 | 1.0232 | | 1.0506 | 1.45 | 5500 | 0.9971 | | 1.0293 | 1.59 | 6000 | 0.9580 | | 1.0196 | 1.72 | 6500 | 0.9551 | | 0.9846 | 1.85 | 7000 | 0.9274 | | 0.9702 | 1.98 | 7500 | 0.9286 | | 0.9224 | 2.11 | 8000 | 0.8961 | | 0.8867 | 2.25 | 8500 | 0.9193 | | 0.8711 | 2.38 | 9000 | 0.8727 | | 0.883 | 2.51 | 9500 | 0.8790 | | 0.8513 | 2.64 | 10000 | 0.8830 | | 0.8709 | 2.78 | 10500 | 0.8604 | | 0.8766 | 2.91 | 11000 | 0.8260 | | 0.7976 | 3.04 | 11500 | 0.8401 | | 0.7724 | 3.17 | 12000 | 0.8617 | | 0.78 | 3.3 | 12500 | 0.8601 | | 0.7566 | 3.44 | 13000 | 0.8657 | | 0.7407 | 3.57 | 13500 | 0.8347 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
AJGP/bert-finetuned-ner
d7b33d9a94cbae6b6a6c910649e7bd30ccebd4ec
2022-04-17T14:57:27.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
AJGP
null
AJGP/bert-finetuned-ner
17
null
transformers
9,062
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9354518371400199 - name: Recall type: recall value: 0.9511948838774823 - name: F1 type: f1 value: 0.9432576769025368 - name: Accuracy type: accuracy value: 0.9868870312591982 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0598 - Precision: 0.9355 - Recall: 0.9512 - F1: 0.9433 - Accuracy: 0.9869 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0833 | 1.0 | 1756 | 0.0654 | 0.9202 | 0.9350 | 0.9275 | 0.9833 | | 0.034 | 2.0 | 3512 | 0.0610 | 0.9262 | 0.9458 | 0.9359 | 0.9846 | | 0.0233 | 3.0 | 5268 | 0.0598 | 0.9355 | 0.9512 | 0.9433 | 0.9869 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
mwong/albert-base-fever-claim-related
4848442a348fedbb771c97df962650c0644884c4
2022-06-24T03:34:53.000Z
[ "pytorch", "albert", "text-classification", "en", "dataset:mwong/fever-claim-related", "transformers", "text classification", "fact checking", "license:mit" ]
text-classification
false
mwong
null
mwong/albert-base-fever-claim-related
17
1
transformers
9,063
--- language: en license: mit tags: - text classification - fact checking datasets: - mwong/fever-claim-related widget: - text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located." example_title: "Evidence related to claim" metrics: f1 --- # FeverAlbert FeverAlbert is a classifier model that predicts whether a piece of evidence is related to a query claim. The model achieved an F1 score of 88.33% on the test dataset "mwong/fever-claim-related". Starting from the pretrained albert-base-v2 model, the classifier head was trained on the Fever dataset.
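## How to use A minimal sketch mirroring the widget input above; the claim/evidence pair is shortened from the widget text, and joining the two with `</s></s>` is simply what the widget shows, not a documented API:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mwong/albert-base-fever-claim-related",
)

# Claim and evidence joined with the separator string used in the widget example.
claim = "Earth's changing climate is a critical issue."
evidence = "Legislation has been considered because of fears of climate change."
print(classifier(claim + "</s></s>" + evidence))
```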
Intel/bert-base-uncased-mrpc-int8-dynamic
eab02b076b47301343cb77fa7cf23d029bee7376
2022-06-10T02:32:38.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:mrpc", "transformers", "text-classfication", "int8", "Intel® Neural Compressor", "PostTrainingDynamic", "license:apache-2.0" ]
text-classification
false
Intel
null
Intel/bert-base-uncased-mrpc-int8-dynamic
17
null
transformers
9,064
--- language: en license: apache-2.0 tags: - text-classfication - int8 - Intel® Neural Compressor - PostTrainingDynamic datasets: - mrpc metrics: - f1 --- # INT8 BERT base uncased finetuned MRPC ### Post-training dynamic quantization This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor). The original fp32 model comes from the fine-tuned model [Intel/bert-base-uncased-mrpc](https://huggingface.co/Intel/bert-base-uncased-mrpc). ### Test result | |INT8|FP32| |---|:---:|:---:| | **Accuracy (eval-f1)** |0.8997|0.9042| | **Model size (MB)** |174|418| ### Load with Intel® Neural Compressor: ```python from neural_compressor.utils.load_huggingface import OptimizedModel int8_model = OptimizedModel.from_pretrained( 'Intel/bert-base-uncased-mrpc-int8-dynamic', ) ```
Hate-speech-CNERG/tamil-codemixed-abusive-MuRIL
6eef32cd2cd8eb9f26dd76beaeec370ab6c48b2f
2022-05-03T08:52:47.000Z
[ "pytorch", "bert", "text-classification", "ta-en", "arxiv:2204.12543", "transformers", "license:afl-3.0" ]
text-classification
false
Hate-speech-CNERG
null
Hate-speech-CNERG/tamil-codemixed-abusive-MuRIL
17
null
transformers
9,065
--- language: ta-en license: afl-3.0 --- This model detects **abusive speech** in **Code-Mixed Tamil**. It is a MuRIL model fine-tuned on a Code-Mixed Tamil abusive speech dataset, trained with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive) LABEL_0 :-> Normal LABEL_1 :-> Abusive ### For more details about our paper Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022. ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @article{das2022data, title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages}, author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh}, journal={arXiv preprint arXiv:2204.12543}, year={2022} } ~~~
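### How to use A minimal sketch with the text-classification pipeline; the input sentence is an invented code-mixed Tamil example:

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/tamil-codemixed-abusive-MuRIL",
)

# Per the card: LABEL_0 -> Normal, LABEL_1 -> Abusive.
# The input below is a hypothetical romanized code-mixed Tamil sentence.
print(detector("intha padam romba nalla irukku"))
```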
benjamin/gpt2-wechsel-ukrainian
b654dd26f575dc9d2ff07bf501e5c442b22d5e39
2022-04-29T17:42:44.000Z
[ "pytorch", "gpt2", "text-generation", "uk", "arxiv:2112.06598", "transformers", "license:mit" ]
text-generation
false
benjamin
null
benjamin/gpt2-wechsel-ukrainian
17
1
transformers
9,066
--- license: mit language: uk --- # gpt2-wechsel-ukrainian [`gpt2`](https://huggingface.co/gpt2) transferred to Ukrainian using the method from the NAACL2022 paper [WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models](https://arxiv.org/abs/2112.06598).
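A minimal generation sketch (not part of the original card), assuming the standard transformers pipeline API:

```python
from transformers import pipeline

# Load the Ukrainian GPT-2 and sample a short continuation.
generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-ukrainian")
print(generator("Сьогодні в Україні", max_length=30, num_return_sequences=1))
```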
KoenBronstring/finetuning-sentiment-model-3000-samples
ae2500fe723ee0c8ac6856d16e7815bbfda2e57e
2022-05-04T17:53:58.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
KoenBronstring
null
KoenBronstring/finetuning-sentiment-model-3000-samples
17
null
transformers
9,067
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8733333333333333 - name: F1 type: f1 value: 0.8758169934640523 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3149 - Accuracy: 0.8733 - F1: 0.8758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cpu - Datasets 2.1.0 - Tokenizers 0.12.1
mikeadimech/pegasus-qmsum-meeting-summarization
1c8b4f4ac589d791c6f976cce4d05e945ee84cb9
2022-05-25T16:15:41.000Z
[ "pytorch", "pegasus", "text2text-generation", "dataset:yawnick/QMSum", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
mikeadimech
null
mikeadimech/pegasus-qmsum-meeting-summarization
17
null
transformers
9,068
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: pegasus-qmsum-meeting-summarization results: [] datasets: - yawnick/QMSum --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-qmsum-meeting-summarization This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on the QMSum dataset. It achieves the following results on the evaluation set: - Loss: 4.2331 - Rouge1: 32.7156 - Rouge2: 10.5699 - Rougel: 23.2759 - Rougelsum: 29.7903 - Gen Len: 61.65 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 300 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:------:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 5.5746 | 1.09 | 100 | 5.1739 | 9.4941 | 1.7868 | 7.2455 | 8.4302 | 29.825 | | 5.5784 | 2.17 | 200 | 5.0939 | 9.113 | 1.7887 | 6.9741 | 8.0457 | 26.85 | | 5.3777 | 3.26 | 300 | 4.9723 | 9.6387 | 1.9301 | 7.349 | 8.7941 | 25.325 | | 5.1884 | 4.35 | 400 | 4.8423 | 10.6045 | 2.4008 | 7.8423 | 9.4593 | 22.625 | | 5.0795 | 5.43 | 500 | 4.7313 | 13.7621 | 3.1231 | 9.6944 | 12.2204 | 32.175 | | 4.9369 | 6.52 | 600 | 4.6555 | 19.5696 | 4.9121 | 14.2603 | 16.9622 | 46.45 | | 4.8926 | 7.61 | 700 | 4.6038 | 22.8411 | 5.9791 | 17.2227 | 20.1173 | 51.825 | | 4.7502 | 8.7 | 800 | 4.5659 | 24.0555 | 6.1971 | 18.967 | 20.9143 | 54.25 | | 4.6876 | 9.78 | 900 | 4.5379 | 24.7066 | 6.0317 | 19.542 | 21.5774 | 57.575 | | 4.6266 | 10.87 | 1000 | 4.5160 | 26.128 | 6.5089 | 20.5573 | 22.5338 | 58.0 | | 4.6303 | 11.96 | 1100 | 4.4983 | 26.6639 | 7.1208 | 20.5222 | 23.5783 | 57.925 | | 4.6263 | 13.04 | 1200 | 4.4815 | 26.8262 | 7.1029 | 20.5172 | 23.6216 | 57.575 | | 4.577 | 14.13 | 1300 | 4.4667 | 27.7952 | 7.8331 | 21.1111 | 24.6086 | 56.95 | | 4.5797 | 15.22 | 1400 | 4.4559 | 27.728 | 7.8144 | 21.1519 | 24.4858 | 56.6 | | 4.4923 | 16.3 | 1500 | 4.4448 | 28.0998 | 8.1346 | 21.4004 | 25.3769 | 55.975 | | 4.4583 | 17.39 | 1600 | 4.4335 | 28.9003 | 8.6135 | 22.0139 | 26.0409 | 56.55 | | 4.5036 | 18.48 | 1700 | 4.4246 | 29.2187 | 8.8301 | 22.3569 | 26.1964 | 58.125 | | 4.4383 | 19.57 | 1800 | 4.4144 | 28.8424 | 8.9131 | 22.0398 | 25.9214 | 56.75 | | 4.4797 | 20.65 | 1900 | 4.4054 | 28.9285 | 8.9298 | 22.222 | 26.0316 | 56.225 | | 4.4264 | 21.74 | 2000 | 4.3989 | 29.7184 | 9.0477 | 22.2885 | 26.7439 | 56.225 | | 4.3615 | 22.83 | 2100 | 4.3902 | 29.1538 | 8.9529 | 22.0076 | 26.4925 | 57.175 | | 4.329 | 23.91 | 2200 | 4.3839 | 29.5186 | 9.2777 | 21.9025 | 26.3141 | 55.5 | | 4.3578 | 25.0 | 2300 | 4.3766 | 28.4309 | 8.9423 | 21.0945 | 25.8191 | 53.975 | | 4.3748 | 26.09 | 2400 | 4.3707 | 28.3 | 9.0625 | 21.4946 | 25.1966 | 53.0 | | 4.3233 | 27.17 | 2500 | 4.3639 | 28.2325 | 8.9889 | 21.6226 | 25.3677 | 54.6 | | 4.339 | 28.26 | 2600 | 4.3578 | 28.0744 | 8.774 | 21.2509 | 25.2901 | 54.1 | | 4.2798 | 29.35 | 2700 | 4.3532 | 27.772 | 8.7096 | 21.1687 
| 25.3345 | 54.025 | | 4.2964 | 30.43 | 2800 | 4.3465 | 27.7827 | 8.1597 | 20.8139 | 25.0152 | 54.45 | | 4.3365 | 31.52 | 2900 | 4.3423 | 28.2039 | 8.4661 | 21.3546 | 25.6381 | 55.5 | | 4.2385 | 32.61 | 3000 | 4.3380 | 28.1098 | 8.6483 | 21.5279 | 25.2009 | 53.95 | | 4.2451 | 33.7 | 3100 | 4.3331 | 28.2745 | 8.5024 | 21.4456 | 25.3363 | 52.6 | | 4.2393 | 34.78 | 3200 | 4.3289 | 28.7597 | 9.0881 | 21.6532 | 25.8954 | 52.65 | | 4.2116 | 35.87 | 3300 | 4.3252 | 29.0463 | 9.1218 | 21.8026 | 26.2037 | 53.65 | | 4.2175 | 36.96 | 3400 | 4.3210 | 28.8009 | 9.0188 | 21.8368 | 25.8678 | 52.85 | | 4.2071 | 38.04 | 3500 | 4.3169 | 28.9313 | 8.9787 | 21.3554 | 26.0628 | 54.325 | | 4.1775 | 39.13 | 3600 | 4.3132 | 28.837 | 8.9621 | 21.6342 | 26.0569 | 54.025 | | 4.1962 | 40.22 | 3700 | 4.3086 | 28.9265 | 9.0701 | 21.588 | 26.0702 | 53.075 | | 4.1452 | 41.3 | 3800 | 4.3060 | 29.7968 | 9.366 | 22.1712 | 26.8461 | 54.925 | | 4.1912 | 42.39 | 3900 | 4.3018 | 29.1488 | 9.1631 | 21.6566 | 26.1476 | 54.25 | | 4.1356 | 43.48 | 4000 | 4.2984 | 30.0138 | 9.2456 | 22.2547 | 27.2714 | 54.85 | | 4.1272 | 44.57 | 4100 | 4.2949 | 29.8858 | 9.1498 | 22.1221 | 27.0798 | 55.65 | | 4.1174 | 45.65 | 4200 | 4.2895 | 30.0427 | 9.2297 | 22.2602 | 27.4219 | 56.175 | | 4.1029 | 46.74 | 4300 | 4.2885 | 29.9443 | 9.4293 | 22.1229 | 27.3496 | 56.45 | | 4.157 | 47.83 | 4400 | 4.2851 | 30.3693 | 9.406 | 22.471 | 27.7511 | 56.775 | | 4.1105 | 48.91 | 4500 | 4.2827 | 30.6193 | 9.7082 | 22.6169 | 27.8044 | 57.225 | | 4.083 | 50.0 | 4600 | 4.2796 | 30.8083 | 9.9211 | 22.5228 | 28.1236 | 57.575 | | 4.0891 | 51.09 | 4700 | 4.2764 | 30.4201 | 9.6192 | 22.4747 | 27.7514 | 57.475 | | 4.0603 | 52.17 | 4800 | 4.2741 | 30.7777 | 9.7432 | 22.6705 | 27.5956 | 57.1 | | 4.0472 | 53.26 | 4900 | 4.2731 | 30.8093 | 9.7916 | 22.5533 | 27.7858 | 56.15 | | 4.0712 | 54.35 | 5000 | 4.2703 | 29.9667 | 9.5645 | 22.113 | 26.647 | 56.525 | | 4.0658 | 55.43 | 5100 | 4.2674 | 29.5415 | 9.4291 | 21.6862 | 26.7816 | 56.55 | | 4.059 | 56.52 | 5200 | 4.2659 | 30.2032 | 9.8875 | 22.2539 | 27.1801 | 56.925 | | 4.0257 | 57.61 | 5300 | 4.2629 | 30.3181 | 9.8187 | 22.4266 | 27.4318 | 56.925 | | 4.0002 | 58.7 | 5400 | 4.2608 | 29.6641 | 9.9252 | 22.1725 | 27.0764 | 56.6 | | 4.0978 | 59.78 | 5500 | 4.2591 | 30.653 | 10.087 | 22.6956 | 27.7481 | 56.25 | | 3.9978 | 60.87 | 5600 | 4.2568 | 29.5473 | 9.5653 | 21.6367 | 26.391 | 55.825 | | 3.9832 | 61.96 | 5700 | 4.2552 | 30.6368 | 10.1624 | 22.7204 | 27.5866 | 57.425 | | 3.9841 | 63.04 | 5800 | 4.2525 | 30.3045 | 9.7966 | 22.2939 | 27.0978 | 57.725 | | 4.002 | 64.13 | 5900 | 4.2507 | 30.4468 | 9.9323 | 22.6572 | 27.0761 | 57.5 | | 3.9705 | 65.22 | 6000 | 4.2491 | 30.1218 | 9.6921 | 22.465 | 26.3835 | 57.55 | | 3.9863 | 66.3 | 6100 | 4.2477 | 31.3982 | 9.9901 | 22.8762 | 27.6169 | 58.975 | | 3.9308 | 67.39 | 6200 | 4.2454 | 30.2673 | 9.5804 | 22.4474 | 26.6111 | 59.2 | | 3.9794 | 68.48 | 6300 | 4.2449 | 30.8612 | 9.8254 | 22.8444 | 27.4979 | 58.075 | | 3.9499 | 69.57 | 6400 | 4.2412 | 30.8366 | 9.7 | 22.4469 | 27.1621 | 59.025 | | 3.9722 | 70.65 | 6500 | 4.2414 | 30.9625 | 9.8251 | 22.4089 | 27.4342 | 59.1 | | 3.9125 | 71.74 | 6600 | 4.2394 | 30.5777 | 9.5514 | 22.1581 | 26.8665 | 58.75 | | 3.9184 | 72.83 | 6700 | 4.2396 | 30.8306 | 9.5469 | 22.6571 | 27.4302 | 59.725 | | 3.9337 | 73.91 | 6800 | 4.2377 | 30.8688 | 9.6733 | 22.3073 | 27.2943 | 58.975 | | 3.9145 | 75.0 | 6900 | 4.2358 | 30.467 | 9.6393 | 22.225 | 27.0127 | 58.45 | | 3.9038 | 76.09 | 7000 | 4.2353 | 30.6344 | 9.3676 | 22.1945 | 27.1871 | 59.275 | | 3.893 | 77.17 | 
7100 | 4.2335 | 31.4486 | 9.8839 | 22.735 | 27.7854 | 59.025 | | 3.885 | 78.26 | 7200 | 4.2318 | 30.7118 | 9.8568 | 22.2546 | 27.3983 | 58.5 | | 3.9266 | 79.35 | 7300 | 4.2304 | 31.6171 | 9.8817 | 22.6145 | 27.6888 | 59.25 | | 3.8826 | 80.43 | 7400 | 4.2299 | 31.0976 | 9.4662 | 22.2285 | 27.817 | 58.95 | | 3.8775 | 81.52 | 7500 | 4.2286 | 31.1379 | 10.0975 | 22.5686 | 27.883 | 59.8 | | 3.8455 | 82.61 | 7600 | 4.2292 | 32.076 | 10.0214 | 22.8866 | 28.3828 | 59.15 | | 3.8838 | 83.7 | 7700 | 4.2269 | 31.5696 | 9.7812 | 22.7619 | 28.2236 | 58.6 | | 3.8425 | 84.78 | 7800 | 4.2266 | 31.1731 | 9.97 | 22.4203 | 27.4956 | 59.1 | | 3.8766 | 85.87 | 7900 | 4.2260 | 32.3221 | 10.6243 | 23.079 | 28.9008 | 58.45 | | 3.8217 | 86.96 | 8000 | 4.2258 | 31.9956 | 10.4201 | 23.083 | 28.4945 | 58.5 | | 3.8319 | 88.04 | 8100 | 4.2245 | 32.0272 | 10.4673 | 23.3471 | 28.9845 | 58.35 | | 3.8283 | 89.13 | 8200 | 4.2231 | 32.2943 | 10.2594 | 23.1819 | 29.1345 | 60.5 | | 3.8394 | 90.22 | 8300 | 4.2221 | 31.3976 | 10.3085 | 22.6581 | 28.2494 | 59.25 | | 3.8258 | 91.3 | 8400 | 4.2203 | 31.4433 | 10.1184 | 22.672 | 28.1236 | 58.85 | | 3.7981 | 92.39 | 8500 | 4.2205 | 31.1313 | 10.0056 | 22.677 | 27.7409 | 59.075 | | 3.8349 | 93.48 | 8600 | 4.2215 | 31.5779 | 10.0303 | 22.6155 | 28.0566 | 59.2 | | 3.8225 | 94.57 | 8700 | 4.2201 | 31.9646 | 10.0643 | 22.7808 | 28.67 | 58.925 | | 3.8145 | 95.65 | 8800 | 4.2193 | 32.0347 | 10.5103 | 23.095 | 28.6056 | 57.225 | | 3.7771 | 96.74 | 8900 | 4.2180 | 30.8138 | 9.602 | 22.2649 | 27.7948 | 57.875 | | 3.823 | 97.83 | 9000 | 4.2168 | 31.3785 | 9.7046 | 22.3877 | 28.2578 | 58.675 | | 3.7701 | 98.91 | 9100 | 4.2169 | 31.4511 | 9.9183 | 22.6645 | 28.1932 | 59.0 | | 3.773 | 100.0 | 9200 | 4.2169 | 31.7392 | 9.9669 | 22.5894 | 28.218 | 58.15 | | 3.7661 | 101.09 | 9300 | 4.2161 | 31.5507 | 9.8992 | 22.4602 | 28.3357 | 58.375 | | 3.7875 | 102.17 | 9400 | 4.2163 | 31.5145 | 9.5173 | 22.321 | 27.8613 | 58.375 | | 3.7659 | 103.26 | 9500 | 4.2152 | 31.2967 | 9.8797 | 22.6247 | 28.0317 | 57.925 | | 3.7576 | 104.35 | 9600 | 4.2139 | 31.5739 | 9.8376 | 22.7561 | 28.2318 | 58.4 | | 3.7784 | 105.43 | 9700 | 4.2144 | 32.2269 | 10.2299 | 22.6582 | 28.6249 | 58.425 | | 3.7356 | 106.52 | 9800 | 4.2139 | 32.3031 | 10.1505 | 22.7079 | 28.9052 | 58.475 | | 3.7799 | 107.61 | 9900 | 4.2124 | 31.1334 | 9.1481 | 22.1297 | 27.5951 | 58.6 | | 3.7269 | 108.7 | 10000 | 4.2122 | 31.6957 | 9.2874 | 22.4867 | 28.225 | 58.4 | | 3.719 | 109.78 | 10100 | 4.2108 | 31.477 | 10.0245 | 22.4703 | 28.1316 | 58.075 | | 3.7411 | 110.87 | 10200 | 4.2112 | 31.4165 | 9.9791 | 22.4396 | 28.3068 | 58.275 | | 3.7135 | 111.96 | 10300 | 4.2122 | 31.4924 | 9.9864 | 22.496 | 28.2414 | 57.8 | | 3.7317 | 113.04 | 10400 | 4.2120 | 31.6599 | 10.1605 | 22.5322 | 28.3045 | 59.075 | | 3.7113 | 114.13 | 10500 | 4.2127 | 31.6814 | 10.106 | 22.4311 | 28.5808 | 59.5 | | 3.7063 | 115.22 | 10600 | 4.2132 | 31.2448 | 10.0006 | 22.5549 | 28.4686 | 57.775 | | 3.681 | 116.3 | 10700 | 4.2123 | 31.1739 | 10.0533 | 22.2954 | 28.0822 | 58.35 | | 3.7369 | 117.39 | 10800 | 4.2118 | 31.8541 | 10.1452 | 22.7607 | 28.9501 | 58.8 | | 3.6645 | 118.48 | 10900 | 4.2122 | 31.7128 | 9.8554 | 22.4464 | 28.5888 | 58.375 | | 3.6766 | 119.57 | 11000 | 4.2118 | 31.1492 | 9.8058 | 22.0978 | 28.1827 | 58.725 | | 3.6915 | 120.65 | 11100 | 4.2110 | 31.1679 | 9.5755 | 22.1391 | 28.0886 | 58.375 | | 3.6702 | 121.74 | 11200 | 4.2129 | 31.0682 | 9.7375 | 22.0118 | 28.2189 | 59.15 | | 3.6946 | 122.83 | 11300 | 4.2118 | 31.6134 | 9.5918 | 22.2506 | 28.5343 | 59.175 | | 3.6713 | 
123.91 | 11400 | 4.2110 | 31.3585 | 9.4211 | 22.1884 | 27.8744 | 59.05 | | 3.6694 | 125.0 | 11500 | 4.2126 | 32.0058 | 9.6453 | 22.3911 | 28.6928 | 59.55 | | 3.6585 | 126.09 | 11600 | 4.2123 | 31.7679 | 9.7101 | 22.2378 | 28.4985 | 59.2 | | 3.6857 | 127.17 | 11700 | 4.2118 | 31.7766 | 10.0375 | 22.5097 | 28.8104 | 59.6 | | 3.6338 | 128.26 | 11800 | 4.2126 | 32.2508 | 10.2617 | 22.6745 | 29.0714 | 59.075 | | 3.6412 | 129.35 | 11900 | 4.2135 | 32.0515 | 10.0905 | 22.7015 | 29.0028 | 58.9 | | 3.6594 | 130.43 | 12000 | 4.2122 | 32.7784 | 10.351 | 23.0969 | 29.6672 | 59.525 | | 3.6571 | 131.52 | 12100 | 4.2120 | 32.3165 | 10.329 | 22.8445 | 29.2886 | 59.5 | | 3.6002 | 132.61 | 12200 | 4.2120 | 32.5553 | 10.0875 | 22.6064 | 29.1046 | 59.425 | | 3.6621 | 133.7 | 12300 | 4.2126 | 31.7637 | 9.9785 | 22.5716 | 28.7173 | 59.275 | | 3.6651 | 134.78 | 12400 | 4.2122 | 31.7568 | 9.7503 | 22.3876 | 28.6015 | 59.6 | | 3.6127 | 135.87 | 12500 | 4.2123 | 31.5708 | 9.5203 | 21.9951 | 28.2082 | 58.75 | | 3.6544 | 136.96 | 12600 | 4.2124 | 32.0767 | 9.8955 | 22.2724 | 28.4755 | 59.5 | | 3.5994 | 138.04 | 12700 | 4.2125 | 31.8523 | 9.9159 | 22.2978 | 28.8159 | 59.175 | | 3.6174 | 139.13 | 12800 | 4.2114 | 32.2165 | 9.784 | 22.4377 | 28.5603 | 59.1 | | 3.6122 | 140.22 | 12900 | 4.2115 | 32.0247 | 9.6881 | 22.3116 | 28.61 | 58.9 | | 3.6174 | 141.3 | 13000 | 4.2116 | 31.9549 | 9.5924 | 22.3997 | 28.9145 | 59.15 | | 3.5965 | 142.39 | 13100 | 4.2113 | 32.6173 | 10.4241 | 22.8644 | 29.3928 | 60.9 | | 3.6076 | 143.48 | 13200 | 4.2112 | 33.0058 | 10.6417 | 23.0297 | 29.8375 | 61.0 | | 3.6013 | 144.57 | 13300 | 4.2105 | 33.005 | 10.5398 | 22.9758 | 29.7266 | 60.325 | | 3.6181 | 145.65 | 13400 | 4.2117 | 31.0558 | 9.4714 | 21.9025 | 27.9627 | 60.025 | | 3.6288 | 146.74 | 13500 | 4.2107 | 32.7196 | 10.4991 | 22.9182 | 29.6586 | 60.25 | | 3.5879 | 147.83 | 13600 | 4.2091 | 32.6755 | 10.3936 | 22.9559 | 29.5314 | 60.425 | | 3.591 | 148.91 | 13700 | 4.2101 | 33.2956 | 10.6616 | 22.8509 | 29.5237 | 60.4 | | 3.5658 | 150.0 | 13800 | 4.2116 | 33.4712 | 10.3725 | 23.1449 | 30.0987 | 60.2 | | 3.574 | 151.09 | 13900 | 4.2115 | 33.5427 | 10.5852 | 22.9671 | 29.8456 | 60.175 | | 3.5795 | 152.17 | 14000 | 4.2115 | 33.4387 | 10.5744 | 23.4785 | 30.0494 | 60.15 | | 3.5728 | 153.26 | 14100 | 4.2119 | 33.1244 | 10.0308 | 22.8377 | 29.7725 | 60.775 | | 3.5441 | 154.35 | 14200 | 4.2121 | 32.9226 | 9.9625 | 22.9013 | 29.6004 | 59.7 | | 3.5236 | 155.43 | 14300 | 4.2114 | 32.3717 | 9.9122 | 22.78 | 28.8305 | 59.725 | | 3.5679 | 156.52 | 14400 | 4.2120 | 33.6347 | 10.7457 | 23.5191 | 30.1966 | 60.65 | | 3.5574 | 157.61 | 14500 | 4.2119 | 33.4821 | 10.986 | 23.3567 | 30.1972 | 60.1 | | 3.5935 | 158.7 | 14600 | 4.2115 | 32.7255 | 10.2639 | 23.1617 | 29.8065 | 60.35 | | 3.5316 | 159.78 | 14700 | 4.2118 | 32.8033 | 10.0216 | 22.7099 | 29.3968 | 60.525 | | 3.5618 | 160.87 | 14800 | 4.2118 | 32.6244 | 10.7228 | 22.8601 | 29.3613 | 60.8 | | 3.545 | 161.96 | 14900 | 4.2132 | 32.6231 | 10.0711 | 22.4686 | 29.5341 | 59.675 | | 3.5466 | 163.04 | 15000 | 4.2129 | 32.7601 | 10.3376 | 22.2373 | 29.3588 | 59.4 | | 3.5594 | 164.13 | 15100 | 4.2127 | 32.4645 | 10.5106 | 22.6804 | 29.6229 | 60.375 | | 3.4839 | 165.22 | 15200 | 4.2130 | 32.1799 | 10.0462 | 22.5474 | 29.1419 | 59.75 | | 3.5492 | 166.3 | 15300 | 4.2133 | 32.6831 | 10.5307 | 22.8539 | 29.6406 | 59.875 | | 3.5053 | 167.39 | 15400 | 4.2133 | 32.8614 | 10.0344 | 23.0577 | 29.5848 | 60.975 | | 3.5427 | 168.48 | 15500 | 4.2140 | 32.7897 | 10.178 | 22.6287 | 29.4839 | 60.1 | | 3.5495 | 169.57 | 15600 | 
4.2126 | 33.1428 | 10.2866 | 22.9377 | 29.6883 | 60.525 | | 3.5245 | 170.65 | 15700 | 4.2116 | 32.9892 | 10.1082 | 23.1528 | 29.576 | 60.675 | | 3.5121 | 171.74 | 15800 | 4.2131 | 33.2677 | 10.5916 | 23.3002 | 29.8222 | 59.975 | | 3.5559 | 172.83 | 15900 | 4.2126 | 32.5155 | 9.9557 | 22.6846 | 29.1171 | 60.85 | | 3.4758 | 173.91 | 16000 | 4.2133 | 32.374 | 9.9127 | 22.4816 | 29.2839 | 60.9 | | 3.5148 | 175.0 | 16100 | 4.2125 | 32.5611 | 9.8266 | 22.5993 | 28.9821 | 61.1 | | 3.5093 | 176.09 | 16200 | 4.2132 | 32.1092 | 9.6761 | 22.3612 | 28.7771 | 60.05 | | 3.5248 | 177.17 | 16300 | 4.2143 | 32.2696 | 9.6471 | 22.2791 | 28.9759 | 60.925 | | 3.4807 | 178.26 | 16400 | 4.2139 | 31.9593 | 9.3878 | 22.0643 | 28.5392 | 61.3 | | 3.5138 | 179.35 | 16500 | 4.2144 | 32.0284 | 9.8303 | 22.5724 | 29.0168 | 59.95 | | 3.4834 | 180.43 | 16600 | 4.2153 | 32.3203 | 9.5741 | 22.4998 | 28.8014 | 60.5 | | 3.4701 | 181.52 | 16700 | 4.2156 | 31.7243 | 9.544 | 22.1355 | 28.2238 | 61.275 | | 3.5501 | 182.61 | 16800 | 4.2152 | 32.519 | 9.9372 | 22.3881 | 28.8347 | 61.45 | | 3.4789 | 183.7 | 16900 | 4.2148 | 32.3324 | 9.7556 | 22.2474 | 28.7559 | 61.575 | | 3.5172 | 184.78 | 17000 | 4.2156 | 32.161 | 9.4847 | 22.2358 | 28.8895 | 60.95 | | 3.4681 | 185.87 | 17100 | 4.2167 | 32.6524 | 9.7116 | 22.8415 | 29.0798 | 60.575 | | 3.4936 | 186.96 | 17200 | 4.2173 | 32.533 | 9.9478 | 22.7379 | 29.1301 | 61.575 | | 3.4664 | 188.04 | 17300 | 4.2165 | 32.4549 | 10.1094 | 22.7097 | 28.7992 | 61.4 | | 3.4599 | 189.13 | 17400 | 4.2164 | 32.6665 | 10.3463 | 22.7678 | 29.308 | 61.575 | | 3.4724 | 190.22 | 17500 | 4.2175 | 32.4146 | 10.1782 | 22.7414 | 29.3546 | 60.75 | | 3.4923 | 191.3 | 17600 | 4.2163 | 32.3624 | 9.8306 | 22.7311 | 28.7497 | 59.825 | | 3.4771 | 192.39 | 17700 | 4.2161 | 33.1427 | 10.429 | 23.462 | 29.6967 | 60.35 | | 3.4737 | 193.48 | 17800 | 4.2168 | 31.6894 | 9.7073 | 22.527 | 28.3711 | 60.65 | | 3.4307 | 194.57 | 17900 | 4.2182 | 32.4769 | 10.1673 | 22.8356 | 29.4565 | 60.75 | | 3.4843 | 195.65 | 18000 | 4.2168 | 32.5461 | 10.2855 | 22.8587 | 29.1242 | 60.825 | | 3.4479 | 196.74 | 18100 | 4.2170 | 32.9284 | 10.2293 | 23.2679 | 29.8067 | 61.075 | | 3.489 | 197.83 | 18200 | 4.2180 | 32.9561 | 10.481 | 23.2807 | 29.5499 | 61.25 | | 3.4596 | 198.91 | 18300 | 4.2179 | 33.1418 | 10.2768 | 22.8762 | 30.0241 | 61.2 | | 3.4552 | 200.0 | 18400 | 4.2171 | 33.5524 | 10.5969 | 23.5734 | 30.1587 | 61.525 | | 3.4699 | 201.09 | 18500 | 4.2176 | 33.1941 | 10.3296 | 23.1962 | 30.1624 | 61.45 | | 3.4281 | 202.17 | 18600 | 4.2187 | 33.3715 | 10.1919 | 23.1843 | 30.3192 | 61.55 | | 3.4561 | 203.26 | 18700 | 4.2186 | 32.5288 | 9.9299 | 22.6515 | 29.2853 | 61.575 | | 3.446 | 204.35 | 18800 | 4.2188 | 33.4268 | 10.7152 | 23.6525 | 30.4668 | 61.575 | | 3.4259 | 205.43 | 18900 | 4.2189 | 33.1715 | 10.198 | 22.9264 | 29.8387 | 61.25 | | 3.4497 | 206.52 | 19000 | 4.2192 | 33.3472 | 10.5372 | 23.0833 | 30.2925 | 61.25 | | 3.4674 | 207.61 | 19100 | 4.2192 | 32.7581 | 10.2502 | 23.0554 | 29.6639 | 61.175 | | 3.4521 | 208.7 | 19200 | 4.2186 | 33.7883 | 10.8639 | 23.4038 | 30.6114 | 61.475 | | 3.443 | 209.78 | 19300 | 4.2194 | 33.029 | 10.6622 | 22.9009 | 29.9762 | 61.675 | | 3.4356 | 210.87 | 19400 | 4.2199 | 32.7229 | 9.9204 | 22.5445 | 29.5517 | 61.3 | | 3.4198 | 211.96 | 19500 | 4.2208 | 33.5216 | 10.3836 | 22.9423 | 29.9006 | 61.625 | | 3.4417 | 213.04 | 19600 | 4.2210 | 32.7772 | 10.3206 | 22.9031 | 29.3774 | 61.625 | | 3.4348 | 214.13 | 19700 | 4.2214 | 31.9959 | 10.0821 | 22.2012 | 28.6722 | 61.375 | | 3.4528 | 215.22 | 19800 | 4.2213 | 
32.5434 | 10.2807 | 22.6512 | 29.1705 | 61.65 | | 3.3955 | 216.3 | 19900 | 4.2220 | 32.9148 | 10.5869 | 22.8107 | 29.4975 | 61.675 | | 3.4437 | 217.39 | 20000 | 4.2227 | 32.8879 | 10.4334 | 22.6863 | 29.6794 | 61.125 | | 3.4374 | 218.48 | 20100 | 4.2225 | 32.1453 | 9.9115 | 22.2936 | 28.9428 | 61.1 | | 3.429 | 219.57 | 20200 | 4.2230 | 33.0805 | 10.5792 | 22.9417 | 29.9572 | 61.55 | | 3.4089 | 220.65 | 20300 | 4.2239 | 32.0499 | 10.1613 | 22.6264 | 28.9217 | 61.65 | | 3.418 | 221.74 | 20400 | 4.2237 | 32.6069 | 10.5032 | 22.8024 | 29.5804 | 61.275 | | 3.4274 | 222.83 | 20500 | 4.2235 | 31.8624 | 10.2513 | 22.2816 | 28.8234 | 61.2 | | 3.4156 | 223.91 | 20600 | 4.2242 | 32.2666 | 10.4604 | 22.5607 | 29.0666 | 61.025 | | 3.4135 | 225.0 | 20700 | 4.2247 | 31.3445 | 10.0898 | 22.0664 | 28.5988 | 60.5 | | 3.4283 | 226.09 | 20800 | 4.2245 | 31.47 | 10.0171 | 21.9423 | 28.4329 | 61.175 | | 3.4048 | 227.17 | 20900 | 4.2242 | 31.93 | 10.4874 | 22.5287 | 29.1292 | 60.7 | | 3.3925 | 228.26 | 21000 | 4.2243 | 32.3618 | 10.0902 | 22.6176 | 29.2689 | 60.775 | | 3.4371 | 229.35 | 21100 | 4.2245 | 32.174 | 10.0424 | 22.516 | 28.9855 | 60.775 | | 3.3789 | 230.43 | 21200 | 4.2239 | 33.0237 | 10.8644 | 23.3016 | 29.916 | 61.275 | | 3.4109 | 231.52 | 21300 | 4.2248 | 32.88 | 10.6969 | 22.8426 | 30.0468 | 60.8 | | 3.4128 | 232.61 | 21400 | 4.2257 | 32.6551 | 10.6032 | 22.6787 | 29.5307 | 60.725 | | 3.3941 | 233.7 | 21500 | 4.2266 | 31.9296 | 10.0718 | 22.5 | 28.9451 | 60.75 | | 3.3734 | 234.78 | 21600 | 4.2266 | 32.4862 | 10.0754 | 22.9705 | 29.2087 | 61.225 | | 3.4144 | 235.87 | 21700 | 4.2269 | 32.1757 | 10.1225 | 22.6842 | 29.1731 | 60.75 | | 3.3986 | 236.96 | 21800 | 4.2273 | 32.3403 | 10.481 | 22.7186 | 29.3236 | 60.725 | | 3.3898 | 238.04 | 21900 | 4.2275 | 32.4957 | 10.4595 | 22.8682 | 29.6414 | 60.8 | | 3.4031 | 239.13 | 22000 | 4.2275 | 32.4625 | 10.3807 | 22.7121 | 29.5187 | 60.725 | | 3.3836 | 240.22 | 22100 | 4.2274 | 31.8107 | 10.2075 | 22.4437 | 28.9719 | 60.725 | | 3.4084 | 241.3 | 22200 | 4.2272 | 32.3374 | 10.1027 | 22.5784 | 29.2192 | 61.2 | | 3.3805 | 242.39 | 22300 | 4.2276 | 32.2783 | 10.375 | 22.7825 | 29.3762 | 61.2 | | 3.3815 | 243.48 | 22400 | 4.2277 | 32.3337 | 10.3561 | 22.8489 | 29.4485 | 61.15 | | 3.418 | 244.57 | 22500 | 4.2273 | 32.333 | 10.2841 | 22.8481 | 29.403 | 61.125 | | 3.369 | 245.65 | 22600 | 4.2277 | 32.038 | 10.3555 | 22.6939 | 29.242 | 60.7 | | 3.4305 | 246.74 | 22700 | 4.2276 | 32.7594 | 10.6867 | 23.0632 | 29.5852 | 61.575 | | 3.3928 | 247.83 | 22800 | 4.2282 | 32.4979 | 10.5013 | 22.7875 | 29.4793 | 61.55 | | 3.3676 | 248.91 | 22900 | 4.2286 | 32.6014 | 10.5697 | 22.8526 | 29.7876 | 61.6 | | 3.3918 | 250.0 | 23000 | 4.2288 | 32.4746 | 10.6321 | 22.586 | 29.6323 | 60.675 | | 3.395 | 251.09 | 23100 | 4.2294 | 32.4704 | 10.5456 | 22.6785 | 29.5769 | 60.725 | | 3.363 | 252.17 | 23200 | 4.2296 | 32.2721 | 10.2554 | 22.5303 | 29.4554 | 60.725 | | 3.3884 | 253.26 | 23300 | 4.2298 | 32.2746 | 10.434 | 22.6686 | 29.4486 | 60.725 | | 3.3891 | 254.35 | 23400 | 4.2296 | 32.5382 | 10.5112 | 23.0243 | 29.8106 | 61.125 | | 3.3679 | 255.43 | 23500 | 4.2296 | 32.4656 | 10.5631 | 22.9952 | 29.6832 | 61.125 | | 3.4078 | 256.52 | 23600 | 4.2297 | 32.3377 | 10.4791 | 22.8362 | 29.6212 | 60.7 | | 3.3642 | 257.61 | 23700 | 4.2302 | 32.2519 | 10.5551 | 22.6957 | 29.3763 | 61.075 | | 3.3745 | 258.7 | 23800 | 4.2300 | 31.9413 | 10.4752 | 22.7447 | 29.1 | 61.175 | | 3.3844 | 259.78 | 23900 | 4.2305 | 32.237 | 10.5492 | 23.0342 | 29.4079 | 61.65 | | 3.3501 | 260.87 | 24000 | 4.2302 | 31.9797 | 
10.4631 | 22.9089 | 29.332 | 61.65 | | 3.4259 | 261.96 | 24100 | 4.2304 | 31.7515 | 10.3564 | 22.5923 | 29.1275 | 61.175 | | 3.3578 | 263.04 | 24200 | 4.2309 | 32.0462 | 10.3883 | 22.9083 | 29.3591 | 61.65 | | 3.39 | 264.13 | 24300 | 4.2308 | 31.9307 | 10.3057 | 22.8501 | 29.2547 | 61.65 | | 3.3805 | 265.22 | 24400 | 4.2312 | 32.1836 | 10.3577 | 23.1293 | 29.4325 | 61.65 | | 3.3667 | 266.3 | 24500 | 4.2309 | 32.1545 | 10.301 | 23.0613 | 29.343 | 61.65 | | 3.3977 | 267.39 | 24600 | 4.2313 | 31.9549 | 10.2824 | 23.0397 | 29.2684 | 61.65 | | 3.3434 | 268.48 | 24700 | 4.2314 | 31.9432 | 10.167 | 23.098 | 29.2669 | 61.65 | | 3.3577 | 269.57 | 24800 | 4.2316 | 31.9679 | 10.3075 | 23.0715 | 29.3077 | 61.65 | | 3.3781 | 270.65 | 24900 | 4.2317 | 32.2292 | 10.2988 | 23.0879 | 29.4171 | 61.65 | | 3.3514 | 271.74 | 25000 | 4.2321 | 32.1653 | 10.4198 | 23.0554 | 29.3574 | 61.65 | | 3.3935 | 272.83 | 25100 | 4.2320 | 32.134 | 10.2884 | 22.9444 | 29.2272 | 61.65 | | 3.3447 | 273.91 | 25200 | 4.2324 | 32.3498 | 10.4505 | 23.0734 | 29.4438 | 61.65 | | 3.3872 | 275.0 | 25300 | 4.2323 | 32.1743 | 10.4152 | 22.9462 | 29.3187 | 61.65 | | 3.3755 | 276.09 | 25400 | 4.2324 | 32.2311 | 10.372 | 22.9563 | 29.3285 | 61.65 | | 3.3832 | 277.17 | 25500 | 4.2323 | 32.0289 | 10.2105 | 22.9636 | 29.1449 | 61.65 | | 3.3367 | 278.26 | 25600 | 4.2321 | 32.3053 | 10.2512 | 23.0834 | 29.4111 | 61.65 | | 3.3767 | 279.35 | 25700 | 4.2323 | 32.4099 | 10.2793 | 23.0137 | 29.4049 | 61.65 | | 3.3989 | 280.43 | 25800 | 4.2324 | 32.3471 | 10.4356 | 23.0179 | 29.4453 | 61.65 | | 3.3625 | 281.52 | 25900 | 4.2325 | 32.2213 | 10.4363 | 22.9573 | 29.2886 | 61.65 | | 3.3352 | 282.61 | 26000 | 4.2328 | 32.713 | 10.7489 | 23.2367 | 29.8725 | 61.65 | | 3.3899 | 283.7 | 26100 | 4.2328 | 32.2145 | 10.2347 | 22.7896 | 29.2107 | 61.65 | | 3.359 | 284.78 | 26200 | 4.2327 | 32.2466 | 10.4236 | 22.916 | 29.4227 | 61.65 | | 3.3866 | 285.87 | 26300 | 4.2327 | 32.2466 | 10.4236 | 22.916 | 29.4227 | 61.65 | | 3.3845 | 286.96 | 26400 | 4.2328 | 32.2466 | 10.4236 | 22.916 | 29.4227 | 61.65 | | 3.3486 | 288.04 | 26500 | 4.2328 | 32.595 | 10.5041 | 23.1214 | 29.69 | 61.65 | | 3.3807 | 289.13 | 26600 | 4.2328 | 32.759 | 10.566 | 23.3108 | 29.8555 | 61.65 | | 3.3676 | 290.22 | 26700 | 4.2330 | 32.759 | 10.566 | 23.3108 | 29.8555 | 61.65 | | 3.3361 | 291.3 | 26800 | 4.2332 | 32.759 | 10.566 | 23.3108 | 29.8555 | 61.65 | | 3.3897 | 292.39 | 26900 | 4.2331 | 32.7251 | 10.566 | 23.3108 | 29.7958 | 61.65 | | 3.3579 | 293.48 | 27000 | 4.2331 | 32.759 | 10.566 | 23.3108 | 29.8555 | 61.65 | | 3.3809 | 294.57 | 27100 | 4.2331 | 32.759 | 10.566 | 23.3108 | 29.8555 | 61.65 | | 3.3885 | 295.65 | 27200 | 4.2331 | 32.759 | 10.566 | 23.3108 | 29.8555 | 61.65 | | 3.3173 | 296.74 | 27300 | 4.2331 | 32.7156 | 10.5699 | 23.2759 | 29.7903 | 61.65 | | 3.3648 | 297.83 | 27400 | 4.2331 | 32.7156 | 10.5699 | 23.2759 | 29.7903 | 61.65 | | 3.3793 | 298.91 | 27500 | 4.2331 | 32.7156 | 10.5699 | 23.2759 | 29.7903 | 61.65 | | 3.3604 | 300.0 | 27600 | 4.2331 | 32.7156 | 10.5699 | 23.2759 | 29.7903 | 61.65 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
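A minimal summarization sketch for this checkpoint (not part of the original card), assuming the standard transformers pipeline API; the transcript below is an illustrative stand-in for a QMSum-style meeting:

```python
from transformers import pipeline

summarizer = pipeline("summarization",
                      model="mikeadimech/pegasus-qmsum-meeting-summarization")

transcript = ("Project manager: Let's go over the remote control design. "
              "Industrial designer: The case will be curved plastic. "
              "Marketing: Users asked for fewer buttons and a locator feature.")
print(summarizer(transcript, max_length=64, min_length=10)[0]["summary_text"])
```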
pietrolesci/bert-base-uncased-mnli
df493f6a1838576b54552afcee3a08dabb7579b2
2022-05-03T10:10:29.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
pietrolesci
null
pietrolesci/bert-base-uncased-mnli
17
null
transformers
9,069
Entry not found
arxyzan/data2vec-roberta-base
68434a0eeab8ff055b5ca13aa7e9a972233948aa
2022-05-17T06:05:15.000Z
[ "pytorch", "roberta", "feature-extraction", "arxiv:2202.03555", "transformers" ]
feature-extraction
false
arxyzan
null
arxyzan/data2vec-roberta-base
17
null
transformers
9,070
A RoBERTa model trained using Data2Vec based on the paper [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555).<br> This model is provided here for [this repo](https://github.com/AryanShekarlaban/data2vec-pytorch); it was NOT trained with that codebase but was instead copied from `facebook/data2vec-text-base` for convenience and reproducibility. ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2202.03555, doi = {10.48550/ARXIV.2202.03555}, url = {https://arxiv.org/abs/2202.03555}, author = {Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael}, keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
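A minimal feature-extraction sketch (not part of the original card), assuming the standard transformers API:

```python
from transformers import AutoTokenizer, AutoModel
import torch

model_name = "arxyzan/data2vec-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("data2vec learns by predicting its own latent targets.",
                   return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (batch, seq_len, 768)
print(hidden.shape)
```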
TweebankNLP/bertweet-tb2-ner
773f0129e5bb057190d69e79068f23391f0deb7b
2022-05-05T00:23:29.000Z
[ "pytorch", "roberta", "token-classification", "arxiv:2201.07281", "transformers", "license:cc-by-nc-4.0", "autotrain_compatible" ]
token-classification
false
TweebankNLP
null
TweebankNLP/bertweet-tb2-ner
17
null
transformers
9,071
--- license: cc-by-nc-4.0 --- ## Model Specification - This is a **baseline Twitter NER model (with 73.71% entity-level F1)** on Tweebank V2's NER benchmark (also called `Tweebank-NER`), trained on the Tweebank-NER training data. - **If you are looking for the SOTA Twitter NER model**, please go to this [HuggingFace hub link](https://huggingface.co/TweebankNLP/bertweet-tb2_wnut17-ner). - For more details about the `TweebankNLP` project, please refer to [our paper](https://arxiv.org/pdf/2201.07281.pdf) and the [github](https://github.com/social-machines/TweebankNLP) page. - In the paper, it is referred to as `HuggingFace-BERTweet (TB2)` in the NER table. ## How to use the model - **PRE-PROCESSING**: when you apply the model to tweets, please make sure that tweets are preprocessed by the [TweetTokenizer](https://github.com/VinAIResearch/BERTweet/blob/master/TweetNormalizer.py) to get the best performance. ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("TweebankNLP/bertweet-tb2-ner") model = AutoModelForTokenClassification.from_pretrained("TweebankNLP/bertweet-tb2-ner") ``` ## References If you use this repository in your research, please kindly cite [our paper](https://arxiv.org/pdf/2201.07281.pdf): ```bibtex @article{jiang2022tweetnlp, title={Annotating the Tweebank Corpus on Named Entity Recognition and Building NLP Models for Social Media Analysis}, author={Jiang, Hang and Hua, Yining and Beeferman, Doug and Roy, Deb}, journal={In Proceedings of the 13th Language Resources and Evaluation Conference (LREC)}, year={2022} } ```
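Building on the loading snippet above, a hedged end-to-end sketch using the token-classification pipeline (the tweet below is illustrative; apply the TweetTokenizer preprocessing noted above for best results):

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="TweebankNLP/bertweet-tb2-ner",
    tokenizer="TweebankNLP/bertweet-tb2-ner",
    aggregation_strategy="simple",  # merge sub-tokens into entity spans
)
print(ner("Just landed in New York with @jack !"))
```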
Wakaka/bert-finetuned-imdb
000f4675fd6b9dab2afadd4b79f35cfa9d56698f
2022-05-06T06:38:19.000Z
[ "pytorch", "bert", "text-classification", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
Wakaka
null
Wakaka/bert-finetuned-imdb
17
null
transformers
9,072
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: bert-finetuned-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.866 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-imdb This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.5591 - Accuracy: 0.866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 0.4995 | 0.79 | | No log | 2.0 | 250 | 0.4000 | 0.854 | | No log | 3.0 | 375 | 0.5591 | 0.866 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
eslamxm/mt5-base-finetuned-persian-finetuned-persian-arabic
6213dea489fa88fa70afd5f55e8dce9e24495cb3
2022-05-09T05:50:11.000Z
[ "pytorch", "mt5", "text2text-generation", "dataset:xlsum", "transformers", "summarization", "arabic", "ar", "Abstractive Summarization", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
summarization
false
eslamxm
null
eslamxm/mt5-base-finetuned-persian-finetuned-persian-arabic
17
null
transformers
9,073
--- license: apache-2.0 tags: - summarization - arabic - ar - mt5 - Abstractive Summarization - generated_from_trainer datasets: - xlsum model-index: - name: mt5-base-finetuned-persian-finetuned-persian-arabic results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-finetuned-persian-finetuned-persian-arabic This model is a fine-tuned version of [ahmeddbahaa/mt5-base-finetuned-persian](https://huggingface.co/ahmeddbahaa/mt5-base-finetuned-persian) on the xlsum dataset. It achieves the following results on the evaluation set: - Loss: 3.3234 - Rouge-1: 22.96 - Rouge-2: 10.27 - Rouge-l: 20.95 - Gen Len: 19.0 - Bertscore: 71.59 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:| | 4.2754 | 1.0 | 1172 | 3.5717 | 19.26 | 7.26 | 17.48 | 19.0 | 70.49 | | 3.7388 | 2.0 | 2344 | 3.4291 | 19.71 | 7.88 | 17.94 | 19.0 | 70.64 | | 3.541 | 3.0 | 3516 | 3.3653 | 21.18 | 8.84 | 19.35 | 19.0 | 71.05 | | 3.4113 | 4.0 | 4688 | 3.3306 | 21.54 | 9.11 | 19.65 | 19.0 | 71.19 | | 3.3256 | 5.0 | 5860 | 3.3234 | 21.69 | 9.22 | 19.81 | 19.0 | 71.31 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
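A minimal Arabic summarization sketch for this checkpoint (not part of the original card), assuming the standard transformers seq2seq API:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "eslamxm/mt5-base-finetuned-persian-finetuned-persian-arabic"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "..."  # an Arabic news article
input_ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids
summary_ids = model.generate(input_ids, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```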
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_42
81da16c37a9842d084e09fb98ce0eed9dd6e7174
2022-05-10T23:55:29.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
CEBaB
null
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_42
17
null
transformers
9,074
Entry not found
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_66
5926cb9615a5167fa024aa89e16c63763449e14d
2022-05-11T00:12:47.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
CEBaB
null
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_66
17
null
transformers
9,075
Entry not found
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_66
203d0253d4d65b3e5f2fc468b9a3625af2092f3d
2022-05-11T00:29:46.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
CEBaB
null
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_66
17
null
transformers
9,076
Entry not found
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_66
1f5d7fbecaa02c186081dc39a5f02fc44b6e92c6
2022-05-11T00:47:25.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
CEBaB
null
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_66
17
null
transformers
9,077
Entry not found
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_77
1bc9ad59b02f79207afed09a393b17cb63817eb3
2022-05-11T01:04:38.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
CEBaB
null
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_77
17
null
transformers
9,078
Entry not found
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_88
04dd7f7b2caa39f8c07dabf9d240decec4d9521e
2022-05-11T01:57:06.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
CEBaB
null
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_88
17
null
transformers
9,079
Entry not found
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_88
2c35cae91b5a0f6ea6c6f18e04b5397966a8c69f
2022-05-11T02:14:28.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
CEBaB
null
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_88
17
null
transformers
9,080
Entry not found
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_88
771f8dc662e5bb81aa34310c453e61b39396b90a
2022-05-11T02:31:28.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
CEBaB
null
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_88
17
null
transformers
9,081
Entry not found
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_99
26c4fa8bb1925a918a48c637e3f9c0e869da4651
2022-05-11T02:48:32.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
CEBaB
null
CEBaB/lstm.CEBaB.sa.2-class.exclusive.seed_99
17
null
transformers
9,082
Entry not found
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_99
4bfb3006a776db8aa19b5846581aeabab64a65f9
2022-05-11T03:05:48.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
CEBaB
null
CEBaB/lstm.CEBaB.sa.3-class.exclusive.seed_99
17
null
transformers
9,083
Entry not found
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_99
057ec2347ae97c6eb4562e75b70da01a0250b1e8
2022-05-11T03:22:57.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
CEBaB
null
CEBaB/lstm.CEBaB.sa.5-class.exclusive.seed_99
17
null
transformers
9,084
Entry not found
SalamaThanks/SalamaThanksTransformer_fil2en_v2
ed75269aa77cac1ada651a21f8c2777235a65090
2022-05-11T05:57:37.000Z
[ "pytorch", "marian", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
SalamaThanks
null
SalamaThanks/SalamaThanksTransformer_fil2en_v2
17
null
transformers
9,085
--- license: afl-3.0 --- SalamaThanks Transformer for Filipino-to-English text translation, version 2. A fine-tuned model based on the Helsinki-NLP/opus-mt-en-tl transformer model.
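A minimal translation sketch (not part of the original card), assuming the standard transformers seq2seq API for Marian-based models:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "SalamaThanks/SalamaThanksTransformer_fil2en_v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

batch = tokenizer(["Magandang umaga sa inyong lahat."],
                  return_tensors="pt", padding=True)
out = model.generate(**batch)
print(tokenizer.batch_decode(out, skip_special_tokens=True))  # English output
```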
Paleontolog/bert_sentence_classifier
7a617b1f1dffb0f487af6a89fa92f2fed7ad7369
2022-05-11T14:05:26.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
Paleontolog
null
Paleontolog/bert_sentence_classifier
17
null
transformers
9,086
Entry not found
enoriega/kw_pubmed_5000_0.00006
7589c51c64d9b77b1dadf3b8d821190f4fcf92a9
2022-05-12T11:09:45.000Z
[ "pytorch", "tensorboard", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
enoriega
null
enoriega/kw_pubmed_5000_0.00006
17
null
transformers
9,087
Entry not found
nikitast/lang-classifier-roberta
33ed588b1fb6089c6e43c57917e067f4e3cebc11
2022-07-18T11:19:10.000Z
[ "pytorch", "xlm-roberta", "text-classification", "ru", "uk", "be", "kk", "az", "hy", "ka", "he", "en", "de", "dataset:open_subtitles", "dataset:tatoeba", "dataset:oscar", "transformers", "language classification" ]
text-classification
false
nikitast
null
nikitast/lang-classifier-roberta
17
1
transformers
9,088
--- language: - ru - uk - be - kk - az - hy - ka - he - en - de tags: - language classification datasets: - open_subtitles - tatoeba - oscar --- # RoBERTa for Single Language Classification ## Training RoBERTa fine-tuned on small parts of the Open Subtitles, Oscar and Tatoeba datasets (~9k samples per language). | data source | language | |-----------------|----------------| | open_subtitles | ka, he, en, de | | oscar | be, kk, az, hy | | tatoeba | ru, uk | ## Validation The metrics obtained from validation on another part of the dataset (~1k samples per language). |index|class|f1-score|precision|recall|support| |---|---|---|---|---|---| |0|az|0.998|0.997|1.0|997| |1|be|0.996|0.998|0.994|1004| |2|de|0.976|0.966|0.987|979| |3|en|0.976|0.986|0.967|1020| |4|he|1.0|1.0|0.999|1001| |5|hy|0.994|0.991|0.998|993| |6|ka|0.999|0.999|0.999|1000| |7|kk|0.996|0.998|0.993|1005| |8|uk|0.982|0.997|0.968|1030| |9|ru|0.982|0.968|0.997|971| |10|macro avg|0.99|0.99|0.99|10000| |11|weighted avg|0.99|0.99|0.99|10000|
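A minimal classification sketch (not part of the original card), assuming the standard transformers pipeline API:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="nikitast/lang-classifier-roberta")
# Short Ukrainian and German samples; expected labels: uk and de.
print(clf(["Привіт, як справи?", "Guten Morgen, wie geht es dir?"]))
```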
Bryan0123/bert-hashtag-to-hashtag-20
eb089721e6a7585e6a5fe7a41474c9fd426157cf
2022-05-15T05:02:12.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
Bryan0123
null
Bryan0123/bert-hashtag-to-hashtag-20
17
null
transformers
9,089
Entry not found
vives/distilbert-base-uncased-finetuned-cvent-2022
de2d5128d93fe20949d25eb1ce7351ea78e0a489
2022-05-13T20:37:30.000Z
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
vives
null
vives/distilbert-base-uncased-finetuned-cvent-2022
17
null
transformers
9,090
Entry not found
dipstheman/DialoGPT-small-humanconversation
aa81c831d8303afbaf1522ce24f7f569185f3ce2
2022-05-16T22:05:07.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
dipstheman
null
dipstheman/DialoGPT-small-humanconversation
17
null
transformers
9,091
--- tags: - conversational --- # Human Conversation DialoGPT Model
SyedMujtabaHassanRizvi/convnext-tiny-finetuned-eurosat
cb9800974779afb36ab23ed01f92b41e77752d4e
2022-05-19T12:48:40.000Z
[ "pytorch", "convnext", "image-classification", "transformers" ]
image-classification
false
SyedMujtabaHassanRizvi
null
SyedMujtabaHassanRizvi/convnext-tiny-finetuned-eurosat
17
null
transformers
9,092
Entry not found
animalthemuppet/bert-finetuned-ner
c5082885310360f718e076f7d05b9c19e5cf7e73
2022-05-22T17:04:06.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
animalthemuppet
null
animalthemuppet/bert-finetuned-ner
17
null
transformers
9,093
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9306472919418758 - name: Recall type: recall value: 0.9485021878155503 - name: F1 type: f1 value: 0.9394899149858308 - name: Accuracy type: accuracy value: 0.9859304173779949 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0633 - Precision: 0.9306 - Recall: 0.9485 - F1: 0.9395 - Accuracy: 0.9859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0834 | 1.0 | 1756 | 0.0676 | 0.9162 | 0.9315 | 0.9238 | 0.9824 | | 0.0388 | 2.0 | 3512 | 0.0587 | 0.9286 | 0.9473 | 0.9379 | 0.9852 | | 0.0188 | 3.0 | 5268 | 0.0633 | 0.9306 | 0.9485 | 0.9395 | 0.9859 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
eslamxm/mt5-base-finetuned-ar-sp
0ff443165c15491cae6b60db5ca9cca22bdf693e
2022-05-23T23:27:43.000Z
[ "pytorch", "tensorboard", "mt5", "text2text-generation", "transformers", "summarization", "arabic", "am", "es", "amharic", "Abstractive Summarization", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
summarization
false
eslamxm
null
eslamxm/mt5-base-finetuned-ar-sp
17
null
transformers
9,094
--- license: apache-2.0 tags: - summarization - arabic - am - es - amharic - mt5 - Abstractive Summarization - generated_from_trainer model-index: - name: mt5-base-finetuned-ar-sp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-finetuned-ar-sp This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2772 - Rouge-1: 23.01 - Rouge-2: 10.41 - Rouge-l: 20.94 - Gen Len: 19.0 - Bertscore: 71.56 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:| | 4.1968 | 1.0 | 1352 | 3.5142 | 18.69 | 6.73 | 16.97 | 19.0 | 70.3 | | 3.6932 | 2.0 | 2704 | 3.3799 | 20.67 | 8.38 | 18.75 | 19.0 | 70.82 | | 3.5058 | 3.0 | 4056 | 3.3184 | 20.97 | 8.58 | 19.08 | 19.0 | 71.08 | | 3.3832 | 4.0 | 5408 | 3.2851 | 21.59 | 8.94 | 19.63 | 19.0 | 71.28 | | 3.2994 | 5.0 | 6760 | 3.2772 | 21.84 | 9.23 | 19.85 | 19.0 | 71.34 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
hd94/roberta-hindi
10c6f839598e6f2acc27ff67627d89ceb2e8dbda
2022-05-24T09:42:28.000Z
[ "pytorch", "xlm-roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
hd94
null
hd94/roberta-hindi
17
null
transformers
9,095
Entry not found
Ravindra001/bert-finetuned-ner
2967d6f51750d99db081eea1a9e5bf703c3bf439
2022-07-28T09:29:11.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "dataset:wikiann", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
Ravindra001
null
Ravindra001/bert-finetuned-ner
17
null
transformers
9,096
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: en metrics: - name: Precision type: precision value: 0.819622641509434 - name: Recall type: recall value: 0.8444790046656299 - name: F1 type: f1 value: 0.8318651857525853 - name: Accuracy type: accuracy value: 0.9269227060339613 - task: type: token-classification name: Token Classification dataset: name: wikiann type: wikiann config: en split: test metrics: - name: Accuracy type: accuracy value: 0.8492771401033908 verified: true - name: Precision type: precision value: 0.857294905524994 verified: true - name: Recall type: recall value: 0.865900059186607 verified: true - name: F1 type: f1 value: 0.8615759964905745 verified: true - name: loss type: loss value: 1.054654836654663 verified: true --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.3217 - Precision: 0.8196 - Recall: 0.8445 - F1: 0.8319 - Accuracy: 0.9269 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2821 | 1.0 | 2500 | 0.2906 | 0.7983 | 0.8227 | 0.8103 | 0.9193 | | 0.2087 | 2.0 | 5000 | 0.2614 | 0.8030 | 0.8379 | 0.8201 | 0.9257 | | 0.1404 | 3.0 | 7500 | 0.3217 | 0.8196 | 0.8445 | 0.8319 | 0.9269 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
Mathking/all-mpnet-base-v2_outcome_sim
af3847ab3ef6e74ac548712a0fe6a88a115b3485
2022-05-25T13:40:22.000Z
[ "pytorch", "mpnet", "feature-extraction", "sentence-transformers", "sentence-similarity" ]
sentence-similarity
false
Mathking
null
Mathking/all-mpnet-base-v2_outcome_sim
17
null
sentence-transformers
9,097
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # Mathking/all-mpnet-base-v2_outcome_sim This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Mathking/all-mpnet-base-v2_outcome_sim') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Mathking/all-mpnet-base-v2_outcome_sim) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 48 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 100, "evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 20, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
aditya2029/gpt-neo-genre-storygenerator
d63e6a511a2eab462b397d813f07ab6e79ec807c
2022-05-26T02:27:55.000Z
[ "pytorch", "gpt_neo", "text-generation", "transformers" ]
text-generation
false
aditya2029
null
aditya2029/gpt-neo-genre-storygenerator
17
null
transformers
9,098
andidu/paraphrase-ru
05678a1fae2802efc7ba76715569b3043a001b9a
2022-05-28T07:05:58.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
andidu
null
andidu/paraphrase-ru
17
null
transformers
9,099
Entry not found