pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0-18.3M) | metadata (stringlengths, 2-1.07B) | id (stringlengths, 5-122) | last_modified (null) | tags (sequencelengths, 1-1.84k) | sha (null) | created_at (stringlengths, 25-25)
---|---|---|---|---|---|---|---|---|
null | transformers | {} | Capreolus/birch-bert-large-car_mb | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | Capreolus/birch-bert-large-mb | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | Capreolus/birch-bert-large-msmarco_mb | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"next-sentence-prediction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | # capreolus/electra-base-msmarco
## Model description
ELECTRA-Base model (`google/electra-base-discriminator`) fine-tuned on the MS MARCO passage classification task. It is intended to be used as a `ForSequenceClassification` model, but requires some modification since it contains a BERT classification head rather than the standard ELECTRA classification head. See the [TFElectraRelevanceHead](https://github.com/capreolus-ir/capreolus/blob/master/capreolus/reranker/TFBERTMaxP.py) in the Capreolus BERT-MaxP implementation for a usage example.
This corresponds to the ELECTRA-Base model used to initialize PARADE (ELECTRA) in [PARADE: Passage Representation Aggregation for Document Reranking](https://arxiv.org/abs/2008.09093) by Li et al. It was converted from the released [TFv1 checkpoint](https://zenodo.org/record/3974431/files/vanilla_electra_base_on_MSMARCO.tar.gz). Please cite the PARADE paper if you use these weights.
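The snippet below is a minimal, hedged sketch of loading the checkpoint with the stock `ElectraForSequenceClassification` class; because the stored head follows the BERT layout rather than the standard ELECTRA one, the head weights may not load cleanly, and the Capreolus `TFElectraRelevanceHead` linked above remains the reference usage.
```python
# Hypothetical loading sketch, NOT the Capreolus reference usage: the stock
# ELECTRA classification head may not match the BERT-style relevance head
# stored in this checkpoint (see TFElectraRelevanceHead above).
from transformers import AutoTokenizer, ElectraForSequenceClassification

model_id = "Capreolus/electra-base-msmarco"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ElectraForSequenceClassification.from_pretrained(model_id)

# Score a (query, passage) pair for relevance
inputs = tokenizer("what causes tides",
                   "Tides are caused by the gravitational pull of the moon.",
                   return_tensors="pt", truncation=True)
logits = model(**inputs).logits  # one row of class logits for the pair
```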
| {} | Capreolus/electra-base-msmarco | null | [
"transformers",
"pytorch",
"tf",
"electra",
"text-classification",
"arxiv:2008.09093",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | # Master Thesis
## Predictive Value of Sentiment Analysis from Headlines for Crude Oil Prices
### Understanding and Exploiting Deep Learning-based Sentiment Analysis from News Headlines for Predicting Price Movements of WTI Crude Oil
This thesis researches and develops state-of-the-art sentiment analysis methods that can provide a helpful quantification of news for assessing future price movements of crude oil.
CrudeBERT is a pre-trained NLP model for analyzing the sentiment of news headlines relevant to crude oil.
It was developed by fine-tuning [FinBERT: Financial Sentiment Analysis with Pre-trained Language Models](https://arxiv.org/pdf/1908.10063.pdf).

Performing sentiment analysis on the news regarding a specific asset requires domain adaptation.
Domain adaptation requires training data made up of examples with text and its associated polarity of sentiment.
The experiments show that pre-trained deep learning-based sentiment analysis models can be further fine-tuned, and the conclusions of these experiments are as follows:
* Deep learning-based sentiment analysis models from the general financial domain, such as FinBERT, have little or no significance for the price development of crude oil. The reason is a lack of domain adaptation of the sentiment. Moreover, the polarity of sentiment cannot be generalized and is highly dependent on the properties of its target.
* The properties of crude oil prices are, according to the literature, determined by changes in supply and demand.
News can convey information about these changes, can broadly be identified through query searches, and can serve as a foundation for creating a training dataset to perform domain adaptation. For this purpose, news headlines tend to be rich enough in content to provide insights into supply and demand changes, even when the headlines are restricted to more reputable sources.
* Domain adaptation can be achieved to some extent by analyzing the properties of the target through a literature review and creating a corresponding training dataset to fine-tune the model. For example, considering supply and demand changes regarding crude oil seems to be a suitable component for domain adaptation.
In order to advance sentiment analysis applications in the domain of crude oil, this paper presents CrudeBERT.
In general, sentiment analysis of crude oil headlines through CrudeBERT could be a viable source of insight into the price behaviour of WTI crude oil.
However, further research is required to see whether CrudeBERT is beneficial for predicting oil prices.
To this end, the code and the thesis are made publicly available on [GitHub](https://github.com/Captain-1337/Master-Thesis). | {} | Captain-1337/CrudeBERT | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:1908.10063",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Captain272/lstm | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Carlork314/Carlos | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Carlork314/Xd | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | **mt5-spanish-memmories-analysis**
This is a work in progress.
This model is only an initial checkpoint that I will be improving over the following months.
The aim is to provide a model able to, using a mixture of mT5 tasks, understand memories and provide an interaction useful for people with Alzheimer's, or for people like my own grandfather, who wrote his memories but they are now just a book on the shelf; this model can make those memories seem 'alive'.
I will soon (if it is not uploaded already) upload a **Google Colaboratory notebook with a visual app** that uses this model to provide all the needed and wanted interaction through an easy-to-use interface.
**APP LINK (it will contain the latest version):** https://drive.google.com/drive/folders/1ewGcxxCYHHwhHhWtGlLiryZfV8wEAaBa?usp=sharing
-> The "memorium" folder must be downloaded from the link and uploaded to Google Drive directly under "My Drive", not inside any other folder.
-> The app, named "APP-Memorium" (the name may also include a version indicator), can then be opened from inside that "memorium" folder.
-> If double-clicking the app file does not open it, right-click the file, select "Open with", then "Connect more apps", and choose Colaboratory (you will be asked to install it). Once the installation finishes (approximately 2 minutes), close the installation window and return to the folder containing the app file, which can from then on be opened by double-clicking.
-> Memories can be added in the "perfiles" folder, as indicated in the app under "crear perfil" (create profile).
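A minimal usage sketch, assuming standard mT5 text2text generation; the Spanish prompt format shown is an assumption for illustration, not documented behaviour of this checkpoint.
```python
# Sketch: ask a question about a memory using plain text2text generation.
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_id = "CarlosPR/mt5-spanish-memmories-analysis"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MT5ForConditionalGeneration.from_pretrained(model_id)

# Hypothetical prompt format: a question followed by the memory text as context.
prompt = ("pregunta: ¿Dónde pasó el abuelo su infancia? "
          "contexto: Mi infancia transcurrió en un pueblo junto al mar.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```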
| {} | CarlosPR/mt5-spanish-memmories-analysis | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | CarlosTron/Yo | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Carolhuehuehuehue/Sla | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | CasualHomie/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Cat/Kitty | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Cathy/reranking_model | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
# Cdial/Hausa_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m).
It achieves the following results on the evaluation set (10 percent of the train dataset merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.275118
- Wer: 0.329955
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data:
Common Voice Hausa train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv.
Only points where upvotes exceeded downvotes were kept, and duplicates were removed after concatenating all the datasets given in Common Voice 7.0.
## Training procedure
For creating the training dataset, all available datasets were appended and a 90-10 split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000096
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|------|---------------|-----------------|----------|
| 500 | 5.175900 | 2.750914 | 1.000000 |
| 1000 | 1.028700 | 0.338649 | 0.497999 |
| 1500 | 0.332200 | 0.246896 | 0.402241 |
| 2000 | 0.227300 | 0.239640 | 0.395839 |
| 2500 | 0.175000 | 0.239577 | 0.373966 |
| 3000 | 0.140400 | 0.243272 | 0.356095 |
| 3500 | 0.119200 | 0.263761 | 0.365164 |
| 4000 | 0.099300 | 0.265954 | 0.353428 |
| 4500 | 0.084400 | 0.276367 | 0.349693 |
| 5000 | 0.073700 | 0.282631 | 0.343825 |
| 5500 | 0.068000 | 0.282344 | 0.341158 |
| 6000 | 0.064500 | 0.281591 | 0.342491 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Hausa_xlsr --dataset mozilla-foundation/common_voice_8_0 --config ha --split test
```
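Beyond the `eval.py` command above, the following is a minimal transcription sketch, assuming the checkpoint ships a standard Wav2Vec2 processor and that the input audio is 16 kHz mono.
```python
# Sketch: transcribe a local Hausa audio file with CTC decoding.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Cdial/hausa-asr"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)  # hypothetical input file
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```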
| {"language": ["ha"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ha", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Cdial/Hausa_xlsr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ha"}, "metrics": [{"type": "wer", "value": 0.20614541257934219, "name": "Test WER"}, {"type": "cer", "value": 0.04358048053214061, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ha"}, "metrics": [{"type": "wer", "value": 0.20614541257934219, "name": "Test WER"}, {"type": "cer", "value": 0.04358048053214061, "name": "Test CER"}]}]}]} | Cdial/hausa-asr | null | [
"transformers",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ha",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Cedille AI
Cedille is a project to bring large language models to non-English languages.
## fr-boris
Boris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) codebase.
Boris was trained on around 78B tokens of French text from the [C4](https://huggingface.co/datasets/c4) dataset. We started training from GPT-J, which has been trained on [The Pile](https://pile.eleuther.ai/). As a consequence, the model still performs well in English. Boris makes use of the unmodified GPT-2 tokenizer.
Boris is named after the great French writer [Boris Vian](https://en.wikipedia.org/wiki/Boris_Vian).
# How do I test Cedille?
For the time being, the easiest way to test the model is to use our [publicly accessible playground](https://en.cedille.ai/).
Cedille is a relatively large model and running it in production can get expensive. Consider contacting us for API access at [email protected].
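For reference, a minimal local-generation sketch; the playground above is the easier route, since this 6B-parameter checkpoint needs substantial GPU memory (or low-precision loading) to run locally.
```python
# Sketch: sample a French continuation from fr-boris with the standard causal-LM API.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Cedille/fr-boris")
model = AutoModelForCausalLM.from_pretrained("Cedille/fr-boris")

inputs = tokenizer("La Provence est connue pour", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```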
## 📊 Cedille paper
Our paper is out now! https://arxiv.org/abs/2202.03371
Thanks for citing our work if you make use of Cedille
```bibtex
@misc{muller2022cedille,
title={Cedille: A large autoregressive French language model},
author={Martin M{\"{u}}ller and Florian Laurent},
year={2022},
eprint={2202.03371},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact us
For any custom development please contact us at [email protected].
## Links
* [Official website](https://en.cedille.ai/)
* [Blog](https://en.cedille.ai/blog)
* [GitHub](https://github.com/coteries/cedille-ai)
* [Twitter](https://twitter.com/CedilleAI)
| {"language": "fr", "license": "mit", "tags": ["pytorch", "causal-lm"], "datasets": ["c4"]} | Cedille/fr-boris | null | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"causal-lm",
"fr",
"dataset:c4",
"arxiv:2202.03371",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | dccuchile/albert-base-spanish-finetuned-mldoc | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/albert-base-spanish-finetuned-ner | null | [
"transformers",
"pytorch",
"albert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-base-spanish-finetuned-pawsx | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/albert-base-spanish-finetuned-pos | null | [
"transformers",
"pytorch",
"albert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | dccuchile/albert-base-spanish-finetuned-qa-mlqa | null | [
"transformers",
"pytorch",
"albert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-base-spanish-finetuned-xnli | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-large-spanish-finetuned-mldoc | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/albert-large-spanish-finetuned-ner | null | [
"transformers",
"pytorch",
"albert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-large-spanish-finetuned-pawsx | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/albert-large-spanish-finetuned-pos | null | [
"transformers",
"pytorch",
"albert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | dccuchile/albert-large-spanish-finetuned-qa-mlqa | null | [
"transformers",
"pytorch",
"albert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-large-spanish-finetuned-xnli | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-tiny-spanish-finetuned-mldoc | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/albert-tiny-spanish-finetuned-ner | null | [
"transformers",
"pytorch",
"albert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-tiny-spanish-finetuned-pawsx | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/albert-tiny-spanish-finetuned-pos | null | [
"transformers",
"pytorch",
"albert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | dccuchile/albert-tiny-spanish-finetuned-qa-mlqa | null | [
"transformers",
"pytorch",
"albert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-tiny-spanish-finetuned-xnli | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-xlarge-spanish-finetuned-mldoc | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/albert-xlarge-spanish-finetuned-ner | null | [
"transformers",
"pytorch",
"albert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-xlarge-spanish-finetuned-pawsx | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/albert-xlarge-spanish-finetuned-pos | null | [
"transformers",
"pytorch",
"albert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | dccuchile/albert-xlarge-spanish-finetuned-qa-mlqa | null | [
"transformers",
"pytorch",
"albert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-xlarge-spanish-finetuned-xnli | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-xxlarge-spanish-finetuned-mldoc | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/albert-xxlarge-spanish-finetuned-ner | null | [
"transformers",
"pytorch",
"albert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-xxlarge-spanish-finetuned-pawsx | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/albert-xxlarge-spanish-finetuned-pos | null | [
"transformers",
"pytorch",
"albert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | dccuchile/albert-xxlarge-spanish-finetuned-qa-mlqa | null | [
"transformers",
"pytorch",
"albert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/albert-xxlarge-spanish-finetuned-xnli | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | transformers |
# ALBERT Base Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0008838834765
- Batch Size: 960
- Warmup ratio: 0.00625
- Warmup steps: 53333.33333
- Goal steps: 8533333.333
- Total steps: 3650000
- Total training time (approx.): 70.4 days.
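A minimal loading sketch; this is a pretrained checkpoint intended for fine-tuning, so any downstream head (classification, QA, etc.) is up to you.
```python
# Sketch: load the pretrained encoder and extract contextual embeddings.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dccuchile/albert-base-spanish")
model = AutoModel.from_pretrained("dccuchile/albert-base-spanish")

inputs = tokenizer("La capital de Chile es Santiago.", return_tensors="pt")
hidden_states = model(**inputs).last_hidden_state  # (batch, tokens, hidden_size)
```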
## Training loss
 | {"language": ["es"], "tags": ["albert", "spanish", "OpenCENIA"], "datasets": ["large_spanish_corpus"]} | dccuchile/albert-base-spanish | null | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | transformers |
# ALBERT Large Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.000625
- Batch Size: 512
- Warmup ratio: 0.003125
- Warmup steps: 12500
- Goal steps: 4000000
- Total steps: 1450000
- Total training time (approx.): 42 days.
## Training loss

| {"language": ["es"], "tags": ["albert", "spanish", "OpenCENIA"], "datasets": ["large_spanish_corpus"]} | dccuchile/albert-large-spanish | null | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | transformers |
# ALBERT Tiny Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.00125
- Batch Size: 2048
- Warmup ratio: 0.0125
- Warmup steps: 125000
- Goal steps: 10000000
- Total steps: 8300000
- Total training time (approx.): 58.2 days.
## Training loss
 | {"language": ["es"], "tags": ["albert", "spanish", "OpenCENIA"], "datasets": ["large_spanish_corpus"]} | dccuchile/albert-tiny-spanish | null | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | transformers |
# ALBERT XLarge Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0003125
- Batch Size: 128
- Warmup ratio: 0.00078125
- Warmup steps: 6250
- Goal steps: 8000000
- Total steps: 2775000
- Total training time (approx.): 64.2 days.
## Training loss
 | {"language": ["es"], "tags": ["albert", "spanish", "OpenCENIA"], "datasets": ["large_spanish_corpus"]} | dccuchile/albert-xlarge-spanish | null | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | transformers |
# ALBERT XXLarge Spanish
This is an [ALBERT](https://github.com/google-research/albert) model trained on a [large Spanish corpus](https://github.com/josecannete/spanish-corpora).
The model was trained on a single TPU v3-8 with the following hyperparameters and steps/time:
- LR: 0.0003125
- Batch Size: 128
- Warmup ratio: 0.00078125
- Warmup steps: 3125
- Goal steps: 4000000
- Total steps: 1650000
- Total training time (approx.): 70.7 days.
## Training loss
 | {"language": ["es"], "tags": ["albert", "spanish", "OpenCENIA"], "datasets": ["large_spanish_corpus"]} | dccuchile/albert-xxlarge-spanish | null | [
"transformers",
"pytorch",
"tf",
"albert",
"pretraining",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | dccuchile/bert-base-spanish-wwm-cased-finetuned-mldoc | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/bert-base-spanish-wwm-cased-finetuned-ner | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/bert-base-spanish-wwm-cased-finetuned-pawsx | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/bert-base-spanish-wwm-cased-finetuned-pos | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | dccuchile/bert-base-spanish-wwm-cased-finetuned-qa-mlqa | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/bert-base-spanish-wwm-cased-finetuned-xnli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/bert-base-spanish-wwm-uncased-finetuned-mldoc | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/bert-base-spanish-wwm-uncased-finetuned-ner | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/bert-base-spanish-wwm-uncased-finetuned-pawsx | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/bert-base-spanish-wwm-uncased-finetuned-pos | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | dccuchile/bert-base-spanish-wwm-uncased-finetuned-qa-mlqa | null | [
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/bert-base-spanish-wwm-uncased-finetuned-xnli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/distilbert-base-spanish-uncased-finetuned-mldoc | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/distilbert-base-spanish-uncased-finetuned-ner | null | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/distilbert-base-spanish-uncased-finetuned-pawsx | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | dccuchile/distilbert-base-spanish-uncased-finetuned-pos | null | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | dccuchile/distilbert-base-spanish-uncased-finetuned-qa-mlqa | null | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | dccuchile/distilbert-base-spanish-uncased-finetuned-xnli | null | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {"language": ["es"], "tags": ["distilbert", "spanish", "OpenCENIA"], "datasets": ["large_spanish_corpus"]} | dccuchile/distilbert-base-spanish-uncased | null | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"spanish",
"OpenCENIA",
"es",
"dataset:large_spanish_corpus",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-recipe-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 3.2689 |
| No log | 2.0 | 6 | 3.0913 |
| No log | 3.0 | 9 | 3.0641 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
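A minimal fill-mask sketch; the example sentence is illustrative and not taken from the (unspecified) training data.
```python
# Sketch: query the fine-tuned masked language model through the fill-mask pipeline.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="CennetOguz/distilbert-base-uncased-finetuned-recipe-1")
print(unmasker("Preheat the oven and [MASK] the chicken for 20 minutes."))
```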
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-recipe-1", "results": []}]} | CennetOguz/distilbert-base-uncased-finetuned-recipe-1 | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | {} | CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate-1 | null | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate | null | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-recipe
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 3.2689 |
| No log | 2.0 | 6 | 3.0913 |
| No log | 3.0 | 9 | 3.0641 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-recipe", "results": []}]} | CennetOguz/distilbert-base-uncased-finetuned-recipe | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Certified-Zoomer/DialoGPT-small-rick | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Chaddmckay/Cdm | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Lego Batman DialoGPT Model | {"tags": ["conversational"]} | Chae/botman | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | {} | Chaewon/mmnt_decoder_en | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | Chaewon/mnmt_decoder_en | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Chaewon/mnmt_decoder_en_gpt2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Chaima/TunBerto | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | chainyo/speaker-recognition-meetup | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | ChaitanyaU/FineTuneLM | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Model trained on F.R.I.E.N.D.S dialogue | {"tags": ["conversational"]} | Chakita/Friends | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | Kannada BERT model fine-tuned on a news corpus
---
language:
- kn
thumbnail:
tags:
- Masked Language model
- Autocomplete
license: mit
datasets:
- custom data set of Kannada news
--- | {} | Chakita/KNUBert | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | RoBERTa model trained on a Kannada news corpus. | {"tags": ["masked-lm", "fill-in-the-blanks"]} | Chakita/KROBERT | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"masked-lm",
"fill-in-the-blanks",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Kalbert
This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on a Kannada news dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.5835 | 1.0 | 3953 | 1.7985 |
| 1.6098 | 2.0 | 7906 | 1.7434 |
| 1.5266 | 3.0 | 11859 | 1.6934 |
| 1.5179 | 4.0 | 15812 | 1.6665 |
| 1.5459 | 5.0 | 19765 | 1.6135 |
| 1.5511 | 6.0 | 23718 | 1.6002 |
| 1.5209 | 7.0 | 27671 | 1.5657 |
| 1.5413 | 8.0 | 31624 | 1.5578 |
| 1.4828 | 9.0 | 35577 | 1.5465 |
| 1.4651 | 10.0 | 39530 | 1.5451 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "Kalbert", "results": []}]} | Chakita/Kalbert | null | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | RoBERTa model trained on the OSCAR Kannada corpus. | {"tags": ["masked-lm", "fill-in-the-blanks"]} | Chakita/KannadaBERT | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"masked-lm",
"fill-in-the-blanks",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | {} | Chakita/gpt2_mwp | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
#help why did i feed this bot the bee movie | {"tags": ["conversational"]} | Chalponkey/DialoGPT-small-Barry | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Chan/distilgpt2-finetuned-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Chan/distilroberta-base-finetuned-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Chandanbhat/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | CharlieChen/feedback-bigbird | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Charlotte/text2dm_models | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Charlotte77/model_test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | ChaseBread/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | ChauhanVipul/BERT | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Cheapestmedsshop/Buymodafinilus | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Cheatham/xlm-roberta-base-finetuned | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |