pipeline_tag (string, 48 classes) | library_name (string, 205 classes) | text (string, 0-18.3M chars) | metadata (string, 2-1.07B chars) | id (string, 5-122 chars) | last_modified (null) | tags (sequence, 1-1.84k items) | sha (null) | created_at (string, 25 chars)
---|---|---|---|---|---|---|---|---
text-classification | transformers | {} | Cheatham/xlm-roberta-large-finetuned-d1 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Cheatham/xlm-roberta-large-finetuned-d12 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Cheatham/xlm-roberta-large-finetuned-d12_2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Cheatham/xlm-roberta-large-finetuned-d1r01 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Cheatham/xlm-roberta-large-finetuned-r01 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Cheatham/xlm-roberta-large-finetuned | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Cheatham/xlm-roberta-large-finetuned3 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Cheatham/xlm-roberta-large-finetuned4 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Check/vaw2tmp | null | [
"tensorboard",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | CheonggyeMountain-Sherpa/kogpt-trinity-poem | null | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null |
## Model based on
[Ko-GPT-Trinity 1.2B (v0.5)](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5)
## Example
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained(
"CheonggyeMountain-Sherpa/kogpt-trinity-punct-wrapper",
revision="punct_wrapper-related_words-overfit", # or punct_wrapper-related_words-minevalloss
bos_token="<s>",
eos_token="</s>",
unk_token="<unk>",
pad_token="<pad>",
mask_token="<mask>",
)
model = AutoModelForCausalLM.from_pretrained(
"CheonggyeMountain-Sherpa/kogpt-trinity-punct-wrapper",
revision="punct_wrapper-related_words-overfit", # or punct_wrapper-related_words-minevalloss
pad_token_id=tokenizer.eos_token_id,
).to(device="cuda")
model.eval()
prompt = "석양이 보이는 경치"  # "a landscape with the sunset in view"
wrapped_prompt = f"@{prompt}@<usr>\n"
with torch.no_grad():
    tokens = tokenizer.encode(wrapped_prompt, return_tensors="pt").to(device="cuda")
    gen_tokens = model.generate(
        tokens,
        max_length=64,
        repetition_penalty=2.0,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
        bos_token_id=tokenizer.bos_token_id,
        top_k=16,
        top_p=0.8,
    )
generated = tokenizer.decode(gen_tokens[0][len(tokens[0]):])
print(generated)
# 해가 지고 있을 무렵 (around the time the sun was setting)
# 나는 석양을 보러 갔다 (I went to see the sunset)
# 붉은 하늘과 하얀 구름이 나를 반겨줄 것 같아서리 (it seemed the red sky and white clouds would greet me)
# 하지만 내가 본 해는 저물어만 가고 (but the sun I saw only kept on setting)
# 구름만은 자취를 감춘 어둠만이 남아있을 뿐이다 (only darkness remains, the clouds having hidden their trace)
# 내가 탈 배는 보이지는 않고 (and the boat I am to board is nowhere in sight)
``` | {"language": ["ko"], "license": "cc-by-nc-sa-4.0", "tags": ["gpt2"]} | CheonggyeMountain-Sherpa/kogpt-trinity-punct-wrapper | null | [
"gpt2",
"ko",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Chertilasus/main | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Chester/traffic-rec | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"license": "bsd-3-clause-clear"} | Chikita1/www_stash_stock | null | [
"license:bsd-3-clause-clear",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Chinat/test-classifier | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | This question-answering model was fine-tuned to detect negation expressions.
How to use:
question: negation
context: That is not safe!
Answer: not
question: negation
context: Weren't we going to go to the moon?
Answer: Weren't
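A minimal sketch of querying the model with the standard `transformers` question-answering pipeline, following the usage pattern above:
```python
from transformers import pipeline

# load this model into the standard question-answering pipeline
qa = pipeline("question-answering", model="Ching/negation_detector")

# ask for the negation expression in a context, as described above
result = qa(question="negation", context="That is not safe!")
print(result["answer"])  # expected, per the example above: "not"
```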
| {} | Ching/negation_detector | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Chinmay/mlindia | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
Donald Trump DialoGPT model built by following the tutorial by [Ruolin Zheng](https://youtu.be/Rk8eM1p_xgM).
The training data came from the 2020 presidential debates.
More work is needed to optimize it; I don't have access to a GPU with more VRAM. | {"tags": ["conversational"]} | Chiuchiyin/DialoGPT-small-Donald | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Chiuchiyin/Donald | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | ChoboAvenger/DialoGPT-small-DocBot | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | ChoboAvenger/DialoGPT-small-joshua | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | ChrisP/xlm-roberta-base-finetuned-marc-en | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | # CMJS DialoGPT Model | {"tags": ["conversational"]} | ChrisVCB/DialoGPT-medium-cmjs | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | # Eddie Jones DialoGPT Model | {"tags": ["conversational"]} | ChrisVCB/DialoGPT-medium-ej | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
depth-estimation | null |
# MADNet Keras
MADNet is a deep stereo depth estimation model. Its key defining features are:
1. It has a lightweight architecture, which means low latency.
2. It supports self-supervised training, so it can be conveniently adapted in the field with no training data.
3. It is a stereo depth model, which makes high accuracy possible.
The MADNet weights in this repository were trained using a Tensorflow 2 / Keras implementation of the original code. The model was created using the Keras Functional API, which enables the following features:
1. Good optimization.
2. High level Keras methods (.fit, .predict and .evaluate).
3. Little boilerplate code.
4. Decent support from external packages (like Weights and Biases).
5. Callbacks.
The weights provided were trained on either the 2012/2015 KITTI stereo datasets or the FlyingThings-3D dataset. The weights of the pretrained models from the original paper (tf1_conversion_kitti.h5 and tf1_conversion_synthetic.h5) are provided in TensorFlow 2 format. The TF1 weights help speed up fine-tuning, but it's recommended to use either synthetic.h5 (trained on FlyingThings-3D) or kitti.h5 (trained on the 2012 and 2015 KITTI stereo datasets).
**Abstract**:
Deep convolutional neural networks trained end-to-end are the undisputed state-of-the-art methods to regress dense disparity maps directly from stereo pairs. However, such methods suffer from notable accuracy drops when exposed to scenarios significantly different from those seen in the training phase (e.g. real vs synthetic images, indoor vs outdoor, etc.). As it is unlikely to be able to gather enough samples to achieve effective training/tuning in any target domain, we propose to perform unsupervised and continuous online adaptation of a deep stereo network in order to preserve its accuracy independently of the sensed environment. However, such a strategy can be extremely demanding regarding computational resources and thus not enabling real-time performance. Therefore, we address this side effect by introducing a new lightweight, yet effective, deep stereo architecture Modularly ADaptive Network (MADNet) and by developing Modular ADaptation (MAD), an algorithm to train independently only sub-portions of our model. By deploying MADNet together with MAD we propose the first ever realtime self-adaptive deep stereo system.
## Usage Instructions
See the accompanying code's README for details on how to perform training and inference with the model: [madnet-deep-stereo-with-keras](https://github.com/ChristianOrr/madnet-deep-stereo-with-keras).
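A minimal loading sketch: the weight files can be fetched directly from this repository with `huggingface_hub`. Note that `build_madnet` below is a hypothetical placeholder for the model constructor provided by the accompanying repository.
```python
from huggingface_hub import hf_hub_download

# fetch one of the pretrained weight files from this repository
weights_path = hf_hub_download(
    repo_id="ChristianOrr/madnet_keras",
    filename="kitti.h5",  # or "synthetic.h5" for the FlyingThings-3D weights
)

# hypothetical: construct the Keras functional model with the accompanying repo,
# then load the downloaded weights and run inference on a stereo pair
# model = build_madnet(height=480, width=640)
# model.load_weights(weights_path)
# disparity = model.predict((left_images, right_images))
```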
## Training
### TF1 Kitti and TF1 Synthetic
Training details for the TF1 weights are available in the supplementary material (at the end) of this paper: [Real-time self-adaptive deep stereo](https://arxiv.org/abs/1810.05424)
### Synthetic
The synthetic model was fine-tuned from the TF1 synthetic weights. It was trained on the FlyingThings-3D dataset with the following parameters:
- Steps: 1.5 million
- Learning Rate: 0.0001
- Decay Rate: 0.999
- Minimum Learning Rate Cap: 0.000001
- Batch Size: 1
- Optimizer: Adam
- Image Height: 480
- Image Width: 640
### Kitti
The KITTI model was fine-tuned from the synthetic weights. A TensorBoard events file is available in the logs directory. It was trained on the 2012 and 2015 KITTI stereo datasets with the following parameters:
- Steps: 0.5 million
- Learning Rate: 0.0001
- Decay Rate: 0.999
- Minimum Learning Rate Cap: 0.0000001
- Batch Size: 1
- Optimizer: Adam
- Image Height: 480
- Image Width: 640
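Both schedules amount to an exponentially decaying learning rate with a floor. The sketch below shows one way to express that with the TF2 Keras API; the per-step decay interval is an assumption, since the card does not state it.
```python
import tensorflow as tf

# exponential decay with a minimum learning rate cap, per the parameters above
class FlooredExpDecay(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, initial_lr=1e-4, decay_rate=0.999, min_lr=1e-7):
        self.initial_lr = initial_lr
        self.decay_rate = decay_rate
        self.min_lr = min_lr

    def __call__(self, step):
        lr = self.initial_lr * self.decay_rate ** tf.cast(step, tf.float32)
        return tf.maximum(lr, self.min_lr)

# Adam with batch size 1 was used for both fine-tuning runs
optimizer = tf.keras.optimizers.Adam(learning_rate=FlooredExpDecay())
```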
## BibTeX entry and citation info
```bibtex
@InProceedings{Tonioni_2019_CVPR,
author = {Tonioni, Alessio and Tosi, Fabio and Poggi, Matteo and Mattoccia, Stefano and Di Stefano, Luigi},
title = {Real-time self-adaptive deep stereo},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}
```
```bibtex
@article{Poggi2021continual,
author={Poggi, Matteo and Tonioni, Alessio and Tosi, Fabio
and Mattoccia, Stefano and Di Stefano, Luigi},
title={Continual Adaptation for Deep Stereo},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
year={2021}
}
```
```bibtex
@InProceedings{MIFDB16,
author = "N. Mayer and E. Ilg and P. Hausser and P. Fischer and D. Cremers and A. Dosovitskiy and T. Brox",
title = "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation",
booktitle = "IEEE International Conference on Computer Vision and Pattern Recognition (CVPR)",
year = "2016",
note = "arXiv:1512.02134",
url = "http://lmb.informatik.uni-freiburg.de/Publications/2016/MIFDB16"
}
```
```bibtex
@INPROCEEDINGS{Geiger2012CVPR,
author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2012}
}
```
```bibtex
@INPROCEEDINGS{Menze2015CVPR,
author = {Moritz Menze and Andreas Geiger},
title = {Object Scene Flow for Autonomous Vehicles},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2015}
}
``` | {"license": "apache-2.0", "tags": ["vision", "deep-stereo", "depth-estimation", "Tensorflow2", "Keras"], "datasets": ["flyingthings-3d", "kitti"]} | ChristianOrr/madnet_keras | null | [
"tensorboard",
"vision",
"deep-stereo",
"depth-estimation",
"Tensorflow2",
"Keras",
"dataset:flyingthings-3d",
"dataset:kitti",
"arxiv:1810.05424",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | transformers | # IndoELECTRA (Indonesian ELECTRA Model)
## Model description
ELECTRA is a new method for self-supervised language representation learning. This repository contains a pre-trained ELECTRA Base model (TensorFlow 1.15.0) trained on a large Indonesian corpus (~16 GB of raw text, ~2B Indonesian words).
IndoELECTRA is a pre-trained language model based on the ELECTRA architecture for the Indonesian language.
This is the base version, which uses the electra-base config.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ChristopherA08/IndoELECTRA")
model = AutoModel.from_pretrained("ChristopherA08/IndoELECTRA")
tokenizer.encode("hai aku mau makan.")
# => [2, 8078, 1785, 2318, 1946, 18, 4]
```
## Training procedure
The model was trained using Google's original TensorFlow code on an eight-core Google Cloud TPU v2.
We used a Google Cloud Storage bucket for persistent storage of training data and models.
| {"language": "id", "datasets": ["oscar"]} | ChristopherA08/IndoELECTRA | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"id",
"dataset:oscar",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Harry Potter DialoGPT MOdel | {"tags": ["conversational"]} | Chuah/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Dr. Fauci DialoGPT Model | {"tags": ["conversational"]} | ChukSamuels/DialoGPT-small-Dr.FauciBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | {} | Chun/DialoGPT-large-dailydialog | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | Chun/DialoGPT-medium-dailydialog | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | Chun/DialoGPT-small-dailydialog | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Chun/w-en2zh-hsk | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Chun/w-en2zh-mtm | null | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Chun/w-en2zh-otm | null | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Chun/w-zh2en-hsk | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Chun/w-zh2en-mtm | null | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | {} | Chun/w-zh2en-mto | null | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Chungu424/DATA | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Chungu424/qazwsx | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Chungu424/repo | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Chungu424/repodata | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Chuu/Chumar | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Ci/Pai | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | copied from boris | {} | Cilan/dalle-knockoff | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | transformers |
## Japanese ELECTRA-small
We provide a Japanese **ELECTRA-Small** model, as described in [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
Our pretraining process employs subword units derived from the [Japanese Wikipedia](https://dumps.wikimedia.org/jawiki/latest), using the [Byte-Pair Encoding](https://www.aclweb.org/anthology/P16-1162.pdf) method and building on an initial tokenization with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd). For optimal performance, please take care to set your MeCab dictionary appropriately.
## How to use the discriminator in `transformers`
```python
from transformers import BertJapaneseTokenizer, ElectraForPreTraining
tokenizer = BertJapaneseTokenizer.from_pretrained('Cinnamon/electra-small-japanese-discriminator', mecab_kwargs={"mecab_option": "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"})
model = ElectraForPreTraining.from_pretrained('Cinnamon/electra-small-japanese-discriminator')
```
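A minimal follow-up sketch (the example sentence is arbitrary): the discriminator emits one logit per token, and a positive logit marks a token it judges to have been replaced by the generator.
```python
import torch

inputs = tokenizer("この記事は日本語で書かれています。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one score per token

# positive scores flag tokens the discriminator believes were replaced
flags = (logits > 0).int().squeeze(0).tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
print(list(zip(tokens, flags)))
```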
| {"language": "ja", "license": "apache-2.0"} | Cinnamon/electra-small-japanese-discriminator | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"ja",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | ## Japanese ELECTRA-small
We provide a Japanese **ELECTRA-Small** model, as described in [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
Our pretraining process employs subword units derived from the [Japanese Wikipedia](https://dumps.wikimedia.org/jawiki/latest), using the [Byte-Pair Encoding](https://www.aclweb.org/anthology/P16-1162.pdf) method and building on an initial tokenization with [mecab-ipadic-NEologd](https://github.com/neologd/mecab-ipadic-neologd). For optimal performance, please take care to set your MeCab dictionary appropriately.
```python
# ELECTRA-small generator usage
from transformers import BertJapaneseTokenizer, ElectraForMaskedLM
tokenizer = BertJapaneseTokenizer.from_pretrained('Cinnamon/electra-small-japanese-generator', mecab_kwargs={"mecab_option": "-d /usr/lib/x86_64-linux-gnu/mecab/dic/mecab-ipadic-neologd"})
model = ElectraForMaskedLM.from_pretrained('Cinnamon/electra-small-japanese-generator')
```
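A minimal sketch of filling a masked position with the generator (the example sentence is arbitrary):
```python
import torch

inputs = tokenizer("今日は[MASK]に行きます。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# take the top 5 predictions at the masked position
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```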
| {"language": "ja"} | Cinnamon/electra-small-japanese-generator | null | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"ja",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Ciruzzo/DialoGPT-medium-harrypotter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Ciruzzo/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Ciruzzo/DialoGPT-small-hattypotter | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Clarianliz30/Caitlyn | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | # RickBot built for [Chai](https://chai.ml/)
Make your own [here](https://colab.research.google.com/drive/1o5LxBspm-C28HQvXN-PRQavapDbm5WjG?usp=sharing)
| {"tags": ["conversational"]} | ClaudeCOULOMBE/RickBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
zero-shot-classification | transformers | ETH Zeroshot | {"datasets": ["multi_nli"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "ETH", "candidate_labels": "Location & Address, Employment, Organizational, Name, Service, Studies, Science", "hypothesis_template": "This is {}."}]} | ClaudeYang/awesome_fb_model | null | [
"transformers",
"pytorch",
"bart",
"text-classification",
"zero-shot-classification",
"dataset:multi_nli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | CleveGreen/FieldClassifier | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | CleveGreen/FieldClassifier_v2 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | CleveGreen/FieldClassifier_v2_gpt | null | [
"transformers",
"pytorch",
"gpt2",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | CleveGreen/JobClassifier | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | CleveGreen/JobClassifier_v2 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | CleveGreen/JobClassifier_v2_gpt | null | [
"transformers",
"pytorch",
"gpt2",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Clint/clinton | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {"tags": ["conversational"]} | Cloudy/DialoGPT-CJ-large | null | [
"transformers",
"pytorch",
"conversational",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | null |
# My Awesome Model
| {"tags": ["conversational"]} | ClydeWasTaken/DialoGPT-small-joshua | null | [
"conversational",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | CoShin/XLM-roberta-large_ko_en_nil_sts | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | CoachCarter/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | CoachCarter/distilbert-base-uncased | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Cartman DialoGPT Model | {"tags": ["conversational"]} | CodeDanCode/CartmenBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# SouthPark Kyle Bot
| {"tags": ["conversational"]} | CodeDanCode/SP-KyleBot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | CodeMonkey98/distilroberta-base-finetuned-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | CodeNinja1126/bert-p-encoder | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | CodeNinja1126/bert-q-encoder | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | CodeNinja1126/koelectra-model | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | CodeNinja1126/test-model | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | {} | CodeNinja1126/xlm-roberta-large-kor-mrc | null | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | CoderBoy432/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-marxbot")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-marxbot")
# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    # pretty print last output tokens from bot
    print("MarxBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` | {"tags": ["conversational"]} | CoderEFE/DialoGPT-marxbot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | {} | CoderEFE/DialoGPT-medium-marx | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Venkatakrishnan-Ramesh/Text_gen | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | CoffeeAddict93/gpt1-call-of-the-wild | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | CoffeeAddict93/gpt1-modest-proposal | null | [
"transformers",
"pytorch",
"openai-gpt",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | CoffeeAddict93/gpt2-call-of-the-wild | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | CoffeeAddict93/gpt2-medium-call-of-the-wild | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | CoffeeAddict93/gpt2-medium-modest-proposal | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | CoffeeAddict93/gpt2-modest-proposal | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
# bart-faithful-summary-detector
## Model description
A BART (base) model trained to classify whether a summary is *faithful* to the original article. See our [paper in NAACL'21](https://www.seas.upenn.edu/~sihaoc/static/pdf/CZSR21.pdf) for details.
## Usage
Concatenate a summary and a source document as input (note that the summary needs to be the **first** sentence).
Here's an example usage (with PyTorch):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CogComp/bart-faithful-summary-detector")
model = AutoModelForSequenceClassification.from_pretrained("CogComp/bart-faithful-summary-detector")
article = "Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."
bad_summary = "Ban Ki-moon was elected for a second term in 2007."
good_summary = "Ban Ki-moon was elected for a second term in 2011."
bad_pair = tokenizer(text=bad_summary, text_pair=article, return_tensors='pt')
good_pair = tokenizer(text=good_summary, text_pair=article, return_tensors='pt')
bad_score = model(**bad_pair)
good_score = model(**good_pair)
print(good_score[0][:, 1] > bad_score[0][:, 1]) # True, label mapping: "0" -> "Hallucinated" "1" -> "Faithful"
```
### BibTeX entry and citation info
```bibtex
@inproceedings{CZSR21,
author = {Sihao Chen and Fan Zhang and Kazoo Sone and Dan Roth},
title = {{Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection}},
booktitle = {NAACL},
year = {2021}
}
``` | {"language": ["en"], "license": "cc-by-sa-4.0", "tags": ["text-classification", "bart", "xsum"], "datasets": ["xsum"], "thumbnail": "https://cogcomp.seas.upenn.edu/images/logo.png", "widget": [{"text": "<s> Ban Ki-moon was elected for a second term in 2007. </s></s> Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."}, {"text": "<s> Ban Ki-moon was elected for a second term in 2011. </s></s> Ban Ki-Moon was re-elected for a second term by the UN General Assembly, unopposed and unanimously, on 21 June 2011."}]} | CogComp/bart-faithful-summary-detector | null | [
"transformers",
"pytorch",
"jax",
"bart",
"text-classification",
"xsum",
"en",
"dataset:xsum",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | # roberta-temporal-predictor
A RoBERTa-base model that is fine-tuned on the [The New York Times Annotated Corpus](https://catalog.ldc.upenn.edu/LDC2008T19)
to predict the temporal precedence of two events. This is used as the "temporality prediction" component
in our ROCK framework for reasoning about commonsense causality. See our [paper](https://arxiv.org/abs/2202.00436) for more details.
# Usage
You can directly use this model for filling-mask tasks, as shown in the example widget.
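For instance, with the fill-mask pipeline and one of the widget examples:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="CogComp/roberta-temporal-predictor")
# the mask sits between the two events; "before"/"after" scores indicate precedence
unmasker("The man turned on the faucet <mask> water flows out.")
```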
However, for better temporal inference, it is recommended to symmetrize the outputs as

$$
P(E_1 \prec E_2) = \frac{1}{2} \left( f(E_1, E_2) + f(E_2, E_1) \right)
$$

where `f(E_1, E_2)` denotes the predicted probability for `E_1` to occur preceding `E_2`.
For simplicity, we implement the following `TempPredictor` class, which incorporates this symmetrization automatically.
Below is an example usage of the `TempPredictor` class:
```python
from transformers import (RobertaForMaskedLM, RobertaTokenizer)
from src.temp_predictor import TempPredictor
TORCH_DEV = "cuda:0" # change as needed
tp_roberta_ft = TempPredictor(
model=RobertaForMaskedLM.from_pretrained("CogComp/roberta-temporal-predictor"),
tokenizer=RobertaTokenizer.from_pretrained("CogComp/roberta-temporal-predictor"),
device=TORCH_DEV
)
E1 = "The man turned on the faucet."
E2 = "Water flows out."
t12 = tp_roberta_ft(E1, E2, top_k=5)
print(f"P('{E1}' before '{E2}'): {t12}")
```
# BibTeX entry and citation info
```bib
@misc{zhang2022causal,
title={Causal Inference Principles for Reasoning about Commonsense Causality},
author={Jiayao Zhang and Hongming Zhang and Dan Roth and Weijie J. Su},
year={2022},
eprint={2202.00436},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | {"license": "mit", "widget": [{"text": "The man turned on the faucet <mask> water flows out."}, {"text": "The woman received her pension <mask> she retired."}]} | CogComp/roberta-temporal-predictor | null | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.00436",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | CohleM/bert-nepali-tokenizer | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | CohleM/mbert-nepali-tokenizer | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | Coldestadam/Breakout_Mentors_SpongeBob_Model | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | This model was taken from [this site](https://huggingface.co/gpt2-medium).
It is used by the [Teachable NLP](https://ainize.ai/teachable-nlp) service.
| {} | ComCom/gpt2-large | null | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers | This model was taken from [this site](https://huggingface.co/gpt2-medium).
It is used by the [Teachable NLP](https://ainize.ai/teachable-nlp) service.
| {} | ComCom/gpt2-medium | null | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers | This model was taken from [this site](https://huggingface.co/gpt2).
It is used by the [Teachable NLP](https://ainize.ai/teachable-nlp) service. | {} | ComCom/gpt2 | null | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | ComCom-Dev/gpt2-bible-test | null | [
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Cometasonmi451/Mine | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# neurotitle-rugpt3-small
A model based on [ruGPT-3](https://huggingface.co/sberbank-ai) for generating scientific paper titles.
Trained on the [All NeurIPS (NIPS) Papers](https://www.kaggle.com/rowhitswami/nips-papers-1987-2019-updated) dataset.
Use it exclusively as a crazier alternative to SCIgen.
## Made with Cometrain AlphaML & AutoCode
This model was automatically fine-tuned using the Cometrain AlphaML framework and tested with a CI/CD pipeline made by Cometrain AutoCode.
## Cometrain AlphaML command
```shell
$ cometrain create --name neurotitle --model auto --task task_0x2231.txt --output transformers
```
## Use with Transformers
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model="CometrainResearch/neurotitle-rugpt3-small")
generator("BERT:", max_length=50)
```
| {"language": ["ru", "en"], "license": "mit", "tags": ["Cometrain AutoCode", "Cometrain AlphaML"], "datasets": ["All-NeurIPS-Papers-Scraper"], "widget": [{"text": "NIPSE:", "example_title": "NIPS"}, {"text": "Learning CNN", "example_title": "Learning CNN"}, {"text": "ONNX:", "example_title": "ONNX"}, {"text": "BERT:", "example_title": "BERT"}], "inference": {"parameters": {"temperature": 0.9}}} | cometrain/neurotitle-rugpt3-small | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"Cometrain AutoCode",
"Cometrain AlphaML",
"ru",
"en",
"dataset:All-NeurIPS-Papers-Scraper",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Rick DialoGPT Model | {"tags": ["conversational"]} | Connor/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | Connor-tech/bert_cn_finetuning | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Enlightened GPT model | {"tags": ["conversational"]} | Connorvr/BrightBot-small | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
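For reference, a sketch of the equivalent `TrainingArguments` (the mapping of the listed values onto argument names is an assumption; nothing else about the setup is documented):
```python
from transformers import TrainingArguments

# assumed mapping of the hyperparameters listed above onto Trainer arguments
args = TrainingArguments(
    output_dir="model",
    learning_rate=5e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```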
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "model", "results": []}]} | Connorvr/TeachingGen | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | ConstellationBoi/Oop | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Contrastive-Tension/BERT-Base-CT-STSb | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Contrastive-Tension/BERT-Base-CT | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |