modelId (string) | sha (string) | lastModified (string) | tags (sequence) | pipeline_tag (string) | private (bool) | author (string) | config (null) | id (string) | downloads (float64) | likes (float64) | library_name (string) | __index_level_0__ (int64) | readme (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
monsoon-nlp/bangla-electra | 6b1473a54f66add692a6e106e91e1212a9ccb145 | 2020-07-29T07:58:53.000Z | [
"pytorch",
"tf",
"electra",
"bn",
"arxiv:2004.07807",
"transformers"
] | null | false | monsoon-nlp | null | monsoon-nlp/bangla-electra | 62 | 1 | transformers | 5,600 | ---
language: bn
---
# Bangla-Electra
This is a second attempt at a Bangla/Bengali language model trained with
Google Research's [ELECTRA](https://github.com/google-research/electra).
Tokenization and pre-training CoLab: https://colab.research.google.com/drive/1gpwHvXAnNQaqcu-YNx1kafEVxz07g2jL
V1 - 120,000 steps; V2 - 190,000 steps
## Classification
Classification with SimpleTransformers: https://colab.research.google.com/drive/1vltPI81atzRvlALv4eCvEB0KdFoEaCOb
On Soham Chatterjee's [news classification task](https://github.com/soham96/Bangla2Vec):
(Random: 16.7%, mBERT: 72.3%, Bangla-Electra: 82.3%)
Similar to mBERT on some tasks and configurations described in https://arxiv.org/abs/2004.07807
## Question Answering
This model can be used for Question Answering - this notebook uses Bangla questions from Google's TyDi dataset:
https://colab.research.google.com/drive/1i6fidh2tItf_-IDkljMuaIGmEU6HT2Ar
## Corpus
Trained on a web crawl from https://oscar-corpus.com/ (deduped version, 5.8GB) and 1 July 2020 dump of bn.wikipedia.org (414MB)
## Vocabulary
The vocabulary is included as vocab.txt in this repository; the vocab_size is 29898.
|
nvidia/segformer-b0-finetuned-cityscapes-640-1280 | 335a691b445d6cf38370a42546c037d8ff978685 | 2022-07-20T09:54:18.000Z | [
"pytorch",
"tf",
"segformer",
"dataset:cityscapes",
"arxiv:2105.15203",
"transformers",
"vision",
"image-segmentation",
"license:apache-2.0"
] | image-segmentation | false | nvidia | null | nvidia/segformer-b0-finetuned-cityscapes-640-1280 | 62 | null | transformers | 5,601 | ---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- cityscapes
widget:
- src: https://www.researchgate.net/profile/Anurag-Arnab/publication/315881952/figure/fig5/AS:667673876779033@1536197265755/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.jpg
example_title: road
---
# SegFormer (b0-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 640x1280. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-640-1280")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-640-1280")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
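As a follow-up, the logits can be upsampled to the original image size and converted into a per-pixel class map. The snippet below is a minimal post-processing sketch (not from the original card) and only assumes the `logits` and `image` objects from the example above:
```python
import torch

# Upsample the logits to the original image resolution (PIL size is (width, height))
upsampled_logits = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # (height, width)
    mode="bilinear",
    align_corners=False,
)

# Pick the most likely class for every pixel
segmentation_map = upsampled_logits.argmax(dim=1)[0]
print(segmentation_map.shape)  # torch.Size([height, width])
```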
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
pszemraj/pegasus-large-summary-explain | 5b28618565079b7701b586f01e31f3d294067c18 | 2022-07-15T21:00:36.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:kmfoda/booksum",
"transformers",
"summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | pszemraj | null | pszemraj/pegasus-large-summary-explain | 62 | 1 | transformers | 5,602 | ---
language:
- en
tags:
- summarization
- pegasus
license: apache-2.0
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
example_title: earthquakes
- text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates\
\ are fed into a neural network that predicts values in the reconstructed domain.\
\ Then, this domain is mapped to the sensor domain where sensor measurements are\
\ available as supervision. Class and Section Problems Addressed Generalization\
\ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid\
\ Representations (Section 3) Computation & memory efficiency, representation\
\ capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture\
\ (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields\
\ (Section 6) Edit ability, constraints, regularization. Table 2: The five classes\
\ of techniques in the neural field toolbox each addresses problems that arise\
\ in learning, inference, and control. (Section 3). We can supervise reconstruction\
\ via differentiable forward maps that transform Or project our domain (e.g, 3D\
\ reconstruction via 2D images; Section 4) With appropriate network architecture\
\ choices, we can overcome neural network spectral biases (blurriness) and efficiently\
\ compute derivatives and integrals (Section 5). Finally, we can manipulate neural\
\ fields to add constraints and regularizations, and to achieve editable representations\
\ (Section 6). Collectively, these classes constitute a 'toolbox' of techniques\
\ to help solve problems with neural fields There are three components in a conditional\
\ neural field: (1) An encoder or inference function \u20AC that outputs the conditioning\
\ latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional\
\ vector, and is often referred to aS a latent code Or feature code_ (2) A mapping\
\ function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural\
\ field itself $. The encoder \u20AC finds the most probable z given the observations\
\ O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability\
\ to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding\
\ schemes with different optimality guarantees (Section 2.1.1), both global and\
\ local conditioning (Section 2.1.2), and different mapping functions Y (Section\
\ 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface\
\ shape given a partial or noisy point cloud. We need a suitable prior over the\
\ sur- face in its reconstruction domain to generalize to the partial observations.\
\ A neural network expresses a prior via the function space of its architecture\
\ and parameters 0, and generalization is influenced by the inductive bias of\
\ this function space (Section 5)."
example_title: scientific paper
- text: ' the big variety of data coming from diverse sources is one of the key properties
of the big data phenomenon. It is, therefore, beneficial to understand how data
is generated in various environments and scenarios, before looking at what should
be done with this data and how to design the best possible architecture to accomplish
this The evolution of IT architectures, described in Chapter 2, means that the
data is no longer processed by a few big monolith systems, but rather by a group
of services In parallel to the processing layer, the underlying data storage has
also changed and became more distributed This, in turn, required a significant
paradigm shift as the traditional approach to transactions (ACID) could no longer
be supported. On top of this, cloud computing is becoming a major approach with
the benefits of reducing costs and providing on-demand scalability but at the
same time introducing concerns about privacy, data ownership, etc In the meantime
the Internet continues its exponential growth: Every day both structured and unstructured
data is published and available for processing: To achieve competitive advantage
companies have to relate their corporate resources to external services, e.g.
financial markets, weather forecasts, social media, etc While several of the sites
provide some sort of API to access the data in a more orderly fashion; countless
sources require advanced web mining and Natural Language Processing (NLP) processing
techniques: Advances in science push researchers to construct new instruments
for observing the universe O conducting experiments to understand even better
the laws of physics and other domains. Every year humans have at their disposal
new telescopes, space probes, particle accelerators, etc These instruments generate
huge streams of data, which need to be stored and analyzed. The constant drive
for efficiency in the industry motivates the introduction of new automation techniques
and process optimization: This could not be done without analyzing the precise
data that describe these processes. As more and more human tasks are automated,
machines provide rich data sets, which can be analyzed in real-time to drive efficiency
to new levels. Finally, it is now evident that the growth of the Internet of Things
is becoming a major source of data. More and more of the devices are equipped
with significant computational power and can generate a continuous data stream
from their sensors. In the subsequent sections of this chapter, we will look at
the domains described above to see what they generate in terms of data sets. We
will compare the volumes but will also look at what is characteristic and important
from their respective points of view. 3.1 The Internet is undoubtedly the largest
database ever created by humans. While several well described; cleaned, and structured
data sets have been made available through this medium, most of the resources
are of an ambiguous, unstructured, incomplete or even erroneous nature. Still,
several examples in the areas such as opinion mining, social media analysis, e-governance,
etc, clearly show the potential lying in these resources. Those who can successfully
mine and interpret the Internet data can gain unique insight and competitive advantage
in their business An important area of data analytics on the edge of corporate
IT and the Internet is Web Analytics.'
example_title: data science textbook
- text: "Transformer-based models have shown to be very useful for many NLP tasks.\
\ However, a major limitation of transformers-based models is its O(n^2)O(n 2)\
\ time & memory complexity (where nn is sequence length). Hence, it's computationally\
\ very expensive to apply transformer-based models on long sequences n > 512n>512.\
\ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention\
\ try to remedy this problem by approximating the full attention matrix. You can\
\ checkout \U0001F917's recent blog post in case you are unfamiliar with these\
\ models.\nBigBird (introduced in paper) is one of such recent models to address\
\ this issue. BigBird relies on block sparse attention instead of normal attention\
\ (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a\
\ much lower computational cost compared to BERT. It has achieved SOTA on various\
\ tasks involving very long sequences such as long documents summarization, question-answering\
\ with long contexts.\nBigBird RoBERTa-like model is now available in \U0001F917\
Transformers. The goal of this post is to give the reader an in-depth understanding\
\ of big bird implementation & ease one's life in using BigBird with \U0001F917\
Transformers. But, before going into more depth, it is important to remember that\
\ the BigBird's attention is an approximation of BERT's full attention and therefore\
\ does not strive to be better than BERT's full attention, but rather to be more\
\ efficient. It simply allows to apply transformer-based models to much longer\
\ sequences since BERT's quadratic memory requirement quickly becomes unbearable.\
\ Simply put, if we would have \u221E compute & \u221E time, BERT's attention\
\ would be preferred over block sparse attention (which we are going to discuss\
\ in this post).\nIf you wonder why we need more compute when working with longer\
\ sequences, this blog post is just right for you!\nSome of the main questions\
\ one might have when working with standard BERT-like attention include:\nDo all\
\ tokens really have to attend to all other tokens? Why not compute attention\
\ only over important tokens? How to decide what tokens are important? How to\
\ attend to just a few tokens in a very efficient way? In this blog post, we will\
\ try to answer those questions.\nWhat tokens should be attended to? We will give\
\ a practical example of how attention works by considering the sentence 'BigBird\
\ is now available in HuggingFace for extractive question answering'. In BERT-like\
\ attention, every word would simply attend to all other tokens.\nLet's think\
\ about a sensible choice of key tokens that a queried token actually only should\
\ attend to by writing some pseudo-code. Will will assume that the token available\
\ is queried and build a sensible list of key tokens to attend to.\n>>> # let's\
\ consider following sentence as an example >>> example = ['BigBird', 'is', 'now',\
\ 'available', 'in', 'HuggingFace', 'for', 'extractive', 'question', 'answering']\n\
>>> # further let's assume, we're trying to understand the representation of 'available'\
\ i.e. >>> query_token = 'available' >>> # We will initialize an empty `set` and\
\ fill up the tokens of our interest as we proceed in this section. >>> key_tokens\
\ = [] # => currently 'available' token doesn't have anything to attend Nearby\
\ tokens should be important because, in a sentence (sequence of words), the current\
\ word is highly dependent on neighboring past & future tokens. This intuition\
\ is the idea behind the concept of sliding attention."
example_title: bigbird blog intro
inference:
parameters:
max_length: 64
no_repeat_ngram_size: 2
encoder_no_repeat_ngram_size: 3
repetition_penalty: 2.4
length_penalty: 0.5
num_beams: 4
early_stopping: true
model-index:
- name: pszemraj/pegasus-large-summary-explain
results:
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 29.1023
verified: true
- name: ROUGE-2
type: rouge
value: 6.2441
verified: true
- name: ROUGE-L
type: rouge
value: 14.7503
verified: true
- name: ROUGE-LSUM
type: rouge
value: 27.2375
verified: true
- name: loss
type: loss
value: 2.979011058807373
verified: true
- name: gen_len
type: gen_len
value: 467.269
verified: true
---
# pszemraj/pegasus-large-summary-explain
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the [booksum](https://github.com/salesforce/booksum) dataset for four total epochs.
It achieves the following results on the evaluation set:
- eval_loss: 1.1193
- eval_runtime: 6.6754
- eval_samples_per_second: 27.714
- eval_steps_per_second: 1.798
- epoch: 3.0
- step: 900
A 1-epoch checkpoint can be found at [pszemraj/pegasus-large-book-summary](https://huggingface.co/pszemraj/pegasus-large-book-summary), which is where the second training session started from.
## Model description
- After some initial tests, it was found that models trained on the [booksum](https://github.com/salesforce/booksum) dataset seem to inherit the summaries' SparkNotes-style explanations; the user therefore gets a shorter and easier-to-understand version of the text, rather than one that is **just** more compact.
- This quality is (anecdotally) favourable for learning/comprehension, because summarization datasets that simply make the information more compact (* cough * arXiv) can be so dense that the time spent trying to _comprehend_ the summary can match the time spent just reading the original material.
## Intended uses & limitations
- Standard PEGASUS has a maximum input length of 1024 tokens, so during training the model only saw the first 1024 tokens of each chapter and learned to produce the chapter's summary from that. Keep this in mind when using this model: for inputs longer than 1024 tokens, information near the end may be excluded from the final summary, and the model will be biased towards information presented first. A minimal usage sketch is shown below.
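The sketch below uses the 🤗 `pipeline` API; the input text is taken from the "earthquakes" widget example above, and the generation settings mirror the widget's inference parameters (illustrative, not prescriptive):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="pszemraj/pegasus-large-summary-explain")

# Input taken from the "earthquakes" widget example above (shortened)
text = (
    "Large earthquakes along a given fault segment do not occur at random intervals "
    "because it takes time to accumulate the strain energy for the rupture. The rates "
    "at which tectonic plates move and accumulate strain at their boundaries are "
    "approximately uniform."
)

result = summarizer(
    text,
    max_length=64,
    no_repeat_ngram_size=2,
    repetition_penalty=2.4,
    num_beams=4,
    early_stopping=True,
)
print(result[0]["summary_text"])
```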
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
textattack/facebook-bart-large-MNLI | a9bbf281f3c2ea56317e31551b7f3161412906c7 | 2020-06-09T16:49:34.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/facebook-bart-large-MNLI | 62 | null | transformers | 5,603 | Entry not found |
timo/timo-BART-german | 168cf21134bb84888174af675f26de45b4803d3d | 2020-10-28T19:09:26.000Z | [
"pytorch",
"fsmt",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | timo | null | timo/timo-BART-german | 62 | null | transformers | 5,604 | Entry not found |
clips/contact | c87cdcee36a7de0bb572b5201b9ec1795b8f7925 | 2022-03-15T12:57:53.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2203.07362",
"transformers"
] | feature-extraction | false | clips | null | clips/contact | 62 | null | transformers | 5,605 | # CoNTACT
### Model description
<u>Co</u>ntextual <u>N</u>eural <u>T</u>ransformer <u>A</u>dapted to <u>C</u>OVID-19 <u>T</u>weets or **CoNTACT** is a Dutch RobBERT model (```pdelobelle/robbert-v2-dutch-base```) adapted to the domain of COVID-19 tweets. The model was developed at [CLiPS](https://www.uantwerpen.be/en/research-groups/clips/) by Jens Lemmens, Jens Van Nooten, Tim Kreutz and Walter Daelemans. A full description of the model, the data that was used and the experiments that were conducted can be found in this ArXiv preprint: https://arxiv.org/abs/2203.07362
### Intended use
The model was developed with the intention of achieving high results on NLP tasks involving Dutch social media messages related to COVID-19.
### How to use
CoNTACT should be fine-tuned on a downstream task. This can be achieved by referring to ```clips/contact``` in the ```--model_name_or_path``` argument in Huggingface/Transformers' example scripts, or by loading CoNTACT (as shown below) and fine-tuning it using your own code:
```
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('clips/contact')
tokenizer = AutoTokenizer.from_pretrained('clips/contact')
...
```
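As an illustrative follow-up (not part of the original instructions), the loaded model can also be used directly as a feature extractor. The sketch below assumes simple mean pooling over the last hidden state; the Dutch example sentence is invented for demonstration:
```
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('clips/contact')
tokenizer = AutoTokenizer.from_pretrained('clips/contact')

sentences = ['Ik twijfel nog over het vaccin.']  # illustrative tweet-style input
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into one sentence embedding
mask = inputs['attention_mask'].unsqueeze(-1)
embeddings = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embeddings.shape)  # (1, hidden_size)
```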
### Training data
CoNTACT was trained on 2.8M Dutch tweets related to COVID-19 that were posted in 2021.
### Training Procedure
The model's pre-training phase was extended by performing Masked Language Modeling (MLM) on the training data described above. This was done for 4 epochs, using the largest possible batch size that fit into working memory (32).
### Evaluation
The model was evaluated on two tasks using data from two social media platforms: Twitter and Facebook. Task 1 involved the binary classification of COVID-19 vaccine stance (hesitant vs. not hesitant), whereas task 2 consisted of the multilabel, multiclass classification of arguments for vaccine hesitancy. CoNTACT outperformed out-of-the-box RobBERT in virtually all our experiments, and with statistical significance in most cases.
### How to cite
```
@misc{lemmens2022contact,
title={CoNTACT: A Dutch COVID-19 Adapted BERT for Vaccine Hesitancy and Argumentation Detection},
author={Jens Lemmens and Jens Van Nooten and Tim Kreutz and Walter Daelemans},
year={2022},
eprint={2203.07362},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
NbAiLab/nb-gpt-j-6B | 3c6134eda729fba015f91e8a1776e9b4592a2b55 | 2022-06-17T12:04:34.000Z | [
"pytorch",
"gptj",
"text-generation",
"no",
"nb",
"nn",
"dataset:NbAiLab/NCC",
"dataset:mc4",
"dataset:oscar",
"arxiv:2104.09864",
"arxiv:2101.00027",
"transformers",
"causal-lm",
"license:apache-2.0"
] | text-generation | false | NbAiLab | null | NbAiLab/nb-gpt-j-6B | 62 | 4 | transformers | 5,606 | ---
language:
- no
- nb
- nn
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- NbAiLab/NCC
- mc4
- oscar
---
- **Release v1beta2** (June 18th, 2022) *[Full-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta2), [sharded](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/sharded), and [half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta2-float16) weights*
- **Release v1beta1** (April 28th, 2022) *[Half-precision](https://huggingface.co/NbAiLab/nb-gpt-j-6B/tree/v1beta1-float16) weights*
# NB-GPT-J-6B
## Demo: https://ai.nb.no/demo/nb-gpt-j-6B/ (Be patient, it runs on CPU 😅)
## Model Description
NB-GPT-J-6B is a Norwegian finetuned version of GPT-J 6B, a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters (6 billion parameters).
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \(n_{parameters}\) | 6053381344 |
| \(n_{layers}\) | 28* |
| \(d_{model}\) | 4096 |
| \(d_{ff}\) | 16384 |
| \(n_{heads}\) | 16 |
| \(d_{head}\) | 256 |
| \(n_{ctx}\) | 2048 |
| \(n_{vocab}\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
NB-GPT-J-6B was finetuned on [NCC](https://huggingface.co/datasets/NbAiLab/NCC), the Norwegian Colossal Corpus, plus other Internet sources like Wikipedia, mC4, and OSCAR.
## Training procedure
This model was finetuned for 17 billion tokens over 193,000 steps on a TPU v3-8 VM. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Intended Use and Limitations
NB-GPT-J-6B learns an inner representation of the Norwegian language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("NbAiLab/nb-gpt-j-6B")
```
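For reference, here is a minimal generation sketch that reuses the `tokenizer` and `model` loaded above. The Norwegian prompt and the sampling settings are illustrative and not from the original authors:
```python
prompt = "Norge er et land i"  # "Norway is a country in" (illustrative prompt)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample a short continuation; the generation parameters are illustrative defaults
output_ids = model.generate(
    input_ids,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```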
### Limitations and Biases
As with the original GPT-J model, the core functionality of NB-GPT-J-6B is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting NB-GPT-J-6B, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon NB-GPT-J-6B to produce factually accurate output.
The original GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. A fine-grained analysis of the bias contained in the corpus used for fine-tuning is still pending.
As with all language models, it is hard to predict in advance how NB-GPT-J-6B will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
We still have to find proper datasets to evaluate the model, so help is welcome!
## Citation and Related Information
### BibTeX entry
To cite this model or the corpus used:
```bibtex
@inproceedings{kummervold2021operationalizing,
title={Operationalizing a National Digital Library: The Case for a Norwegian Transformer Model},
author={Kummervold, Per E and De la Rosa, Javier and Wetjen, Freddy and Brygfjeld, Svein Arne},
booktitle={Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)},
pages={20--29},
year={2021},
url={https://aclanthology.org/2021.nodalida-main.3/}
}
```
If you use this model, we would love to hear about it! Reach out on twitter, GitHub, Discord, or shoot us an email.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. Special thanks to [Stella Biderman](https://www.stellabiderman.com) for her general openness, and to [Ben Wang](https://github.com/kingoflolz/mesh-transformer-jax) for the main codebase.
|
KBLab/megatron-bert-large-swedish-cased-165k | 459c4eeb687d7c5cd526139195c00c6f160a2f5a | 2022-04-05T09:01:40.000Z | [
"pytorch",
"megatron-bert",
"fill-mask",
"sv",
"transformers",
"autotrain_compatible"
] | fill-mask | false | KBLab | null | KBLab/megatron-bert-large-swedish-cased-165k | 62 | null | transformers | 5,607 | ---
language:
- sv
---
# Megatron-BERT-large Swedish 165k
This BERT model was trained using the Megatron-LM library.
The size of the model is a regular BERT-large with 340M parameters.
The model was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.
Training was done for 165k training steps with a batch size of 8k; the total planned number of training steps is 500k, so this release is an intermediate checkpoint.
The hyperparameters for training followed the setting for RoBERTa.
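For reference, a minimal fill-mask sketch (the Swedish example sentence is illustrative and not from the original card):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="KBLab/megatron-bert-large-swedish-cased-165k")

# "The capital of Sweden is [MASK]." (illustrative example)
masked_sentence = f"Huvudstaden i Sverige är {unmasker.tokenizer.mask_token}."
print(unmasker(masked_sentence))
```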
The model has three sister models trained on the same dataset:
- [🤗 BERT Swedish](https://huggingface.co/KBLab/bert-base-swedish-cased-new)
- [Megatron-BERT-base-600k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-600k)
- [Megatron-BERT-base-125k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-125k)
and an earlier checkpoint
- [Megatron-BERT-large-110k](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-110k)
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium (https://www.hpc-rivr.si) and EuroHPC JU (https://eurohpc-ju.europa.eu) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (https://www.izum.si). |
itsfuckingdenis/diplomauno | 84ca726b2390ec2e14c57686f7f726707cdbc464 | 2022-03-22T15:29:46.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers",
"license:wtfpl"
] | feature-extraction | false | itsfuckingdenis | null | itsfuckingdenis/diplomauno | 62 | null | transformers | 5,608 | ---
license: wtfpl
---
|
doc2query/msmarco-russian-mt5-base-v1 | a7d5ede00d679dee611a7bd700275fb4f4b8667e | 2022-04-29T12:10:29.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"ru",
"dataset:unicamp-dl/mmarco",
"arxiv:1904.08375",
"arxiv:2104.08663",
"arxiv:2112.07577",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | doc2query | null | doc2query/msmarco-russian-mt5-base-v1 | 62 | null | transformers | 5,609 | ---
language: ru
datasets:
- unicamp-dl/mmarco
widget:
- text: "Python (МФА: [ˈpʌɪθ(ə)n]; в русском языке встречаются названия пито́н или па́йтон) — высокоуровневый язык программирования общего назначения с динамической строгой типизацией и автоматическим управлением памятью, ориентированный на повышение производительности разработчика, читаемости кода и его качества, а также на обеспечение переносимости написанных на нём программ."
license: apache-2.0
---
# doc2query/msmarco-russian-mt5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on mT5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/beir-cellar/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. In our [GPL-Paper](https://arxiv.org/abs/2112.07577) / [GPL Example on SBERT.net](https://www.sbert.net/examples/domain_adaptation/README.html#gpl-generative-pseudo-labeling) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model_name = 'doc2query/msmarco-russian-mt5-base-v1'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
text = "Python (МФА: [ˈpʌɪθ(ə)n]; в русском языке встречаются названия пито́н или па́йтон) — высокоуровневый язык программирования общего назначения с динамической строгой типизацией и автоматическим управлением памятью, ориентированный на повышение производительности разработчика, читаемости кода и его качества, а также на обеспечение переносимости написанных на нём программ."
def create_queries(para):
    input_ids = tokenizer.encode(para, return_tensors='pt')
    with torch.no_grad():
        # Here we use top_k / top_p random sampling. It generates more diverse queries, but of lower quality
        sampling_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            do_sample=True,
            top_p=0.95,
            top_k=10,
            num_return_sequences=5
        )

        # Here we use Beam-search. It generates better quality queries, but with less diversity
        beam_outputs = model.generate(
            input_ids=input_ids,
            max_length=64,
            num_beams=5,
            no_repeat_ngram_size=2,
            num_return_sequences=5,
            early_stopping=True
        )

    print("Paragraph:")
    print(para)

    print("\nBeam Outputs:")
    for i in range(len(beam_outputs)):
        query = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')

    print("\nSampling Outputs:")
    for i in range(len(sampling_outputs)):
        query = tokenizer.decode(sampling_outputs[i], skip_special_tokens=True)
        print(f'{i + 1}: {query}')


create_queries(text)
```
**Note:** `model.generate()` is non-deterministic for top_k/top_p sampling. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/mt5-base](https://huggingface.co/google/mt5-base) for 66k training steps (4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [mMARCO dataset](https://github.com/unicamp-dl/mMARCO).
|
paust/pko-t5-base | 6d16c5c154114f24d837f8d01e92d49e828bf543 | 2022-07-13T07:12:46.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ko",
"arxiv:2105.09680",
"transformers",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | paust | null | paust/pko-t5-base | 62 | 1 | transformers | 5,610 | ---
language: ko
license: cc-by-4.0
---
# pko-t5-base
[Source Code](https://github.com/paust-team/pko-t5)
pko-t5 is a [t5 v1.1 model](https://github.com/google-research/text-to-text-transfer-transformer/blob/84f8bcc14b5f2c03de51bd3587609ba8f6bbd1cd/released_checkpoints.md) trained exclusively on Korean data.
To tokenize Korean, it uses an OOV-free BBPE tokenizer instead of SentencePiece, and it was trained on Korean data (Namuwiki, Wikipedia, the Modu Corpus, etc.) with unsupervised learning only, using T5's span corruption task.
When using pko-t5, please fine-tune it on your target task.
## Usage
The model is accessible through the transformers API. When using the tokenizer, please use `T5TokenizerFast` rather than `T5Tokenizer`. The model itself can be used with `T5ForConditionalGeneration` as-is.
### Example
```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration
tokenizer = T5TokenizerFast.from_pretrained('paust/pko-t5-base')
model = T5ForConditionalGeneration.from_pretrained('paust/pko-t5-base')
input_ids = tokenizer(["qa question: 당신의 이름은 무엇인가요?"], return_tensors="pt").input_ids
labels = tokenizer(["T5 입니다."], return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
print(f"loss={outputs.loss} logits={outputs.logits}")
```
## KLUE evaluation (dev)
| | Model | ynat (macro F1) | sts (pearsonr/F1) | nli (acc) | ner (entity-level F1) | re (micro F1) | dp (LAS) | mrc (EM/F1) |
| --- | --- |-----------------| --- | --- | --- | --- | --- | --- |
| | Baseline | **87.30** | **93.20/86.13** | **89.50** | 86.06 | 71.06 | 87.93 | 75.26/- |
| FT | [pko-t5-small](https://huggingface.co/paust/pko-t5-small) (77M) | 86.21 | 77.99/77.01 | 69.20 | 82.60 | 62.95 | 93.15 | 43.81/46.58 |
| FT | [pko-t5-base](https://huggingface.co/paust/pko-t5-base) (250M) | 87.29 | 90.25/83.43 | 79.73 | 87.80 | 72.94 | 97.28 | 61.53/64.74 |
| FT | [pko-t5-large](https://huggingface.co/paust/pko-t5-large) (800M) | 87.12 | 92.05/85.24 | 84.96 | **88.18** | 72.26 | 97.60 | 68.01/71.44 |
| MT | pko-t5-small | 85.85 | 79.12/77.81 | 66.8 | 81.53 | 67.93 | 91.38 | 44.97/48.07 |
| MT | pko-t5-base | 86.86 | 87.61/81.42 | 75.46 | 86.85 | 71.85 | 96.32 | 61.95/65.06 |
| MT | pko-t5-large | 87.25 | 91.05/84.58 | 82.16 | 87.63 | **74.78** | **97.33** | **69.18/71.92** |
- FT: single-task fine-tuning / MT: multi-task fine-tuning
- [Baseline](https://arxiv.org/abs/2105.09680): SOTA scores on the dev sets reported in the KLUE paper
## License
pko-t5, created by PAUST, is released under the [MIT license](https://github.com/paust-team/pko-t5/blob/main/LICENSE). |
autoevaluate/entity-extraction | 654993b2d1862c85f67e10c0e23b0b7131446bd9 | 2022-05-28T11:16:55.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | autoevaluate | null | autoevaluate/entity-extraction | 62 | 1 | transformers | 5,611 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: entity-extraction
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.8862817854414493
- name: Recall
type: recall
value: 0.9084908826490659
- name: F1
type: f1
value: 0.8972489227709645
- name: Accuracy
type: accuracy
value: 0.9774889986814304
- task:
type: token-classification
name: entity_extraction
dataset:
type: conll2003
name: conll2003
config: conll2003
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9703231821006837
verified: true
- name: Precision
type: precision
value: 0.9758137392136365
verified: true
- name: Recall
type: recall
value: 0.9764192759122017
verified: true
- name: F1 Score
type: f1
value: 0.9761164136513085
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# entity-extraction
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0808
- Precision: 0.8863
- Recall: 0.9085
- F1: 0.8972
- Accuracy: 0.9775
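As an illustrative usage sketch (not part of the auto-generated card), the model can be queried with the standard token-classification pipeline; the example sentence is invented:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="autoevaluate/entity-extraction",
    aggregation_strategy="simple",  # merge sub-word predictions into whole entities
)
print(ner("Hugging Face is based in New York City."))
```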
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2552 | 1.0 | 878 | 0.0808 | 0.8863 | 0.9085 | 0.8972 | 0.9775 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
svenstahlmann/finetuned-distilbert-needmining | 092176dd5c42ae0db65f437b942e1323d96ae5fa | 2022-07-18T13:15:23.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"transformers",
"needmining",
"license:apache-2.0"
] | text-classification | false | svenstahlmann | null | svenstahlmann/finetuned-distilbert-needmining | 62 | null | transformers | 5,612 | ---
language: en
tags:
- distilbert
- needmining
license: apache-2.0
metric:
- f1
---
# Finetuned-Distilbert-needmining (uncased)
This model is a finetuned version of the [Distilbert base model](https://huggingface.co/distilbert-base-uncased). It was
trained to predict need-containing sentences from Amazon product reviews.
## Model description
This model is part of ongoing research; more information will be added after the research is published.
## Intended uses & limitations
You can use this model to identify sentences that contain customer needs in user-generated content. This can act as a filtering process to remove uninformative content for market research.
### How to use
You can use this model directly with a pipeline for text classification:
```python
>>> from transformers import pipeline
>>> classifier = pipeline("text-classification", model="svenstahlmann/finetuned-distilbert-needmining")
>>> classifier("the plasic feels super cheap.")
[{'label': 'contains need', 'score': 0.9397542476654053}]
```
### Limitations and bias
We are not aware of any bias in the training data.
## Training data
The training was done on a dataset of 6400 sentences. The sentences were taken from Amazon product reviews and coded according to whether they express customer needs.
## Training procedure
For the training, we used [Population Based Training (PBT)](https://www.deepmind.com/blog/population-based-training-of-neural-networks) and optimized for f1 score on a validation set of 1600 sentences.
### Preprocessing
The preprocessing follows the [Distilbert base model](https://huggingface.co/distilbert-base-uncased).
### Fine-tuning
The model was fine-tuned on a Titan RTX for 1 hour.
## Evaluation results
Results on the validation set:
| F1 |
|:----:|
| 76.0 |
### BibTeX entry and citation info
coming soon |
kapuska/dialogue-summarizationv1 | 91c770ac8f6549e0384e3e2db15434de6e3867e4 | 2022-07-20T10:57:35.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | kapuska | null | kapuska/dialogue-summarizationv1 | 62 | null | transformers | 5,613 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: dialogue-summarizationv1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 47.3665
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dialogue-summarizationv1
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5298
- Rouge1: 47.3665
- Rouge2: 23.9331
- Rougel: 39.9646
- Rougelsum: 43.594
- Gen Len: 17.8264
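An illustrative usage sketch (not part of the auto-generated card; the dialogue is invented for demonstration):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="kapuska/dialogue-summarizationv1")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```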
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.5818 | 1.0 | 1841 | 1.5298 | 47.3665 | 23.9331 | 39.9646 | 43.594 | 17.8264 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Dev-DGT/food-dbert-multiling | b648054d802b2ac2a99f57954e8b8df1cf14320f | 2021-06-18T21:55:58.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | Dev-DGT | null | Dev-DGT/food-dbert-multiling | 61 | null | transformers | 5,614 | ---
widget:
- text: "El paciente se alimenta de pan, sopa de calabaza y coca-cola"
---
# Token classification for FOODs.
Detects foods in sentences.
Currently, it only supports Spanish. Multi-word foods are detected as a single entity.
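A minimal usage sketch based on the widget example above (the aggregation setting is an assumption, not from the original card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Dev-DGT/food-dbert-multiling",
    aggregation_strategy="simple",  # groups multi-word foods into a single entity
)
print(ner("El paciente se alimenta de pan, sopa de calabaza y coca-cola"))
```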
## To-do
- English support.
- Negation support.
- Quantity tags.
- Psychosocial tags. |
ErykWdowiak/GPTalian | 93ca5ca1b6bc26868d825135d5c0f7773af4e606 | 2021-05-21T09:42:05.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"it",
"scn",
"nap",
"transformers",
"exbert",
"license:apache-2.0"
] | text-generation | false | ErykWdowiak | null | ErykWdowiak/GPTalian | 61 | null | transformers | 5,615 | ---
language:
- en
- it
- scn
- nap
tags:
- exbert
- gpt2
license: apache-2.0
---
# GPTalian
This is a GPT2 model of Italian regional languages trained on [collections of Italian "dialect poetry"](http://dialectpoetry.com) by Luigi Bonaffini.
This is a multilingual model. Italians use the word "dialect" to describe their regional languages, but they are separate languages. And there's a lot of English in this dataset too.
The challenge of this project is to train a model to write the languages of Italy.
For those who do not know Italian, here's some (lowercase) text that you can type into the API box:
- oggi si parla il dialetto
- la sua poesia viene di
- ma non sempre trova
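If you prefer to run the model locally instead of the API box, here is a minimal generation sketch (the sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ErykWdowiak/GPTalian")
print(generator("oggi si parla il dialetto", max_new_tokens=40, do_sample=True, top_p=0.95))
```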
|
Helsinki-NLP/opus-mt-fi-de | 306b92594304d4c34107a4bba34e01a24a27efc4 | 2021-09-09T21:47:10.000Z | [
"pytorch",
"marian",
"text2text-generation",
"fi",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-fi-de | 61 | null | transformers | 5,616 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-fi-de
* source languages: fi
* target languages: de
* OPUS readme: [fi-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-04.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-de/opus-2019-12-04.zip)
* test set translations: [opus-2019-12-04.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-de/opus-2019-12-04.test.txt)
* test set scores: [opus-2019-12-04.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-de/opus-2019-12-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.fi.de | 45.2 | 0.637 |
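For reference, a minimal usage sketch with the 🤗 `pipeline` API (the Finnish input sentence is an illustrative example, not from the original card):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fi-de")
print(translator("Hyvää huomenta, miten voit?"))  # "Good morning, how are you?"
```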
|
MYX4567/distilgpt2-finetuned-wikitext2 | 7e486eedcd270a40efe669de9349d659b9eadfc7 | 2021-07-28T06:37:12.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-generation | false | MYX4567 | null | MYX4567/distilgpt2-finetuned-wikitext2 | 61 | null | transformers | 5,617 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: distilgpt2-finetuned-wikitext2
results:
- task:
name: Causal Language Modeling
type: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6428
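An illustrative text-generation sketch (not part of the auto-generated card; the prompt is invented for demonstration):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="MYX4567/distilgpt2-finetuned-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=40))
```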
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.76 | 1.0 | 2334 | 3.6658 |
| 3.6325 | 2.0 | 4668 | 3.6454 |
| 3.6068 | 3.0 | 7002 | 3.6428 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.10.2
- Tokenizers 0.10.3
|
SkolkovoInstitute/t5-paranmt-detox | a60d9b9d7fa44211621e52156d59dbee54e49b0b | 2021-11-03T08:40:36.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:1711.05732",
"arxiv:1911.00536",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | SkolkovoInstitute | null | SkolkovoInstitute/t5-paranmt-detox | 61 | 1 | transformers | 5,618 | This is a paraphraser based on [ceshine/t5-paraphrase-paws-msrp-opinosis](https://huggingface.co/ceshine/t5-paraphrase-paws-msrp-opinosis)
and additionally fine-tuned on [ParaNMT](https://arxiv.org/abs/1711.05732) filtered for the task of detoxification.
The model was trained for the paper [Text Detoxification using Large Pre-trained Neural Models](https://arxiv.org/abs/1911.00536).
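For quick reference, here is a minimal inference sketch (the input sentence and the generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "SkolkovoInstitute/t5-paranmt-detox"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "this is a damn stupid idea"  # illustrative toxic input
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, num_beams=5, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```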
A fuller example of its use and the code for its training are given in https://github.com/skoltech-nlp/detox |
chinhon/pegasus-multi_news-headline | bf2a96a29df0b4bb16d9d70e1bd21d87454c5947 | 2021-10-31T01:30:14.000Z | [
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | chinhon | null | chinhon/pegasus-multi_news-headline | 61 | 3 | transformers | 5,619 | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-multi_news-headline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-multi_news-headline
This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4421
- Rouge1: 41.616
- Rouge2: 22.922
- Rougel: 35.2189
- Rougelsum: 35.3561
- Gen Len: 33.9532
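An illustrative usage sketch (not part of the auto-generated card; the article snippet is invented for demonstration):
```python
from transformers import pipeline

headline_generator = pipeline("summarization", model="chinhon/pegasus-multi_news-headline")

article = (
    "The city council voted on Tuesday to expand the bicycle lane network, "
    "citing a sharp rise in cycling commuters over the past two years."
)
print(headline_generator(article, max_length=24)[0]["summary_text"])
```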
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.6637 | 1.0 | 31200 | 1.4877 | 41.0996 | 22.579 | 34.9311 | 35.0611 | 34.3431 |
| 1.4395 | 2.0 | 62400 | 1.4388 | 41.6075 | 22.8274 | 35.2051 | 35.3526 | 33.7965 |
| 1.3137 | 3.0 | 93600 | 1.4421 | 41.616 | 22.922 | 35.2189 | 35.3561 | 33.9532 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
facebook/s2t-small-covost2-en-fa-st | db4d49793a8778016bcea34c789c5ad6a9d18a83 | 2022-02-07T15:24:59.000Z | [
"pytorch",
"tf",
"speech_to_text",
"automatic-speech-recognition",
"en",
"fa",
"dataset:covost2",
"arxiv:2010.05171",
"arxiv:1912.06670",
"arxiv:1904.08779",
"transformers",
"audio",
"speech-translation",
"license:mit"
] | automatic-speech-recognition | false | facebook | null | facebook/s2t-small-covost2-en-fa-st | 61 | 1 | transformers | 5,620 | ---
language:
- en
- fa
datasets:
- covost2
tags:
- audio
- speech-translation
- automatic-speech-recognition
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
---
# S2T-SMALL-COVOST2-EN-FA-ST
`s2t-small-covost2-en-fa-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST).
The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
## Model description
S2T is a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech
Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are
fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the
transcripts/translations autoregressively.
## Intended uses & limitations
This model can be used for end-to-end English speech to Farsi text translation.
See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
### How to use
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the
transcripts by passing the speech features to the model.
*Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the
filter bank features. Make sure to install the `torchaudio` package before running this example.*
You could either install those as extra speech dependencies with
`pip install "transformers[speech, sentencepiece]"` or install the packages separately
with `pip install torchaudio sentencepiece`.
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-en-fa-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-en-fa-st")
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch
ds = load_dataset(
"patrickvonplaten/librispeech_asr_dummy",
"clean",
split="validation"
)
ds = ds.map(map_to_array)
inputs = processor(
ds["speech"][0],
sampling_rate=48_000,
return_tensors="pt"
)
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
## Training data
The s2t-small-covost2-en-fa-st is trained on the English-Farsi subset of [CoVoST2](https://github.com/facebookresearch/covost).
CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster
ST research with the largest ever open dataset.
## Training procedure
### Preprocessing
The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from
WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization)
is applied to each example.
The texts are lowercased and tokenized using character based SentencePiece vocab.
### Training
The model is trained with standard autoregressive cross-entropy loss, using [SpecAugment](https://arxiv.org/abs/1904.08779).
The encoder receives speech features, and the decoder generates the transcripts autoregressively. To accelerate
model training and for better performance, the encoder is pre-trained for English ASR.
## Evaluation results
CoVOST2 test results for en-fa (BLEU score): 11.43
### BibTeX entry and citation info
```bibtex
@inproceedings{wang2020fairseqs2t,
title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq},
author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino},
booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations},
year = {2020},
}
```
|
hfl/chinese-electra-180g-base-generator | c7713846e35a5a43635508ed64b648b737ff92fa | 2021-03-03T01:26:40.000Z | [
"pytorch",
"tf",
"electra",
"zh",
"arxiv:2004.13922",
"transformers",
"license:apache-2.0",
"fill-mask"
] | fill-mask | false | hfl | null | hfl/chinese-electra-180g-base-generator | 61 | null | transformers | 5,621 | ---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---
# This model is trained on 180G of data; we recommend using it rather than the original version.
## Chinese ELECTRA
Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and competitive performance compared to BERT and its variants.
To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA.
ELECTRA-small could reach similar or even higher scores on several NLP tasks with only 1/10 parameters compared to BERT and its variants.
This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)
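As a minimal usage sketch (our own illustration, not part of the original notes), the generator checkpoint can be queried through the generic fill-mask pipeline; the example sentence is only illustrative:
```python
from transformers import pipeline

# the ELECTRA generator behaves like a small masked language model
unmasker = pipeline("fill-mask", model="hfl/chinese-electra-180g-base-generator")
print(unmasker("哈尔滨是[MASK]龙江省的省会。"))  # "Harbin is the capital of [MASK]longjiang province."
```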
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
``` |
huggingtweets/billgates | 87a40ef9ff2217f17c666202db4b84cf3bf094ca | 2022-06-19T05:06:00.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/billgates | 61 | null | transformers | 5,622 | ---
language: en
thumbnail: http://www.huggingtweets.com/billgates/1655615155620/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1414439092373254147/JdS8yLGI_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bill Gates</div>
<div style="text-align: center; font-size: 14px;">@billgates</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bill Gates.
| Data | Bill Gates |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 189 |
| Short tweets | 8 |
| Tweets kept | 3053 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pqsc5fw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @billgates's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1afcsemd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1afcsemd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/billgates')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ielab/TILDE | de04dcba18612539a64bc2c26aa6234064eeaa31 | 2021-06-24T05:46:57.000Z | [
"pytorch",
"bert",
"text-generation",
"transformers"
] | text-generation | false | ielab | null | ielab/TILDE | 61 | 1 | transformers | 5,623 | Please treat TILDE as a BertLMHeadModel model:
```
from transformers import BertLMHeadModel, BertTokenizerFast
model = BertLMHeadModel.from_pretrained("ielab/TILDE")
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
```
Github: https://github.com/ielab/TILDE |
indobenchmark/indogpt | 625917f730659d16068897c6d4e497f915af9737 | 2022-06-21T17:51:47.000Z | [
"pytorch",
"gpt2",
"text-generation",
"id",
"dataset:Indo4B+",
"arxiv:2104.08200",
"transformers",
"indogpt",
"indobenchmark",
"indonlg",
"license:mit"
] | text-generation | false | indobenchmark | null | indobenchmark/indogpt | 61 | 2 | transformers | 5,624 | ---
language: id
tags:
- indogpt
- indobenchmark
- indonlg
license: mit
inference: false
datasets:
- Indo4B+
---
# IndoGPT Model
[IndoGPT](https://arxiv.org/abs/2104.08200) is a state-of-the-art language model for Indonesian based on the GPT model. The pretrained model is trained using the GPT training objective.
## All Pre-trained Models
| Model | #params | Training data |
|--------------------------------|--------------------------------|-----------------------------------|
| `indobenchmark/indogpt` | 117M | Indo4B-Plus (23.79 GB of text) |
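A minimal generation sketch is shown below. It assumes the checkpoint and its tokenizer resolve through the standard GPT-2 / Auto classes; the IndoNLG project also ships its own tokenizer, which may be needed for best results, and the Indonesian prompt is only an example.
```python
from transformers import AutoTokenizer, GPT2LMHeadModel

# assumption: the hosted tokenizer files load through AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indogpt")
model = GPT2LMHeadModel.from_pretrained("indobenchmark/indogpt")

inputs = tokenizer("aku ingin pergi ke", return_tensors="pt")  # "I want to go to"
outputs = model.generate(**inputs, max_length=30, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```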
## Authors
<b>IndoGPT</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung
## Citation
If you use our work, please cite:
```bibtex
@article{cahyawijaya2021indonlg,
title={IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation},
author={Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu Leylia and others},
journal={arXiv preprint arXiv:2104.08200},
year={2021}
}
```
|
kco4776/kogpt-chat | 1494648519c992c2cc904188cee1ff2633c8e53d | 2021-12-10T06:24:09.000Z | [
"pytorch",
"gpt2",
"transformers"
] | null | false | kco4776 | null | kco4776/kogpt-chat | 61 | null | transformers | 5,625 | ## References
- [koGPT2](https://github.com/SKT-AI/KoGPT2)
- [koGPT2-chatbot](https://github.com/haven-jeon/KoGPT2-chatbot) |
liaad/srl-en_mbert-base | 0c963b783ca3a81aafce1c53c0bb17715a7f0903 | 2021-09-22T08:56:08.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"multilingual",
"pt",
"en",
"dataset:PropBank.Br",
"dataset:CoNLL-2012",
"arxiv:2101.01213",
"transformers",
"bert-base-multilingual-cased",
"semantic role labeling",
"finetuned",
"license:apache-2.0"
] | feature-extraction | false | liaad | null | liaad/srl-en_mbert-base | 61 | null | transformers | 5,626 | ---
language:
- multilingual
- pt
- en
tags:
- bert-base-multilingual-cased
- semantic role labeling
- finetuned
license: apache-2.0
datasets:
- PropBank.Br
- CoNLL-2012
metrics:
- F1 Measure
---
# mBERT fine-tuned on English semantic role labeling
## Model description
This model is the [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) fine-tuned on the English CoNLL formatted OntoNotes v5.0 semantic role labeling data. This is part of a project from which resulted the following models:
* [liaad/srl-pt_bertimbau-base](https://huggingface.co/liaad/srl-pt_bertimbau-base)
* [liaad/srl-pt_bertimbau-large](https://huggingface.co/liaad/srl-pt_bertimbau-large)
* [liaad/srl-pt_xlmr-base](https://huggingface.co/liaad/srl-pt_xlmr-base)
* [liaad/srl-pt_xlmr-large](https://huggingface.co/liaad/srl-pt_xlmr-large)
* [liaad/srl-pt_mbert-base](https://huggingface.co/liaad/srl-pt_mbert-base)
* [liaad/srl-en_xlmr-base](https://huggingface.co/liaad/srl-en_xlmr-base)
* [liaad/srl-en_xlmr-large](https://huggingface.co/liaad/srl-en_xlmr-large)
* [liaad/srl-en_mbert-base](https://huggingface.co/liaad/srl-en_mbert-base)
* [liaad/srl-enpt_xlmr-base](https://huggingface.co/liaad/srl-enpt_xlmr-base)
* [liaad/srl-enpt_xlmr-large](https://huggingface.co/liaad/srl-enpt_xlmr-large)
* [liaad/srl-enpt_mbert-base](https://huggingface.co/liaad/srl-enpt_mbert-base)
* [liaad/ud_srl-pt_bertimbau-large](https://huggingface.co/liaad/ud_srl-pt_bertimbau-large)
* [liaad/ud_srl-pt_xlmr-large](https://huggingface.co/liaad/ud_srl-pt_xlmr-large)
* [liaad/ud_srl-enpt_xlmr-large](https://huggingface.co/liaad/ud_srl-enpt_xlmr-large)
For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Intended uses & limitations
#### How to use
To use the transformers portion of this model:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("liaad/srl-en_mbert-base")
model = AutoModel.from_pretrained("liaad/srl-en_mbert-base")
```
To use the full SRL model (transformers portion + a decoding layer), refer to the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
#### Limitations and bias
- The models were trained only for 5 epochs.
- The English data was preprocessed to match the Portuguese data, so there are some differences in role attributions and some roles were removed from the data.
## Training procedure
The model was trained on the CoNLL-2012 dataset, preprocessed to match the Portuguese PropBank.Br data. They were tested on the PropBank.Br data set as well as on a smaller opinion dataset "Buscapé". For more information, please see the accompanying article (See BibTeX entry and citation info below) and the [project's github](https://github.com/asofiaoliveira/srl_bert_pt).
## Eval results
| Model Name | F<sub>1</sub> CV PropBank.Br (in domain) | F<sub>1</sub> Buscapé (out of domain) |
| --------------- | ------ | ----- |
| `srl-pt_bertimbau-base` | 76.30 | 73.33 |
| `srl-pt_bertimbau-large` | 77.42 | 74.85 |
| `srl-pt_xlmr-base` | 75.22 | 72.82 |
| `srl-pt_xlmr-large` | 77.59 | 73.84 |
| `srl-pt_mbert-base` | 72.76 | 66.89 |
| `srl-en_xlmr-base` | 66.59 | 65.24 |
| `srl-en_xlmr-large` | 67.60 | 64.94 |
| `srl-en_mbert-base` | 63.07 | 58.56 |
| `srl-enpt_xlmr-base` | 76.50 | 73.74 |
| `srl-enpt_xlmr-large` | **78.22** | 74.55 |
| `srl-enpt_mbert-base` | 74.88 | 69.19 |
| `ud_srl-pt_bertimbau-large` | 77.53 | 74.49 |
| `ud_srl-pt_xlmr-large` | 77.69 | 74.91 |
| `ud_srl-enpt_xlmr-large` | 77.97 | **75.05** |
### BibTeX entry and citation info
```bibtex
@misc{oliveira2021transformers,
title={Transformers and Transfer Learning for Improving Portuguese Semantic Role Labeling},
author={Sofia Oliveira and Daniel Loureiro and Alípio Jorge},
year={2021},
eprint={2101.01213},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
m3hrdadfi/bert-fa-base-uncased-farstail-mean-tokens | f80ddc8f90470a4b267e83a69c07dd56fbcff3e8 | 2021-05-28T06:03:42.000Z | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"fa",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | m3hrdadfi | null | m3hrdadfi/bert-fa-base-uncased-farstail-mean-tokens | 61 | null | transformers | 5,627 | ---
language: fa
license: apache-2.0
---
# FarsTail + ParsBERT
Please follow the [FarsTail](https://github.com/dml-qom/FarsTail) repo for the latest information about the dataset. For the models built on top of this dataset, check out the [Sentence-Transformer](https://github.com/m3hrdadfi/sentence-transformers) repo.
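Since the checkpoint name suggests mean-tokens sentence embeddings, here is a minimal pooling sketch with plain Transformers; the pooling code and the Persian example sentence are illustrative and not taken from the repositories above.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("m3hrdadfi/bert-fa-base-uncased-farstail-mean-tokens")
model = AutoModel.from_pretrained("m3hrdadfi/bert-fa-base-uncased-farstail-mean-tokens")

sentences = ["هوا امروز خوب است"]  # "The weather is nice today"
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state

# mean pooling over non-padding tokens
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)
```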
```bibtex
@article{amirkhani2020farstail,
title={FarsTail: A Persian Natural Language Inference Dataset},
author={Hossein Amirkhani, Mohammad Azari Jafari, Azadeh Amirak, Zohreh Pourjafari, Soroush Faridan Jahromi, and Zeinab Kouhkan},
journal={arXiv preprint arXiv:2009.08820},
year={2020}
}
``` |
mitra-mir/BERT-Persian-Poetry | 7e9db6fe8d4a404fdc120228f4f6546aa44c23c4 | 2021-05-19T23:34:26.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mitra-mir | null | mitra-mir/BERT-Persian-Poetry | 61 | null | transformers | 5,628 | BERT Language Model Further Pre-trained on Persian Poetry |
mrm8488/t5-base-e2e-question-generation | 61fe7d47f2e5a26a6e2523dd8b78460d259930cd | 2021-08-24T15:37:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/t5-base-e2e-question-generation | 61 | 1 | transformers | 5,629 | Entry not found |
sagteam/pharm-relation-extraction | c4bb64feddb2d1229f9b457330a502828f0289dd | 2021-11-24T17:12:12.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"arxiv:2105.00059",
"arxiv:1911.02116",
"transformers"
] | text-classification | false | sagteam | null | sagteam/pharm-relation-extraction | 61 | 2 | transformers | 5,630 | pharm-relation-extraction
===
This model is trained to recognize 4 types of relationships between significant pharmacological entities in Russian-language reviews: ADR–Drugname, Drugname–Diseasename, Drugname–SourceInfoDrug, and Diseasename–Indication. The model takes a review text and a pair of entities as input and predicts whether a relationship exists between them and, if so, which of the 4 types listed above it is.
Data
----
The proposed model is trained on a subset of 908 reviews from the [Russian Drug Review Corpus (RDRS)](https://arxiv.org/pdf/2105.00059.pdf). The subset contains pairs of entities annotated with the 4 listed types of relationships:
- ADR-Drugname — the relationship between the drug and its side effects
- Drugname-SourceInfodrug — the relationship between the medication and the source of information about it (e.g., “was advised at the pharmacy”, “the doctor recommended it”);
- Drugname-Diseasename — the relationship between the drug and the disease;
- Diseasename-Indication — the connection between the illness and its symptoms (e.g., “cough”, “fever 39 degrees”)
The subset also contains pairs of the same entity types between which there is no relationship: for example, a drug and a side effect that is actually caused by a different drug mentioned in the same review.
Model topology and training
----
The proposed model is based on the [XLM-RoBERTa-large](https://arxiv.org/abs/1911.02116) topology. After additional training as a language model on a corpus of unannotated drug reviews, it was trained as a classification model on 80% of the texts from the subset of the corpus described above.
How to use
----
See section "How to use" in [our git repository for the model](https://github.com/sag111/Relation_Extraction)
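As a rough illustration only, the snippet below shows the generic sequence-classification call for this checkpoint. How the review text and the entity pair have to be formatted (for example, how the two entities are marked inside the text) is defined in the project repository, so the input string here is just a placeholder.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sagteam/pharm-relation-extraction")
model = AutoModelForSequenceClassification.from_pretrained("sagteam/pharm-relation-extraction")

# placeholder review text; follow the project repo for the real entity-pair markup
text = "После приема препарата у меня началась головная боль."  # "After taking the drug, I got a headache."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)
print(probs, model.config.id2label)
```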
Results
----
Below are the scores, measured with the F1 metric, for the recognition of each relationship type on the best fold.
| ADR–Drugname | Drugname–Diseasename | Drugname–SourceInfoDrug | Diseasename–Indication |
| ------------- | -------------------- | ----------------------- | ---------------------- |
| 0.955 | 0.892 | 0.922 | 0.891 |
Citation info
----
If you have found our results helpful in your work, feel free to cite our publication as:
```
@article{sboev2021extraction,
title={Extraction of the Relations between Significant Pharmacological Entities in Russian-Language Internet Reviews on Medications},
author={Sboev, Alexander and Selivanov, Anton and Moloshnikov, Ivan and Rybka, Roman and Gryaznov, Artem and Sboeva, Sanna and Rylkov, Gleb},
year={2021},
publisher={Preprints}
}
``` |
textattack/distilbert-base-uncased-QNLI | 429c2124a83a32d4dc9b7fd2bb0141f298cac5e9 | 2020-06-09T16:47:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | textattack | null | textattack/distilbert-base-uncased-QNLI | 61 | null | transformers | 5,631 | Entry not found |
uer/chinese_roberta_L-12_H-512 | 6675ef56dca044a81fe27ba071e7159f1a62d10d | 2022-07-15T08:15:58.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-12_H-512 | 61 | null | transformers | 5,632 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with a sequence length of 128:
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking the case of RoBERTa-Medium
Stage1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
wanyu/IteraTeR-ROBERTA-Intention-Classifier | fcb6e9c52a4ee6eb9dcf6985cb1a2f6796babb81 | 2022-04-04T20:13:42.000Z | [
"pytorch",
"roberta",
"text-classification",
"dataset:IteraTeR_full_sent",
"arxiv:2203.03802",
"transformers"
] | text-classification | false | wanyu | null | wanyu/IteraTeR-ROBERTA-Intention-Classifier | 61 | null | transformers | 5,633 | ---
datasets:
- IteraTeR_full_sent
---
# IteraTeR RoBERTa model
This model was obtained by fine-tuning [roberta-large](https://huggingface.co/roberta-large) on [IteraTeR-human-sent](https://huggingface.co/datasets/wanyu/IteraTeR_human_sent) dataset.
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802) <br>
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
## Edit Intention Prediction Task
Given a pair of original sentence and revised sentence, our model can predict the edit intention for this revision pair.<br>
More specifically, the model will predict the probability of the following edit intentions:
<table>
<tr>
<th>Edit Intention</th>
<th>Definition</th>
<th>Example</th>
</tr>
<tr>
<td>clarity</td>
<td>Make the text more formal, concise, readable and understandable.</td>
<td>
Original: It's like a house which anyone can enter in it. <br>
Revised: It's like a house which anyone can enter.
</td>
</tr>
<tr>
<td>fluency</td>
<td>Fix grammatical errors in the text.</td>
<td>
Original: In the same year he became the Fellow of the Royal Society. <br>
Revised: In the same year, he became the Fellow of the Royal Society.
</td>
</tr>
<tr>
<td>coherence</td>
<td>Make the text more cohesive, logically linked and consistent as a whole.</td>
<td>
Original: Achievements and awards Among his other activities, he founded the Karachi Film Guild and Pakistan Film and TV Academy. <br>
Revised: Among his other activities, he founded the Karachi Film Guild and Pakistan Film and TV Academy.
</td>
</tr>
<tr>
<td>style</td>
<td>Convey the writer’s writing preferences, including emotions, tone, voice, etc..</td>
<td>
Original: She was last seen on 2005-10-22. <br>
Revised: She was last seen on October 22, 2005.
</td>
</tr>
<tr>
<td>meaning-changed</td>
<td>Update or add new information to the text.</td>
<td>
Original: This method improves the model accuracy from 64% to 78%. <br>
Revised: This method improves the model accuracy from 64% to 83%.
</td>
</tr>
</table>
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("wanyu/IteraTeR-ROBERTA-Intention-Classifier")
model = AutoModelForSequenceClassification.from_pretrained("wanyu/IteraTeR-ROBERTA-Intention-Classifier")
id2label = {0: "clarity", 1: "fluency", 2: "coherence", 3: "style", 4: "meaning-changed"}
before_text = 'I likes coffee.'
after_text = 'I like coffee.'
model_input = tokenizer(before_text, after_text, return_tensors='pt')
model_output = model(**model_input)
softmax_scores = torch.softmax(model_output.logits, dim=-1)
pred_id = torch.argmax(softmax_scores)
pred_label = id2label[pred_id.item()]  # .item() converts the 0-dim tensor to a Python int for the dict lookup
``` |
brad1141/gpt2-finetuned-comp2 | fcb91eaea65865b9df8753ebf6f359bd9ba31230 | 2022-03-18T08:47:38.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | token-classification | false | brad1141 | null | brad1141/gpt2-finetuned-comp2 | 61 | null | transformers | 5,634 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: gpt2-finetuned-comp2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-comp2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7788
- Precision: 0.3801
- Recall: 0.6854
- F1: 0.4800
- Accuracy: 0.4800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.0962 | 1.0 | 1012 | 0.7528 | 0.3793 | 0.6109 | 0.4411 | 0.4411 |
| 0.7022 | 2.0 | 2024 | 0.6763 | 0.3992 | 0.6557 | 0.4799 | 0.4799 |
| 0.6136 | 3.0 | 3036 | 0.6751 | 0.3995 | 0.6597 | 0.4824 | 0.4824 |
| 0.5444 | 4.0 | 4048 | 0.6799 | 0.3891 | 0.6817 | 0.4854 | 0.4854 |
| 0.4846 | 5.0 | 5060 | 0.7371 | 0.4030 | 0.6701 | 0.4906 | 0.4906 |
| 0.4379 | 6.0 | 6072 | 0.7520 | 0.3956 | 0.6788 | 0.4887 | 0.4887 |
| 0.404 | 7.0 | 7084 | 0.7788 | 0.3801 | 0.6854 | 0.4800 | 0.4800 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ckiplab/bert-tiny-chinese-pos | 5459b8666bf3aa754cefd6d7501b623dfd6a5af2 | 2022-05-10T03:28:12.000Z | [
"pytorch",
"bert",
"token-classification",
"zh",
"transformers",
"license:gpl-3.0",
"autotrain_compatible"
] | token-classification | false | ckiplab | null | ckiplab/bert-tiny-chinese-pos | 61 | null | transformers | 5,635 | ---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- token-classification
- bert
- zh
license: gpl-3.0
---
# CKIP BERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributers
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/bert-tiny-chinese-pos')
```
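A hedged end-to-end sketch (our own illustration, not from the official documentation): running the checkpoint through the generic token-classification pipeline should produce one POS tag per token, assuming the hosted config carries the label mapping.
```python
from transformers import BertTokenizerFast, AutoModelForTokenClassification, pipeline

tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForTokenClassification.from_pretrained('ckiplab/bert-tiny-chinese-pos')

pos_tagger = pipeline("token-classification", model=model, tokenizer=tokenizer)
print(pos_tagger("我喜歡看電影。"))  # "I like watching movies."
```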
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
Amalq/stress-roberta-large | 58e8b6d0707b5432016b3fd5b974226bbbc18259 | 2022-06-08T15:20:24.000Z | [
"pytorch",
"roberta",
"fill-mask",
"en",
"dataset:Dreaddit",
"transformers",
"Transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Amalq | null | Amalq/stress-roberta-large | 61 | null | transformers | 5,636 | ---
language: en
tags:
- Transformers
license: apache-2.0
datasets:
- Dreaddit
---
# StressRoberta model
is a model initialized with roberta-large (https://huggingface.co/roberta-large) and trained with
[Dreaddit: A Reddit Dataset for Stress Analysis in Social Media](http://www.cs.columbia.edu/eturcan/data/dreaddit.zip).
We follow the standard pretraining protocols of RoBERTa with [Huggingface’s Transformers library](https://github.com/huggingface/transformers).
## Usage
Load the model via [Huggingface’s Transformers library](https://github.com/huggingface/transformers):

from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("amalq/stress-roberta-large")
model = AutoModel.from_pretrained("amalq/stress-roberta-large")
Perplexity of this model is: 3.28 |
anahitapld/dbd_bert_da_simple | 465b6d91cf4af79218866df8e11ee5e664f2627c | 2022-07-18T07:59:45.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:apache-2.0"
] | text-classification | false | anahitapld | null | anahitapld/dbd_bert_da_simple | 61 | null | transformers | 5,637 | ---
license: apache-2.0
---
|
loubnabnl/codeparrot-small-multi-small-near-dedup | 939eb5b96b33d233d5bb482a83488b6eceb51b22 | 2022-07-18T09:20:36.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:apache-2.0"
] | text-generation | false | loubnabnl | null | loubnabnl/codeparrot-small-multi-small-near-dedup | 61 | null | transformers | 5,638 | ---
license: apache-2.0
---
|
Billwzl/20split_dataset_version2 | 007338f42769490b47a9a2ef1ffed5dc58c74df9 | 2022-07-27T08:07:06.000Z | [
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | fill-mask | false | Billwzl | null | Billwzl/20split_dataset_version2 | 61 | null | transformers | 5,639 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 20split_dataset_version2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20split_dataset_version2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.7621 | 1.0 | 11851 | 2.5216 |
| 2.5466 | 2.0 | 23702 | 2.4157 |
| 2.4505 | 3.0 | 35553 | 2.3592 |
| 2.3798 | 4.0 | 47404 | 2.3028 |
| 2.3178 | 5.0 | 59255 | 2.2768 |
| 2.272 | 6.0 | 71106 | 2.2366 |
| 2.2323 | 7.0 | 82957 | 2.2128 |
| 2.1928 | 8.0 | 94808 | 2.1797 |
| 2.157 | 9.0 | 106659 | 2.1667 |
| 2.1292 | 10.0 | 118510 | 2.1392 |
| 2.0978 | 11.0 | 130361 | 2.1280 |
| 2.0725 | 12.0 | 142212 | 2.1106 |
| 2.052 | 13.0 | 154063 | 2.0944 |
| 2.0268 | 14.0 | 165914 | 2.0804 |
| 2.0121 | 15.0 | 177765 | 2.0698 |
| 1.9997 | 16.0 | 189616 | 2.0626 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
uaritm/ukrt5-base | 7f7487864ff0d9e9da47e80b4325bf1f6e52388f | 2022-07-28T11:02:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"uk",
"en",
"transformers",
"ukrainian",
"english",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | uaritm | null | uaritm/ukrt5-base | 61 | null | transformers | 5,640 | ---
language: ["uk", "en"]
tags:
- ukrainian
- english
license: mit
---
This is a variant of the [google/mt5-base](https://huggingface.co/google/mt5-base) model in which only Ukrainian words and 9% of the English words remain.
This model has 252M parameters - 43% of the original size.
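A minimal loading sketch (our own illustration); the trimmed checkpoint should load through the standard Auto classes:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("uaritm/ukrt5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("uaritm/ukrt5-base")

# the reduced vocabulary keeps Ukrainian plus a small share of English tokens
print(f"vocab size: {len(tokenizer)}; parameters: {model.num_parameters() / 1e6:.0f}M")
```
Like the original mt5-base, the checkpoint is presumably intended as a starting point for fine-tuning rather than for direct generation.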
Special thanks for the practical example and inspiration: [cointegrated ](https://huggingface.co/cointegrated)
|
trickstters/evanbot-gpt | 9f2cdd0f874e430e8bf5222c335971e970ff6323 | 2022-07-27T15:15:16.000Z | [
"pytorch",
"conversational"
] | conversational | false | trickstters | null | trickstters/evanbot-gpt | 61 | null | null | 5,641 | ---
tags:
- conversational
---
# bot |
AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest-v1 | fd6dec2ddc4912e9e294c0d17a1148cb99f1c0b2 | 2022-07-28T02:45:09.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"dataset:imagefolder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | image-classification | false | AykeeSalazar | null | AykeeSalazar/vc-bantai-vit-withoutAMBI-adunest-v1 | 61 | null | transformers | 5,642 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vc-bantai-vit-withoutAMBI-adunest-v1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: Violation-Classification---Raw-6
metrics:
- name: Accuracy
type: accuracy
value: 0.9181222707423581
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vc-bantai-vit-withoutAMBI-adunest-v1
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3318
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.23 | 100 | 0.3365 | 0.8581 |
| No log | 0.45 | 200 | 0.3552 | 0.8472 |
| No log | 0.68 | 300 | 0.3165 | 0.8581 |
| No log | 0.91 | 400 | 0.2882 | 0.8690 |
| 0.3813 | 1.13 | 500 | 0.2825 | 0.8745 |
| 0.3813 | 1.36 | 600 | 0.2686 | 0.9007 |
| 0.3813 | 1.59 | 700 | 0.2381 | 0.9017 |
| 0.3813 | 1.81 | 800 | 0.3643 | 0.8734 |
| 0.3813 | 2.04 | 900 | 0.2873 | 0.8930 |
| 0.2736 | 2.27 | 1000 | 0.2236 | 0.9039 |
| 0.2736 | 2.49 | 1100 | 0.2652 | 0.8723 |
| 0.2736 | 2.72 | 1200 | 0.2793 | 0.8952 |
| 0.2736 | 2.95 | 1300 | 0.2158 | 0.8974 |
| 0.2736 | 3.17 | 1400 | 0.2410 | 0.8886 |
| 0.2093 | 3.4 | 1500 | 0.2262 | 0.9017 |
| 0.2093 | 3.63 | 1600 | 0.2110 | 0.9214 |
| 0.2093 | 3.85 | 1700 | 0.2048 | 0.9138 |
| 0.2093 | 4.08 | 1800 | 0.2044 | 0.9127 |
| 0.2093 | 4.31 | 1900 | 0.2591 | 0.9007 |
| 0.1764 | 4.54 | 2000 | 0.2466 | 0.8952 |
| 0.1764 | 4.76 | 2100 | 0.2554 | 0.9017 |
| 0.1764 | 4.99 | 2200 | 0.2145 | 0.9203 |
| 0.1764 | 5.22 | 2300 | 0.3187 | 0.9039 |
| 0.1764 | 5.44 | 2400 | 0.3336 | 0.9050 |
| 0.1454 | 5.67 | 2500 | 0.2542 | 0.9127 |
| 0.1454 | 5.9 | 2600 | 0.2796 | 0.8952 |
| 0.1454 | 6.12 | 2700 | 0.2410 | 0.9181 |
| 0.1454 | 6.35 | 2800 | 0.2503 | 0.9148 |
| 0.1454 | 6.58 | 2900 | 0.2966 | 0.8996 |
| 0.1216 | 6.8 | 3000 | 0.1978 | 0.9312 |
| 0.1216 | 7.03 | 3100 | 0.2297 | 0.9214 |
| 0.1216 | 7.26 | 3200 | 0.2768 | 0.9203 |
| 0.1216 | 7.48 | 3300 | 0.3356 | 0.9083 |
| 0.1216 | 7.71 | 3400 | 0.3415 | 0.9138 |
| 0.1038 | 7.94 | 3500 | 0.2398 | 0.9061 |
| 0.1038 | 8.16 | 3600 | 0.3347 | 0.8963 |
| 0.1038 | 8.39 | 3700 | 0.2199 | 0.9203 |
| 0.1038 | 8.62 | 3800 | 0.2943 | 0.9061 |
| 0.1038 | 8.84 | 3900 | 0.2561 | 0.9181 |
| 0.0925 | 9.07 | 4000 | 0.4170 | 0.8777 |
| 0.0925 | 9.3 | 4100 | 0.3638 | 0.8974 |
| 0.0925 | 9.52 | 4200 | 0.3233 | 0.9094 |
| 0.0925 | 9.75 | 4300 | 0.3496 | 0.9203 |
| 0.0925 | 9.98 | 4400 | 0.3621 | 0.8996 |
| 0.0788 | 10.2 | 4500 | 0.3260 | 0.9116 |
| 0.0788 | 10.43 | 4600 | 0.3979 | 0.9061 |
| 0.0788 | 10.66 | 4700 | 0.3301 | 0.8974 |
| 0.0788 | 10.88 | 4800 | 0.2197 | 0.9105 |
| 0.0788 | 11.11 | 4900 | 0.3306 | 0.9148 |
| 0.0708 | 11.34 | 5000 | 0.3318 | 0.9181 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Rifky/FND | c3f33a822223f3a9ad991f743360978537c9b0fd | 2022-07-28T18:57:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | Rifky | null | Rifky/FND | 61 | null | transformers | 5,643 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: FND
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FND
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5379
- Accuracy: 0.8029
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6593083676315244e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 85 | 0.6054 | 0.6824 |
| No log | 2.0 | 170 | 0.6038 | 0.7324 |
| No log | 3.0 | 255 | 0.7082 | 0.7 |
| No log | 4.0 | 340 | 0.5530 | 0.7824 |
| No log | 5.0 | 425 | 0.5379 | 0.8029 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Helsinki-NLP/opus-mt-dra-en | add52153b130e3e38db8497798b1276320b6c971 | 2021-01-18T08:02:54.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ta",
"kn",
"ml",
"te",
"dra",
"en",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-dra-en | 60 | null | transformers | 5,644 | ---
language:
- ta
- kn
- ml
- te
- dra
- en
tags:
- translation
license: apache-2.0
---
### dra-eng
* source group: Dravidian languages
* target group: English
* OPUS readme: [dra-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md)
* model: transformer
* source language(s): kan mal tam tel
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kan-eng.kan.eng | 9.1 | 0.312 |
| Tatoeba-test.mal-eng.mal.eng | 42.0 | 0.584 |
| Tatoeba-test.multi.eng | 30.0 | 0.493 |
| Tatoeba-test.tam-eng.tam.eng | 30.2 | 0.467 |
| Tatoeba-test.tel-eng.tel.eng | 15.9 | 0.378 |
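A short translation sketch (illustrative, not part of the original OPUS-MT release notes); the Tamil input sentence is only an example:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-dra-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["நான் புத்தகம் படிக்கிறேன்."]  # Tamil: "I am reading a book."
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```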
### System Info:
- hf_name: dra-eng
- source_languages: dra
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/dra-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ta', 'kn', 'ml', 'te', 'dra', 'en']
- src_constituents: {'tam', 'kan', 'mal', 'tel'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/dra-eng/opus2m-2020-07-31.test.txt
- src_alpha3: dra
- tgt_alpha3: eng
- short_pair: dra-en
- chrF2_score: 0.493
- bleu: 30.0
- brevity_penalty: 1.0
- ref_len: 10641.0
- src_name: Dravidian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: dra
- tgt_alpha2: en
- prefer_old: False
- long_pair: dra-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-es | dc18d22d76133106d65d46e1ffa43b2cc7b8a416 | 2021-09-09T21:42:12.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-es | 60 | null | transformers | 5,645 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-es-es
* source languages: es
* target languages: es
* OPUS readme: [es-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-es/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-es/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-es/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.es | 51.7 | 0.688 |
|
Irina/trans_GPT3Medium | bc135e10ddcf6b098cc55b661b8e02e4b2f89edf | 2021-11-13T16:37:50.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Irina | null | Irina/trans_GPT3Medium | 60 | null | transformers | 5,646 | Entry not found |
NYTK/sentiment-hts5-xlm-roberta-hungarian | c38cbf7336574a02ebd53c106d43c2a336e2ceba | 2022-02-14T13:33:04.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"hu",
"transformers",
"license:gpl"
] | text-classification | false | NYTK | null | NYTK/sentiment-hts5-xlm-roberta-hungarian | 60 | null | transformers | 5,647 | ---
language:
- hu
tags:
- text-classification
license: gpl
metrics:
- accuracy
widget:
- text: "Jó reggelt! majd küldöm az élményhozókat :)."
---
# Hungarian Sentence-level Sentiment Analysis model with XLM-RoBERTa
For further models, scripts and details, see [our repository](https://github.com/nytud/sentiment-analysis) or [our demo site](https://juniper.nytud.hu/demo/nlp).
- Pretrained model used: XLM-RoBERTa base
- Finetuned on Hungarian Twitter Sentiment (HTS) Corpus
- Labels: 1, 2, 3, 4, 5
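A minimal classification sketch (illustrative); it reuses the widget example above and assumes the checkpoint works with the generic text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="NYTK/sentiment-hts5-xlm-roberta-hungarian")
print(classifier("Jó reggelt! majd küldöm az élményhozókat :)."))
```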
## Limitations
- max_seq_length = 128
## Results
| Model | HTS2 | HTS5 |
| ------------- | ------------- | ------------- |
| huBERT | 85.55 | 68.99 |
| XLM-RoBERTa| 85.56 | **85.56** |
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-bart,
title = {Improving Performance of Sentence-level Sentiment Analysis with Data Augmentation Methods},
booktitle = {Proceedings of 12th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2021)},
year = {2021},
publisher = {IEEE},
address = {Online},
author = {{Laki, László and Yang, Zijian Győző}},
pages = {417--422}
}
``` |
alireza7/PEGASUS-persian-base-perkey-summary | 7e76c778545dd20894cd7d08723bcda1ed806ce5 | 2021-09-29T19:25:45.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | alireza7 | null | alireza7/PEGASUS-persian-base-perkey-summary | 60 | null | transformers | 5,648 | More information about models is available [here](https://github.com/alirezasalemi7/ARMAN). |
arianpasquali/twitter-xlm-roberta-base-sentiment-finetunned | c777d22dc6c1d5a6d2c0ad6408a5f108140b8075 | 2022-01-25T23:34:13.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"transformers"
] | text-classification | false | arianpasquali | null | arianpasquali/twitter-xlm-roberta-base-sentiment-finetunned | 60 | null | transformers | 5,649 | Entry not found |
cahya/wav2vec2-large-xlsr-indonesian-artificial | 92576c679e8cbbe21481f7ec7b734b0e69c5b29a | 2021-07-05T23:51:17.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"id",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | cahya | null | cahya/wav2vec2-large-xlsr-indonesian-artificial | 60 | null | transformers | 5,650 | ---
language: id
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesian with Artificial Voice by Cahya
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice id
type: common_voice
args: id
metrics:
- name: Test WER
type: wer
value: 51.69
---
# Wav2Vec2-Large-XLSR-Indonesian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on the [Indonesian Artificial Common Voice dataset](https://cloud.uncool.ai/index.php/f/2165181).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 51.69 %
## Training
The Artificial Common Voice `train`, `validation`, and ... datasets were used for training.
The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
(will be available soon)
|
castorini/monot5-3b-msmarco | c8432e9220adb0c59fc360db20274a1729b64802 | 2021-04-03T13:48:44.000Z | [
"pytorch",
"t5",
"feature-extraction",
"transformers"
] | feature-extraction | false | castorini | null | castorini/monot5-3b-msmarco | 60 | null | transformers | 5,651 | This model is a T5-3B reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs).
For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai).
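As a rough, self-contained illustration of the monoT5 scoring recipe from the paper (the "Query: ... Document: ... Relevant:" prompt, scored by comparing the "true" and "false" tokens), here is a sketch. Note that pygaggle wraps this more conveniently, the 3B checkpoint needs substantial memory, and if the repository does not ship tokenizer files the standard `t5-3b` tokenizer can be used instead.
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("castorini/monot5-3b-msmarco")  # or "t5-3b" if no tokenizer is hosted
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-3b-msmarco")

query = "what causes rainbows"
doc = "Rainbows are caused by refraction and dispersion of sunlight in water droplets."
prompt = f"Query: {query} Document: {doc} Relevant:"

inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)

with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]

true_id = tokenizer.encode("true")[0]
false_id = tokenizer.encode("false")[0]
# simplification: softmax over just the true/false logits; relevance = probability of "true"
score = torch.softmax(logits[[false_id, true_id]], dim=0)[1].item()
print(score)
```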
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/) |
dandelin/vilt-b32-finetuned-flickr30k | 494e36e6ea1edd9e295e0eea3d6cc7264efe39e1 | 2022-01-23T09:46:32.000Z | [
"pytorch",
"vilt",
"arxiv:1505.04870",
"arxiv:2102.03334",
"transformers",
"license:apache-2.0"
] | null | false | dandelin | null | dandelin/vilt-b32-finetuned-flickr30k | 60 | 1 | transformers | 5,652 | ---
license: apache-2.0
---
# Vision-and-Language Transformer (ViLT), fine-tuned on Flickr30k
Vision-and-Language Transformer (ViLT) model fine-tuned on [Flickr30k](https://arxiv.org/abs/1505.04870#:~:text=The%20Flickr30k%20dataset%20has%20become,for%20sentence%2Dbased%20image%20description.&text=Such%20annotations%20are%20essential%20for,entity%20mentions%20in%20an%20image.). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the model for image and text retrieval.
### How to use
Here is how to use the model in PyTorch:
```python
from transformers import ViltProcessor, ViltForImageAndTextRetrieval
import requests
from PIL import Image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-flickr30k")
model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-flickr30k")
# prepare inputs and score each candidate text against the image
scores = dict()
for text in texts:
encoding = processor(image, text, return_tensors="pt")
outputs = model(**encoding)
scores[text] = outputs.logits[0, :].item()
```
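Since `scores` maps each candidate caption to a logit (higher is better), retrieving the best match is a one-line continuation of the example above:
```python
best_text = max(scores, key=scores.get)
print(f"Best match: {best_text!r} (logit={scores[best_text]:.2f})")
```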
## Training data
(to do)
## Training procedure
### Preprocessing
(to do)
### Pretraining
(to do)
## Evaluation results
(to do)
### BibTeX entry and citation info
```bibtex
@misc{kim2021vilt,
title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
author={Wonjae Kim and Bokyung Son and Ildoo Kim},
year={2021},
eprint={2102.03334},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
``` |
facebook/convnext-base-384 | 493ade9a30c4c8ce13d85da7fb0fdbe3e0e066b1 | 2022-02-26T12:16:12.000Z | [
"pytorch",
"tf",
"convnext",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/convnext-base-384 | 60 | null | transformers | 5,653 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (base-sized model)
ConvNeXT model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-base-384")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-384")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
facebook/convnext-large-224 | 0dedb40e29ccd026c79cabd94ef6c3c2a4bcdd9a | 2022-03-02T19:04:49.000Z | [
"pytorch",
"tf",
"convnext",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | facebook | null | facebook/convnext-large-224 | 60 | null | transformers | 5,654 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (large-sized model)
ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-large-224")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-224")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
facebook/wav2vec2-base-10k-voxpopuli-ft-pl | fd1af7cf77bcb00ecf62c91771c3771a3e863bdc | 2021-07-06T01:52:01.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pl",
"arxiv:2101.00390",
"transformers",
"audio",
"voxpopuli",
"license:cc-by-nc-4.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-base-10k-voxpopuli-ft-pl | 60 | 1 | transformers | 5,655 | ---
language: pl
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Base-VoxPopuli-Finetuned
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained on the 10K unlabeled subset of [VoxPopuli corpus](https://arxiv.org/abs/2101.00390) and fine-tuned on the transcribed data in pl (refer to Table 1 of paper for more information).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Usage for inference
In the following it is shown how the model can be used in inference on a sample of the [Common Voice dataset](https://commonvoice.mozilla.org/en/datasets)
```python
#!/usr/bin/env python3
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torchaudio
import torch
# resample audio
# load model & processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-pl")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-10k-voxpopuli-ft-pl")
# load dataset
ds = load_dataset("common_voice", "pl", split="validation[:1%]")
# common voice does not match target sampling rate
common_voice_sample_rate = 48000
target_sample_rate = 16000
resampler = torchaudio.transforms.Resample(common_voice_sample_rate, target_sample_rate)
# define mapping fn to read in sound file and resample
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
speech = resampler(speech)
batch["speech"] = speech[0]
return batch
# load all audio files
ds = ds.map(map_to_array)
# run inference on the first 5 data samples
inputs = processor(ds[:5]["speech"], sampling_rate=target_sample_rate, return_tensors="pt", padding=True)
# inference
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```
|
filco306/gpt2-tweet-paraphraser | 55baaeb80e7057bc9641d0711956b0b5e6cdb108 | 2021-08-28T23:34:31.000Z | [
"pytorch",
"text-generation",
"arxiv:2010.05700",
"transformers"
] | text-generation | false | filco306 | null | filco306/gpt2-tweet-paraphraser | 60 | null | transformers | 5,656 | # GPT2 Tweet style transfer paraphraser
This is the trained Tweet-model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.
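The checkpoint loads like any other GPT-2 causal language model. The snippet below is a minimal, illustrative invocation with a plain-text prompt; the exact input formatting used during training (and the recommended inference pipeline) is defined in the authors' code release, so treat the prompt handling here as an assumption.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("filco306/gpt2-tweet-paraphraser")
model = AutoModelForCausalLM.from_pretrained("filco306/gpt2-tweet-paraphraser")
model.eval()

# plain-text prompt; the training-time input format is defined in the authors' codebase
prompt = "the weather is really nice today"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model.generate(
        input_ids,
        do_sample=True,
        top_p=0.9,
        max_length=60,
        num_return_sequences=3,
        pad_token_id=tokenizer.eos_token_id,
    )

for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```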
## Citation
If you found this model useful, please cite the original work:
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
``` |
flax-community/gpt-neo-125M-code-search-py | f8e55c5a2348e00e286318f870ad162c68b1a152 | 2021-07-26T14:06:51.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | flax-community | null | flax-community/gpt-neo-125M-code-search-py | 60 | null | transformers | 5,657 | # GPT-Code-Clippy-125M-Code-Search-Py
> **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot**
## Model Description
GPT-CC-125M-Code-Search is a [GPT-Neo-125M model](https://huggingface.co/EleutherAI/gpt-neo-125M) finetuned using causal language modeling on only the python language in the [CodeSearchNet Challenge dataset](https://huggingface.co/datasets/code_search_net). This model is specialized to autocomplete methods in the python language.
## Training data
[CodeSearchNet Challenge dataset](https://huggingface.co/datasets/code_search_net).
## Training procedure
The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_flax.py).
```bash
./run_clm_flax.py \
--output_dir $HOME/gpt-neo-125M-code-search-py \
--model_name_or_path="EleutherAI/gpt-neo-125M" \
--dataset_name code_search_net \
--dataset_config_name="python" \
--do_train --do_eval \
--block_size="512" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="64" \
--preprocessing_num_workers="8" \
--learning_rate="1.2e-4" \
--num_train_epochs 20 \
--warmup_steps 3000 \
--adam_beta1="0.9" \
--adam_beta2="0.95" \
--weight_decay="0.1" \
--overwrite_output_dir \
--logging_steps="25" \
--eval_steps="500" \
--push_to_hub="False" \
--report_to="all" \
--dtype="bfloat16" \
--skip_memory_metrics="True" \
--save_steps="500" \
--save_total_limit 10 \
--report_to="wandb" \
--run_name="gpt-neo-125M-code-search-py"
```
## Intended Use and Limitations
The model is finetuned on methods from the Python language and is intended to autocomplete Python methods given some prompt (method signature and docstring).
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-neo-125M-code-search-py").to(device)
tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-neo-125M-code-search-py")
prompt = """def greet(name):
'''A function to greet user. Given a user name it should say hello'''
"""
input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device)
start = input_ids.size(1)
out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2,
early_stopping=True, eos_token_id=tokenizer.eos_token_id, )
print(tokenizer.decode(out[0][start:]))
```
### Limitations and Biases
The model is intended to be used for research purposes and comes with no guarantees of quality of generated code.
GPT-CC is finetuned from GPT-Neo and might have inherited biases and limitations from it. See [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.
## Eval results
Coming soon... |
fspanda/electra-medical-small-discriminator | 237898f5ee8bac6197ad0c2860caea3b38f123dc | 2020-10-29T00:30:38.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
] | null | false | fspanda | null | fspanda/electra-medical-small-discriminator | 60 | null | transformers | 5,658 | Entry not found |
huggingtweets/bestmusiclyric-bygpt3 | 6b65f651bea6e5c09207516e0860529232335fb7 | 2021-05-21T20:28:19.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/bestmusiclyric-bygpt3 | 60 | null | transformers | 5,659 | ---
language: en
thumbnail: https://www.huggingtweets.com/bestmusiclyric-bygpt3/1621260459372/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2113290180/images-1_400x400.jpeg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1284655541227323395/4E-Y6plH_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Best Music Lyric & Wisdom_by_GPT3</div>
<div style="text-align: center; font-size: 14px;">@bestmusiclyric-bygpt3</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Best Music Lyric & Wisdom_by_GPT3.
| Data | Best Music Lyric | Wisdom_by_GPT3 |
| --- | --- | --- |
| Tweets downloaded | 3248 | 293 |
| Retweets | 1092 | 3 |
| Short tweets | 834 | 86 |
| Tweets kept | 1322 | 204 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/101pevjn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bestmusiclyric-bygpt3's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3qkafun2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3qkafun2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bestmusiclyric-bygpt3')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/funnyordie | 02da8d24a1be3958820d34839b944f9c15fc4ecc | 2022-01-04T19:39:10.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/funnyordie | 60 | null | transformers | 5,660 | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/894956741573525504/YFg6jiNP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Funny Or Die</div>
<div style="text-align: center; font-size: 14px;">@funnyordie</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Funny Or Die.
| Data | Funny Or Die |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 237 |
| Short tweets | 190 |
| Tweets kept | 2823 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zjkuy05u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @funnyordie's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2jaeb619) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2jaeb619/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/funnyordie')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/sentienter | 808b0296edbe288c92c8211d6601ad876db2d006 | 2021-05-22T22:28:11.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/sentienter | 60 | null | transformers | 5,661 | ---
language: en
thumbnail: https://www.huggingtweets.com/sentienter/1616642835417/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1274873508711940097/BKZv8mxD_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Walker 🤖 AI Bot </div>
<div style="font-size: 15px">@sentienter bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@sentienter's tweets](https://twitter.com/sentienter).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 77 |
| Retweets | 16 |
| Short tweets | 5 |
| Tweets kept | 56 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2se5p98l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sentienter's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/27jgnob0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/27jgnob0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sentienter')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jonatasgrosman/wav2vec2-xls-r-1b-dutch | 3caa6c25336dfa23f6585773cebf4a0ccc8765be | 2022-07-27T23:38:48.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/wav2vec2-xls-r-1b-dutch | 60 | 1 | transformers | 5,662 | ---
language:
- nl
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- nl
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R Wav2Vec2 Dutch by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 10.38
- name: Test CER
type: cer
value: 3.04
- name: Test WER (+LM)
type: wer
value: 6.83
- name: Test CER (+LM)
type: cer
value: 2.31
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Dev WER
type: wer
value: 31.12
- name: Dev CER
type: cer
value: 15.92
- name: Dev WER (+LM)
type: wer
value: 23.95
- name: Dev CER (+LM)
type: cer
value: 14.18
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 20.41
---
# Fine-tuned XLS-R 1B model for speech recognition in Dutch
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Dutch using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, and thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-dutch")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "nl"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-dutch"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-dutch --dataset mozilla-foundation/common_voice_8_0 --config nl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-dutch --dataset speech-recognition-community-v2/dev_data --config nl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr-1b-dutch,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {D}utch},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-dutch}},
year={2022}
}
```
|
kco4776/soongsil-bert-wellness | 040ac5d8c56c6512f2d3aa1732f7b7c84a168057 | 2021-12-19T15:23:09.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | kco4776 | null | kco4776/soongsil-bert-wellness | 60 | null | transformers | 5,663 | ## References
- [Soongsil-BERT](https://github.com/jason9693/Soongsil-BERT) |
lewtun/xlm-roberta-base-finetuned-marc | 207ce3118b631d5f792f20f20b0fa9c3775ea503 | 2021-10-15T21:10:49.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"dataset:amazon_reviews_multi",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | lewtun | null | lewtun/xlm-roberta-base-finetuned-marc | 60 | 1 | transformers | 5,664 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9932
- Mae: 0.4838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.05 | 1.0 | 860 | 1.0007 | 0.5074 |
| 0.9166 | 2.0 | 1720 | 0.9932 | 0.4838 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
malay-huggingface/xlnet-tiny-bahasa-cased | 1cff829bede4df4bf3438e5310e71df6d9348f9f | 2021-09-18T13:50:09.000Z | [
"pytorch",
"xlnet",
"ms",
"transformers"
] | null | false | malay-huggingface | null | malay-huggingface/xlnet-tiny-bahasa-cased | 60 | null | transformers | 5,665 | ---
language: ms
---
# xlnet-tiny-bahasa-cased
Pretrained XLNET tiny language model for Malay.
## Pretraining Corpus
The `xlnet-tiny-bahasa-cased` model was pretrained on ~1.4 billion words. Below is the list of data we trained on:
1. [cleaned local texts](https://github.com/huseinzol05/malay-dataset/tree/master/dumping/clean).
2. [translated The Pile](https://github.com/huseinzol05/malay-dataset/tree/master/corpus/pile).
## Pretraining details
- All steps can reproduce from here, [Malaya/pretrained-model/xlnet](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/xlnet).
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and Huggingface library `transformers`. And you can use it directly by initializing it like this:
```python
from transformers import XLNetModel, XLNetTokenizer
model = XLNetModel.from_pretrained('malay-huggingface/xlnet-tiny-bahasa-cased')
tokenizer = XLNetTokenizer.from_pretrained(
'malay-huggingface/xlnet-tiny-bahasa-cased',
do_lower_case = False,
)
``` |
nlp-waseda/gpt2-small-japanese-wikipedia | 5fb982276618b4e8bd97c483b041fecdbe245e25 | 2021-12-28T06:31:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"transformers",
"license:cc-by-sa-4.0"
] | text-generation | false | nlp-waseda | null | nlp-waseda/gpt2-small-japanese-wikipedia | 60 | 1 | transformers | 5,666 | ---
language:
- ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: "早稲田 大学 で 自然 言語 処理 を"
---
# nlp-waseda/gpt2-small-japanese-wikipedia
This model is Japanese GPT-2 pretrained on Japanese Wikipedia.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task.
Note that the texts should be segmented into words using Juman++ in advance.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='nlp-waseda/gpt2-small-japanese-wikipedia')
>>> set_seed(42)
>>> generator("早稲田 大学 で 自然 言語 処理 を", max_length=30, do_sample=True, pad_token_id=2, num_return_sequences=5)
[{'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 1969 年 に は 同 大学院 を 修了 。 東京 芝浦 電気 株式 会社 に 就職 後 、 情報 処理'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 帰国 後 は 立教 大学 理学部 助手 を 務めた 。 1978 年 に 神奈川 県立 湘南 高等 学校 校長 に 就任'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 研究 。 1972 年 に 早稲田 大学 文学部 ドイツ 文学 専攻 を 卒業 し 、 同 年 から 1979 年 まで 上智 大学'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 専攻 する 。 1979 年 東京 農工 大学 農学 部 卒業 。 1980 年 同 大学院 農学 研究 科 修士 課程 修了 。'},
{'generated_text': '早稲田 大学 で 自然 言語 処理 を 専攻 し ながら 、 日本 で 活動 する 自然 言語 研究 家 。 大学 時代 は 東京 大学 理学部 の 助手 を 務め'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import ReformerTokenizer, GPT2Model
tokenizer = ReformerTokenizer.from_pretrained('nlp-waseda/gpt2-small-japanese-wikipedia')
model = GPT2Model.from_pretrained('nlp-waseda/gpt2-small-japanese-wikipedia')
text = "早稲田 大学 で 自然 言語 処理 を"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Training data
The GPT-2 model was pretrained on Japanese Wikipedia, dumped on 2021-12-20.
## Training procedure
### Preprocessing
The texts are normalized using zenhan, segmented into words using Juman++, and tokenized using SentencePiece. Juman++ 2.0.0-rc3 was used for pretraining.
The model was trained on 8 NVIDIA A100 GPUs.
|
speechbrain/asr-conformer-transformerlm-ksponspeech | 5928b5da43df4df102c4c2885f74b45707cc291d | 2022-06-25T03:07:00.000Z | [
"kr",
"dataset:ksponspeech",
"arxiv:2106.04624",
"speechbrain",
"automatic-speech-recognition",
"CTC",
"Attention",
"Conformer",
"pytorch",
"license:apache-2.0"
] | automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-conformer-transformerlm-ksponspeech | 60 | 3 | speechbrain | 5,667 | ---
language: "kr"
thumbnail:
tags:
- automatic-speech-recognition
- CTC
- Attention
- Conformer
- pytorch
- speechbrain
license: "apache-2.0"
datasets:
- ksponspeech
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Conformer for KsponSpeech (with Transformer LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on KsponSpeech (Kr) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | eval clean CER | eval other CER | GPUs |
| :------: | :------------: | :------------: | :---------: |
| 09-05-21 | 7.48% | 8.38% | 6xA100 80GB |
## Pipeline description
This ASR system is composed of 3 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions of KsponSpeech.
- Neural language model (Transformer LM) trained on the train transcriptions of KsponSpeech
- Acoustic model made of a conformer encoder and a joint decoder with CTC +
transformer. Hence, the decoding also incorporates the CTC probabilities.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```bash
pip install speechbrain==0.5.10
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Korean)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="ddwkim/asr-conformer-transformerlm-ksponspeech", savedir="pretrained_models/asr-conformer-transformerlm-ksponspeech", run_opts={"device":"cuda"})
asr_model.transcribe_file("ddwkim/asr-conformer-transformerlm-ksponspeech/record_0_16k.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/10N98aGoeLGfh6Hu6xOCH5BbjVTVYgCyB?usp=sharing) on using the pretrained model
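If you prefer a script over the notebook, the sketch below batches several files through SpeechBrain's `transcribe_batch` interface. The file names are placeholders, the audio is assumed to be mono and sampled at 16 kHz (resample first otherwise), and the padding/relative-length handling is kept deliberately simple.
```python
import torch
import torchaudio
from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(
    source="ddwkim/asr-conformer-transformerlm-ksponspeech",
    savedir="pretrained_models/asr-conformer-transformerlm-ksponspeech",
)

# hypothetical file names; both assumed to be mono recordings sampled at 16 kHz
paths = ["utt1.wav", "utt2.wav"]
signals = [torchaudio.load(p)[0].squeeze(0) for p in paths]

# pad to a common length and pass relative lengths, as transcribe_batch expects
lengths = torch.tensor([s.shape[0] for s in signals], dtype=torch.float)
batch = torch.nn.utils.rnn.pad_sequence(signals, batch_first=True)
rel_lens = lengths / lengths.max()

transcripts, _ = asr_model.transcribe_batch(batch, rel_lens)
print(transcripts)
```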
### Training
The model was trained with SpeechBrain (Commit hash: 'fd9826c').
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install .
```
3. Run Training:
```bash
cd recipes/KsponSpeech/ASR/transformer
python train.py hparams/conformer_medium.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc.) in the subdirectories.
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# Citing the model
```bibtex
@misc{returnzero,
title = {ReturnZero Conformer Korean ASR model},
author = {Dongwon Kim and Dongwoo Kim and Roh Jeongkyu},
year = {2021},
howpublished = {\url{https://huggingface.co/ddwkim/asr-conformer-transformerlm-ksponspeech}},
}
```
# Citing KsponSpeech dataset
```bibtex
@Article{app10196936,
AUTHOR = {Bang, Jeong-Uk and Yun, Seung and Kim, Seung-Hi and Choi, Mu-Yeol and Lee, Min-Kyu and Kim, Yeo-Jeong and Kim, Dong-Hyun and Park, Jun and Lee, Young-Jik and Kim, Sang-Hun},
TITLE = {KsponSpeech: Korean Spontaneous Speech Corpus for Automatic Speech Recognition},
JOURNAL = {Applied Sciences},
VOLUME = {10},
YEAR = {2020},
NUMBER = {19},
ARTICLE-NUMBER = {6936},
URL = {https://www.mdpi.com/2076-3417/10/19/6936},
ISSN = {2076-3417},
DOI = {10.3390/app10196936}
}
```
|
textattack/distilbert-base-cased-snli | 313497ea96b4d773ac95b665a56a3d5d5ef0d3ca | 2020-07-06T16:37:00.000Z | [
"pytorch",
"distilbert",
"transformers"
] | null | false | textattack | null | textattack/distilbert-base-cased-snli | 60 | null | transformers | 5,668 | ## TextAttack Model Card
This `distilbert-base-cased` model was fine-tuned for sequence classification using TextAttack
and the snli dataset loaded using the `nlp` library. The model was fine-tuned
for 3 epochs with a batch size of 256, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.8768542979069295, as measured by the
eval set accuracy, found after 2 epochs.
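As a quick illustration, the checkpoint can be loaded with the standard sequence-classification classes and fed a premise/hypothesis pair. The example sentences below are made up, and the mapping from class indices to entailment/neutral/contradiction labels should be read from the model's config rather than assumed:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "textattack/distilbert-base-cased-snli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# consult model.config.id2label for the index-to-label mapping
print(probs.tolist(), model.config.id2label)
```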
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
|
vaughnw128/DialoGPT-medium-sexybabeycord | 824ce2be568da70429a376cc6402c70504cc4ebb | 2022-01-27T22:10:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | vaughnw128 | null | vaughnw128/DialoGPT-medium-sexybabeycord | 60 | null | transformers | 5,669 | ---
tags:
- conversational
---
We love owen |
inovex/multi2convai-corona-de-bert | 053d067e5a87d90516c8b56ecd2d9eadd6f03454 | 2022-03-01T09:18:20.000Z | [
"pytorch",
"bert",
"text-classification",
"de",
"transformers",
"license:mit"
] | text-classification | false | inovex | null | inovex/multi2convai-corona-de-bert | 60 | 1 | transformers | 5,670 | ---
tags:
- text-classification
- pytorch
- transformers
widget:
- text: "Muss ich eine Maske tragen?"
license: mit
language: de
---
# Multi2ConvAI-Corona: finetuned Bert for German
This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:
- domain: Corona (more details about our use cases: [en](https://multi2conv.ai/en/blog/use-cases), [de](https://multi2conv.ai/de/blog/use-cases))
- language: German (de)
- model type: finetuned Bert
## How to run
Requires:
- Huggingface transformers
### Run with Huggingface Transformers
````python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-logistics-de-bert")
model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-logistics-de-bert")
````
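To classify an utterance (for example, the widget question above), the following continuation of the snippet can be used; the intent names are read from the model's `id2label` mapping rather than hard-coded:
````python
import torch

inputs = tokenizer("Muss ich eine Maske tragen?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
````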
## Further information on Multi2ConvAI:
- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected] |
allenai/aspire-biencoder-compsci-spec | 00f38c1584ec14f372c22acc4b6e9b11efb35d77 | 2022-04-24T19:39:24.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2111.08366",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | allenai | null | allenai/aspire-biencoder-compsci-spec | 60 | null | transformers | 5,671 | ---
license: apache-2.0
---
## Overview
Model included in a paper for modeling fine grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `Specter-CoCite_Spec` and represents a baseline bi-encoder for scientific document similarity. This model is similar in architecture to the [`allenai/specter`](https://github.com/allenai/specter) model but is trained on co-citation data instead of citation data.
## Model Card
### Model description
This model is a BERT bi-encoder model trained for similarity of title-abstract pairs in computer science scientific papers. The model is **initialized with the SPECTER model**. This model inputs the title and abstract of a paper and represents it with a single vector obtained by a scalar mix of the CLS token at every layer of the base encoder. These scalar mix parameters can be important for performance in some datasets. Importantly, these scalar mix weights are not included as part of this HF model; if you wish to use these parameters, please download the full model at: [`aspire-biencoder-compsci-spec-full.zip`](https://drive.google.com/file/d/1AHtzyEpyn7DeFYOdt86ik4n0tGaG5kMC/view?usp=sharing).
### Training data
The model is trained on pairs of co-cited papers in a contrastive learning setup. The model is trained on 1.2 million computer science paper pairs. In training, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers; for example, the papers cited in brackets below are all co-cited, and each pair's title and abstract would be used as a training pair:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
### Training procedure
The model was trained with the Adam Optimizer and a learning rate of 2e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for document similarity tasks in **computer science** scientific text using a single vector per document. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as classification. Since the training data comes primarily from computer science, performance on other domains may be poorer.
### How to use
Follow instructions for use detailed on the model github repo: https://github.com/allenai/aspire#specter-cocite
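For a rough, self-contained sketch of the encoder (without the scalar-mix weights, which ship only with the full download above), you can feed `title [SEP] abstract` through the Hugging Face checkpoint and take the last-layer CLS vector as an approximate document embedding. The abstract string below is a placeholder, and this approximation is not the paper's reference implementation:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allenai/aspire-biencoder-compsci-spec")
model = AutoModel.from_pretrained("allenai/aspire-biencoder-compsci-spec")
model.eval()

title = "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
abstract = "We present a model for fine-grained similarity between scientific documents."  # placeholder abstract

inputs = tokenizer(
    title + tokenizer.sep_token + abstract,
    return_tensors="pt", truncation=True, max_length=512,
)
with torch.no_grad():
    outputs = model(**inputs)

# approximate single-vector document representation: last-layer [CLS] token
doc_embedding = outputs.last_hidden_state[:, 0, :]
print(doc_embedding.shape)  # (1, hidden_size)
```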
### Variable and metrics
This model is evaluated on information retrieval datasets with document level queries. Performance here is reported on CSFCube (computer science/English). This is detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). CSFCube presents a finer-grained query via selected sentences in a query abstract based on which a finer-grained retrieval must be made from candidate abstracts. The bi-encoder above ignores the finer grained query sentences and uses the whole abstract - this presents a baseline in the paper.
We rank documents by the L2 distance between the query and candidate documents.
### Evaluation results
The released model `aspire-biencoder-compsci-spec` (and `aspire-biencoder-compsci-spec-full`) is compared against `allenai/specter`. `aspire-biencoder-compsci-spec-full`<sup>*</sup> is the performance reported in our paper by averaging over 3 re-runs of the model. The released models `aspire-biencoder-compsci-spec` and `aspire-biencoder-compsci-spec-full` are the single best run among the 3 re-runs.
| | CSFCube aggregated | CSFCube aggregated|
|--------------------------------------------:|:---------:|:-------:|
| | MAP | NDCG%20 |
| `specter` | 34.23 | 53.28 |
| `aspire-biencoder-compsci-spec-full`<sup>*</sup> | 37.90 | 58.16 |
| `aspire-biencoder-compsci-spec` | 37.17 | 57.91 |
| `aspire-biencoder-compsci-spec-full` | 37.67 | 59.26 |
**Alternative models:**
Besides the above models consider these alternative models also released in the Aspire paper:
[`aspire-biencoder-biomed-scib`](https://huggingface.co/allenai/aspire-biencoder-biomed-scib): If you wanted to run on biomedical papers.
|
smeoni/nbme-deberta-large | 418cca0a10d998784b929c867f5f44352b9c6062 | 2022-04-23T18:29:48.000Z | [
"pytorch",
"tensorboard",
"deberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | smeoni | null | smeoni/nbme-deberta-large | 60 | null | transformers | 5,672 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: nbme-deberta-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nbme-deberta-large
This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.358 | 1.0 | 1850 | 1.1622 |
| 1.0073 | 2.0 | 3700 | 0.9461 |
| 0.8837 | 3.0 | 5550 | 0.8806 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
allenai/mtk-instruct-11b-def-pos | ff993b4a27cc7cea557ecf3ccbebc96711b6f97c | 2022-05-27T22:20:08.000Z | [
"pytorch",
"t5",
"text2text-generation",
"multilingual",
"dataset:natural instructions v2.0",
"arxiv:1910.10683",
"arxiv:2204.07705",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/mtk-instruct-11b-def-pos | 60 | 1 | transformers | 5,673 | ---
language: multilingual
license: apache-2.0
datasets:
- natural instructions v2.0
---
# Model description
Tk-Instruct is a series of encoder-decoder Transformer models that are trained to solve various NLP tasks by following in-context instructions (plain language task definitions, k-shot examples, explanations, etc). Built upon the pre-trained [T5 models](https://arxiv.org/abs/1910.10683), they are fine-tuned on a large number of tasks & instructions that are collected in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. This enables the model to not only process the training tasks, but also generalize to many unseen tasks without further parameter updates.
More resources for using the model:
- **Paper**: [link](https://arxiv.org/abs/2204.07705)
- **Code repository**: [Tk-Instruct](https://github.com/yizhongw/Tk-Instruct)
- **Official Website**: [Natural Instructions](https://instructions.apps.allenai.org/)
- **All released models**: [allenai/tk-instruct](https://huggingface.co/models?search=allenai/tk-instruct)
## Intended uses & limitations
Tk-Instruct can be used to do many NLP tasks by following instructions.
### How to use
When instructing the model, task definition or demonstration examples or explanations should be prepended to the original input and fed into the model. You can easily try Tk-Instruct models as follows:
```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("allenai/tk-instruct-3b-def")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/tk-instruct-3b-def")
>>> input_ids = tokenizer.encode(
"Definition: return the currency of the given country. Now complete the following example - Input: India. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'Indian Rupee'
>>> input_ids = tokenizer.encode(
"Definition: negate the following sentence. Input: John went to school. Output:",
return_tensors="pt")
>>> output = model.generate(input_ids, max_length=10)
>>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'John did not go to school.'
```
### Limitations
We are still working on understanding the behaviors of these models, but here are several issues we have found:
- Models are generally sensitive to the instruction. Sometimes rewording the instruction can lead to very different output.
- Models are not always compliant with the instruction. Sometimes the model doesn't follow your instruction (e.g., when you ask the model to generate one sentence, it might still generate one word or a long story).
- Models might totally fail on some tasks.
If you find serious issues or any interesting result, you are welcome to share with us!
## Training data
Tk-Instruct is trained using the tasks & instructions in [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. We follow the official train/test split. Tk-Instruct model series were trained using 757 tasks, and mTk-Instruct series were trained using 1271 tasks (including some non-English tasks).
The training tasks are in 64 broad categories, such as text categorization / question answering / sentiment analysis / summarization / grammar error detection / dialogue generation / etc. The other 12 categories are selected for evaluation.
## Training procedure
All our models are initialized from either T5 models or mT5 models. Because generating the output can be regarded as a form of language modeling, we used their [LM adapted version](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k). All data is converted into a text-to-text format, and models are fine-tuned to maximize the likelihood of the output sequence.
Our [released models](https://huggingface.co/models?search=allenai/tk-instruct) are in different sizes, and each of them was trained with a specific type of instruction encoding. For instance, `tk-instruct-3b-def-pos` was initialized from [t5-xl-lm-adapt](https://huggingface.co/google/t5-xl-lm-adapt), and it saw task definition & 2 positive examples as the instruction during training time.
Although they are trained with only one type of instruction encodings, we found they can usually work with other type of encodings at test time (see more in our paper).
### BibTeX entry and citation info
```bibtex
@article{wang2022benchmarking,
title={Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks},
author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and A. Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and M. Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddharth Deepak Mishra and Sujan C. Reddy and Sumanta Patro and Tanay Dixit and Xu-dong Shen and Chitta Baral and Yejin Choi and Hannaneh Hajishirzi and Noah A. Smith and Daniel Khashabi},
year={2022},
archivePrefix={arXiv},
eprint={2204.07705},
primaryClass={cs.CL},
}
``` |
RogerKam/roberta_fine_tuned_sentiment_newsmtsc | a34ebe62615db87dedf1fd7f24d881ac45e12fc8 | 2022-06-09T14:27:18.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | RogerKam | null | RogerKam/roberta_fine_tuned_sentiment_newsmtsc | 60 | null | transformers | 5,674 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta_fine_tuned_sentiment_newsmtsc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_fine_tuned_sentiment_newsmtsc
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6134
- Accuracy: 0.7713
- F1 Score: 0.7710
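A minimal inference sketch, assuming the checkpoint is loaded through the standard `text-classification` pipeline (the label names come from the model's own config and are not documented here):
```python
from transformers import pipeline

# Minimal sketch: sentiment classification with the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="RogerKam/roberta_fine_tuned_sentiment_newsmtsc",
)

print(classifier("The central bank's decision calmed the markets."))
# -> [{'label': ..., 'score': ...}], labels taken from the model config
```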
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.10.0+cu111
- Datasets 2.2.2
- Tokenizers 0.12.1
|
plncmm/beto-clinical-wl-es | 05f09fd37d7d25dbb3321507b57057769929c646 | 2022-06-07T23:06:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"es",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | fill-mask | false | plncmm | null | plncmm/beto-clinical-wl-es | 60 | null | transformers | 5,675 | ---
language:
- es
widget:
- text: "Periodontitis [MASK] generalizada severa."
- text: "Caries dentinaria [MASK]."
- text: "Movilidad aumentada en pza [MASK]."
- text: "Pcte con dm en tto con [MASK]."
- text: "Pcte con erc en tto con [MASK]."
tags:
- generated_from_trainer
model-index:
- name: bio-bert-base-spanish-wwm-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plncmm/beto-clinical-wl-es
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the Chilean waiting list dataset.
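A minimal usage sketch, mirroring the widget examples above:
```python
from transformers import pipeline

# Minimal sketch: masked-token prediction with the clinical Spanish checkpoint.
fill_mask = pipeline("fill-mask", model="plncmm/beto-clinical-wl-es")

# One of the widget examples from this card.
for pred in fill_mask("Periodontitis [MASK] generalizada severa."):
    print(pred["token_str"], round(pred["score"], 3))
```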
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
respect5716/koenbert-base | b92f1c3b06024a5c280886766c69560e33c98616 | 2022-07-17T04:52:33.000Z | [
"pytorch",
"bert",
"feature-extraction",
"ko",
"transformers"
] | feature-extraction | false | respect5716 | null | respect5716/koenbert-base | 60 | null | transformers | 5,676 | ---
language: ko
---
# koenbert-base
Recently, a variety of Korean language models have been developed and shared. However, because these models support only Korean, it is hard to make use of the English data produced in many domains such as dialog systems and information retrieval. Multilingual models, on the other hand, support so many languages that they are large and their Korean performance suffers. To address these limitations and make Korean models more useful, this project teaches English to a Korean language model. For more details about the model, please see the [Github repo](https://github.com/respect5716/kobert-to-koenbert).
## Usage
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('respect5716/koenbert-base')
model = AutoModel.from_pretrained('respect5716/koenbert-base')
```
|
ij5/kobart | 8e4c5ae44bbe6a155c6d898874300a6c1eb58ffc | 2022-07-19T11:54:49.000Z | [
"pytorch",
"bart",
"feature-extraction",
"transformers",
"license:mit"
] | feature-extraction | false | ij5 | null | ij5/kobart | 60 | null | transformers | 5,677 | ---
license: mit
---
|
SIMAS-UN/blaming_geopolitics | 8821e15051d319514d66e2a7650c6dc1a2f0257a | 2022-07-24T04:02:06.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | SIMAS-UN | null | SIMAS-UN/blaming_geopolitics | 60 | null | transformers | 5,678 | Entry not found |
anzorq/ru-kbd_lat-t5-small | 0873e058cf8d5f47a8cda38185cba306e78b8613 | 2022-07-27T10:14:33.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ru",
"kbd",
"dataset:anzorq/kbd_lat-ru",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | anzorq | null | anzorq/ru-kbd_lat-t5-small | 60 | null | transformers | 5,679 | ---
language:
- ru
- kbd
license: mit
tags:
- generated_from_trainer
datasets:
- anzorq/kbd_lat-ru
metrics:
- bleu
model-index:
- name: tst-translation
results:
- task:
name: translation
type: translation
dataset:
name: anzorq/kbd_lat-ru anzorq--kbd-ru
type: anzorq/kbd_lat-ru
args: anzorq--kbd-ru
metrics:
- name: Bleu
type: bleu
value: 12.649
widget:
- text: "ru->kbd: Я иду домой."
example_title: "Я иду домой."
- text: "ru->kbd: Дети играют во дворе."
example_title: "Дети играют во дворе."
- text: "ru->kbd: Сколько тебе лет?"
example_title: "Сколько тебе лет?"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tst-translation
This model is a fine-tuned version of [anzorq/kbd_lat-835k_ru-3M_t5-small](https://huggingface.co/anzorq/kbd_lat-835k_ru-3M_t5-small) on the anzorq/kbd_lat-ru anzorq--kbd-ru dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6000
- Bleu: 12.649
- Gen Len: 11.8018
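A minimal inference sketch, using the same `ru->kbd: ` task prefix as the widget examples above (beam settings are illustrative):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Minimal sketch: Russian -> Kabardian (Latin script) translation.
tokenizer = AutoTokenizer.from_pretrained("anzorq/ru-kbd_lat-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("anzorq/ru-kbd_lat-t5-small")

# The task prefix matches the widget examples on this card.
inputs = tokenizer("ru->kbd: Я иду домой.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```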
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 1.16.0
- Tokenizers 0.12.1
|
derwahnsinn/gpt2-mediumBIGBANG | dbe9db8f7e319854cac777a0cfcc064d2d382139 | 2022-07-28T03:29:24.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | derwahnsinn | null | derwahnsinn/gpt2-mediumBIGBANG | 60 | null | transformers | 5,680 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-mediumBIGBANG
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-mediumBIGBANG
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4441
- eval_runtime: 136.0007
- eval_samples_per_second: 61.044
- eval_steps_per_second: 7.632
- epoch: 10.7
- step: 11103
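Since the card gives no usage details, the following is only a generic text-generation sketch (the prompt and sampling settings are arbitrary):
```python
from transformers import pipeline

# Minimal sketch: sample text from the fine-tuned GPT-2 medium checkpoint.
generator = pipeline("text-generation", model="derwahnsinn/gpt2-mediumBIGBANG")

out = generator("The experiment began", max_length=50, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```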
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Forest/gpt2-fanfic | b908209a8b686492239e2ff3773fb2333c8a1477 | 2021-05-21T09:44:04.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | Forest | null | Forest/gpt2-fanfic | 59 | null | transformers | 5,681 | Entry not found |
Greg1901/BertSummaDev_summariser | 94f60550472a7d98fd2c369b4ea785adfa0fd1e6 | 2021-07-24T15:23:23.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Greg1901 | null | Greg1901/BertSummaDev_summariser | 59 | null | transformers | 5,682 | Entry not found |
Harveenchadha/vakyansh-wav2vec2-tamil-tam-250 | 0bd7c7d87da18a71b246ce3e543244bdba983e36 | 2021-09-22T07:55:33.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ta",
"arxiv:2107.07402",
"transformers",
"audio",
"speech",
"license:mit",
"model-index"
] | automatic-speech-recognition | false | Harveenchadha | null | Harveenchadha/vakyansh-wav2vec2-tamil-tam-250 | 59 | null | transformers | 5,683 | ---
language: ta
#datasets:
#- Interspeech 2021
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: mit
model-index:
- name: Wav2Vec2 Vakyansh Tamil Model by Harveen Chadha
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 53.64
---
## Pretrained Model
Fine-tuned on Multilingual Pretrained Model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz.
**Note: The results from this model were produced without a language model, so you may see a higher WER in some cases.**
## Dataset
This model was trained on 4200 hours of labelled Tamil data. The labelled data is not in the public domain as of now.
## Training Script
Models were trained using the experimental platform set up by the Vakyansh team at Ekstep. Here is the [training repository](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation).
In case you want to explore the training logs on wandb, they are [here](https://wandb.ai/harveenchadha/tamil-finetuning-multilingual).
## [Colab Demo](https://github.com/harveenchadha/bol/blob/main/demos/hf/tamil/hf_tamil_tnm_4200_demo.ipynb)
## Usage
The model can be used directly (without a language model) as follows:
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import argparse
def parse_transcription(wav_file):
# load pretrained model
processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
# load audio
audio_input, sample_rate = sf.read(wav_file)
# pad input values and return pt tensor
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
# INFERENCE
# retrieve logits & take argmax
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
# transcribe
transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
print(transcription)
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 53.64 %
[**Colab Evaluation**](https://github.com/harveenchadha/bol/blob/main/demos/hf/tamil/hf_vakyansh_tamil_tnm_4200_evaluation_common_voice.ipynb)
## Credits
Thanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages. |
Helsinki-NLP/opus-mt-en-afa | 9ce92c06934cc12f849cc9139a1174f8ef5fde7e | 2021-01-18T08:04:43.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"so",
"ti",
"am",
"he",
"mt",
"ar",
"afa",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-afa | 59 | null | transformers | 5,684 | ---
language:
- en
- so
- ti
- am
- he
- mt
- ar
- afa
tags:
- translation
license: apache-2.0
---
### eng-afa
* source group: English
* target group: Afro-Asiatic languages
* OPUS readme: [eng-afa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md)
* model: transformer
* source language(s): eng
* target language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the sketch after this list
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.eval.txt)
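A minimal translation sketch for this multilingual-target model, assuming Hebrew (`heb`, listed among the target languages above) and the standard Marian classes:
```python
from transformers import MarianMTModel, MarianTokenizer

# Minimal sketch: English -> Hebrew; the ">>heb<<" prefix selects the target
# language, as required for this multilingual-target model.
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-afa")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-afa")

batch = tokenizer([">>heb<< How are you today?"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```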
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-amh.eng.amh | 11.6 | 0.504 |
| Tatoeba-test.eng-ara.eng.ara | 12.0 | 0.404 |
| Tatoeba-test.eng-hau.eng.hau | 10.2 | 0.429 |
| Tatoeba-test.eng-heb.eng.heb | 32.3 | 0.551 |
| Tatoeba-test.eng-kab.eng.kab | 1.6 | 0.191 |
| Tatoeba-test.eng-mlt.eng.mlt | 17.7 | 0.551 |
| Tatoeba-test.eng.multi | 14.4 | 0.375 |
| Tatoeba-test.eng-rif.eng.rif | 1.7 | 0.103 |
| Tatoeba-test.eng-shy.eng.shy | 0.8 | 0.090 |
| Tatoeba-test.eng-som.eng.som | 16.0 | 0.429 |
| Tatoeba-test.eng-tir.eng.tir | 2.7 | 0.238 |
### System Info:
- hf_name: eng-afa
- source_languages: eng
- target_languages: afa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-afa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']
- src_constituents: {'eng'}
- tgt_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-afa/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: afa
- short_pair: en-afa
- chrF2_score: 0.375
- bleu: 14.4
- brevity_penalty: 1.0
- ref_len: 58110.0
- src_name: English
- tgt_name: Afro-Asiatic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: afa
- prefer_old: False
- long_pair: eng-afa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-es-he | 4cfa5a4ded55d0cf1160feaa4ccf78c23f3561b5 | 2021-01-18T08:24:50.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"he",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-he | 59 | null | transformers | 5,685 | ---
language:
- es
- he
tags:
- translation
license: apache-2.0
---
### es-he
* source group: Spanish
* target group: Hebrew
* OPUS readme: [spa-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-heb/README.md)
* model: transformer
* source language(s): spa
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.heb | 43.6 | 0.636 |
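A minimal translation sketch (single target language, so no language token is needed):
```python
from transformers import MarianMTModel, MarianTokenizer

# Minimal sketch: Spanish -> Hebrew translation with the Marian classes.
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-es-he")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-es-he")

batch = tokenizer(["¿Cómo estás hoy?"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```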
### System Info:
- hf_name: es-he
- source_languages: spa
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'he']
- src_constituents: ('Spanish', {'spa'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: spa-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-heb/opus-2020-12-10.test.txt
- src_alpha3: spa
- tgt_alpha3: heb
- chrF2_score: 0.636
- bleu: 43.6
- brevity_penalty: 0.992
- ref_len: 12112.0
- src_name: Spanish
- tgt_name: Hebrew
- train_date: 2020-12-10 00:00:00
- src_alpha2: es
- tgt_alpha2: he
- prefer_old: False
- short_pair: es-he
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-11:41 |
Vasanth/bert-base-uncased-qa-squad2 | 6a516b9d7cec1aa88159e125f50de159f693892a | 2022-02-08T13:54:11.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | Vasanth | null | Vasanth/bert-base-uncased-qa-squad2 | 59 | null | transformers | 5,686 | "hello"
|
WikinewsSum/bert2bert-multi-en-wiki-news | b47941ade29374c94cc0f197a545c6a1e616ab89 | 2020-08-11T09:05:49.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | WikinewsSum | null | WikinewsSum/bert2bert-multi-en-wiki-news | 59 | null | transformers | 5,687 | Entry not found |
ainize/gpt2-mcu-script-large | ba9555ee702589283d84b5a0a4dfe009c4c096cb | 2021-05-21T12:03:49.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | ainize | null | ainize/gpt2-mcu-script-large | 59 | 1 | transformers | 5,688 | Entry not found |
flax-community/gpt-code-clippy-125M-1024-f | 5ad3368b5018423a62b5aafe7fff8a20221338a0 | 2021-07-18T03:40:23.000Z | [
"pytorch",
"jax",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | flax-community | null | flax-community/gpt-code-clippy-125M-1024-f | 59 | 1 | transformers | 5,689 | Entry not found |
google/bert_uncased_L-12_H-256_A-4 | 2ab9a0f41435a4d23d4cbc11cc3e7c922545f13d | 2021-05-19T17:26:24.000Z | [
"pytorch",
"jax",
"bert",
"arxiv:1908.08962",
"transformers",
"license:apache-2.0"
] | null | false | google | null | google/bert_uncased_L-12_H-256_A-4 | 59 | null | transformers | 5,690 | ---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---
BERT Miniatures
===
This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
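As a minimal sketch of the fine-tuning setup for this particular miniature (L=12, H=256), assuming a two-class classification task; the head and label count are illustrative:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Minimal sketch: load the 12-layer, 256-hidden miniature with a fresh
# classification head for fine-tuning (num_labels is illustrative).
tokenizer = AutoTokenizer.from_pretrained("google/bert_uncased_L-12_H-256_A-4")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/bert_uncased_L-12_H-256_A-4",
    num_labels=2,
)

inputs = tokenizer("The movie was great!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])
```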
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
|
m3hrdadfi/wav2vec2-large-xlsr-persian-shemo | f9aa526bb0408f48543d0359dca089555adefc05 | 2021-07-06T10:48:23.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"fa",
"dataset:shemo",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | m3hrdadfi | null | m3hrdadfi/wav2vec2-large-xlsr-persian-shemo | 59 | 1 | transformers | 5,691 | ---
language: fa
datasets:
- shemo
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- label: ShEMO sample 250
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-shemo/resolve/main/sample250.flac
- label: ShEMO sample 52
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-shemo/resolve/main/sample52.flac
model-index:
- name: XLSR Wav2Vec2 Persian (Farsi) ShEMO by Mehrdad Farahani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: ShEMO fa
type: shemo
args: fa
metrics:
- name: Test WER
type: wer
value: 30.00
---
# Wav2Vec2-Large-XLSR-53-Persian ShEMO
Fine-tuned [Wav2Vec2-Large-XLSR-53-Persian V2](https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v2) in Persian (Farsi) using [ShEMO](https://www.kaggle.com/mansourehk/shemo-persian-speech-emotion-detection-database). When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
!pip install hazm
!pip install num2fawords
```
**Prediction**
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
from num2fawords import words, ordinal_words
import numpy as np
import hazm
import re
import string
import IPython.display as ipd
_normalizer = hazm.Normalizer()
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "؟", "?", "«", "»", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?",
".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„',
'ā', 'š',
# "ء",
]
# In case of farsi
chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits)
chars_to_mapping = {
'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی',
'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی",
"ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع",
"ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه",
'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش",
'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ",
# "ها": " ها", "ئ": "ی",
"a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ",
"g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ",
"m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ",
"s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ",
"y": " وای ", "z": " زد ",
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = _normalizer.normalize(text)
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
text = re.sub(" +", " ", text)
_text = []
for word in text.split():
try:
word = int(word)
_text.append(words(word))
except:
_text.append(word)
text = " ".join(_text) + " "
text = text.strip() + " "
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo").to(device)
dataset = load_dataset("csv", data_files={"test": "/content/fa/dataset/test.csv"}, delimiter="\t")["test"]
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
max_items = np.random.randint(0, len(result), 20).tolist()
for i in max_items:
reference, predicted = result["sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
**Output:**
```text
reference: همون شبی که قسم خوردی منو از جونت بیشتر دوست داری و تا آخر عمر کنار من می مونی همون شبی که به من وعده دادی بزرگترین جشن های ازدواج رو برام بگیری
predicted: همون شبی که قسم خوردی منو از جونت بیشتر دوستاری و تا آخر عمر کنار من می مونیمو یبی که به من وعض دادین بزرگترین جشن های ازدواج و برام بگیری
---
reference: خودتون دم به ساعت فحشش می دین کتکش می زنین بس نیست
predicted: خودتون دم به ساعت فشش می دیم کتاکش می زنیم بس نیست
---
reference: خونه
predicted: خونه
---
reference: شلوغش نکن
predicted: شلوغش نکن
---
reference: برای بقیه سوییت هایی در نظر گرفتم
predicted: برای بقی سویید هایی در نظر گرفتم
---
reference: برو گمشو برو گمشو برو بیرون
predicted: برو گمشو برو گمشو برو بیرون
---
reference: فقط یک سال بعد از خاتمه جنگ بود که حقیقت رو فهمیدی
predicted: فقط یک سال بعد از خاتمه جنگ بود که حقیقت و فهمیدید
---
reference: غیر از اون دو نفری که اینجا خوابیدند کسان دیگه ای از دوستانشو به تو معرفی نکرده
predicted: غیر از اون دو نفری که اینجا خوابیدند کسانه دیگه ای از دوستانشو به تو معرفی نکرده
---
reference: من می دونم اینجایی درو واز کن کویی کوئک
predicted: من می دونم این جایی د رو واز کن کوری فکر
---
reference: نویسنده باید چهار تا چشم داشته باشه چهار تا گوش
predicted: نویسند باید چهار تا چشم داشته باشه و چهار تا گوش
---
reference: غیر از اون دو نفری که اینجا خوابیدند کسان دیگه ای از دوستانشو به تو معرفی نکرده
predicted: غیر از اون دو نفری که اینجا خوابیدند کسانه دیگه ای از دوستانشو به تو معرفی نکرده
---
reference: پس همراهان من چه می کنن چه می کنن که این سرکرده کولی ها تونسته خودشو اینجا برسونه
predicted: پس همرا حال من چه می کنن چه می کنن که این سرکرده کلی ها تونسته خودش رو اینجا برسونه
---
reference: گوش بدید مادمازل حقیقت اینه که من دلم می خواد به شما کمک کنم زیبایی و جوانی شما دل منو به رحم میاره به من اعتماد کنید دلم می خواد بتونم شما رو از مرگ نجات بدم
predicted: هوش بدید مادماز حقیقت اینه که من دلم می خواد به شما کمک کنم زیبای و جوانی شما دل منو به رحم می آره به من اعتماد کنید دلم می خواد بتونم شما رو از مرگ نجات بدم
---
reference: قربان به نظر می رسه شما نه تنها به مرگ رونالد دریو بلکه به مرگ خانم مونرو هم مشکوکید
predicted: قربان به نظر می رسه شما نه تن ها به مرگ رونال گریو بلکه به مرگ خانم مونرا مشکوکین
---
reference: برای اینکه شما رو دوست دارم
predicted: برای اینکه شما رو دوست دارم
---
reference: مرتبه اول دنبال جسدی می گشتن که انداخته بودن کنار خیابون
predicted: حر تبه اول دنبال جسدی می گشتند که انداخته بودن کنار خیابون
---
reference: خونه
predicted: خونه
---
reference: کدبانوی جدید این طبقه هستم
predicted: کدبانوی جدید این طبقه هستم
---
reference: و این برات خیلی گرون تموم شد
predicted: و این برات خیلی گرون تموم شد
---
reference: خب چرا نمی دین به خودشون
predicted: خبچرا نمی تون به خودشون
```
## Evaluation
The model can be evaluated as follows on the Persian (Farsi) ShEMO test data.
```python
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
from num2fawords import words, ordinal_words
import numpy as np
import hazm
import re
import string
_normalizer = hazm.Normalizer()
chars_to_ignore = [
",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�",
"#", "!", "؟", "?", "«", "»", "،", "(", ")", "؛", "'ٔ", "٬",'ٔ', ",", "?",
".", "!", "-", ";", ":",'"',"“", "%", "‘", "”", "�", "–", "…", "_", "”", '“', '„',
'ā', 'š',
# "ء",
]
# In case of farsi
chars_to_ignore = chars_to_ignore + list(string.ascii_lowercase + string.digits)
chars_to_mapping = {
'ك': 'ک', 'دِ': 'د', 'بِ': 'ب', 'زِ': 'ز', 'ذِ': 'ذ', 'شِ': 'ش', 'سِ': 'س', 'ى': 'ی',
'ي': 'ی', 'أ': 'ا', 'ؤ': 'و', "ے": "ی", "ۀ": "ه", "ﭘ": "پ", "ﮐ": "ک", "ﯽ": "ی",
"ﺎ": "ا", "ﺑ": "ب", "ﺘ": "ت", "ﺧ": "خ", "ﺩ": "د", "ﺱ": "س", "ﻀ": "ض", "ﻌ": "ع",
"ﻟ": "ل", "ﻡ": "م", "ﻢ": "م", "ﻪ": "ه", "ﻮ": "و", 'ﺍ': "ا", 'ة': "ه",
'ﯾ': "ی", 'ﯿ': "ی", 'ﺒ': "ب", 'ﺖ': "ت", 'ﺪ': "د", 'ﺮ': "ر", 'ﺴ': "س", 'ﺷ': "ش",
'ﺸ': "ش", 'ﻋ': "ع", 'ﻤ': "م", 'ﻥ': "ن", 'ﻧ': "ن", 'ﻭ': "و", 'ﺭ': "ر", "ﮔ": "گ",
# "ها": " ها", "ئ": "ی",
"a": " ای ", "b": " بی ", "c": " سی ", "d": " دی ", "e": " ایی ", "f": " اف ",
"g": " جی ", "h": " اچ ", "i": " آی ", "j": " جی ", "k": " کی ", "l": " ال ",
"m": " ام ", "n": " ان ", "o": " او ", "p": " پی ", "q": " کیو ", "r": " آر ",
"s": " اس ", "t": " تی ", "u": " یو ", "v": " وی ", "w": " دبلیو ", "x": " اکس ",
"y": " وای ", "z": " زد ",
"\u200c": " ", "\u200d": " ", "\u200e": " ", "\u200f": " ", "\ufeff": " ",
}
def multiple_replace(text, chars_to_mapping):
pattern = "|".join(map(re.escape, chars_to_mapping.keys()))
return re.sub(pattern, lambda m: chars_to_mapping[m.group()], str(text))
def remove_special_characters(text, chars_to_ignore_regex):
text = re.sub(chars_to_ignore_regex, '', text).lower() + " "
return text
def normalizer(batch, chars_to_ignore, chars_to_mapping):
chars_to_ignore_regex = f"""[{"".join(chars_to_ignore)}]"""
text = batch["sentence"].lower().strip()
text = _normalizer.normalize(text)
text = multiple_replace(text, chars_to_mapping)
text = remove_special_characters(text, chars_to_ignore_regex)
text = re.sub(" +", " ", text)
_text = []
for word in text.split():
try:
word = int(word)
_text.append(words(word))
except:
_text.append(word)
text = " ".join(_text) + " "
text = text.strip() + " "
batch["sentence"] = text
return batch
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
speech_array = speech_array.squeeze().numpy()
speech_array = librosa.resample(np.asarray(speech_array), sampling_rate, 16_000)
batch["speech"] = speech_array
return batch
def predict(batch):
features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)[0]
return batch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-shemo").to(device)
dataset = load_dataset("csv", data_files={"test": "/content/fa/dataset/test.csv"}, delimiter="\t")["test"]
dataset = dataset.map(
normalizer,
fn_kwargs={"chars_to_ignore": chars_to_ignore, "chars_to_mapping": chars_to_mapping},
remove_columns=list(set(dataset.column_names) - set(['sentence', 'path']))
)
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict)
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```
**Test Result:**
- WER: 31.00%
## Training
The ShEMO `train` and `validation` splits were used for fine-tuning.
The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Persian_ShEMO_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb) |
macedonizer/mk-gpt2 | e0038f08ff5b5f0d932e0a6f511e1b3925f832d6 | 2021-09-22T08:58:46.000Z | [
"pytorch",
"gpt2",
"text-generation",
"mk",
"dataset:wiki-mk",
"dataset:time-mk-news-2010-2015",
"transformers",
"license:apache-2.0"
] | text-generation | false | macedonizer | null | macedonizer/mk-gpt2 | 59 | null | transformers | 5,692 | ---
language:
- mk
thumbnail: https://huggingface.co/macedonizer/mk-roberta-base/blaze-koneski.jpg
license: apache-2.0
datasets:
- wiki-mk
- time-mk-news-2010-2015
---
# mk-gpt2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on the Macedonian language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
## Model description
mk-gpt2 is a transformers model pretrained on a very large corpus of Macedonian data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the Macedonian language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
### How to use
Here is how to use this model to generate text from a prompt in PyTorch:
```python
import random
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('macedonizer/mk-gpt2')
model = AutoModelWithLMHead.from_pretrained('macedonizer/mk-gpt2')

input_text = 'Скопје е '

if len(input_text) == 0:
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )
else:
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **encoded_input,
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )

decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))
print(decoded_output)
``` |
microsoft/deberta-xlarge-v2 | 314bc0db16b9890225cd5c531ae7f0dabfc0cc74 | 2021-02-11T02:04:50.000Z | [
"pytorch",
"deberta-v2",
"en",
"transformers",
"deberta",
"license:mit"
] | null | false | microsoft | null | microsoft/deberta-xlarge-v2 | 59 | null | transformers | 5,693 | ---
language: en
tags: deberta
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
## This model is DEPRECATED, please use [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)
|
monologg/koelectra-base-generator | fe6a7147be11ae58af0f78206f558c8e31e8c5c9 | 2021-10-20T16:55:00.000Z | [
"pytorch",
"electra",
"fill-mask",
"ko",
"transformers",
"korean",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | monologg | null | monologg/koelectra-base-generator | 59 | null | transformers | 5,694 | ---
language: ko
license: apache-2.0
tags:
- korean
---
# KoELECTRA (Base Generator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-generator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-generator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-generator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-generator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'E', '##L', '##EC', '##T', '##RA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 18429, 41, 6240, 15229, 6204, 20894, 5689, 12622, 10690, 18, 3]
```
## Example using ElectraForMaskedLM
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="monologg/koelectra-base-generator",
tokenizer="monologg/koelectra-base-generator"
)
print(fill_mask("나는 {} 밥을 먹었다.".format(fill_mask.tokenizer.mask_token)))
```
|
monsoon-nlp/muril-adapted-local | 9f79abba6e9e39a2fefc3dc7ecdcb4354f60b1a3 | 2021-05-20T00:11:39.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"hi",
"bn",
"ta",
"as",
"gu",
"kn",
"ks",
"ml",
"mr",
"ne",
"or",
"pa",
"sa",
"sd",
"te",
"ur",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | monsoon-nlp | null | monsoon-nlp/muril-adapted-local | 59 | 2 | transformers | 5,695 | ---
language:
- en
- hi
- bn
- ta
- as
- gu
- kn
- ks
- ml
- mr
- ne
- or
- pa
- sa
- sd
- te
- ur
license: apache-2.0
---
## MuRIL - Unofficial
Multilingual Representations for Indian Languages : Google open sourced
this BERT model pre-trained on 17 Indian languages, and their transliterated
counterparts.
The model was trained using a self-supervised masked language modeling task. We do whole word masking with a maximum of 80 predictions. The model was trained for 1000K steps, with a batch size of 4096, and a max sequence length of 512.
Original model on TFHub: https://tfhub.dev/google/MuRIL/1
*Official release now on HuggingFace (March 2021)* https://huggingface.co/google/muril-base-cased
License: Apache 2.0
### About this upload
I ported the TFHub .pb model to .h5 and then pytorch_model.bin for
compatibility with Transformers.
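A minimal loading sketch for this port:
```python
from transformers import AutoModel, AutoTokenizer

# Minimal sketch: load the ported MuRIL weights and extract hidden states.
tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/muril-adapted-local")
model = AutoModel.from_pretrained("monsoon-nlp/muril-adapted-local")

inputs = tokenizer("नमस्ते दुनिया", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```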
|
pearsonkyle/gpt2-exomachina | f4d75d137497bebdd3984a26418de58b1c324218 | 2021-05-23T10:57:32.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | pearsonkyle | null | pearsonkyle/gpt2-exomachina | 59 | null | transformers | 5,696 | # Exo-Machina
A deep language model, GPT-2, is trained on scientific manuscripts from NASA's Astrophysical Data System pertaining to extrasolar planets and the references therein. This pilot study uses the abstracts of each article as training data in order to explore correlations in scientific literature from a language perspective. A language model is a mathematical representation for an algorithm used to generate sequences in the same way a human would to form sentences. Each word or letter in a sentence is encoded to a numerical value (e.g. using word2vec) and is appended to a list forming sequences that represent up to a paragraph's worth of text. The sequences are fed into the [GPT-2](https://openai.com/blog/better-language-models/) 117M model and trained for 500,000 steps with fine-tuning. After training, the language model is used to generate new text from scratch and from user input.
- ### [Browse samples](https://pearsonkyle.github.io/Exo-Machina/)
- ### [Train a model on Google Colab](https://colab.research.google.com/drive/1Pur0rFi5YVdn7axYRacXWFMic4NxRexV?usp=sharing)
### Get started fast:
```python
from transformers import pipeline
exo = pipeline('text-generation',model='pearsonkyle/gpt2-exomachina', tokenizer='gpt2', config={'max_length':1600})
machina = lambda text: exo(text)[0]['generated_text']
print(machina("Transiting exoplanets are"))
```
## Training Samples
~40,000 Abstracts from NASA's Astrophysical data system (ADS) and ArXiv.

A few generated samples are below:
- *We can remotely sense an atmosphere by observing its reflected, transmitted, or emitted light in varying geometries. This light will contain information on the planetary conditions including* `temperature, pressure, composition, and cloud optical thickness. One such property that is important is...`
- *The reflectance of Earth's vegetation suggests*
`that large, deciduous forest fires are composed of mostly dry, unprocessed material that is distributed in a nearly patchy fashion. The distributions of these fires are correlated with temperature, and also with vegetation...`
- *Directly imaged exoplanets probe* `key aspects of planet formation and evolution theory, as well as atmospheric and interior physics. These insights have led to numerous direct imaging instruments for exoplanets, many using polarimetry. However, current instruments take`
Letting the scrape run for ~2 hours found articles from these publications:
```
5364 - The Astrophysical Journal
3365 - Astronomy and Astrophysics
2704 - Monthly Notices of the Royal Astronomical Society
1355 - The Astronomical Journal
617 - arXiv e-prints
498 - Icarus
388 - Publications of the Astronomical Society of the Pacific
324 - The Astrophysical Journal Supplement Series
245 - Nature
187 - Journal of Geophysical Research
167 - Science
145 - Astronomische Nachrichten
129 - Planetary and Space Science
114 - Space Science Reviews
109 - Geophysical Research Letters
``` |
stanlochten/t5-KGQgen | a9132bf5135433431a8e014afa1572c59a29d209 | 2021-07-09T15:53:30.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | stanlochten | null | stanlochten/t5-KGQgen | 59 | 1 | transformers | 5,697 | T5-base model fine-tuned for question generation from knowledge graphs. Can be used to generate questions from linearized knowledge graphs, meaning graphs in the form of its all its triples listed in the following format:
`<A> answer node(s) <H> head <R> relation <T> tail <H> head <R> relation <T> tail ... etc ...`,
where `answer node(s)` refers to the node(s) which should contain the answer to the generated question.
To load the model:
```
from transformers import T5ForConditionalGeneration, T5TokenizerFast
model = T5ForConditionalGeneration.from_pretrained('stanlochten/t5-KGQgen')
tokenizer = T5TokenizerFast.from_pretrained('t5-base', extra_ids=0,
additional_special_tokens = ['<A>', '<H>', '<R>', '<T>'])
```
To generate questions from your graphs, where `graphs` is a list of strings for each graph:
```
print('Tokenizing...')
inputs = tokenizer(graphs, return_tensors="pt", padding=True, truncation=True)
print('Predicting...')
y_hats = model.generate(inputs.input_ids)
print('Decoding...')
preds = tokenizer.batch_decode(y_hats, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```
Good luck! |
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_webDiscourse_TEST_test_set_05_03_2022-05_51_01 | a37fd7d061d8395c51026c577233da84358d404b | 2022-03-05T04:53:43.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_webDiscourse_TEST_test_set_05_03_2022-05_51_01 | 59 | null | transformers | 5,698 | Entry not found |
ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_essays_TEST_test_set_05_03_2022-05_58_31 | fd11a9a77cd60ff90bb7f040908262a63f58bb46 | 2022-03-05T05:00:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | ali2066 | null | ali2066/bert-base-uncased_token_itr0_0.0001_TRAIN_essays_TEST_test_set_05_03_2022-05_58_31 | 59 | null | transformers | 5,699 | Entry not found |