modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (29 string classes) | private (bool, 1 class) | author (string, 2-38 chars, nullable) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (17 string classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pszemraj/bigbird-pegasus-large-K-booksum | c3138586bd440f4f67f38a2dbb81a00e10a21da3 | 2022-07-15T08:54:09.000Z | [
"pytorch",
"bigbird_pegasus",
"text2text-generation",
"en",
"dataset:kmfoda/booksum",
"arxiv:2105.08209",
"transformers",
"summarization",
"summarisation",
"summary",
"notes",
"bigbird_pegasus_",
"pegasus",
"bigbird",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | pszemraj | null | pszemraj/bigbird-pegasus-large-K-booksum | 172 | 0 | transformers | 3,800 | ---
language:
- en
tags:
- summarization
- summarisation
- summary
- notes
- bigbird_pegasus_
- pegasus
- bigbird
license: apache-2.0
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
example_title: earthquakes
- text: " A typical feed-forward neural field algorithm. Spatiotemporal coordinates\
\ are fed into a neural network that predicts values in the reconstructed domain.\
\ Then, this domain is mapped to the sensor domain where sensor measurements are\
\ available as supervision. Class and Section Problems Addressed Generalization\
\ (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid\
\ Representations (Section 3) Computation & memory efficiency, representation\
\ capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture\
\ (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields\
\ (Section 6) Edit ability, constraints, regularization. Table 2: The five classes\
\ of techniques in the neural field toolbox each addresses problems that arise\
\ in learning, inference, and control. (Section 3). We can supervise reconstruction\
\ via differentiable forward maps that transform Or project our domain (e.g, 3D\
\ reconstruction via 2D images; Section 4) With appropriate network architecture\
\ choices, we can overcome neural network spectral biases (blurriness) and efficiently\
\ compute derivatives and integrals (Section 5). Finally, we can manipulate neural\
\ fields to add constraints and regularizations, and to achieve editable representations\
\ (Section 6). Collectively, these classes constitute a 'toolbox' of techniques\
\ to help solve problems with neural fields There are three components in a conditional\
\ neural field: (1) An encoder or inference function \u20AC that outputs the conditioning\
\ latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional\
\ vector, and is often referred to aS a latent code Or feature code_ (2) A mapping\
\ function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural\
\ field itself $. The encoder \u20AC finds the most probable z given the observations\
\ O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability\
\ to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding\
\ schemes with different optimality guarantees (Section 2.1.1), both global and\
\ local conditioning (Section 2.1.2), and different mapping functions Y (Section\
\ 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface\
\ shape given a partial or noisy point cloud. We need a suitable prior over the\
\ sur- face in its reconstruction domain to generalize to the partial observations.\
\ A neural network expresses a prior via the function space of its architecture\
\ and parameters 0, and generalization is influenced by the inductive bias of\
\ this function space (Section 5)."
example_title: scientific paper
- text: ' the big variety of data coming from diverse sources is one of the key properties
of the big data phenomenon. It is, therefore, beneficial to understand how data
is generated in various environments and scenarios, before looking at what should
be done with this data and how to design the best possible architecture to accomplish
this The evolution of IT architectures, described in Chapter 2, means that the
data is no longer processed by a few big monolith systems, but rather by a group
of services In parallel to the processing layer, the underlying data storage has
also changed and became more distributed This, in turn, required a significant
paradigm shift as the traditional approach to transactions (ACID) could no longer
be supported. On top of this, cloud computing is becoming a major approach with
the benefits of reducing costs and providing on-demand scalability but at the
same time introducing concerns about privacy, data ownership, etc In the meantime
the Internet continues its exponential growth: Every day both structured and unstructured
data is published and available for processing: To achieve competitive advantage
companies have to relate their corporate resources to external services, e.g.
financial markets, weather forecasts, social media, etc While several of the sites
provide some sort of API to access the data in a more orderly fashion; countless
sources require advanced web mining and Natural Language Processing (NLP) processing
techniques: Advances in science push researchers to construct new instruments
for observing the universe O conducting experiments to understand even better
the laws of physics and other domains. Every year humans have at their disposal
new telescopes, space probes, particle accelerators, etc These instruments generate
huge streams of data, which need to be stored and analyzed. The constant drive
for efficiency in the industry motivates the introduction of new automation techniques
and process optimization: This could not be done without analyzing the precise
data that describe these processes. As more and more human tasks are automated,
machines provide rich data sets, which can be analyzed in real-time to drive efficiency
to new levels. Finally, it is now evident that the growth of the Internet of Things
is becoming a major source of data. More and more of the devices are equipped
with significant computational power and can generate a continuous data stream
from their sensors. In the subsequent sections of this chapter, we will look at
the domains described above to see what they generate in terms of data sets. We
will compare the volumes but will also look at what is characteristic and important
from their respective points of view. 3.1 The Internet is undoubtedly the largest
database ever created by humans. While several well described; cleaned, and structured
data sets have been made available through this medium, most of the resources
are of an ambiguous, unstructured, incomplete or even erroneous nature. Still,
several examples in the areas such as opinion mining, social media analysis, e-governance,
etc, clearly show the potential lying in these resources. Those who can successfully
mine and interpret the Internet data can gain unique insight and competitive advantage
in their business An important area of data analytics on the edge of corporate
IT and the Internet is Web Analytics.'
example_title: data science textbook
- text: "Transformer-based models have shown to be very useful for many NLP tasks.\
\ However, a major limitation of transformers-based models is its O(n^2)O(n 2)\
\ time & memory complexity (where nn is sequence length). Hence, it's computationally\
\ very expensive to apply transformer-based models on long sequences n > 512n>512.\
\ Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention\
\ try to remedy this problem by approximating the full attention matrix. You can\
\ checkout \U0001F917's recent blog post in case you are unfamiliar with these\
\ models.\nBigBird (introduced in paper) is one of such recent models to address\
\ this issue. BigBird relies on block sparse attention instead of normal attention\
\ (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a\
\ much lower computational cost compared to BERT. It has achieved SOTA on various\
\ tasks involving very long sequences such as long documents summarization, question-answering\
\ with long contexts.\nBigBird RoBERTa-like model is now available in \U0001F917\
Transformers. The goal of this post is to give the reader an in-depth understanding\
\ of big bird implementation & ease one's life in using BigBird with \U0001F917\
Transformers. But, before going into more depth, it is important to remember that\
\ the BigBird's attention is an approximation of BERT's full attention and therefore\
\ does not strive to be better than BERT's full attention, but rather to be more\
\ efficient. It simply allows to apply transformer-based models to much longer\
\ sequences since BERT's quadratic memory requirement quickly becomes unbearable.\
\ Simply put, if we would have \u221E compute & \u221E time, BERT's attention\
\ would be preferred over block sparse attention (which we are going to discuss\
\ in this post).\nIf you wonder why we need more compute when working with longer\
\ sequences, this blog post is just right for you!\nSome of the main questions\
\ one might have when working with standard BERT-like attention include:\nDo all\
\ tokens really have to attend to all other tokens? Why not compute attention\
\ only over important tokens? How to decide what tokens are important? How to\
\ attend to just a few tokens in a very efficient way? In this blog post, we will\
\ try to answer those questions.\nWhat tokens should be attended to? We will give\
\ a practical example of how attention works by considering the sentence 'BigBird\
\ is now available in HuggingFace for extractive question answering'. In BERT-like\
\ attention, every word would simply attend to all other tokens.\nLet's think\
\ about a sensible choice of key tokens that a queried token actually only should\
\ attend to by writing some pseudo-code. Will will assume that the token available\
\ is queried and build a sensible list of key tokens to attend to.\n>>> # let's\
\ consider following sentence as an example >>> example = ['BigBird', 'is', 'now',\
\ 'available', 'in', 'HuggingFace', 'for', 'extractive', 'question', 'answering']\n\
>>> # further let's assume, we're trying to understand the representation of 'available'\
\ i.e. >>> query_token = 'available' >>> # We will initialize an empty `set` and\
\ fill up the tokens of our interest as we proceed in this section. >>> key_tokens\
\ = [] # => currently 'available' token doesn't have anything to attend Nearby\
\ tokens should be important because, in a sentence (sequence of words), the current\
\ word is highly dependent on neighboring past & future tokens. This intuition\
\ is the idea behind the concept of sliding attention."
example_title: bigbird blog intro
inference:
parameters:
max_length: 64
no_repeat_ngram_size: 2
encoder_no_repeat_ngram_size: 3
repetition_penalty: 2.4
length_penalty: 0.5
num_beams: 4
early_stopping: true
model-index:
- name: pszemraj/bigbird-pegasus-large-K-booksum
results:
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 34.0847
verified: true
- name: ROUGE-2
type: rouge
value: 5.9222
verified: true
- name: ROUGE-L
type: rouge
value: 16.3885
verified: true
- name: ROUGE-LSUM
type: rouge
value: 31.6159
verified: true
- name: loss
type: loss
value: 3.522040605545044
verified: true
- name: gen_len
type: gen_len
value: 254.3676
verified: true
---
# bigbird pegasus on the booksum dataset
>_This is the "latest" version of the model, the one trained the longest so far (currently at 70k steps)._
- **GOAL:** a summarization model that 1) summarizes the source content accurately and 2) _more importantly, IMO_ produces summaries that are easy to read and understand (*cough* unlike arXiv *cough*)
- This model attempts to help with that by using the [booksum](https://arxiv.org/abs/2105.08209) dataset to provide **explanatory summarization**.
- Explanatory summary: a summary that both consolidates information and explains why the consolidated information is important.
- This model was trained for seven epochs total (approx. 70,000 steps) and is closer to being finished.
- It will continue to improve (slowly, now that it has already been trained for a long time) based on result findings and feedback.
- The starting checkpoint was `google/bigbird-pegasus-large-bigpatent`.
---
# example usage
> An extended example, including a demo of batch summarization, is [here](https://colab.research.google.com/gist/pszemraj/2c8c0aecbcd4af6e9cbb51e195be10e2/bigbird-pegasus-large-booksum-20k-example.ipynb).
- Create the summarizer object:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers import pipeline
_model = AutoModelForSeq2SeqLM.from_pretrained(
"pszemraj/bigbird-pegasus-large-K-booksum",
low_cpu_mem_usage=True,
)
_tokenizer = AutoTokenizer.from_pretrained(
"pszemraj/bigbird-pegasus-large-K-booksum",
)
summarizer = pipeline(
"summarization",
model=_model,
tokenizer=_tokenizer
)
```
- Define the text to be summarized and pass it through the pipeline. Boom, done.
```python
wall_of_text = "your text to be summarized goes here."
result = summarizer(
wall_of_text,
min_length=16,
max_length=256,
no_repeat_ngram_size=3,
clean_up_tokenization_spaces=True,
)
print(result[0]['summary_text'])
```
## Alternate Checkpoint
- If you experience runtime/memory issues, try [this earlier checkpoint](https://huggingface.co/pszemraj/bigbird-pegasus-large-booksum-40k-K) at 40,000 steps, which is almost as good at the explanatory summarization task but runs faster.
---
# Results
- Note that while the dataset has three subsets (chapter, book, paragraph; see the [paper](https://arxiv.org/abs/2105.08209)), the scores below are computed in aggregate. The paper lists some benchmark scores, with which this model is competitive.
- Note that eval generations are run & computed at a length of 128 tokens.
```
{'eval_gen_len': 126.9791,
'eval_loss': 4.00944709777832,
'eval_rouge1': 27.6028,
'eval_rouge2': 4.6556,
'eval_rougeL': 14.5259,
'eval_rougeLsum': 25.6632,
'eval_runtime': 29847.4812,
'eval_samples_per_second': 0.05,
'eval_steps_per_second': 0.05}
``` |
UBC-NLP/ptsm_t5_paraphraser | 95d01b08b095d9057388c694f7cb133dc9b4a97d | 2022-07-05T18:34:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2204.04611",
"transformers",
"license:cc-by-nc-3.0",
"autotrain_compatible"
] | text2text-generation | false | UBC-NLP | null | UBC-NLP/ptsm_t5_paraphraser | 172 | null | transformers | 3,801 | ---
license: cc-by-nc-3.0
---
# T5-base model trained for text paraphrasing
You can load this model as follows:
```python
from transformers import T5ForConditionalGeneration,T5TokenizerFast
model_name_or_path = "UBC-NLP/ptsm_t5_paraphraser"
model = T5ForConditionalGeneration.from_pretrained(model_name_or_path)
tokenizer = T5TokenizerFast.from_pretrained(model_name_or_path)
```
The prefix "paraphrase: " should be added in front of the input sequence, i.e.:
```python
input_st = "paraphrase: " + text + " </s>"
```
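A minimal end-to-end generation sketch (the example sentence and beam settings are illustrative, not the settings used in the paper):
```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# Minimal paraphrase-generation sketch; the input sentence and beam settings are illustrative.
model_name_or_path = "UBC-NLP/ptsm_t5_paraphraser"
tokenizer = T5TokenizerFast.from_pretrained(model_name_or_path)
model = T5ForConditionalGeneration.from_pretrained(model_name_or_path)

text = "The weather in Vancouver is lovely today."
input_st = "paraphrase: " + text + " </s>"
input_ids = tokenizer(input_st, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5, num_return_sequences=3, max_length=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```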
You can find our generation scripts in our [project GitHub](https://github.com/chiyuzhang94/PTSM/tree/main/paraphrase_generate).
Please find more training details in our paper:
[Decay No More: A Persistent Twitter Dataset for Learning Social Meaning](https://arxiv.org/pdf/2204.04611.pdf)
Accepted at the 1st Workshop on Novel Evaluation Approaches for Text Classification Systems on Social Media (NEATCLasS) @ ICWSM-2022.
```
@inproceedings{zhang2022decay,
title={Decay No More: A Persistent Twitter Dataset for Learning Social Meaning},
author={Zhang, Chiyu and Abdul-Mageed, Muhammad and Nagoudi, El Moatez Billah},
booktitle ={Proceedings of 1st Workshop on Novel Evaluation Approaches for Text Classification Systems on Social Media (NEATCLasS)},
year={2022},
url = {https://arxiv.org/pdf/2204.04611.pdf},
publisher = {{AAAI} Press},
}
``` |
webshop/il_search_bart | 982e94251ef9e9da0a7d35b10011866bbc94adf6 | 2022-06-16T00:03:31.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | webshop | null | webshop/il_search_bart | 172 | null | transformers | 3,802 | Entry not found |
dminiotas05/distilbert-base-uncased-finetuned-ft650_10class | 4ebde7f138a1023af09aebad251a20a783a035d6 | 2022-07-08T14:58:07.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | dminiotas05 | null | dminiotas05/distilbert-base-uncased-finetuned-ft650_10class | 172 | null | transformers | 3,803 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-ft650_10class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft650_10class
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9674
- Accuracy: 0.2207
- F1: 0.2002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.1088 | 1.0 | 188 | 2.0460 | 0.1807 | 0.1324 |
| 1.9628 | 2.0 | 376 | 1.9867 | 0.2173 | 0.1821 |
| 1.8966 | 3.0 | 564 | 1.9693 | 0.2193 | 0.1936 |
| 1.8399 | 4.0 | 752 | 1.9674 | 0.2207 | 0.2002 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DeepChem/ChemBERTa-77M-MTR | 66b895cab8adebea0cb59a8effa66b2020f204ca | 2022-01-20T17:55:55.000Z | [
"pytorch",
"roberta",
"transformers"
] | null | false | DeepChem | null | DeepChem/ChemBERTa-77M-MTR | 171 | 1 | transformers | 3,804 | Entry not found |
csebuetnlp/mT5_m2o_english_crossSum | 978a27fe57143fd862224d2f3c46bfa7d9cf8a7b | 2022-04-22T15:06:41.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"am",
"ar",
"az",
"bn",
"my",
"zh",
"en",
"fr",
"gu",
"ha",
"hi",
"ig",
"id",
"ja",
"rn",
"ko",
"ky",
"mr",
"ne",
"om",
"ps",
"fa",
"pcm",
"pt",
"pa",
"ru",
"gd",
"sr",
"si",
"so",
"es",
"sw",
"ta",
"te",
"th",
"ti",
"tr",
"uk",
"ur",
"uz",
"vi",
"cy",
"yo",
"arxiv:2112.08804",
"transformers",
"summarization",
"mT5",
"autotrain_compatible"
] | summarization | false | csebuetnlp | null | csebuetnlp/mT5_m2o_english_crossSum | 171 | null | transformers | 3,805 | ---
tags:
- summarization
- mT5
language:
- am
- ar
- az
- bn
- my
- zh
- en
- fr
- gu
- ha
- hi
- ig
- id
- ja
- rn
- ko
- ky
- mr
- ne
- om
- ps
- fa
- pcm
- pt
- pa
- ru
- gd
- sr
- si
- so
- es
- sw
- ta
- te
- th
- ti
- tr
- uk
- ur
- uz
- vi
- cy
- yo
licenses:
- cc-by-nc-sa-4.0
widget:
- text: "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization."
---
# mT5-m2o-english-CrossSum
This repository contains the many-to-one (m2o) mT5 checkpoint finetuned on all cross-lingual pairs of the [CrossSum](https://huggingface.co/datasets/csebuetnlp/CrossSum) dataset, where the target summary was in **english**, i.e. this model tries to **summarize text written in any language in English.** For finetuning details and scripts, see the [paper](https://arxiv.org/abs/2112.08804) and the [official repository](https://github.com/csebuetnlp/CrossSum).
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
WHITESPACE_HANDLER = lambda k: re.sub(r'\s+', ' ', re.sub(r'\n+', ' ', k.strip()))
article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. "We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization."""
model_name = "csebuetnlp/mT5_m2o_english_crossSum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
input_ids = tokenizer(
[WHITESPACE_HANDLER(article_text)],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=84,
no_repeat_ngram_size=2,
num_beams=4
)[0]
summary = tokenizer.decode(
output_ids,
skip_special_tokens=True,
clean_up_tokenization_spaces=False
)
print(summary)
```
## Citation
If you use this model, please cite the following paper:
```
@article{hasan2021crosssum,
author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar},
title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs},
journal = {CoRR},
volume = {abs/2112.08804},
year = {2021},
url = {https://arxiv.org/abs/2112.08804},
eprinttype = {arXiv},
eprint = {2112.08804}
}
``` |
microsoft/markuplm-large | eb3050bd84ff27279fe2669b0fafbc54805c3cb3 | 2022-01-11T12:33:09.000Z | [
"pytorch",
"markuplm",
"arxiv:2110.08518",
"transformers"
] | null | false | microsoft | null | microsoft/markuplm-large | 171 | 4 | transformers | 3,806 | # MarkupLM
**Multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**
## Introduction
MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves SOTA results on multiple datasets. For more details, please refer to our paper:
[MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518), by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
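A minimal sketch for extracting contextual embeddings from HTML with this checkpoint, assuming a transformers release that ships the MarkupLM classes (4.21 or later) plus beautifulsoup4; the HTML snippet is made up:
```python
from transformers import MarkupLMProcessor, MarkupLMModel

# Minimal feature-extraction sketch (assumes transformers >= 4.21 and beautifulsoup4 installed).
processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-large")
model = MarkupLMModel.from_pretrained("microsoft/markuplm-large")

html = "<html><body><h1>Product</h1><p>A lightweight laptop with 16 GB of RAM.</p></body></html>"
encoding = processor(html, return_tensors="pt")
outputs = model(**encoding)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```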
|
cardiffnlp/tweet-topic-19-multi | 307927c515fadb84eec00da911e2bbdaef3ffeef | 2022-06-09T10:35:40.000Z | [
"pytorch",
"tf",
"roberta",
"text-classification",
"arxiv:2202.03829",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/tweet-topic-19-multi | 171 | null | transformers | 3,807 | # tweet-topic-19-multi
This is a roBERTa-base model trained on ~90m tweets until the end of 2019 (see [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m)), and finetuned for multi-label topic classification on a corpus of 11,267 tweets.
The original roBERTa-base model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
<b>Labels</b>:
| <span style="font-weight:normal">0: arts_&_culture</span> | <span style="font-weight:normal">5: fashion_&_style</span> | <span style="font-weight:normal">10: learning_&_educational</span> | <span style="font-weight:normal">15: science_&_technology</span> |
|-----------------------------|---------------------|----------------------------|--------------------------|
| 1: business_&_entrepreneurs | 6: film_tv_&_video | 11: music | 16: sports |
| 2: celebrity_&_pop_culture | 7: fitness_&_health | 12: news_&_social_concern | 17: travel_&_adventure |
| 3: diaries_&_daily_life | 8: food_&_dining | 13: other_hobbies | 18: youth_&_student_life |
| 4: family | 9: gaming | 14: relationships | |
## Full classification example
```python
from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import expit
MODEL = "cardiffnlp/tweet-topic-19-multi"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
class_mapping = model.config.id2label
text = "It is great to see athletes promoting awareness for climate change."
tokens = tokenizer(text, return_tensors='pt')
output = model(**tokens)
scores = output[0][0].detach().numpy()
scores = expit(scores)
predictions = (scores >= 0.5) * 1
# TF
#tf_model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
#class_mapping = model.config.id2label
#text = "It is great to see athletes promoting awareness for climate change."
#tokens = tokenizer(text, return_tensors='tf')
#output = tf_model(**tokens)
#scores = output[0][0]
#scores = expit(scores)
#predictions = (scores >= 0.5) * 1
# Map to classes
for i in range(len(predictions)):
if predictions[i]:
print(class_mapping[i])
```
Output:
```
news_&_social_concern
sports
``` |
IDEA-CCNL/Randeng-Pegasus-238M-Chinese | 82892ea47cd837c97a8d821d7198331195e0d0c5 | 2022-06-30T07:00:00.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"zh",
"arxiv:1912.08777",
"transformers",
"summarization",
"chinese",
"autotrain_compatible"
] | summarization | false | IDEA-CCNL | null | IDEA-CCNL/Randeng-Pegasus-238M-Chinese | 171 | 2 | transformers | 3,808 | ---
language: zh
tags:
- summarization
- chinese
inference: False
---
IDEA-CCNL/Randeng-Pegasus-238M-Chinese model (Chinese); its code has been merged into [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
The 523M-parameter randeng_pegasus_large model was trained with sampled gap sentence ratios on 180G of Chinese data, stochastically sampling important sentences. The pretraining task is the same as described in the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf).
Unlike the English version of PEGASUS, and because Chinese SentencePiece is unstable, we use jieba and BertTokenizer as the tokenizer in the Chinese PEGASUS model.
We also pretrained a large model, available at [IDEA-CCNL/Randeng-Pegasus-523M-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-Pegasus-523M-Chinese).
Task: Summarization
## Usage
```python
from transformers import PegasusForConditionalGeneration
# You need to download tokenizers_pegasus.py and other Python scripts from the Fengshenbang-LM GitHub repo in advance,
# or you can download tokenizers_pegasus.py and data_utils.py from https://huggingface.co/IDEA-CCNL/Randeng_Pegasus_238M/tree/main
# We strongly recommend cloning the Fengshenbang-LM repo:
# 1. git clone https://github.com/IDEA-CCNL/Fengshenbang-LM
# 2. cd Fengshenbang-LM/fengshen/examples/pegasus/
# and then you will see the tokenizers_pegasus.py and data_utils.py needed by the pegasus model
from tokenizers_pegasus import PegasusTokenizer
model = PegasusForConditionalGeneration.from_pretrained("IDEA-CCNL/Randeng-Pegasus-238M-Chinese")
tokenizer = PegasusTokenizer.from_pretrained("IDEA-CCNL/Randeng-Pegasus-238M-Chinese")
text = "据微信公众号“界面”报道,4日上午10点左右,中国发改委反垄断调查小组突击查访奔驰上海办事处,调取数据材料,并对多名奔驰高管进行了约谈。截止昨日晚9点,包括北京梅赛德斯-奔驰销售服务有限公司东区总经理在内的多名管理人员仍留在上海办公室内"
inputs = tokenizer(text, max_length=512, return_tensors="pt")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"])
tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# model output: 截止昨日晚9点,包括北京梅赛德斯-奔驰销售服务有限公司东区总经理在内的多名管理人员仍留在上海办公室内
```
## Citation
If you find the resource is useful, please cite the following website in your paper.
```
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2022},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
tahercoolguy/gpt-neox-bit | badab2056f28157226af57e2206991266787fedd | 2022-07-22T12:52:33.000Z | [
"pytorch",
"gpt_neox",
"text-generation",
"transformers",
"license:apache-2.0"
] | text-generation | false | tahercoolguy | null | tahercoolguy/gpt-neox-bit | 171 | null | transformers | 3,809 | ---
license: apache-2.0
---
|
TehranNLP-org/bert-base-uncased-cls-sst2 | 39e787510bf6883a49951d9ac50107e2b909e632 | 2022-05-01T11:44:45.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"transformers"
] | text-classification | false | TehranNLP-org | null | TehranNLP-org/bert-base-uncased-cls-sst2 | 170 | null | transformers | 3,810 | Entry not found |
allenai/unifiedqa-v2-t5-3b-1251000 | b1a4a8f236b9f3717d0083235fcc5a688649c2d2 | 2022-02-22T05:38:59.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/unifiedqa-v2-t5-3b-1251000 | 170 | null | transformers | 3,811 | # Further details: https://github.com/allenai/unifiedqa
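A rough usage sketch, following the pattern shown in the UnifiedQA repository (the question and answer options/context are joined with "\n" in one lowercased string); note that the 3B checkpoint requires substantial memory:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Sketch following the UnifiedQA repo's usage pattern; the example question is illustrative.
model_name = "allenai/unifiedqa-v2-t5-3b-1251000"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def run_model(input_string, **generator_args):
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    output = model.generate(input_ids, **generator_args)
    return tokenizer.batch_decode(output, skip_special_tokens=True)

print(run_model("which is best conductor? \n (a) iron (b) feather"))
```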
|
gagan3012/ViTGPT2_vizwiz | 4a1d4301fb350671d479c198edef15b490ab9509 | 2022-02-07T05:54:26.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers",
"generated_from_trainer",
"image-to-text",
"model-index"
] | image-to-text | false | gagan3012 | null | gagan3012/ViTGPT2_vizwiz | 170 | null | transformers | 3,812 | ---
tags:
- generated_from_trainer
- image-to-text
model-index:
- name: ViTGPT2_vizwiz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViTGPT2_vizwiz
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1207 | 0.07 | 1000 | 0.0906 |
| 0.0916 | 0.14 | 2000 | 0.0861 |
| 0.0879 | 0.2 | 3000 | 0.0840 |
| 0.0856 | 0.27 | 4000 | 0.0822 |
| 0.0834 | 0.34 | 5000 | 0.0806 |
| 0.0817 | 0.41 | 6000 | 0.0795 |
| 0.0812 | 0.48 | 7000 | 0.0785 |
| 0.0808 | 0.55 | 8000 | 0.0779 |
| 0.0796 | 0.61 | 9000 | 0.0771 |
| 0.0786 | 0.68 | 10000 | 0.0767 |
| 0.0774 | 0.75 | 11000 | 0.0762 |
| 0.0772 | 0.82 | 12000 | 0.0758 |
| 0.0756 | 0.89 | 13000 | 0.0754 |
| 0.0759 | 0.96 | 14000 | 0.0750 |
| 0.0756 | 1.02 | 15000 | 0.0748 |
| 0.0726 | 1.09 | 16000 | 0.0745 |
| 0.0727 | 1.16 | 17000 | 0.0745 |
| 0.0715 | 1.23 | 18000 | 0.0742 |
| 0.0726 | 1.3 | 19000 | 0.0741 |
| 0.072 | 1.37 | 20000 | 0.0738 |
| 0.0723 | 1.43 | 21000 | 0.0735 |
| 0.0715 | 1.5 | 22000 | 0.0734 |
| 0.0724 | 1.57 | 23000 | 0.0732 |
| 0.0723 | 1.64 | 24000 | 0.0730 |
| 0.0718 | 1.71 | 25000 | 0.0729 |
| 0.07 | 1.78 | 26000 | 0.0728 |
| 0.0702 | 1.84 | 27000 | 0.0726 |
| 0.0704 | 1.91 | 28000 | 0.0725 |
| 0.0703 | 1.98 | 29000 | 0.0725 |
| 0.0686 | 2.05 | 30000 | 0.0726 |
| 0.0687 | 2.12 | 31000 | 0.0726 |
| 0.0688 | 2.19 | 32000 | 0.0724 |
| 0.0677 | 2.25 | 33000 | 0.0724 |
| 0.0665 | 2.32 | 34000 | 0.0725 |
| 0.0684 | 2.39 | 35000 | 0.0723 |
| 0.0678 | 2.46 | 36000 | 0.0722 |
| 0.0686 | 2.53 | 37000 | 0.0722 |
| 0.067 | 2.59 | 38000 | 0.0721 |
| 0.0669 | 2.66 | 39000 | 0.0721 |
| 0.0673 | 2.73 | 40000 | 0.0721 |
| 0.0673 | 2.8 | 41000 | 0.0720 |
| 0.0662 | 2.87 | 42000 | 0.0720 |
| 0.0681 | 2.94 | 43000 | 0.0719 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
ghanashyamvtatti/roberta-fake-news | 3ac92babef4d120bb478b789c00d48225b96008a | 2021-05-20T16:33:04.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | ghanashyamvtatti | null | ghanashyamvtatti/roberta-fake-news | 170 | null | transformers | 3,813 | A fake news detector using RoBERTa.
Dataset: https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset
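A minimal inference sketch (the example headline is made up; the label names come from the model's config rather than from this card):
```python
from transformers import pipeline

# Minimal inference sketch; interpret the returned label via the model's config (id2label).
classifier = pipeline("text-classification", model="ghanashyamvtatti/roberta-fake-news")
print(classifier("Breaking: scientists confirm the moon is made of cheese."))
```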
Training involved a hyperparameter search with 10 trials. |
huggingtweets/normmacdonald | 06a7e2866691801be88f21f0b1270872ddbf5c25 | 2021-05-22T16:47:41.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/normmacdonald | 170 | null | transformers | 3,814 | ---
language: en
thumbnail: https://www.huggingtweets.com/normmacdonald/1617162362414/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1281990037/Unknown_400x400.jpeg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Norm Macdonald 🤖 AI Bot </div>
<div style="font-size: 15px">@normmacdonald bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@normmacdonald's tweets](https://twitter.com/normmacdonald).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3190 |
| Retweets | 160 |
| Short tweets | 275 |
| Tweets kept | 2755 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3j8ka0eq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @normmacdonald's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3stwuwin) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3stwuwin/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/normmacdonald')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nateraw/vit-base-patch16-224-cifar10 | b55eeb4221d3a568f627078ed6f27b967810be3d | 2022-01-28T10:22:01.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:cifar10",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | nateraw | null | nateraw/vit-base-patch16-224-cifar10 | 170 | 3 | transformers | 3,815 | ---
tags:
- image-classification
- vision
- pytorch
license: apache-2.0
datasets:
- cifar10
metrics:
- accuracy
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
---
# Vision Transformer Fine Tuned on CIFAR10
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) and **fine-tuned on CIFAR10** at resolution 224x224.
Check out the code at [my GitHub repo](https://github.com/nateraw/huggingface-vit-finetune).
## Usage
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'https://www.cs.toronto.edu/~kriz/cifar-10-sample/dog10.png'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('nateraw/vit-base-patch16-224-cifar10')
model = ViTForImageClassification.from_pretrained('nateraw/vit-base-patch16-224-cifar10')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
preds = outputs.logits.argmax(dim=1)
classes = [
'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'
]
classes[preds[0]]
```
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
|
vitouphy/wav2vec2-xls-r-300m-phoneme | ac268b1bf8433073b39e5f16925bf631f05dfa10 | 2022-05-19T07:13:47.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | vitouphy | null | vitouphy/wav2vec2-xls-r-300m-phoneme | 170 | null | transformers | 3,816 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-phoneme
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-phoneme
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3327
- Cer: 0.1332
## Model description
More information needed
## Intended uses & limitations
More information needed
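A minimal inference sketch (assuming the repository ships a Wav2Vec2 processor/tokenizer alongside the fine-tuned CTC head, and that the input audio is 16 kHz mono; the file path is illustrative):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Minimal CTC inference sketch; assumes a processor/tokenizer is bundled with the checkpoint
# and that "sample.wav" is a 16 kHz mono recording (the path is illustrative).
processor = Wav2Vec2Processor.from_pretrained("vitouphy/wav2vec2-xls-r-300m-phoneme")
model = Wav2Vec2ForCTC.from_pretrained("vitouphy/wav2vec2-xls-r-300m-phoneme")

speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```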
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 7000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4324 | 1.32 | 1000 | 3.3693 | 0.9091 |
| 2.1751 | 2.65 | 2000 | 1.1382 | 0.2397 |
| 1.3986 | 3.97 | 3000 | 0.4886 | 0.1452 |
| 1.2285 | 5.3 | 4000 | 0.3842 | 0.1351 |
| 1.142 | 6.62 | 5000 | 0.3505 | 0.1349 |
| 1.1075 | 7.95 | 6000 | 0.3323 | 0.1317 |
| 1.0867 | 9.27 | 7000 | 0.3265 | 0.1315 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
MultiTrickFox/bloom-2b5_Zen | a80e3ea3fcff6a5a098f9fba294cab4d0e43fa43 | 2022-07-16T11:00:12.000Z | [
"pytorch",
"bloom",
"text-generation",
"transformers"
] | text-generation | false | MultiTrickFox | null | MultiTrickFox/bloom-2b5_Zen | 170 | null | transformers | 3,817 | #####
## Bloom2.5B Zen ##
#####
Bloom (2.5 B) Scientific Model fine-tuned on Zen knowledge
#####
## Usage ##
#####
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("MultiTrickFox/bloom-2b5_Zen")
model = AutoModelForCausalLM.from_pretrained("MultiTrickFox/bloom-2b5_Zen")
tokenizer.pad_token_id = tokenizer.eos_token_id
generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
inp = [ """Today""", """Yesterday""" ]
out = generator(
inp, do_sample=True,
temperature=.7,
typical_p=.6,
#top_p=.9,
repetition_penalty=1.2,
max_new_tokens=666,
max_time=60, # seconds
)
for o in out: print(o[0]['generated_text'])
``` |
Bhumika/roberta-base-finetuned-sst2 | eee69c5668a64e30f2e9cc61d3975afc724a7880 | 2021-10-25T06:17:25.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"model-index"
] | text-classification | false | Bhumika | null | Bhumika/roberta-base-finetuned-sst2 | 169 | 3 | transformers | 3,818 | ---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: roberta-base-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.944954128440367
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sst2
This model was trained from scratch on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3000
- Accuracy: 0.9450
## Model description
More information needed
## Intended uses & limitations
More information needed
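A minimal classification sketch (the example sentence is made up, and the label names depend on the uploaded config):
```python
from transformers import pipeline

# Minimal sentiment-classification sketch; map the returned label via the model config if needed.
classifier = pipeline("text-classification", model="Bhumika/roberta-base-finetuned-sst2")
print(classifier("This movie was an absolute delight to watch."))
```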
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.1106 | 1.0 | 4210 | 0.9255 | 0.3326 |
| 0.1497 | 2.0 | 8420 | 0.9369 | 0.2858 |
| 0.1028 | 3.0 | 12630 | 0.3128 | 0.9335 |
| 0.0872 | 4.0 | 16840 | 0.3000 | 0.9450 |
| 0.0571 | 5.0 | 21050 | 0.3378 | 0.9427 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Helsinki-NLP/opus-mt-zh-de | 04388d8cb09ccbf0fda70feeeec41b7d85ca3ec4 | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"zh",
"de",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zh-de | 169 | null | transformers | 3,819 | ---
language:
- zh
- de
tags:
- translation
license: apache-2.0
---
### zho-deu
* source group: Chinese
* target group: German
* OPUS readme: [zho-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-deu/README.md)
* model: transformer-align
* source language(s): cmn cmn_Bopo cmn_Hang cmn_Hani cmn_Hira cmn_Kana cmn_Latn lzh_Hani wuu_Hani yue_Hani
* target language(s): deu
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.eval.txt)
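A minimal translation sketch (the example sentence is illustrative; the sentencepiece package is required by the Marian tokenizer):
```python
from transformers import MarianMTModel, MarianTokenizer

# Minimal zh->de translation sketch; requires the sentencepiece package to be installed.
model_name = "Helsinki-NLP/opus-mt-zh-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["我喜欢学习德语。"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```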
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.deu | 32.1 | 0.522 |
### System Info:
- hf_name: zho-deu
- source_languages: zho
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'de']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-deu/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: deu
- short_pair: zh-de
- chrF2_score: 0.522
- bleu: 32.1
- brevity_penalty: 0.9540000000000001
- ref_len: 19102.0
- src_name: Chinese
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: de
- prefer_old: False
- long_pair: zho-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
KoichiYasuoka/roberta-small-japanese-luw-upos | b493f7d03078d5c0d9ce3f4c29192dc831dd8180 | 2022-05-24T06:25:43.000Z | [
"pytorch",
"roberta",
"token-classification",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-small-japanese-luw-upos | 169 | null | transformers | 3,820 | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "国境の長いトンネルを抜けると雪国であった。"
---
# roberta-small-japanese-luw-upos
## Model Description
This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-small-japanese-aozora](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-aozora). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-japanese-luw-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-small-japanese-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
deepklarity/poster2plot | 749bdc7871ec8f5ffec753f21448cdc2bc1a1a27 | 2021-11-22T19:56:30.000Z | [
"pytorch",
"vision-encoder-decoder",
"en",
"transformers",
"image-classification",
"image-captioning"
] | image-classification | false | deepklarity | null | deepklarity/poster2plot | 169 | 1 | transformers | 3,821 | ---
language: en
tags:
- image-classification
- image-captioning
---
# Poster2Plot
An image captioning model that generates a movie/TV show plot from its poster. It generates decent plots but is in no way perfect. We are still working on improving the model.
## Live demo on Hugging Face Spaces: https://huggingface.co/spaces/deepklarity/poster2plot
# Model Details
The base model uses a Vision Transformer (ViT) model as an image encoder and GPT-2 as a decoder.
We used the following models:
* Encoder: [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k)
* Decoder: [gpt2](https://huggingface.co/gpt2)
# Datasets
Publicly available IMDb datasets were used to train the model.
# How to use
## In PyTorch
```python
import torch
import re
import requests
from PIL import Image
from transformers import AutoTokenizer, AutoFeatureExtractor, VisionEncoderDecoderModel
# Pattern to ignore all the text after 2 or more full stops
regex_pattern = "[.]{2,}"
def post_process(text):
try:
text = text.strip()
text = re.split(regex_pattern, text)[0]
except Exception as e:
print(e)
pass
return text
def predict(image, max_length=64, num_beams=4):
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
with torch.no_grad():
output_ids = model.generate(
pixel_values,
max_length=max_length,
num_beams=num_beams,
return_dict_in_generate=True,
).sequences
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
pred = post_process(preds[0])
return pred
model_name_or_path = "deepklarity/poster2plot"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Load model.
model = VisionEncoderDecoderModel.from_pretrained(model_name_or_path)
model.to(device)
print("Loaded model")
feature_extractor = AutoFeatureExtractor.from_pretrained(model.encoder.name_or_path)
print("Loaded feature_extractor")
tokenizer = AutoTokenizer.from_pretrained(model.decoder.name_or_path, use_fast=True)
if model.decoder.name_or_path == "gpt2":
tokenizer.pad_token = tokenizer.eos_token
print("Loaded tokenizer")
url = "https://upload.wikimedia.org/wikipedia/en/2/26/Moana_Teaser_Poster.jpg"
with Image.open(requests.get(url, stream=True).raw) as image:
pred = predict(image)
print(pred)
```
|
huggingtweets/atlassian | 43a6eceff7358412812946c5d465edd0aa17e36e | 2021-06-17T00:20:35.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/atlassian | 169 | null | transformers | 3,822 | ---
language: en
thumbnail: https://www.huggingtweets.com/atlassian/1623889197185/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1377989668189405192/II6ZfJPK_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Atlassian</div>
<div style="text-align: center; font-size: 14px;">@atlassian</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Atlassian.
| Data | Atlassian |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 824 |
| Short tweets | 58 |
| Tweets kept | 2367 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2i1f4hr0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @atlassian's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/olb55vh0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/olb55vh0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/atlassian')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
navteca/electra-base-squad2 | 1983e4c58b41d0a967e957d5df8579f85543d861 | 2021-03-10T15:30:09.000Z | [
"pytorch",
"electra",
"question-answering",
"en",
"dataset:squad_v2",
"transformers",
"license:mit",
"autotrain_compatible"
] | question-answering | false | navteca | null | navteca/electra-base-squad2 | 169 | null | transformers | 3,823 | ---
datasets:
- squad_v2
language: en
license: mit
pipeline_tag: question-answering
tags:
- electra
- question-answering
---
# Electra base model for QA (SQuAD 2.0)
This model uses [electra-base](https://huggingface.co/google/electra-base-discriminator).
## Training Data
The models have been trained on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
It can be used for the question answering task.
## Usage and Performance
The trained model can be used like this:
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
# Load model & tokenizer
electra_model = AutoModelForQuestionAnswering.from_pretrained('navteca/electra-base-squad2')
electra_tokenizer = AutoTokenizer.from_pretrained('navteca/electra-base-squad2')
# Get predictions
nlp = pipeline('question-answering', model=electra_model, tokenizer=electra_tokenizer)
result = nlp({
'question': 'How many people live in Berlin?',
'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'
})
print(result)
#{
# "answer": "3,520,031"
# "end": 36,
# "score": 0.99983448,
# "start": 27,
#}
```
|
ptaszynski/yacis-electra-small-japanese | 01f2eca3bdbaf8d536ba7b74fb33fd2c9b853ed4 | 2022-01-13T01:43:17.000Z | [
"pytorch",
"ja",
"dataset:YACIS corpus",
"transformers",
"license:cc-by-sa-4.0"
] | null | false | ptaszynski | null | ptaszynski/yacis-electra-small-japanese | 169 | 2 | transformers | 3,824 | ---
language: ja
license: cc-by-sa-4.0
datasets:
- YACIS corpus
---
# yacis-electra-small
This is [ELECTRA](https://github.com/google-research/electra) Small model for Japanese pretrained on 354 million sentences / 5.6 billion words of [YACIS](https://github.com/ptaszynski/yacis-corpus) blog corpus.
The corpus was tokenized for pretraining with [MeCab](https://taku910.github.io/mecab/). Subword tokenization was done with WordPiece.
## Model architecture
This model uses ELECTRA Small model settings, 12 layers, 128 dimensions of hidden states, and 12 attention heads.
Vocabulary size was set to 32,000 tokens.
## Training data and libraries
YACIS-ELECTRA is trained on the whole of [YACIS](https://github.com/ptaszynski/yacis-corpus) blog corpus, which is a Japanese blog corpus containing 5.6 billion words in 354 million sentences.
The corpus was originally split into sentences using custom rules, and each sentence was tokenized using [MeCab](https://taku910.github.io/mecab/). Subword tokenization for pretraining was done with WordPiece.
We used the original [ELECTRA](https://github.com/google-research/electra) repository for pretraining. The pretraining process took 7 days and 6 hours in the following environment: CPU: Intel Core i9-7920X, RAM: 132 GB, GPU: GeForce GTX 1080 Ti x1.
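The card does not include a usage snippet. As a heavily hedged sketch (it assumes the checkpoint ships a transformers-compatible config and tokenizer, and MeCab-based Japanese tokenization may additionally require packages such as `fugashi`), the pretrained encoder could be loaded like this for feature extraction or further fine-tuning:
```python
from transformers import AutoTokenizer, AutoModel

# Assumption: the repository provides a transformers-compatible config and tokenizer.
tokenizer = AutoTokenizer.from_pretrained("ptaszynski/yacis-electra-small-japanese")
model = AutoModel.from_pretrained("ptaszynski/yacis-electra-small-japanese")

inputs = tokenizer("日本語のブログ記事を解析します。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```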
## Licenses
The pretrained model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License.
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a>
## Citations
Please cite the model using the following citation.
```
@inproceedings{shibata2022yacis-electra,
title={日本語大規模ブログコーパスYACISに基づいたELECTRA事前学習済み言語モデルの作成及び性能評価},
% title={Development and performance evaluation of ELECTRA pretrained language model based on YACIS large-scale Japanese blog corpus [in Japanese]}, %% for English citations
author={柴田 祥伍 and プタシンスキ ミハウ and エロネン ユーソ and ノヴァコフスキ カロル and 桝井 文人},
% author={Shibata, Shogo and Ptaszynski, Michal and Eronen, Juuso and Nowakowski, Karol and Masui, Fumito}, %% for English citations
booktitle={言語処理学会第28回年次大会(NLP2022) (予定)},
% booktitle={Proceedings of The 28th Annual Meeting of The Association for Natural Language Processing (NLP2022)}, %% for English citations
pages={1--4},
year={2022}
}
```
The model was built using sentences from the YACIS corpus, which should be cited using at least one of the following references.
```
@inproceedings{ptaszynski2012yacis,
title={YACIS: A five-billion-word corpus of Japanese blogs fully annotated with syntactic and affective information},
author={Ptaszynski, Michal and Dybala, Pawel and Rzepka, Rafal and Araki, Kenji and Momouchi, Yoshio},
booktitle={Proceedings of the AISB/IACAP world congress},
pages={40--49},
year={2012},
howpublished = "\url{https://github.com/ptaszynski/yacis-corpus}"
}
```
```
@article{ptaszynski2014automatically,
title={Automatically annotating a five-billion-word corpus of Japanese blogs for sentiment and affect analysis},
author={Ptaszynski, Michal and Rzepka, Rafal and Araki, Kenji and Momouchi, Yoshio},
journal={Computer Speech \& Language},
volume={28},
number={1},
pages={38--55},
year={2014},
publisher={Elsevier},
howpublished = "\url{https://github.com/ptaszynski/yacis-corpus}"
}
``` |
DTAI-KULeuven/robbertje-1-gb-merged | 2b0bfb1015068ad0aee8d78789aba5f5a857353a | 2022-02-24T09:56:43.000Z | [
"pytorch",
"roberta",
"fill-mask",
"nl",
"dataset:oscar",
"dataset:oscar (NL)",
"dataset:dbrd",
"dataset:lassy-ud",
"dataset:europarl-mono",
"dataset:conll2002",
"arxiv:2101.05716",
"transformers",
"Dutch",
"Flemish",
"RoBERTa",
"RobBERT",
"RobBERTje",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | DTAI-KULeuven | null | DTAI-KULeuven/robbertje-1-gb-merged | 168 | null | transformers | 3,825 | ---
language: "nl"
thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png"
tags:
- Dutch
- Flemish
- RoBERTa
- RobBERT
- RobBERTje
license: mit
datasets:
- oscar
- oscar (NL)
- dbrd
- lassy-ud
- europarl-mono
- conll2002
widget:
- text: "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven."
---
<p align="center">
<img src="https://github.com/iPieter/robbertje/raw/master/images/robbertje_logo_with_name.png" alt="RobBERTje: A collection of distilled Dutch BERT-based models" width="75%">
</p>
# About RobBERTje
RobBERTje is a collection of distilled models based on [RobBERT](http://github.com/iPieter/robbert). There are multiple models with different sizes and different training settings, which you can choose for your use-case.
We are also continuously working on releasing better-performing models, so watch [the repository](http://github.com/iPieter/robbertje) for updates.
# News
- **February 21, 2022**: Our paper about RobBERTje has been published in [volume 11 of CLIN journal](https://www.clinjournal.org/clinj/article/view/131)!
- **July 2, 2021**: Publicly released 4 RobBERTje models.
- **May 12, 2021**: RobBERTje was accepted at [CLIN31](https://www.clin31.ugent.be) for an oral presentation!
# The models
| Model | Description | Parameters | Training size | Huggingface id |
|--------------|-------------|------------------|-------------------|------------------------------------------------------------------------------------|
| Non-shuffled | Trained on the non-shuffled variant of the oscar corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-non-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled) |
| Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-shuffled) |
| Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | this model |
| BORT | A smaller version with 8 attention heads instead of 12 and 4 layers instead of 6 (and 12 for RobBERT). | 46 M | 1 GB | [DTAI-KULeuven/robbertje-1-gb-bort](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-bort) |
# Results
## Intrinsic results
We calculated the _pseudo perplexity_ (PPPL), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution.
| Model | PPPL |
|-------------------|-----------|
| RobBERT (teacher) | 7.76 |
| Non-shuffled | 12.95 |
| Shuffled | 18.74 |
| Merged (p=0.5) | 17.10 |
| BORT | 26.44 |
## Extrinsic results
We also evaluated our models on several downstream tasks, just like the teacher model RobBERT. Since that evaluation, a [Dutch NLI task named SICK-NL](https://arxiv.org/abs/2101.05716) was also released and we evaluated our models with it as well.
| Model | DBRD | DIE-DAT | NER | POS |SICK-NL |
|------------------|-----------|-----------|-----------|-----------|----------|
| RobBERT (teacher)|94.4 | 99.2 |89.1 |96.4 | 84.2 |
| Non-shuffled |90.2 | 98.4 |82.9 |95.5 | 83.4 |
| Shuffled |92.5 | 98.2 |82.7 |95.6 | 83.4 |
| Merged (p=0.5) |92.9 | 96.5 |81.8 |95.2 | 82.8 |
| BORT |89.6 | 92.2 |79.7 |94.3 | 81.0 |
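As a minimal usage sketch (not part of the original card), this distilled model can be queried with the standard `fill-mask` pipeline, reusing the example sentence from the widget above:
```python
from transformers import pipeline

# Query the distilled Dutch model through the fill-mask pipeline.
unmasker = pipeline("fill-mask", model="DTAI-KULeuven/robbertje-1-gb-merged")
predictions = unmasker(
    "Hallo, ik ben RobBERTje, een gedistilleerd <mask> taalmodel van de KU Leuven."
)
for prediction in predictions:
    print(prediction["token_str"], round(prediction["score"], 3))
```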
|
cambridgeltl/trans-encoder-cross-simcse-bert-base | 9a73024eee6719e7622a29d6d2b31c06611bb0fb | 2021-11-26T18:24:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | cambridgeltl | null | cambridgeltl/trans-encoder-cross-simcse-bert-base | 168 | null | transformers | 3,826 | Entry not found |
imjeffhi/pokemon_classifier | 83deabd5a137d78fbd62b4c2b11595a888cf3fa6 | 2022-01-01T00:55:49.000Z | [
"pytorch",
"vit",
"image-classification",
"transformers"
] | image-classification | false | imjeffhi | null | imjeffhi/pokemon_classifier | 168 | 3 | transformers | 3,827 | [](https://ainize.web.app/redirect?git_repo=https://github.com/imjeffhi4/pokemon-classifier)
# Pokémon Classifier
# Intro
A fine-tuned version of ViT-base on a collected set of Pokémon images. You can read more about the model [here](https://medium.com/@imjeffhi4/tutorial-using-vision-transformer-vit-to-create-a-pok%C3%A9mon-classifier-cb3f26ff2c20).
# Using the model
```python
from transformers import ViTForImageClassification, ViTFeatureExtractor
from PIL import Image
import torch
# Loading in Model
device = "cuda" if torch.cuda.is_available() else "cpu"
model = ViTForImageClassification.from_pretrained("imjeffhi/pokemon_classifier").to(device)
feature_extractor = ViTFeatureExtractor.from_pretrained('imjeffhi/pokemon_classifier')
# Calling the model on a test image
img = Image.open('test.jpg')
extracted = feature_extractor(images=img, return_tensors='pt').to(device)
predicted_id = model(**extracted).logits.argmax(-1).item()
predicted_pokemon = model.config.id2label[predicted_id]
``` |
openai/imagegpt-medium | 62aa3f00eb16af0e9e7f7d02b2db9c3fa625ed51 | 2022-06-30T06:46:11.000Z | [
"pytorch",
"imagegpt",
"dataset:imagenet-21k",
"transformers",
"vision",
"license:apache-2.0"
] | null | false | openai | null | openai/imagegpt-medium | 168 | 0 | transformers | 3,828 | ---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-21k
---
# ImageGPT (medium-sized model)
ImageGPT (iGPT) model pre-trained on ImageNet ILSVRC 2012 (14 million images, 21,843 classes) at resolution 32x32. It was introduced in the paper [Generative Pretraining from Pixels](https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf) by Chen et al. and first released in [this repository](https://github.com/openai/image-gpt). See also the official [blog post](https://openai.com/blog/image-gpt/).
Disclaimer: The team releasing ImageGPT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The ImageGPT (iGPT) is a transformer decoder model (GPT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 32x32 pixels.
The goal for the model is simply to predict the next pixel value, given the previous ones.
By pre-training the model, it learns an inner representation of images that can then be used to:
- extract features useful for downstream tasks: one can use ImageGPT to produce fixed image features and then train a linear model on top of them (like a sklearn logistic regression model or an SVM). This is also referred to as "linear probing".
- perform (un)conditional image generation.
## Intended uses & limitations
You can use the raw model either as a feature extractor or for (un)conditional image generation. See the [model hub](https://huggingface.co/models?search=openai/imagegpt) for all ImageGPT variants.
### How to use
Here is how to use this model in PyTorch to perform unconditional image generation:
```python
from transformers import ImageGPTFeatureExtractor, ImageGPTForCausalImageModeling
import torch
import matplotlib.pyplot as plt
import numpy as np
feature_extractor = ImageGPTFeatureExtractor.from_pretrained('openai/imagegpt-medium')
model = ImageGPTForCausalImageModeling.from_pretrained('openai/imagegpt-medium')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
# unconditional generation of 8 images
batch_size = 8
context = torch.full((batch_size, 1), model.config.vocab_size - 1) #initialize with SOS token
context = context.to(device)
output = model.generate(pixel_values=context, max_length=model.config.n_positions + 1, temperature=1.0, do_sample=True, top_k=40)
clusters = feature_extractor.clusters
n_px = feature_extractor.size
samples = output[:,1:].cpu().detach().numpy()
samples_img = [np.reshape(np.rint(127.5 * (clusters[s] + 1.0)), [n_px, n_px, 3]).astype(np.uint8) for s in samples] # convert color cluster tokens back to pixels
f, axes = plt.subplots(1, batch_size, dpi=300)
for img, ax in zip(samples_img, axes):
ax.axis('off')
ax.imshow(img)
```
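For the "linear probing" use case mentioned above, a rough sketch is shown below. It is not from the original card: the image path is a placeholder, it assumes the feature extractor's output keys match the model's forward signature in your `transformers` version, and it simply averages the final hidden states, whereas the paper pools features from an intermediate layer.
```python
from transformers import ImageGPTFeatureExtractor, ImageGPTModel
from PIL import Image
import torch

feature_extractor = ImageGPTFeatureExtractor.from_pretrained('openai/imagegpt-medium')
model = ImageGPTModel.from_pretrained('openai/imagegpt-medium')

image = Image.open("example.jpg")  # placeholder path to any RGB image
encoding = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)

# Average the hidden states over the 32x32 = 1024 pixel positions to get one
# fixed-length feature vector per image, usable as input to a linear classifier.
features = outputs.last_hidden_state.mean(dim=1)
print(features.shape)
```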
## Training data
The ImageGPT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes.
## Training procedure
### Preprocessing
Images are first resized/rescaled to the same resolution (32x32) and normalized across the RGB channels. Next, color-clustering is performed. This means that every pixel is turned into one of 512 possible cluster values. This way, one ends up with a sequence of 32x32 = 1024 pixel values, rather than 32x32x3 = 3072, which is prohibitively large for Transformer-based models.
### Pretraining
Training details can be found in section 3.4 of v2 of the paper.
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to the original paper.
### BibTeX entry and citation info
```bibtex
@InProceedings{pmlr-v119-chen20s,
title = {Generative Pretraining From Pixels},
author = {Chen, Mark and Radford, Alec and Child, Rewon and Wu, Jeffrey and Jun, Heewoo and Luan, David and Sutskever, Ilya},
booktitle = {Proceedings of the 37th International Conference on Machine Learning},
pages = {1691--1703},
year = {2020},
editor = {III, Hal Daumé and Singh, Aarti},
volume = {119},
series = {Proceedings of Machine Learning Research},
month = {13--18 Jul},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v119/chen20s/chen20s.pdf},
url = {https://proceedings.mlr.press/v119/chen20s.html},
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
sentence-transformers/msmarco-MiniLM-L12-cos-v5 | ac9b7aaeea782db12fcd41670de2092ecce4ff65 | 2022-06-15T23:55:59.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/msmarco-MiniLM-L12-cos-v5 | 168 | null | sentence-transformers | 3,829 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# msmarco-MiniLM-L12-cos-v5
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and was designed for **semantic search**. It has been trained on 500k (query, answer) pairs from the [MS MARCO Passages dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking). For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer, util
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
#Load the model
model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L12-cos-v5')
#Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)
#Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the correct pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take average of all tokens
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output.last_hidden_state #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
#Encode text
def encode(texts):
# Tokenize sentences
encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input, return_dict=True)
# Perform pooling
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
return embeddings
# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-MiniLM-L12-cos-v5")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-MiniLM-L12-cos-v5")
#Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)
#Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
#Combine docs & scores
doc_score_pairs = list(zip(docs, scores))
#Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
for doc, score in doc_score_pairs:
print(score, doc)
```
## Technical Details
The following are some technical details on how this model must be used:
| Setting | Value |
| --- | :---: |
| Dimensions | 768 |
| Produces normalized embeddings | Yes |
| Pooling-Method | Mean pooling |
| Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or euclidean distance |
Note: When loaded with `sentence-transformers`, this model produces normalized embeddings with length 1. In that case, dot-product and cosine-similarity are equivalent. dot-product is preferred as it is faster. For normalized embeddings, Euclidean distance produces the same ranking as dot-product and can also be used.
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
toastynews/electra-hongkongese-base-discriminator | 6b179082fdd82e255f3d37ff97a83bbe8174a227 | 2020-07-07T17:55:51.000Z | [
"pytorch",
"tf",
"electra",
"pretraining",
"yue",
"transformers",
"license:apache-2.0"
] | null | false | toastynews | null | toastynews/electra-hongkongese-base-discriminator | 168 | null | transformers | 3,830 | ---
language: yue
license: apache-2.0
metrics:
- DRCD
- openrice-senti
- lihkg-cat
- wordshk-sem
---
# ELECTRA Hongkongese Base
## Model description
ELECTRA trained exclusively with data from Hong Kong. A significant amount of Hongkongese/Cantonese/Yue is included in the training data.
## Intended uses & limitations
This model is an alternative to Chinese models. It may offer better performance for tasks catering to the language usage of Hong Kongers. Yue Wikipedia, which is much smaller than Chinese Wikipedia, is used; this model will therefore lack the breadth of knowledge of other Chinese models.
#### How to use
This is the base model trained from the official repo. Further finetuning will be needed for use on downstream tasks. Other model sizes are also available.
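As a hedged sketch of such a fine-tuning setup (the class names are the standard `transformers` ELECTRA classes, the number of labels is a placeholder, and the classification head is freshly initialized until you train it):
```python
from transformers import AutoTokenizer, ElectraForSequenceClassification

# Load the pretrained discriminator as the starting point for a downstream task.
tokenizer = AutoTokenizer.from_pretrained("toastynews/electra-hongkongese-base-discriminator")
model = ElectraForSequenceClassification.from_pretrained(
    "toastynews/electra-hongkongese-base-discriminator",
    num_labels=2,  # placeholder; set to the number of classes in your task
)

inputs = tokenizer("呢間餐廳啲嘢食好好味", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)  # untrained head: logits are meaningful only after fine-tuning
```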
#### Limitations and bias
The training data consists of mostly news articles and blogs. There is probably a bias towards formal language usage.
## Training data
The following is the list of data sources. The total size is about 507M characters.
| Data | % |
| ------------------------------------------------- | --: |
| News Articles / Blogs | 58% |
| Yue Wikipedia / EVCHK | 18% |
| Restaurant Reviews | 12% |
| Forum Threads | 12% |
| Online Fiction | 1% |
The following is the distribution of different languages within the corpus.
| Language | % |
| ------------------------------------------------- | --: |
| Standard Chinese | 62% |
| Hongkongese | 30% |
| English | 8% |
## Training procedure
The model was trained on a single TPUv3 using the official repo with the default parameters.
| Parameter | Value |
| ------------------------------------------------ | ----: |
| Batch Size | 256 |
| Max Sequence Size | 512 |
| Vocab Size | 30000 |
*Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC)*
## Eval results
Average evaluation task results over 10 runs. Comparison using the original repo model and code. Chinese models are available from [Joint Laboratory of HIT and iFLYTEK Research (HFL)](https://huggingface.co/hfl)
| Model | DRCD (EM/F1) | openrice-senti | lihkg-cat | wordshk-sem |
|:-----------:|:------------:|:--------------:|:---------:|:-----------:|
| Chinese | 86.6 / 91.7 | 79.1 | 67.4 | 88.1 |
| Hongkongese | 83.0 / 89.6 | 81.5 | 70.0 | 90.1 |
|
yoshitomo-matsubara/bert-base-uncased-qqp | 6ac7d79763a5f90f027981122555b03bcf394e93 | 2021-05-29T21:52:35.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:qqp",
"transformers",
"qqp",
"glue",
"torchdistill",
"license:apache-2.0"
] | text-classification | false | yoshitomo-matsubara | null | yoshitomo-matsubara/bert-base-uncased-qqp | 168 | null | transformers | 3,831 | ---
language: en
tags:
- bert
- qqp
- glue
- torchdistill
license: apache-2.0
datasets:
- qqp
metrics:
- f1
- accuracy
---
`bert-base-uncased` fine-tuned on QQP dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qqp/ce/bert_base_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
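The card does not show an inference snippet; a minimal hedged sketch for the QQP sentence-pair task follows (the two questions are made-up examples, and the label names come from whatever `id2label` mapping is stored in the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("yoshitomo-matsubara/bert-base-uncased-qqp")
model = AutoModelForSequenceClassification.from_pretrained("yoshitomo-matsubara/bert-base-uncased-qqp")

# QQP is a sentence-pair task: are the two questions duplicates of each other?
inputs = tokenizer(
    "How can I learn to play the guitar?",
    "What is the best way to learn guitar?",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```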
|
microsoft/resnet-34 | 4eb25387d2fc7c0108695bde3a590faa63132e22 | 2022-07-01T17:33:37.000Z | [
"pytorch",
"tf",
"resnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1512.03385",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/resnet-34 | 168 | null | transformers | 3,832 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
---
# ResNet-34 v1.5
ResNet model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by He et al.
Disclaimer: The team releasing ResNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ResNet (Residual Network) is a convolutional neural network that democratized the concepts of residual learning and skip connections. This makes it possible to train much deeper models.
This is ResNet v1.5, which differs from the original model: in the bottleneck blocks which require downsampling, v1 has stride = 2 in the first 1x1 convolution, whereas v1.5 has stride = 2 in the 3x3 convolution. This difference makes ResNet50 v1.5 slightly more accurate (\~0.5% top1) than v1, but comes with a small performance drawback (~5% imgs/sec) according to [Nvidia](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch).

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=resnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, ResNetForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-34")
model = ResNetForImageClassification.from_pretrained("microsoft/resnet-34")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/resnet).
### BibTeX entry and citation info
```bibtex
@inproceedings{he2016deep,
title={Deep residual learning for image recognition},
author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={770--778},
year={2016}
}
```
|
LiYuan/amazon-query-product-ranking | 3496770750474054cbc36156f260683a3b56603b | 2022-04-28T13:09:08.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | LiYuan | null | LiYuan/amazon-query-product-ranking | 168 | null | transformers | 3,833 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-mnli-amazon-query-shopping
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mnli-amazon-query-shopping
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an [Amazon shopping query dataset](https://www.aicrowd.com/challenges/esci-challenge-for-improving-product-search). The code for the fine-tuning process can be found
[here](https://github.com/vanderbilt-data-science/sna). This model is uncased: it does
not make a difference between english and English.
It achieves the following results on the evaluation set:
- Loss: 0.8244
- Accuracy: 0.6617
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. We replaced its head with our shopping relevance categories and fine-tuned it on 571,223 training rows, validating on 142,806 dev rows. Finally, we evaluated the model's performance on a held-out test set of 79,337 rows.
## Intended uses & limitations
DistilBERT is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification, or question answering. This fine-tuned version of DistilBERT is used to predict the relevance between a query and a product description. It can also be used to rerank products by relevance for a given query on the Amazon platform or other e-commerce platforms.
The main limitation is that this model was trained on Amazon queries and products. If you apply it to other domains, it may perform poorly.
## How to use
You can use this model directly by downloading the trained weights and configurations like the below code snippet:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("LiYuan/amazon-query-product-ranking")
model = AutoModelForSequenceClassification.from_pretrained("LiYuan/amazon-query-product-ranking")
```
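Complementing the loading snippet above, a self-contained hedged inference sketch follows; the query and product description are made-up examples, and the meaning of each class index depends on the `id2label` mapping stored in the model config.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("LiYuan/amazon-query-product-ranking")
model = AutoModelForSequenceClassification.from_pretrained("LiYuan/amazon-query-product-ranking")

# Score the relevance of one (query, product description) pair.
query = "wireless noise cancelling headphones"
product = "Bluetooth over-ear headphones with active noise cancellation and 30-hour battery life"

inputs = tokenizer(query, product, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # one probability per relevance class (see model.config.id2label)
```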
## Training and evaluation data
Download all the raw [dataset](https://www.aicrowd.com/challenges/esci-challenge-for-improving-product-search/dataset_files) from the Amazon KDD Cup website.
1. Concatenate all the product attributes from the product dataset
2. Join it with the training query dataset
3. Stratified-split the merged data into a 571,223-row training set, a 142,806-row validation set, and a 79,337-row test set
4. Train on the full training set
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8981 | 1.0 | 35702 | 0.8662 | 0.6371 |
| 0.7837 | 2.0 | 71404 | 0.8244 | 0.6617 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
PrimeQA/tydiqa-boolean-question-classifier | 3a86131ea15cce3baeabba78cb64d0d35d373f67 | 2022-06-28T20:19:31.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:1810.04805",
"arxiv:2206.08441",
"transformers",
"license:apache-2.0"
] | text-classification | false | PrimeQA | null | PrimeQA/tydiqa-boolean-question-classifier | 168 | null | transformers | 3,834 | ---
license: apache-2.0
---
## Model description
A question type classification model based on multilingual BERT.
The question type classifier takes the question as input and returns a label that distinguishes between boolean and short-answer extractive questions.
The model was initialized with [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) and fine-tuned on the answerable subset of [TyDiQA](https://huggingface.co/datasets/tydiqa) train questions.
## Intended uses & limitations
You can use the raw model for question classification. Biases associated with the pre-existing language model, bert-base-multilingual-cased, may be present in our fine-tuned model, tydiqa-boolean-question-classifier.
## Usage
You can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework to support boolean questions in reading comprehension, as in this [example](https://github.com/primeqa/primeqa/tree/main/examples/boolqa).
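Outside of PrimeQA, a minimal hedged sketch with plain `transformers` could look like this (the example question is made up, and the label names are read from the model config rather than assumed):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("PrimeQA/tydiqa-boolean-question-classifier")
model = AutoModelForSequenceClassification.from_pretrained("PrimeQA/tydiqa-boolean-question-classifier")

question = "Is the Great Barrier Reef the largest coral reef system in the world?"
inputs = tokenizer(question, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
label_id = logits.argmax(-1).item()
print(model.config.id2label[label_id])  # distinguishes boolean from short-answer questions
```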
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```bibtex
@misc{https://doi.org/10.48550/arxiv.2206.08441,
author = {McCarley, Scott and
Bornea, Mihaela and
Rosenthal, Sara and
Ferritto, Anthony and
Sultan, Md Arafat and
Sil, Avirup and
Florian, Radu},
title = {GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions},
journal = {CoRR},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2206.08441},
}
``` |
sijunhe/nezha-base-wwm | 629e6589f8a820e9622129e28d0c8625515de299 | 2022-06-24T03:55:20.000Z | [
"pytorch",
"nezha",
"fill-mask",
"arxiv:1909.00204",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
] | fill-mask | false | sijunhe | null | sijunhe/nezha-base-wwm | 168 | null | transformers | 3,835 | ---
license: afl-3.0
---
**Please use 'Bert' related tokenizer classes and 'Nezha' related model classes**
[NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204)
Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
The original checkpoints can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-PyTorch)
## Example Usage
```
from transformers import BertTokenizer, NezhaModel
tokenizer = BertTokenizer.from_pretrained("sijunhe/nezha-base-wwm")
model = NezhaModel.from_pretrained("sijunhe/nezha-base-wwm")
text = "我爱北京天安门"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
``` |
lewiswu1209/Winnie | bfc6cdfebe4c203d3d24560dc0b241e800c54caa | 2022-07-27T17:07:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | lewiswu1209 | null | lewiswu1209/Winnie | 168 | null | transformers | 3,836 | ---
license: mit
---
|
akshatpandeyme/DialoGPT-small-AnyaBot | c8132ec6356dbceb9157e3540da92a4fc639babb | 2022-07-27T06:12:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | akshatpandeyme | null | akshatpandeyme/DialoGPT-small-AnyaBot | 168 | null | transformers | 3,837 | ---
tags:
- conversational
---
# Anya conv. bot |
Helsinki-NLP/opus-mt-ar-ru | 4be9d95f8445c11b9f25fd6d128dd57cc38ce152 | 2021-01-18T07:47:44.000Z | [
"pytorch",
"marian",
"text2text-generation",
"ar",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-ar-ru | 167 | null | transformers | 3,838 | ---
language:
- ar
- ru
tags:
- translation
license: apache-2.0
---
### ara-rus
* source group: Arabic
* target group: Russian
* OPUS readme: [ara-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-rus/README.md)
* model: transformer
* source language(s): apc ara arz
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.rus | 42.5 | 0.605 |
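As a minimal translation sketch with the standard `transformers` Marian classes (the Arabic example sentence is made up):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ar-ru"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["مرحبا، كيف حالك؟"]  # "Hello, how are you?"
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```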
### System Info:
- hf_name: ara-rus
- source_languages: ara
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'ru']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: rus
- short_pair: ar-ru
- chrF2_score: 0.605
- bleu: 42.5
- brevity_penalty: 0.97
- ref_len: 21830.0
- src_name: Arabic
- tgt_name: Russian
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: ru
- prefer_old: False
- long_pair: ara-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
PlanTL-GOB-ES/roberta-base-ca | 94aef5c4319113211ed8860eb08eb13a862bd0fd | 2021-11-09T09:32:51.000Z | [
"pytorch",
"roberta",
"fill-mask",
"ca",
"transformers",
"masked-lm",
"BERTa",
"catalan",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | PlanTL-GOB-ES | null | PlanTL-GOB-ES/roberta-base-ca | 167 | 2 | transformers | 3,839 | ---
language: "ca"
tags:
- masked-lm
- BERTa
- catalan
widget:
- text: "El Català és una llengua molt <mask>."
- text: "Salvador Dalí va viure a <mask>."
- text: "La Costa Brava té les millors <mask> d'Espanya."
- text: "El cacaolat és un batut de <mask>."
- text: "<mask> és la capital de la Garrotxa."
- text: "Vaig al <mask> a buscar bolets."
- text: "Antoni Gaudí vas ser un <mask> molt important per la ciutat."
- text: "Catalunya és una referència en <mask> a nivell europeu."
license: apache-2.0
---
# BERTa: RoBERTa-based Catalan language model
## BibTeX citation
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
## Model description
BERTa is a transformer-based masked language model for the Catalan language.
It is based on the [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) base model
and has been trained on a medium-size corpus collected from publicly available corpora and crawlers.
## Training corpora and preprocessing
The training corpus consists of several corpora gathered from web crawling and public corpora.
The publicly available corpora are:
1. the Catalan part of the [DOGC](http://opus.nlpl.eu/DOGC-v2.php) corpus, a set of documents from the Official Gazette of the Catalan Government
2. the [Catalan Open Subtitles](http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2018/mono/OpenSubtitles.raw.ca.gz), a collection of translated movie subtitles
3. the non-shuffled version of the Catalan part of the [OSCAR](https://traces1.inria.fr/oscar/) corpus (Ortiz Suárez et al., 2019),
a collection of monolingual corpora, filtered from [Common Crawl](https://commoncrawl.org/about/)
4. The [CaWac](http://nlp.ffzg.hr/resources/corpora/cawac/) corpus, a web corpus of Catalan built from the .cat top-level-domain in late 2013
the non-deduplicated version
5. the [Catalan Wikipedia articles](https://ftp.acc.umu.se/mirror/wikimedia.org/dumps/cawiki/20200801/) downloaded on 18-08-2020.
The crawled corpora are:
6. The Catalan General Crawling, obtained by crawling the 500 most popular .cat and .ad domains
7. the Catalan Government Crawling, obtained by crawling the .gencat domain and subdomains, belonging to the Catalan Government
8. the ACN corpus with 220k news items from March 2015 until October 2020, crawled from the [Catalan News Agency](https://www.acn.cat/)
To obtain a high-quality training corpus, each corpus has been preprocessed with a pipeline of operations, including, among others,
sentence splitting, language detection, filtering of badly formed sentences and deduplication of repetitive content.
During the process, document boundaries are kept.
Finally, the corpora are concatenated and further global deduplication among the corpora is applied.
The final training corpus consists of about 1,8B tokens.
## Tokenization and pretraining
The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2)
used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens.
The BERTa pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model
with the same hyperparameters as in the original work.
The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM.
## Evaluation
## CLUB benchmark
The BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB),
which has been created along with the model.
It contains the following tasks and their related datasets:
1. Part-of-Speech Tagging (POS)
Catalan-Ancora: from the [Universal Dependencies treebank](https://github.com/UniversalDependencies/UD_Catalan-AnCora) of the well-known Ancora corpus
2. Named Entity Recognition (NER)
**[AnCora Catalan 2.0.0](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: extracted named entities from the original [Ancora](https://doi.org/10.5281/zenodo.4762030) version,
filtering out some unconventional ones, like book titles, and transcribed them into a standard CONLL-IOB format
3. Text Classification (TC)
**[TeCla](https://doi.org/10.5281/zenodo.4627197)**: consisting of 137k news pieces from the Catalan News Agency ([ACN](https://www.acn.cat/)) corpus
4. Semantic Textual Similarity (STS)
**[Catalan semantic textual similarity](https://doi.org/10.5281/zenodo.4529183)**: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them,
scraped from the [Catalan Textual Corpus](https://doi.org/10.5281/zenodo.4519349)
5. Question Answering (QA):
**[ViquiQuAD](https://doi.org/10.5281/zenodo.4562344)**: consisting of more than 15,000 questions outsourced from Catalan Wikipedia randomly chosen from a set of 596 articles that were originally written in Catalan.
**[XQuAD](https://doi.org/10.5281/zenodo.4526223)**: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia used only as a _test set_
Here are the train/dev/test splits of the datasets:
| Task (Dataset) | Total | Train | Dev | Test |
|:--|:--|:--|:--|:--|
| NER (Ancora) |13,581 | 10,628 | 1,427 | 1,526 |
| POS (Ancora)| 16,678 | 13,123 | 1,709 | 1,846 |
| STS | 3,073 | 2,073 | 500 | 500 |
| TC (TeCla) | 137,775 | 110,203 | 13,786 | 13,786|
| QA (ViquiQuAD) | 14,239 | 11,255 | 1,492 | 1,429 |
_The fine-tuning on downstream tasks has been performed with the HuggingFace [**Transformers**](https://github.com/huggingface/transformers) library_
## Results
Below are the evaluation results on the CLUB tasks, compared with the multilingual mBERT and XLM-RoBERTa models and
the Catalan WikiBERT-ca model:
| Task | NER (F1) | POS (F1) | STS (Pearson) | TC (accuracy) | QA (ViquiQuAD) (F1/EM) | QA (XQuAD) (F1/EM) |
| ------------|:-------------:| -----:|:------|:-------|:------|:----|
| BERTa | **88.13** | **98.97** | **79.73** | **74.16** | **86.97/72.29** | **68.89/48.87** |
| mBERT | 86.38 | 98.82 | 76.34 | 70.56 | 86.97/72.22 | 67.15/46.51 |
| XLM-RoBERTa | 87.66 | 98.89 | 75.40 | 71.68 | 85.50/70.47 | 67.10/46.42 |
| WikiBERT-ca | 77.66 | 97.60 | 77.18 | 73.22 | 85.45/70.75 | 65.21/36.60 |
## Intended uses & limitations
The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section).
However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification or Named Entity Recognition.
---
## Using BERTa
## Load model and tokenizer
``` python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-base-ca-cased")
model = AutoModelForMaskedLM.from_pretrained("PlanTL-GOB-ES/roberta-base-ca-cased")
```
## Fill Mask task
Below is an example of how to use the masked language modelling task with a pipeline.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='PlanTL-GOB-ES/roberta-base-ca-cased')
>>> unmasker("Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.")
[
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.4177263379096985,
"token": 734,
"token_str": " Barcelona"
},
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.10696165263652802,
"token": 3849,
"token_str": " Badalona"
},
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.08135009557008743,
"token": 19349,
"token_str": " Collserola"
},
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.07330769300460815,
"token": 4974,
"token_str": " Terrassa"
},
{
"sequence": " Situada a la costa de la mar Mediterrània, <mask> s'assenta en una plana formada "
"entre els deltes de les desembocadures dels rius Llobregat, al sud-oest, "
"i Besòs, al nord-est, i limitada pel sud-est per la línia de costa,"
"i pel nord-oest per la serralada de Collserola "
"(amb el cim del Tibidabo, 516,2 m, com a punt més alt) que segueix paral·lela "
"la línia de costa encaixant la ciutat en un perímetre molt definit.",
"score": 0.03317456692457199,
"token": 14333,
"token_str": " Gavà"
}
]
```
This model was originally published as [bsc/roberta-base-ca-cased](https://huggingface.co/bsc/roberta-base-ca-cased).
## Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. |
datummd/NCBI_BC5CDR_disease | dc3c67689f98311940b9812352a624695554857f | 2021-08-31T13:59:31.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:ncbi_disease",
"dataset:BC5CDR-diseases",
"dataset:LitCOVID-pubtator",
"transformers",
"BioBERT",
"Diseases",
"NER",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | datummd | null | datummd/NCBI_BC5CDR_disease | 167 | 4 | transformers | 3,840 | ---
language:
- en
tags:
- BioBERT
- Diseases
- NER
license: apache-2.0
datasets:
- ncbi_disease
- BC5CDR-diseases
- LitCOVID-pubtator
---
BioBERT model fine-tuned on the NER task with the BC5CDR-diseases and NCBI-diseases corpora, along with selected PubTator annotations from the LitCOVID dataset.
The model was fine-tuned for use in the datummd/bionlp system, which is available at: https://github.com/datummd/bionlp
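A hedged usage sketch with the `transformers` token-classification pipeline (the clinical sentence is a made-up example):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="datummd/NCBI_BC5CDR_disease",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("The patient was diagnosed with type 2 diabetes and chronic kidney disease."))
```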
|
mse30/bart-base-finetuned-pubmed | 2dcf6798d3889087bf314087dfc84a22c92e26d1 | 2021-10-14T15:19:57.000Z | [
"pytorch",
"bart",
"text2text-generation",
"dataset:scientific_papers",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mse30 | null | mse30/bart-base-finetuned-pubmed | 167 | null | transformers | 3,841 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: bart-base-finetuned-pubmed
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scientific_papers
type: scientific_papers
args: pubmed
metrics:
- name: Rouge1
type: rouge
value: 9.1984
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-pubmed
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9804
- Rouge1: 9.1984
- Rouge2: 4.3091
- Rougel: 7.9739
- Rougelsum: 8.6759
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.2869 | 1.0 | 29981 | 2.1241 | 9.0852 | 4.1152 | 7.842 | 8.5395 | 20.0 |
| 2.1469 | 2.0 | 59962 | 2.0225 | 9.1609 | 4.2437 | 7.9311 | 8.6273 | 20.0 |
| 2.113 | 3.0 | 89943 | 1.9959 | 9.3086 | 4.3305 | 8.0363 | 8.7713 | 20.0 |
| 2.0632 | 4.0 | 119924 | 1.9804 | 9.1984 | 4.3091 | 7.9739 | 8.6759 | 20.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
praeclarum/cuneiform | df2918a9626a4c09fb82acc73fdbd409d654cfb2 | 2022-07-20T02:27:31.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"cuneiform",
"akkadian",
"sumerian",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | praeclarum | null | praeclarum/cuneiform | 167 | null | transformers | 3,842 | ---
license: mit
tags:
- cuneiform
- akkadian
- sumerian
---
# Sumerian and Akkadian Cuneiform Language Translator
This is a translation network that understands Sumerian and Akkadian languages written in cuneiform.
It was trained on cuneiform transcribed in the CDLI ATF format. For example:
```text
translate Akkadian to English: 1(disz){d}szul3-ma-nu-_sag man gal?_-u2 _man_ dan-nu _man kisz_
```
The network was trained to translate from the ancient languages:
* Akkadian
* Sumerian
written in transcribed cuneiform to English.
The network requires a prompt telling it the direction of translation. For example:
* `translate Akkadian to English: `
* `translate English to Sumerian: `
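A hedged sketch of how these prompts can be fed to the model with the standard seq2seq classes, reusing the Akkadian example from earlier in this card:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("praeclarum/cuneiform")
model = AutoModelForSeq2SeqLM.from_pretrained("praeclarum/cuneiform")

prompt = "translate Akkadian to English: 1(disz){d}szul3-ma-nu-_sag man gal?_-u2 _man_ dan-nu _man kisz_"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```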
## Limitations
It was *not* trained to translate between the ancient languages but it is capable of it
(`translate Sumerian to Akkadian: `). Buyer beware.
Its vocabulary does not include ā, ḫ, ī, ř, š, ṣ, or ū so those letters are replaced with the unadorned letter (or "sh" in the case of š and ṣ).
|
Bman/DialoGPT-medium-shrek | e094faecd63fc4c0b604ea73b30425a5853d8fd0 | 2022-07-25T02:15:16.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Bman | null | Bman/DialoGPT-medium-shrek | 167 | null | transformers | 3,843 | ---
tags:
- conversational
---
# Shrek DialoGPT Model |
Alvenir/wav2vec2-base-da | 912002487cffc16dbedcad24db521596a05ef33c | 2021-11-28T11:35:11.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"da",
"transformers",
"speech",
"license:apache-2.0"
] | null | false | Alvenir | null | Alvenir/wav2vec2-base-da | 166 | 4 | transformers | 3,844 | ---
language: da
tags:
- speech
license: apache-2.0
---
# Wav2vec2-base for Danish
This wav2vec2-base model has been pretrained on ~1300 hours of Danish speech data. The pretraining data consists of podcasts and audiobooks and is unfortunately not publicly available. However, we were allowed to distribute the pretrained model.
This model was pretrained on 16kHz sampled speech audio. When using the model, make sure to use speech audio sampled at 16kHz.
The pre-training was done using the fairseq library in January 2021.
It needs to be fine-tuned to perform speech recognition.
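As a quick sanity check before fine-tuning, here is a minimal sketch that extracts frame-level features with Transformers (the feature extractor is built manually in case the repository does not ship a preprocessor config, and the random input is a stand-in for real 16kHz Danish speech):
```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "Alvenir/wav2vec2-base-da"
model = Wav2Vec2Model.from_pretrained(model_id)
feature_extractor = Wav2Vec2FeatureExtractor(sampling_rate=16_000)

# One second of dummy audio as a stand-in for real speech sampled at 16kHz
speech = np.random.randn(16_000).astype(np.float32)
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(inputs.input_values).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size) for the base architecture
```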
# Finetuning
In order to finetune the model to speech recognition, you can draw inspiration from this [notebook tutorial](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F) or [this blog post tutorial](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2). |
HooshvareLab/roberta-fa-zwnj-base-ner | eb045188128311055e23e2ac0941e76071fcdbd6 | 2021-05-20T11:55:34.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"fa",
"transformers",
"autotrain_compatible"
] | token-classification | false | HooshvareLab | null | HooshvareLab/roberta-fa-zwnj-base-ner | 166 | null | transformers | 3,845 | ---
language: fa
---
# RobertaNER
This model was fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/) that covers ten types of entities:
- Date (DAT)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Money (MON)
- Organization (ORG)
- Percent (PCT)
- Person (PER)
- Product (PRO)
- Time (TIM)
## Dataset Information
| | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM |
|:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|
| Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 |
| Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 |
| Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 |
## Evaluation
The following tables summarize the scores obtained by the model, overall and per class.
**Overall**
| Model | accuracy | precision | recall | f1 |
|:----------:|:--------:|:---------:|:--------:|:--------:|
| Roberta | 0.994849 | 0.949816 | 0.960235 | 0.954997 |
**Per entities**
| | number | precision | recall | f1 |
|:---: |:------: |:---------: |:--------: |:--------: |
| DAT | 407 | 0.844869 | 0.869779 | 0.857143 |
| EVE | 256 | 0.948148 | 1.000000 | 0.973384 |
| FAC | 248 | 0.957529 | 1.000000 | 0.978304 |
| LOC | 2884 | 0.965422 | 0.968100 | 0.966759 |
| MON | 98 | 0.937500 | 0.918367 | 0.927835 |
| ORG | 3216 | 0.943662 | 0.958333 | 0.950941 |
| PCT | 94 | 1.000000 | 0.968085 | 0.983784 |
| PER | 2646 | 0.957030 | 0.959562 | 0.958294 |
| PRO | 318 | 0.963636 | 1.000000 | 0.981481 |
| TIM | 43 | 0.739130 | 0.790698 | 0.764045 |
## How To Use
You can use this model with the Transformers pipeline for NER.
### Installing requirements
```bash
pip install transformers
```
### How to predict using pipeline
```python
from transformers import AutoTokenizer
from transformers import AutoModelForTokenClassification # for pytorch
from transformers import TFAutoModelForTokenClassification # for tensorflow
from transformers import pipeline
model_name_or_path = "HooshvareLab/roberta-fa-zwnj-base-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch
# model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند."
ner_results = nlp(example)
print(ner_results)
```
## Questions?
Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo. |
bespin-global/klue-bert-base-aihub-mrc | bce73303f9411910709d5fcf097715446c047344 | 2022-06-18T05:09:55.000Z | [
"pytorch",
"bert",
"question-answering",
"ko",
"dataset:aihub",
"transformers",
"mrc",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | question-answering | false | bespin-global | null | bespin-global/klue-bert-base-aihub-mrc | 166 | 1 | transformers | 3,846 | ---
language: ko
tags:
- bert
- mrc
datasets:
- aihub
license: cc-by-nc-4.0
---
## Demo
- [https://huggingface.co/spaces/bespin-global/Bespin-QuestionAnswering](https://huggingface.co/spaces/bespin-global/Bespin-QuestionAnswering)
## Finetuning
- Pretrain Model : [klue/bert-base](https://github.com/KLUE-benchmark/KLUE)
- Dataset for fine-tuning : [AIHub Machine Reading Comprehension dataset](https://aihub.or.kr/aidata/86)
- Standard dataset (25m) + explainable dataset (10m)
- Random Sampling (random_seed: 1234)
- Train : 30m
- Test : 5m
- Parameters of Training
```
{
"epochs": 4,
"batch_size":8,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"weight_decay": 0.01
}
```
## Usage
```python
## Load Transformers library
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
def predict_answer(qa_text_pair):
    context, question = qa_text_pair["context"], qa_text_pair["question"]
# Encoding
encodings = tokenizer(context, question,
max_length=512,
truncation=True,
padding="max_length",
return_token_type_ids=False,
return_offsets_mapping=True
)
encodings = {key: torch.tensor([val]).to(device) for key, val in encodings.items()}
# Predict
pred = model(encodings["input_ids"], attention_mask=encodings["attention_mask"])
start_logits, end_logits = pred.start_logits, pred.end_logits
token_start_index, token_end_index = start_logits.argmax(dim=-1), end_logits.argmax(dim=-1)
pred_ids = encodings["input_ids"][0][token_start_index: token_end_index + 1]
answer_text = tokenizer.decode(pred_ids)
# Offset
answer_start_offset = int(encodings['offset_mapping'][0][token_start_index][0][0])
answer_end_offset = int(encodings['offset_mapping'][0][token_end_index][0][1])
answer_offset = (answer_start_offset, answer_end_offset)
return {'answer_text':answer_text, 'answer_offset':answer_offset}
## Load fine-tuned MRC model by HuggingFace Model Hub ##
HUGGINGFACE_MODEL_PATH = "bespin-global/klue-bert-base-aihub-mrc"
tokenizer = AutoTokenizer.from_pretrained(HUGGINGFACE_MODEL_PATH)
model = AutoModelForQuestionAnswering.from_pretrained(HUGGINGFACE_MODEL_PATH).to(device)
## Predict ##
context = '''애플 M2(Apple M2)는 애플이 설계한 중앙 처리 장치(CPU)와 그래픽 처리 장치(GPU)의 ARM 기반 시스템이다.
인텔 코어(Intel Core)에서 맥킨토시 컴퓨터용으로 설계된 2세대 ARM 아키텍처이다. 애플은 2022년 6월 6일 WWDC에서 맥북 에어, 13인치 맥북 프로와 함께 M2를 발표했다.
애플 M1의 후속작이다. M2는 TSMC의 '향상된 5나노미터 기술' N5P 공정으로 만들어졌으며, 이전 세대 M1보다 25% 증가한 200억개의 트랜지스터를 포함하고 있으며, 최대 24기가바이트의 RAM과 2테라바이트의 저장공간으로 구성할 수 있다.
8개의 CPU 코어(성능 4개, 효율성 4개)와 최대 10개의 GPU 코어를 가지고 있다. M2는 또한 메모리 대역폭을 100 GB/s로 증가시킨다.
애플은 기존 M1 대비 CPU가 최대 18%, GPU가 최대 35% 향상됐다고 주장하고 있으며,[1] 블룸버그통신은 M2맥스에 CPU 코어 12개와 GPU 코어 38개가 포함될 것이라고 보도했다.'''
question = "m2가 m1에 비해 얼마나 좋아졌어?"
qa_text_pair = {'context':context, 'question':question}
result = predict_answer(qa_text_pair)
print('Answer Text: ', result['answer_text']) # 기존 M1 대비 CPU가 최대 18 %, GPU가 최대 35 % 향상
print('Answer Offset: ', result['answer_offset']) # (410, 446)
```
## Citing & Authors
<!--- Describe where people can find more information -->
[Jaehyeong](https://huggingface.co/jaehyeong) at [Bespin Global](https://www.bespinglobal.com/) |
deep-learning-analytics/segformer_semantic_segmentation | 398ff9faf7ef7379bb0cd96107efbcd19d2b4903 | 2022-01-04T12:25:46.000Z | [
"pytorch",
"segformer",
"transformers"
] | null | false | deep-learning-analytics | null | deep-learning-analytics/segformer_semantic_segmentation | 166 | null | transformers | 3,847 | Entry not found |
facebook/wav2vec2-xls-r-1b-en-to-15 | b072afc9a7ed212179ac4fe2755287b5e65dc2c5 | 2022-05-26T22:27:12.000Z | [
"pytorch",
"speech-encoder-decoder",
"automatic-speech-recognition",
"multilingual",
"en",
"de",
"tr",
"fa",
"sv",
"mn",
"zh",
"cy",
"ca",
"sl",
"et",
"id",
"ar",
"ta",
"lv",
"ja",
"dataset:common_voice",
"dataset:multilingual_librispeech",
"dataset:covost2",
"arxiv:2111.09296",
"transformers",
"speech",
"xls_r",
"xls_r_translation",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-xls-r-1b-en-to-15 | 166 | null | transformers | 3,848 | ---
language:
- multilingual
- en
- de
- tr
- fa
- sv
- mn
- zh
- cy
- ca
- sl
- et
- id
- ar
- ta
- lv
- ja
datasets:
- common_voice
- multilingual_librispeech
- covost2
tags:
- speech
- xls_r
- automatic-speech-recognition
- xls_r_translation
pipeline_tag: automatic-speech-recognition
license: apache-2.0
widget:
- example_title: English
src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3
---
# Wav2Vec2-XLS-R-1B-EN-15
Facebook's Wav2Vec2 XLS-R fine-tuned for **Speech Translation.**

This is a [SpeechEncoderDecoderModel](https://huggingface.co/transformers/model_doc/speechencoderdecoder.html) model.
The encoder was warm-started from the [**`facebook/wav2vec2-xls-r-1b`**](https://huggingface.co/facebook/wav2vec2-xls-r-1b) checkpoint and
the decoder from the [**`facebook/mbart-large-50`**](https://huggingface.co/facebook/mbart-large-50) checkpoint.
Consequently, the encoder-decoder model was fine-tuned on 15 `en` -> `{lang}` translation pairs of the [Covost2 dataset](https://huggingface.co/datasets/covost2).
The model can translate from spoken `en` (English) to the following written languages `{lang}`:
`en` -> {`de`, `tr`, `fa`, `sv-SE`, `mn`, `zh-CN`, `cy`, `ca`, `sl`, `et`, `id`, `ar`, `ta`, `lv`, `ja`}
For more information, please refer to Section *5.1.1* of the [official XLS-R paper](https://arxiv.org/abs/2111.09296).
## Usage
### Demo
The model can be tested on [**this space**](https://huggingface.co/spaces/facebook/XLS-R-1B-EN-15).
You can select the target language, record some audio in English,
and then sit back and see how well the checkpoint can translate the input.
### Example
As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the transcripts by passing the speech features to the model.
You can use the model directly via the ASR pipeline. By default, the checkpoint will
translate spoken English to written German. To change the written target language,
you need to pass the correct `forced_bos_token_id` to `generate(...)` to condition
the decoder on the correct target language.
To select the correct `forced_bos_token_id` given your chosen language id, please make use of the following mapping:
```python
MAPPING = {
"de": 250003,
"tr": 250023,
"fa": 250029,
"sv": 250042,
"mn": 250037,
"zh": 250025,
"cy": 250007,
"ca": 250005,
"sl": 250052,
"et": 250006,
"id": 250032,
"ar": 250001,
"ta": 250044,
"lv": 250017,
"ja": 250012,
}
```
As an example, if you would like to translate to Swedish, you can do the following:
```python
from datasets import load_dataset
from transformers import pipeline
# select correct `forced_bos_token_id`
forced_bos_token_id = MAPPING["sv"]
# replace following lines to load an audio file of your choice
librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
audio_file = librispeech_en[0]["file"]
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-xls-r-1b-en-to-15", feature_extractor="facebook/wav2vec2-xls-r-1b-en-to-15")
translation = asr(audio_file, forced_bos_token_id=forced_bos_token_id)
```
or step-by-step as follows:
```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-1b-en-to-15")
processor = Speech2Text2Processor.from_pretrained("facebook/wav2vec2-xls-r-1b-en-to-15")
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# select correct `forced_bos_token_id`
forced_bos_token_id = MAPPING["sv"]
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(inputs["input_values"], attention_mask=inputs["attention_mask"], forced_bos_token_id=forced_bos_token_id)
transcription = processor.batch_decode(generated_ids)
```
## Results `en` -> `{lang}`
See the row of **XLS-R (1B)** for the performance on [Covost2](https://huggingface.co/datasets/covost2) for this model.

## More XLS-R models for `{lang}` -> `en` Speech Translation
- [Wav2Vec2-XLS-R-300M-EN-15](https://huggingface.co/facebook/wav2vec2-xls-r-300m-en-to-15)
- [Wav2Vec2-XLS-R-1B-EN-15](https://huggingface.co/facebook/wav2vec2-xls-r-1b-en-to-15)
- [Wav2Vec2-XLS-R-2B-EN-15](https://huggingface.co/facebook/wav2vec2-xls-r-2b-en-to-15)
- [Wav2Vec2-XLS-R-2B-22-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16)
|
felixhusen/poem | 86fdf8a8870ca99cb9a5aabea2a2032bfc8b6491 | 2021-05-21T16:01:12.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | felixhusen | null | felixhusen/poem | 166 | null | transformers | 3,849 | Entry not found |
projectaligned/gpt2-xl-reddit-writingprompts-behavior-cloning | c8d27145b10de890168e638b9139c9628fbbc9de | 2021-05-23T11:41:20.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | projectaligned | null | projectaligned/gpt2-xl-reddit-writingprompts-behavior-cloning | 166 | null | transformers | 3,850 | _deprecated_
This model is fine-tuned on data from https://www.reddit.com/r/WritingPrompts/
- The model is based on gpt2-xl
- The prompt responses to the top 1000 prompts (by upvote) are used to fine-tune the model. |
sagorsarker/codeswitch-hineng-lid-lince | 69ccbc5d4f9c8d066dd76f7cdb4e376aa63187ac | 2021-05-19T01:00:45.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"hi",
"en",
"dataset:lince",
"transformers",
"codeswitching",
"hindi-english",
"language-identification",
"license:mit",
"autotrain_compatible"
] | token-classification | false | sagorsarker | null | sagorsarker/codeswitch-hineng-lid-lince | 166 | null | transformers | 3,851 | ---
language:
- hi
- en
datasets:
- lince
license: mit
tags:
- codeswitching
- hindi-english
- language-identification
---
# codeswitch-hineng-lid-lince
This is a pretrained model for **language identification** of `hindi-english` code-mixed text, trained on data from [LinCE](https://ritual.uh.edu/lince/home).
This model was trained for the repository below.
[https://github.com/sagorbrur/codeswitch](https://github.com/sagorbrur/codeswitch)
To install codeswitch:
```
pip install codeswitch
```
## Identify Language
* **Method-1**
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("sagorsarker/codeswitch-hineng-lid-lince")
model = AutoModelForTokenClassification.from_pretrained("sagorsarker/codeswitch-hineng-lid-lince")
lid_model = pipeline('ner', model=model, tokenizer=tokenizer)
lid_model("put any hindi english code-mixed sentence")
```
* **Method-2**
```py
from codeswitch.codeswitch import LanguageIdentification
lid = LanguageIdentification('hin-eng')
text = "" # your code-mixed sentence
result = lid.identify(text)
print(result)
```
|
vanilladucky/Friends_chatting_bot_redefined | d317a87fed7ccf6778ef530c9607266ce15294a6 | 2022-03-20T10:08:19.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | vanilladucky | null | vanilladucky/Friends_chatting_bot_redefined | 166 | null | transformers | 3,852 | ---
tags:
- conversational
---
# My Awesome Model
|
Felix92/doctr-dummy-torch-crnn-vgg16-bn | 0063b3cd672db5586e093b3f8f33ebde051d1707 | 2022-05-25T21:34:04.000Z | [
"pytorch",
"en",
"transformers",
"image-to-text"
] | image-to-text | false | Felix92 | null | Felix92/doctr-dummy-torch-crnn-vgg16-bn | 166 | null | transformers | 3,853 |
---
language: en
pipeline_tag: image-to-text
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
patrickvonplaten/hubert-xlarge-ls960-ft-4-gram | bf7facb778dd4a9613a5d936a726524e2706b372 | 2022-05-23T11:02:46.000Z | [
"pytorch",
"hubert",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"transformers",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/hubert-xlarge-ls960-ft-4-gram | 166 | 2 | transformers | 3,854 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: patrickvonplaten/hubert-xlarge-ls960-ft-4-gram
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.71
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.06
---
# Hubert-XLarge-ls960-ft + 4-gram
This model is identical to [Facebook's hubert-xlarge-ls960-ft](https://huggingface.co/facebook/hubert-xlarge-ls960-ft), but is
augmented with an English 4-gram language model. The `4-gram.arpa.gz` of [Librispeech's official ngrams](https://www.openslr.org/11) is used.
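For quick transcription, here is a minimal sketch with the ASR pipeline (the audio path is a placeholder; decoding with the 4-gram requires `pyctcdecode` and `kenlm` to be installed):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/hubert-xlarge-ls960-ft-4-gram",
)

# Replace with the path to a 16kHz mono audio file
print(asr("sample1.flac")["text"])
```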
## Evaluation
This code snippet shows how to evaluate **patrickvonplaten/hubert-xlarge-ls960-ft-4-gram** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torch
from jiwer import wer
model_id = "patrickvonplaten/hubert-xlarge-ls960-ft-4-gram"
librispeech_eval = load_dataset("librispeech_asr", "other", split="test")
model = AutoModelForCTC.from_pretrained(model_id).to("cuda")
processor = AutoProcessor.from_pretrained(model_id)
def map_to_pred(batch):
inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
inputs = {k: v.to("cuda") for k,v in inputs.items()}
with torch.no_grad():
logits = model(**inputs).logits
transcription = processor.batch_decode(logits.cpu().numpy()).text[0]
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print(wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 1.71 | 3.06 | |
ArthurZ/jukebox-5b-lyrics | 2de0fe8b3a95105ef4138ce7d946e930ee029df7 | 2022-07-26T06:02:43.000Z | [
"pytorch",
"jukebox",
"en",
"arxiv:2005.00341",
"transformers",
"MusicGeneration"
] | null | false | ArthurZ | null | ArthurZ/jukebox-5b-lyrics | 166 | 4 | transformers | 3,855 | ---
language:
- en
tags:
- MusicGeneration
---
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Jukebox
## Overview
The Jukebox model was proposed in [Jukebox: A generative model for music](https://arxiv.org/pdf/2005.00341.pdf)
by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford,
Ilya Sutskever.
The paper proposes a generative music model which can produce minute-long samples that can be conditioned on
artist, genre and lyrics.
The abstract from the paper is the following:
We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.
Tips:
This model is very slow for now, and takes 18h to generate a minute-long audio sample.
This model was contributed by [Arthur Zucker](https://huggingface.co/ArthurZ).
The original code can be found [here](https://github.com/openai/jukebox).
|
pyronear/rexnet1_0x | 689637bf1a679965d4be6ac94de3a0ddcab9401f | 2022-07-17T23:45:55.000Z | [
"pytorch",
"onnx",
"dataset:pyronear/openfire",
"arxiv:2007.00992",
"transformers",
"image-classification",
"license:apache-2.0"
] | image-classification | false | pyronear | null | pyronear/rexnet1_0x | 166 | null | transformers | 3,856 | ---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- pyronear/openfire
---
# ReXNet-1.0x model
Pretrained on a dataset for wildfire binary classification (soon to be shared). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:
```shell
pip install pyrovision
```
or using [conda](https://anaconda.org/pyronear/pyrovision):
```shell
conda install -c pyronear pyrovision
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.
```
## Usage instructions
```python
import torch

from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from pyrovision.models import model_from_hf_hub
model = model_from_hf_hub("pyronear/rexnet1_0x").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
Yuetian/bert-base-uncased-finetuned-plutchik-emotion | d1f799821dc5bd723b98d52501f3c6f8aa42893c | 2022-07-25T04:41:30.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"license:mit"
] | text-classification | false | Yuetian | null | Yuetian/bert-base-uncased-finetuned-plutchik-emotion | 166 | null | transformers | 3,857 | ---
license: mit
---
|
ivan-savchuk/cross-encoder-ms-marco-MiniLM-L-12-v2-tuned_mediqa-v1 | f1cf7d1a13fd9d331784f8825adff0161e26670e | 2022-07-28T12:45:53.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | ivan-savchuk | null | ivan-savchuk/cross-encoder-ms-marco-MiniLM-L-12-v2-tuned_mediqa-v1 | 166 | null | transformers | 3,858 | Entry not found |
anas/wav2vec2-large-xlsr-arabic | f82ee80d0276f42e4f607efd9d14f452e5756004 | 2021-07-05T19:27:53.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"ar",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anas | null | anas/wav2vec2-large-xlsr-arabic | 165 | 0 | transformers | 3,859 | ---
language: ar
datasets:
- common_voice: Common Voice Corpus 4
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Hasni XLSR Wav2Vec2 Large 53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ar
type: common_voice
args: ar
metrics:
- name: Test WER
type: wer
value: 52.18
---
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice Corpus 4](https://commonvoice.mozilla.org/en/datasets) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model.to("cuda")
chars_to_ignore_regex = '[\,\؟\.\!\-\;\\:\'\"\☭\«\»\؛\—\ـ\_\،\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
batch["sentence"] = re.sub('[a-z]','',batch["sentence"])
batch["sentence"] = re.sub("[إأٱآا]", "ا", batch["sentence"])
noise = re.compile(""" ّ | # Tashdid
َ | # Fatha
ً | # Tanwin Fath
ُ | # Damma
ٌ | # Tanwin Damm
ِ | # Kasra
ٍ | # Tanwin Kasr
ْ | # Sukun
ـ # Tatwil/Kashida
""", re.VERBOSE)
batch["sentence"] = re.sub(noise, '', batch["sentence"])
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 52.18 %
## Training
The Common Voice Corpus 4 `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://github.com/anashas/Fine-Tuning-of-XLSR-Wav2Vec2-on-Arabic)
Twitter: [here](https://twitter.com/hasnii_anas)
Email: [email protected] |
ushikado/yuyuyui-chatbot | 0c7304176fef97ebd6a1d547c3323d74d9550df2 | 2021-05-23T13:27:10.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"ja",
"transformers"
] | text-generation | false | ushikado | null | ushikado/yuyuyui-chatbot | 165 | 2 | transformers | 3,860 | ---
language: ja
inference: false
---
# yuyuyui-chatbot
This model is based on [rinna/japanese-gpt2-medium](https://huggingface.co/rinna/japanese-gpt2-medium) and finetuned on Yuyuyui scenario corpus.
## Usage
The model takes a sequence of utterances (context) to generate a subsequent utterance (response). Each utterance begins with a **character token** and ends with an **EOS token**. Use the unspecified character token `<某>` for user inputs.
Put a character token after your question or query to generate a response from a specific character. In this case, make sure that an EOS token is not appended automatically by the tokenizer. Otherwise the model will interpret the trailing EOS as an empty utterance and try to add another random character token.
Simple example:
```python
from transformers import T5Tokenizer, AutoModelForCausalLM
tokenizer = T5Tokenizer.from_pretrained("ushikado/yuyuyui-chatbot")
model = AutoModelForCausalLM.from_pretrained("ushikado/yuyuyui-chatbot")
query_text = "<某>神樹様について教えてください。</s><上里 ひなた>"
input_tensor = tokenizer.encode(query_text, add_special_tokens=False, return_tensors="pt")
output_list = model.generate(input_tensor, max_length=100, do_sample=True, pad_token_id=tokenizer.eos_token_id)
output_text = tokenizer.decode(output_list[0])
print(output_text)
"""
<某> 神樹様について教えてください。</s> <上里 ひなた> 造反神は、神樹様の分裂を煽り出して、神樹様の中の一体感を高める存在です。</s>
"""
```
Accumulate dialog history to make responses more context-aware:
```python
import re

class Interlocutor():
def __init__(self, tokenizer, model, character_token, max_context_length=512, max_response_length=128):
self.tokenizer = tokenizer
self.model = model
self.character_token = character_token
self.max_context_length = max_context_length
self.max_response_length = max_response_length
self.context = ""
return
def generate(self, query):
nanigashi = self.tokenizer.additional_special_tokens[0]
nanigashi_id = self.tokenizer.additional_special_tokens_ids[0]
self.context += nanigashi + query + self.tokenizer.eos_token + self.character_token
context_tensor = self.tokenizer.encode(self.context, add_special_tokens=False, return_tensors="pt")
context_length = context_tensor.size()[-1]
if self.max_context_length < context_length:
context_tensor = context_tensor.narrow(1, context_length - self.max_context_length, self.max_context_length)
context_length = context_tensor.size()[-1]
max_length = context_length + self.max_response_length
context_tensor = self.model.generate(context_tensor, do_sample=True, max_length=max_length,
pad_token_id=self.tokenizer.eos_token_id)
self.context = re.sub(self.tokenizer.eos_token, "", self.tokenizer.decode(context_tensor[0]))
response = self.context[self.context.rindex(self.character_token) + len(self.character_token) : ].strip()
print(response)
interlocutor = Interlocutor(tokenizer, model, "<加賀城 雀>")
interlocutor.generate("何しようかな。")
"""
そうだなぁ。せっかく徳島に来たんだから、何か食べたいよなー。</s>
"""
interlocutor.generate("例えば?")
"""
スパムとかいう高級料理はちょっとなぁ。あとは可愛い雑貨とか、おやつとか。</s>
"""
interlocutor.generate("徳島ラーメンじゃないの?")
"""
あー、確か徳島ラーメンってのがあって、それも美味しいんだよね。</s>
"""
interlocutor.generate("ここから近いお店があるんだって。行ってみよう!")
"""
わー! 何だか賑やかでいい感じだね。</s>
"""
interlocutor.generate("さっそく注文するね。")
"""
んー! ずっーと揚げ鶏が好きだったけど、今日は初めてまるまる鶏肉を注文してみるよ。</s>
"""
print(interlocutor.context)
"""
<某> 何しようかな。</s> <加賀城 雀> そうだなぁ。せっかく徳島に来たんだから、何か食べたいよなー。</s> <某> 例えば?</s> <加賀城 雀> スパムとかいう高級料理はちょっとなぁ。あとは可愛い雑貨とか、おやつとか。</s> <某> 徳島ラーメンじゃないの?</s> <加賀城 雀> あー、確か徳島ラーメンってのがあって、それも美味しいんだよね。</s> <某> ここから近いお店があるんだって。行ってみよう!</s> <加賀城 雀> わー! 何だか賑やかでいい感じだね。</s> <某> さっそく注文するね。</s> <加賀城 雀> んー! ずっーと揚げ鶏が好きだったけど、今日は初めてまるまる鶏肉を注文してみるよ。</s>
"""
```
## List of character tokens
`<某>` is _unspecified (nanigashi)_. Use for user inputs or mobs.
```plain
<某>
<結城 友奈>
<東郷 美森>
<犬吠埼 風>
<犬吠埼 樹>
<三好 夏凜>
<乃木 園子>
<鷲尾 須美>
<三ノ輪 銀>
<乃木 若葉>
<上里 ひなた>
<土居 球子>
<伊予島 杏>
<郡 千景>
<高嶋 友奈>
<白鳥 歌野>
<藤森 水都>
<秋原 雪花>
<古波蔵 棗>
<楠 芽吹>
<加賀城 雀>
<弥勒 夕海子>
<山伏 しずく>
<山伏 シズク>
<国土 亜耶>
<赤嶺 友奈>
<弥勒 蓮華>
<桐生 静>
<安芸 真鈴>
<花本 美佳>
```
## Licence
TBD. |
mlnotes/tape | 2c72f2efa0520efafe6f50edfcc552226bf569cf | 2022-06-01T15:43:03.000Z | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | false | mlnotes | null | mlnotes/tape | 165 | null | transformers | 3,861 | Entry not found |
Felix92/doctr-dummy-torch-crnn-mobilenet-v3-small | dc97c7e0efaa522cebe1badd0de7fffbdec13a22 | 2022-05-25T21:33:45.000Z | [
"pytorch",
"en",
"transformers",
"image-to-text"
] | image-to-text | false | Felix92 | null | Felix92/doctr-dummy-torch-crnn-mobilenet-v3-small | 165 | null | transformers | 3,862 |
---
language: en
pipeline_tag: image-to-text
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
juliensimon/wav2vec2-conformer-rel-pos-large-finetuned-speech-commands | 660e621ec8c4ceea1702da292137b3bf938a4367 | 2022-06-27T21:43:27.000Z | [
"pytorch",
"wav2vec2-conformer",
"audio-classification",
"en",
"dataset:speech_commands",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | audio-classification | false | juliensimon | null | juliensimon/wav2vec2-conformer-rel-pos-large-finetuned-speech-commands | 165 | 1 | transformers | 3,863 | ---
license: apache-2.0
language: en
tags:
- generated_from_trainer
datasets:
- speech_commands
metrics:
- accuracy
model-index:
- name: wav2vec2-conformer-rel-pos-large-finetuned-speech-commands
results:
- task:
type: audio-classification
name: audio classification
dataset:
type: speech_commands
name: speech_commands
split: v0.02
metrics:
- type: accuracy
value: 0.9724
name: accuracy
---
# wav2vec2-conformer-rel-pos-large-finetuned-speech-commands
### Model description
This model is a fine-tuned version of [facebook/wav2vec2-conformer-rel-pos-large](https://huggingface.co/facebook/wav2vec2-conformer-rel-pos-large) on the [speech_commands](https://huggingface.co/datasets/speech_commands) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5245
- Accuracy: 0.9724
#### Intended uses & limitations
The model can spot one of the following keywords: "Yes", "No", "Up", "Down", "Left", "Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine", "Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow", "Backward", "Forward", "Follow", "Learn", "Visual".
The repository includes sample files that I recorded (WAV, 16Khz sampling rate, mono). The simplest way to use the model is with the ```pipeline``` API:
```
>>> from transformers import pipeline
>>> p = pipeline("audio-classification", model="juliensimon/wav2vec2-conformer-rel-pos-large-finetuned-speech-commands")
>>> p("up16k.wav")
[{'score': 0.7008192539215088, 'label': 'up'}, {'score': 0.04346614331007004, 'label': 'off'}, {'score': 0.029526518657803535, 'label': 'left'}, {'score': 0.02905120886862278, 'label': 'stop'}, {'score': 0.027142534032464027, 'label': 'on'}]
>>> p("stop16k.wav")
[{'score': 0.6969656944274902, 'label': 'stop'}, {'score': 0.03391443192958832, 'label': 'up'}, {'score': 0.027382319793105125, 'label': 'seven'}, {'score': 0.020835857838392258, 'label': 'five'}, {'score': 0.018051736056804657, 'label': 'down'}]
>>> p("marvin16k.wav")
[{'score': 0.5276530981063843, 'label': 'marvin'}, {'score': 0.04645705968141556, 'label': 'down'}, {'score': 0.038583893328905106, 'label': 'backward'}, {'score': 0.03578080236911774, 'label': 'wow'}, {'score': 0.03178196772933006, 'label': 'bird'}]
```
You can also use the model with the ```Auto``` API:
```
>>> import torch, librosa
>>> from transformers import AutoModelForAudioClassification, Wav2Vec2FeatureExtractor
>>> feature_extractor = Wav2Vec2FeatureExtractor()
>>> model = AutoModelForAudioClassification.from_pretrained("juliensimon/wav2vec2-conformer-rel-pos-large-finetuned-speech-commands")
>>> audio, rate = librosa.load("up16k.wav", sr = 16000)
>>> inputs = feature_extractor(audio, sampling_rate=16000, return_tensors = "pt")
>>> logits = model(inputs['input_values'])
>>> logits
SequenceClassifierOutput(loss=None, logits=tensor([[-0.4635, -1.0112, 4.7935, 0.8528, 1.6265, 0.6456, 1.5423, 2.0132,
1.6103, 0.5847, -2.2526, 0.8839, 0.8163, -1.5655, -1.4160, -0.4196,
-0.1097, -1.8827, 0.6609, -0.2022, 0.0971, -0.6205, 0.4492, 0.0926,
-2.4848, 0.2630, -0.4584, -2.4327, -1.1654, 0.3897, -0.3374, -1.2418,
-0.1045, 0.2827, -1.5667, -0.0963]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None)
>>> classes = torch.softmax(logits.logits, dim=1)
>>> torch.set_printoptions(precision=3, sci_mode=False)
>>> classes
tensor([[ 0.004, 0.002, 0.701, 0.014, 0.030, 0.011,
0.027, 0.043, 0.029, 0.010, 0.001, 0.014,
0.013, 0.001, 0.001, 0.004, 0.005, 0.001,
0.011, 0.005, 0.006, 0.003, 0.009, 0.006,
0.000, 0.008, 0.004, 0.001, 0.002, 0.009,
0.004, 0.002, 0.005, 0.008, 0.001, 0.005]],
grad_fn=<SoftmaxBackward0>)
>>> top_class = torch.argmax(logits.logits, dim=1)
>>> top_class
tensor([2])
>>> model.config.id2label[top_class.numpy()[0]]
'up'
```
### Training and evaluation data
- subset: v0.02
- full training set
- full validation set
### Training procedure
The model was fine-tuned on [Amazon SageMaker](https://aws.amazon.com/sagemaker), using an [ml.p3dn.24xlarge](https://aws.amazon.com/fr/ec2/instance-types/p3/) instance (8 NVIDIA V100 GPUs). Total training time for 10 epochs was 4.5 hours.
#### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
#### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2901 | 1.0 | 83 | 2.0542 | 0.8875 |
| 1.8375 | 2.0 | 166 | 1.5610 | 0.9316 |
| 1.4957 | 3.0 | 249 | 1.1850 | 0.9558 |
| 1.1917 | 4.0 | 332 | 0.9159 | 0.9695 |
| 1.0449 | 5.0 | 415 | 0.7624 | 0.9687 |
| 0.9319 | 6.0 | 498 | 0.6444 | 0.9715 |
| 0.8559 | 7.0 | 581 | 0.5806 | 0.9711 |
| 0.8199 | 8.0 | 664 | 0.5394 | 0.9721 |
| 0.7949 | 9.0 | 747 | 0.5245 | 0.9724 |
| 0.7975 | 10.0 | 830 | 0.5256 | 0.9721 |
#### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
SushantGautam/SportsSum | 3a4104cd63c9be5577d966f71d101b8f9cd8c707 | 2022-07-23T06:45:09.000Z | [
"pytorch",
"led",
"text2text-generation",
"en",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | SushantGautam | null | SushantGautam/SportsSum | 165 | null | transformers | 3,864 | ---
language:
- en
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: SportsSum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SportsSum
This model is a fine-tuned version of [allenai/led-base-16384-ms2](https://huggingface.co/allenai/led-base-16384-ms2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2759
- Rouge1: 52.3608
- Rouge2: 27.6526
- Rougel: 31.8509
- Rougelsum: 49.9086
- Gen Len: 248.1199
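A minimal inference sketch with the `summarization` pipeline (the input is a placeholder; based on the model name, the expected input appears to be sports match commentary, and the generation lengths are assumptions):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="SushantGautam/SportsSum")

# Placeholder input: paste the full play-by-play commentary of a match here
commentary = "90' The referee blows the final whistle after a tense second half..."
print(summarizer(commentary, max_length=256, min_length=64)[0]["summary_text"])
```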
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 36
- eval_batch_size: 36
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BoxCrab/DialoGPT-small-Strider | 554e2d99f54c79bcb61f9cac90eb6eeedf3a84e8 | 2022-07-16T07:50:45.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | BoxCrab | null | BoxCrab/DialoGPT-small-Strider | 165 | null | transformers | 3,865 | ---
tags:
- conversational
---
# Dirk Strider DialoGPT Model |
bhadresh-savani/electra-base-emotion | 1e8c5c4dcdc26c845b56b7ca5c6499f610ea8c8b | 2022-07-14T07:01:38.000Z | [
"pytorch",
"tf",
"jax",
"electra",
"text-classification",
"en",
"dataset:emotion",
"transformers",
"emotion",
"license:apache-2.0",
"model-index"
] | text-classification | false | bhadresh-savani | null | bhadresh-savani/electra-base-emotion | 164 | null | transformers | 3,866 | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- Accuracy, F1 Score
model-index:
- name: bhadresh-savani/electra-base-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
verified: true
- name: Precision Macro
type: precision
value: 0.911532655431019
verified: true
- name: Precision Micro
type: precision
value: 0.9265
verified: true
- name: Precision Weighted
type: precision
value: 0.9305456360257519
verified: true
- name: Recall Macro
type: recall
value: 0.8536923122511134
verified: true
- name: Recall Micro
type: recall
value: 0.9265
verified: true
- name: Recall Weighted
type: recall
value: 0.9265
verified: true
- name: F1 Macro
type: f1
value: 0.8657529340483895
verified: true
- name: F1 Micro
type: f1
value: 0.9265
verified: true
- name: F1 Weighted
type: f1
value: 0.924844632421077
verified: true
- name: loss
type: loss
value: 0.3268870413303375
verified: true
---
# Electra-base-emotion
## Model description:
## Model Performance Comparision on Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Sample per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
| [Electra-base-emotion](https://huggingface.co/bhadresh-savani/electra-base-emotion) | 91.95 | 91.90 | 472.72 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='bhadresh-savani/electra-base-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", )
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.0006792712374590337},
{'label': 'joy', 'score': 0.9959300756454468},
{'label': 'love', 'score': 0.0009452480007894337},
{'label': 'anger', 'score': 0.0018055217806249857},
{'label': 'fear', 'score': 0.00041110432357527316},
{'label': 'surprise', 'score': 0.0002288572577526793}
]]
"""
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
## Eval results
```json
{
'epoch': 8.0,
'eval_accuracy': 0.9195,
'eval_f1': 0.918975455617076,
'eval_loss': 0.3486028015613556,
'eval_runtime': 4.2308,
'eval_samples_per_second': 472.726,
'eval_steps_per_second': 7.564
}
```
## Reference:
* [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/) |
nlptown/flaubert_small_cased_sentiment | e024d6296b3f800fa6cc165a9b5e4a5adf0dff94 | 2022-05-17T07:43:58.000Z | [
"pytorch",
"tf",
"flaubert",
"text-classification",
"fr",
"dataset:amazon_reviews_multi",
"transformers",
"license:mit"
] | text-classification | false | nlptown | null | nlptown/flaubert_small_cased_sentiment | 164 | 1 | transformers | 3,867 | ---
language:
- fr
datasets:
- amazon_reviews_multi
license: mit
---
# flaubert_small_cased_sentiment
This is a `flaubert_small_cased` model finetuned for sentiment analysis on product reviews in French. It predicts the sentiment of the review, from `very_negative` (1 star) to `very_positive` (5 stars).
This model is intended for direct use as a sentiment analysis model for French product reviews, or for further finetuning on related sentiment analysis tasks.
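A minimal usage sketch with the `text-classification` pipeline (the example review and the printed output are illustrative; the labels range from `very_negative` to `very_positive` as described above):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nlptown/flaubert_small_cased_sentiment",
)

review = "Le produit est arrivé cassé et le service client ne répond pas."
print(classifier(review))  # e.g. [{'label': 'very_negative', 'score': ...}]
```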
## Training data
The training data consists of the French portion of `amazon_reviews_multi`, supplemented with another 140,000 similar reviews.
## Accuracy
The finetuned model was evaluated on the French test set of `amazon_reviews_multi`.
- Accuracy (exact) is the exact match on the number of stars.
- Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer.
| Language | Accuracy (exact) | Accuracy (off-by-1) |
| -------- | ---------------------- | ------------------- |
| French | 61.56% | 95.66% |
## Contact
[NLP Town](https://www.nlp.town) offers a suite of sentiment models for a wide range of languages, including an improved multilingual model through [RapidAPI](https://rapidapi.com/nlp-town-nlp-town-default/api/multilingual-sentiment-analysis2/).
Feel free to contact us for questions, feedback and/or requests for similar models. |
asahi417/lmqg-mbart-large-cc25-jaquad | f4eaebfbfa4faca6beafd3c2635f8f0f151dc216 | 2022-06-09T12:16:42.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"ja",
"dataset:asahi417/qg_jaquad",
"transformers",
"question generation",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-mbart-large-cc25-jaquad | 164 | null | transformers | 3,868 | ---
language: ja
tags:
- question generation
license: cc-by-4.0
datasets:
- asahi417/qg_jaquad
metrics:
- bleu
- meteor
- rouge
- bertscore
widget:
- text: "ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。"
example_title: "Question Generation Example 1"
- text: "『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる<hl>3万5000部<hl>が刷られた。他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。"
example_title: "Question Generation Example 2"
- text: "フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め<hl>30数点<hl>しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。以下には若干の疑問作も含め、37点の基本情報を記載し、各作品について略説する。収録順序、推定制作年代は『「フェルメールとその時代展」図録』による。日本語の作品タイトルについては、上掲図録のほか、『「フェルメール展」図録』、『フェルメール生涯と作品』による。便宜上「1650年代の作品」「1660年代の作品」「1670年代の作品」の3つの節を設けたが、フェルメールの作品には制作年代不明のものが多く、推定制作年代については研究者や文献によって若干の差がある。"
example_title: "Question Generation Example 3"
- text: "東大寺は、六宗兼学の場として世に広く知られるようになった。六宗とはすなわち、法相宗(法性宗)、三論宗、倶舎宗(薩婆多宗)、成実宗、華厳宗(花厳宗)、律宗のことであり、すべて<hl>中国<hl>から起こり、伝来したものであった。当時の宗とは、教団というよりは仏教教理の学派に近い。それゆえ、兼学の場ができたとも言える。この様な兼学の形態は、南都の寺院では広く見られたものである。この六宗兼学の場(後、真言、天台加わって八宗兼学の場)の性格は、現在の東大寺でも見られるが、中でも重んじられたのが、本尊の大仏の性格が華厳経の教えに則ったものであることからも分かるように、華厳宗である。"
example_title: "Question Generation Example 4"
pipeline_tag: text2text-generation
---
# MBART LARGE CC25 fine-tuned for Japanese Question Generation
MBART LARGE CC25 Model fine-tuned on Japanese question generation dataset (JaQuAD) with an extensive hyper-parameter search.
- [Online Demo](https://autoqg.net/)
- [Project Repository](https://github.com/asahi417/lm-question-generation)
## Overview
**Language model:** mbart-large-cc25
**Language:** Japanese (ja)
**Downstream-task:** Question Generation
**Training data:** JaQuAD
**Eval data:** JaQuAD
**Code:** See [our repository](https://github.com/asahi417/lm-question-generation)
## Usage
### In Transformers
```python
from transformers import pipeline
model_path = 'asahi417/lmqg-mbart-large-cc25-jaquad'
pipe = pipeline("text2text-generation", model_path)
# Question Generation
paragraph = '東大寺は、六宗兼学の場として世に広く知られるようになった。六宗とはすなわち、法相宗(法性宗)、三論宗、倶舎宗(薩婆多宗)、成実宗、華厳宗(花厳宗)、律宗のことであり、すべて中国から起こり、伝来したものであった。'
# highlight an answer in the paragraph to generate question
answer = '中国'
highlight_token = '<hl>'
input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': '六宗はどの国から起こったものでありますか。'}]
```
## Evaluations
Evaluation on the test set of [JaQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_jaquad).
All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).
| BLEU 4 | ROUGE L | METEOR | BERTScore |
| ------ | -------- | ------ | --------- |
| 32.15 | 52.94 | 29.97 | 82.25 |
- [metric file](https://huggingface.co/asahi417/lmqg-mbart-large-cc25-jaquad/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_jaquad.default.json)
## Fine-tuning Parameters
We ran grid search to find the best hyper-parameters and continued fine-tuning until the validation metric decreased.
The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-mbart-large-cc25-jaquad/raw/main/trainer_config.json), and the fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
## Citation
TBA
|
jplu/adel-dbpedia-linking | 4187ee10a50fb812a3a798840b21df7b748faddd | 2022-07-22T14:20:54.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | jplu | null | jplu/adel-dbpedia-linking | 164 | null | transformers | 3,869 | Entry not found |
MoritzLaurer/DeBERTa-v3-base-mnli | 3d5861d4fd73bd03dcb8a414558dfb53d2b75188 | 2022-01-15T14:51:04.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"arxiv:2006.03654",
"transformers",
"zero-shot-classification"
] | zero-shot-classification | false | MoritzLaurer | null | MoritzLaurer/DeBERTa-v3-base-mnli | 163 | 2 | transformers | 3,870 | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
pipeline_tag: zero-shot-classification
---
# DeBERTa-v3-base-mnli-fever-anli
## Model description
This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs.
The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf). For a more powerful model, check out [DeBERTa-v3-base-mnli-fever-anli](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli) which was trained on even more data.
## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model_name = "MoritzLaurer/DeBERTa-v3-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
### Training data
This model was trained on the MultiNLI dataset, which consists of 392 702 NLI hypothesis-premise pairs.
### Training procedure
DeBERTa-v3-base-mnli was trained using the Hugging Face trainer with the following hyperparameters.
```
training_args = TrainingArguments(
num_train_epochs=5, # total number of training epochs
learning_rate=2e-05,
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_ratio=0.1, # number of warmup steps for learning rate scheduler
weight_decay=0.06, # strength of weight decay
fp16=True # mixed precision training
)
```
### Eval results
The model was evaluated using the matched test set and achieves 0.90 accuracy.
## Limitations and bias
Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.
### BibTeX entry and citation info
If you want to cite this model, please cite the original DeBERTa paper, the respective NLI datasets and include a link to this model on the Hugging Face hub.
### Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)
### Debugging and issues
Note that DeBERTa-v3 was released recently and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers==4.13 might solve some issues. |
asahi417/lmqg-bart-large-squad | 620ed38a2815c0defdb465331e9af5d39f292f66 | 2022-06-09T18:13:02.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:asahi417/qg_squad",
"transformers",
"question generation",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-bart-large-squad | 163 | null | transformers | 3,871 | ---
language:
- en
tags:
- question generation
license: mit
datasets:
- asahi417/qg_squad
metrics:
- bleu
- meteor
- rouge
- bertscore
- moverscore
widget:
- text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Example 1"
- text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Example 2"
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Example 3"
---
# BART LARGE fine-tuned for English Question Generation
BART LARGE model fine-tuned on the English question generation dataset (SQuAD) with an extensive hyper-parameter search.
- [Online Demo](https://autoqg.net/)
- [Project Repository](https://github.com/asahi417/lm-question-generation)
## Overview
**Language model:** facebook/bart-large
**Language:** English (en)
**Downstream-task:** Question Generation
**Training data:** SQuAD
**Eval data:** SQuAD
**Code:** See [our repository](https://github.com/asahi417/lm-question-generation)
## Usage
### In Transformers
```python
from transformers import pipeline
model_path = 'asahi417/lmqg-bart-large-squad'
pipe = pipeline("text2text-generation", model_path)
paragraph = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
# highlight an answer in the paragraph to generate question
answer = 'Etta James'
highlight_token = '<hl>'
input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
input_text = 'generate question: {}'.format(input_text) # add task specific prefix
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': 'What is the name of the biopic that Beyonce starred in?'}]
```
## Evaluations
Evaluation on the test set of [SQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_squad).
The results are comparable with the [leaderboard](https://paperswithcode.com/sota/question-generation-on-squad11) and previous works.
All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).
| BLEU 4 | ROUGE L | METEOR | BERTScore | MoverScore |
| ------ | -------- | ------ | --------- | ---------- |
| 26.16 | 53.84 | 27.07 | 91.00 | 64.99 |
- [metric file](https://huggingface.co/asahi417/lmqg-bart-large-squad/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_squad.default.json)
## Fine-tuning Parameters
We ran a grid search to find the best hyper-parameters and continued fine-tuning until the validation metric decreased.
The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-bart-large-squad/raw/main/trainer_config.json), and the fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
## Citation
TBA
|
gagan3012/keytotext | e72f603ed3cf09dc685c70f7f06465613cf42dad | 2021-03-11T20:23:32.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | gagan3012 | null | gagan3012/keytotext | 163 | null | transformers | 3,872 | # keytotext
The idea is to build a model that takes keywords as input and generates sentences as output.
### Model:
Two Models have been built:
- Using T5-base size = 850 MB can be found here: https://huggingface.co/gagan3012/keytotext
- Using T5-small size = 230 MB can be found here: https://huggingface.co/gagan3012/keytotext-small
#### Usage:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("gagan3012/keytotext-small")
model = AutoModelWithLMHead.from_pretrained("gagan3012/keytotext-small")
```
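Once the tokenizer and model above are loaded, generation is a standard seq2seq call. A minimal sketch (the exact keyword format the model was trained on is not documented here, so joining the keywords with spaces is an assumption):

```python
keywords = ["India", "Wedding"]
input_text = " ".join(keywords)  # assumption: plain space-separated keywords

input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```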
### Demo:
[](https://share.streamlit.io/gagan3012/keytotext/app.py)
https://share.streamlit.io/gagan3012/keytotext/app.py

### Example:
['India', 'Wedding'] -> We are celebrating today in New Delhi with three wedding anniversary parties.
|
josmunpen/mt5-small-spanish-summarization | 555ac6380d9199146da607252d2686ca36b4053e | 2021-11-03T09:47:51.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"es",
"dataset:larazonpublico",
"dataset:es",
"transformers",
"summarization",
"spanish",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | josmunpen | null | josmunpen/mt5-small-spanish-summarization | 163 | null | transformers | 3,873 |
---
language:
- es
thumbnail:
tags:
- summarization
- mt5
- spanish
license: apache-2.0
datasets:
- larazonpublico
- es
metrics:
- rouge
widget:
- text: "La Guardia Civil ha desarticulado un grupo organizado dedicado a copiar en los examenes teoricos para la obtencion del permiso de conducir. Para ello, empleaban receptores y camaras de alta tecnologia y operaban desde la misma sede del Centro de examenes de la Direccion General de Trafico (DGT) en Mostoles. Es lo que han llamado la Operacion pinga.
El grupo desarticulado ofrecia el servicio de transporte y tecnologia para copiar y poder aprobar. Por dicho servicio cobraban 1.000 euros. Los investigadores sorprendieron in fraganti a una mujer intentando copiar en el examen. Portaba una chaqueta con dispositivos electronicos ocultos, concretamente un telefono movil al que estaba conectada una camara que habia sido insertada en la parte frontal de la chaqueta para transmitir online el examen y que orientada al ordenador del Centro de Examenes en el que aparecen las preguntas, permitia visualizar las imagenes en otro ordenador alojado en el interior de un vehiculo estacionado en las inmediaciones del centro. En este vehiculo, se encontraban el resto del grupo desarticulado con varios ordenadores portatiles y tablets abiertos y conectados a paginas de test de la DGT para consultar las respuestas. Estos, comunicaban con la mujer que estaba en el aula haciendo el examen a traves de un diminuto receptor bluetooth que portaba en el interior de su oido.
Luis de Lama, portavoz de la Guardia Civil de Trafico destaca que los ciudadanos, eran de origen chino, y copiaban en el examen utilizando la tecnologia facilitada por una organizacion. Destaca que, ademas de parte del fraude que supone copiar en un examen muchos de estos ciudadanos desconocian el idioma, no hablan ni entienden el español lo que supone un grave riesgo para la seguridad vial por desconocer las señales y letreros que avisan en carretera de muchas incidencias.
"
---
# mt5-small-spanish-summarization
## Model description
This is an mt5-small model fine-tuned to generate headlines from the body of Spanish news articles.
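A minimal usage sketch (the generation settings below are illustrative, not the values used for the reported results):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "josmunpen/mt5-small-spanish-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "La Guardia Civil ha desarticulado un grupo organizado dedicado a copiar en los examenes teoricos para la obtencion del permiso de conducir..."
inputs = tokenizer(article, truncation=True, max_length=512, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```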
## Training data
The model was trained with 58,425 news articles extracted from the La Razón (31,477) and Público (26,948) newspapers. These articles belong to the following categories: "España", "Cultura", "Economía", "Igualdad" and "Política".
## Training procedure
It was trained with Google Colab's GPU Tesla P100-PCIE-16GB for 2 epochs.
### Hyperparameters
{evaluation_strategy = "epoch",
learning_rate = 2e-4,
per_device_train_batch_size = 6,
per_device_eval_batch_size = 6,
weight_decay = 0.01,
save_total_limit = 3,
num_train_epochs = 2,
predict_with_generate = True,
fp16 = False}
## Eval results
| metric | score |
| --- | ----- |
| rouge1 | 44.03 |
| rouge2 | 28.2900 |
| rougeL | 40.54 |
| rougeLsum | 40.5587 |
### BibTeX entry and citation info
```bibtex
@inproceedings{ mt5lrpjosmunpen,
year={2020},
}
``` |
loodos/bert-base-turkish-uncased | 7875a51367752147af6ac44b131992284a4543b3 | 2021-05-19T22:04:30.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"tr",
"transformers"
] | null | false | loodos | null | loodos/bert-base-turkish-uncased | 163 | 2 | transformers | 3,874 | ---
language: tr
---
# Turkish Language Models with Huggingface's Transformers
As R&D Team at Loodos, we release cased and uncased versions of most recent language models for Turkish. More details about pretrained models and evaluations on downstream tasks can be found [here (our repo)](https://github.com/Loodos/turkish-language-models).
# Turkish BERT-Base (uncased)
This is BERT-Base model which has 12 encoder layers with 768 hidden layer size trained on uncased Turkish dataset.
## Usage
Using AutoModel and AutoTokenizer from Transformers, you can import the model as described below.
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("loodos/bert-base-turkish-uncased", do_lower_case=False)
model = AutoModel.from_pretrained("loodos/bert-base-turkish-uncased")
# TextNormalization is provided in our repository (linked below), not in Transformers itself.
normalizer = TextNormalization()

text = "Şanlıurfa'da güzel bir gün."  # any Turkish input string
normalized_text = normalizer.normalize(text, do_lower_case=True, is_turkish=True)
tokenizer.tokenize(normalized_text)
```
### Notes on Tokenizers
Currently, Huggingface's tokenizers (which were written in Python) have a bug concerning letters "ı, i, I, İ" and non-ASCII Turkish specific letters. There are two reasons.
1- The vocabulary and SentencePiece model are created with NFC/NFKC normalization, but the tokenizer uses NFD/NFKD. NFD/NFKD normalization changes text that contains the Turkish characters I-ı, İ-i, Ç-ç, Ö-ö, Ş-ş, Ğ-ğ, Ü-ü. This causes wrong tokenization, wrong training and loss of information; some tokens (like "şanlıurfa", "öğün", "çocuk") are never trained. NFD/NFKD normalization is not suitable for Turkish.
2- Python's default ```string.lower()``` and ```string.upper()``` convert
- "I" and "İ" to 'i'
- 'i' and 'ı' to 'I'
respectively. However, in Turkish, 'I' and 'İ' are two different letters (see the short illustration below).
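A quick illustration of this default Python behaviour (outputs from CPython's Unicode case mapping; note that `"İ".lower()` actually yields `i` plus a combining dot, which leads to the same mismatch):

```python
# Default Python casing vs. what Turkish expects
print("I".lower())  # 'i'   (Turkish expects 'ı')
print("İ".lower())  # 'i̇'  (an 'i' followed by U+0307 COMBINING DOT ABOVE; Turkish expects plain 'i')
print("i".upper())  # 'I'   (Turkish expects 'İ')
print("ı".upper())  # 'I'
```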
We opened an [issue](https://github.com/huggingface/transformers/issues/6680) in Huggingface's github repo about this bug. Until it is fixed, in case you want to train your model with uncased data, we provide a simple text normalization module (`TextNormalization()` in the code snippet above) in our [repo](https://github.com/Loodos/turkish-language-models).
## Details and Contact
You can contact us to ask a question, open an issue or give feedback via our GitHub [repo](https://github.com/Loodos/turkish-language-models).
## Acknowledgments
Many thanks to the TFRC team for providing us with cloud TPUs on TensorFlow Research Cloud to train our models.
|
yhavinga/t5-v1.1-base-dutch-cnn-test | 85572934973c29c53fb27037c4280162fbe316ba | 2022-01-19T10:31:39.000Z | [
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"nl",
"dataset:yhavinga/mc4_nl_cleaned",
"dataset:ml6team/cnn_dailymail_nl",
"transformers",
"summarization",
"seq2seq",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | yhavinga | null | yhavinga/t5-v1.1-base-dutch-cnn-test | 163 | 1 | transformers | 3,875 | ---
language:
- nl
datasets:
- yhavinga/mc4_nl_cleaned
- ml6team/cnn_dailymail_nl
tags:
- summarization
- t5
- seq2seq
license: apache-2.0
pipeline_tag: summarization
widget:
- text: "Het Van Goghmuseum in Amsterdam heeft vier kostbare prenten verworven van Mary Cassatt, de Amerikaanse impressionistische kunstenaar en tijdgenoot van Vincent van Gogh. Dat heeft het museum woensdagmiddag op een persconferentie bekendgemaakt. Het gaat om drie grote kleurenetsen en een zwart-wit litho met voorstellingen van vrouwen. Voor deze prenten, die afkomstig zijn van een Amerikaanse verzamelaar, betaalde het museum ruim 1,4 miljoen euro. Drie grote fondsen en een aantal particulieren hebben samen de aankoopsom beschikbaar gesteld. Mary Stevenson Cassatt (1844-1926) woonde en werkte lange tijd in Frankrijk. Ze staat met haar impressionistische schilderijen en tekeningen te boek als een van de vernieuwers van de Parijse kunstwereld in de late negentiende eeuw. Het Van Goghmuseum rekent haar prenten „tot het mooiste wat op grafisch gebied in het fin de siècle is geproduceerd”. De drie aangekochte kleurenetsen – Het doorpassen, De brief en Badende vrouw – komen uit een serie van tien waarmee Cassatt haar naam als (prent)kunstenaar definitief vestigde. Ze maakte de etsen na een bezoek in 1890 aan een tentoonstelling van Japanse prenten in Parijs. Over die expositie schreef de Amerikaanse aan haar vriendin Berthe Morisot, een andere vrouwelijke impressionist: „We kunnen de Japanse prenten in de Beaux-Arts gaan bekijken. Echt, die mag je niet missen. Als je kleurenprenten wilt maken, is er niets mooiers voorstelbaar. Ik droom ervan en denk nergens anders meer aan dan aan kleur op koper."
- text: "Afgelopen zaterdagochtend werden Hunga Tonga en Hunga Hapai opnieuw twee aparte eilanden toen de vulkaan met een hevige explosie uitbarstte. De aanloop tot de uitbarsting begon al eind vorig jaar met kleinere explosies. Begin januari nam de activiteit af en dachten geologen dat de vulkaan tot rust was gekomen. Toch barstte hij afgelopen zaterdag opnieuw uit, veel heviger dan de uitbarstingen ervoor. Vlák voor deze explosie stortte het kilometerslange verbindingsstuk in en verdween onder het water. De eruptie duurde acht minuten. De wolk van as en giftige gasdeeltjes, zoals zwaveloxide, die daarbij vrijkwam, reikte tot dertig kilometer hoogte en was zo’n vijfhonderd kilometer breed. Ter vergelijking: de pluimen uit de recente vulkaanuitbarsting op La Palma reikten maximaal zo’n vijf kilometer hoog. De hoofdstad van Tonga, vijfenzestig kilometer verderop is bedekt met een dikke laag as. Dat heeft bijvoorbeeld gevolgen voor de veiligheid van het drinkwater op Tonga. De uitbarsting van de onderzeese vulkaan in de eilandstaat Tonga afgelopen zaterdag was bijzonder heftig. De eruptie veroorzaakte een tsunami die reikte van Nieuw-Zeeland tot de Verenigde Staten en in Nederland ging de luchtdruk omhoog. Geologen verwachten niet dat de vulkaan op Tonga voor een lange wereldwijde afkoeling zorgt, zoals bij andere hevige vulkaanuitbarstingen het geval is geweest. De vulkaan ligt onder water tussen de onbewoonde eilandjes Hunga Tonga (0,39 vierkante kilometer) en Hunga Ha’apai (0,65 vierkante kilometer). Magma dat bij kleinere uitbarsting in 2009 en 2014 omhoog kwam, koelde af en vormde een verbindingsstuk tussen de twee eilanden in. Een explosie van een onderwatervulkaan als die bij Tonga is heftiger dan bijvoorbeeld die uitbarsting op La Palma. „Dat komt doordat het vulkanisme hier veroorzaakt wordt door subductie: de Pacifische plaat zinkt onder Tonga de aardmantel in en neemt water mee omlaag”, zegt hoogleraar paleogeografie Douwe van Hinsbergen van de Universiteit Utrecht. „Dit water komt met magma als gas, als waterdamp, mee omhoog. Dat voert de druk onder de aardkost enorm op. Arwen Deuss, geowetenschapper aan de Universiteit Utrecht, vergelijkt het met een fles cola. „Wanneer je een fles cola schudt, zal het gas er met veel geweld uitkomen. Dat is waarschijnlijk wat er gebeurd is op Tonga, maar we weten het niet precies.”"
---
# T5 v1.1 Base finetuned for CNN news summarization in Dutch 🇳🇱
This model is [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) finetuned on [CNN Dailymail NL](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl)
For a demo of the Dutch CNN summarization models, head over to the Hugging Face Spaces for
the **[Netherformer 📰](https://huggingface.co/spaces/flax-community/netherformer)** example application!
Rouge scores for this model are listed below.
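Example usage (a minimal sketch; the generation settings are illustrative):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="yhavinga/t5-v1.1-base-dutch-cnn-test")

article = "Het Van Goghmuseum in Amsterdam heeft vier kostbare prenten verworven van Mary Cassatt, de Amerikaanse impressionistische kunstenaar en tijdgenoot van Vincent van Gogh. ..."
print(summarizer(article, max_length=96, min_length=20, do_sample=False)[0]["summary_text"])
```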
## Tokenizer
* SentencePiece tokenizer trained from scratch for Dutch on mC4 nl cleaned with scripts from the Huggingface
Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).
## Dataset
All models listed below are trained on the `full` configuration (39B tokens) of
[cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned),
which is the original mC4, except
* Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies",
"use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.
## Models
TL;DR: [yhavinga/t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) is the best model.
* `yhavinga/t5-base-dutch` is a re-training of the Dutch T5 base v1.0 model trained during the summer 2021
Flax/Jax community week. Accuracy was improved from 0.64 to 0.70.
* The two T5 v1.1 base models are an uncased and cased version of `t5-v1.1-base`, again pre-trained from scratch on Dutch,
with a tokenizer also trained from scratch. The t5 v1.1 models are slightly different from the t5 models, and the
base models are trained with a dropout of 0.0. For fine-tuning it is intended to set this back to 0.1.
* The large cased model is a pre-trained Dutch version of `t5-v1.1-large`. Training of t5-v1.1-large proved difficult.
Without dropout regularization, the training would diverge at a certain point. With dropout, training went better,
albeit much slower than training the t5 model. At some point convergence was too slow to warrant further training.
The latest checkpoint, training scripts and metrics are available for reference. For actual fine-tuning the cased
base model is probably the better choice.
| | model | train seq len | acc | loss | batch size | epochs | steps | dropout | optim | lr | duration |
|---------------------------------------------------------------------------------------------------|---------|---------------|----------|----------|------------|--------|---------|---------|-----------|------|----------|
| [yhavinga/t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | T5 | 512 | 0,70 | 1,38 | 128 | 1 | 528481 | 0.1 | adafactor | 5e-3 | 2d 9h |
| [yhavinga/t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | t5-v1.1 | 1024 | 0,73 | 1,20 | 64 | 2 | 1014525 | 0.0 | adafactor | 5e-3 | 5d 5h |
| [yhavinga/t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | t5-v1.1 | 1024 | **0,78** | **0,96** | 64 | 2 | 1210000 | 0.0 | adafactor | 5e-3 | 6d 6h |
| [yhavinga/t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | t5-v1.1 | 512 | 0,76 | 1,07 | 64 | 1 | 1120000 | 0.1 | adafactor | 5e-3 | 8d 13h |
The cased t5-v1.1 Dutch models were fine-tuned on summarizing the CNN Daily Mail dataset.
| | model | input len | target len | Rouge1 | Rouge2 | RougeL | RougeLsum | Test Gen Len | epochs | batch size | steps | duration |
|-------------------------------------------------------------------------------------------------------|---------|-----------|------------|--------|--------|--------|-----------|--------------|--------|------------|-------|----------|
| [yhavinga/t5-v1.1-base-dutch-cnn-test](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cnn-test) | t5-v1.1 | 1024 | 96 | 34,8 | 13,6 | 25,2 | 32,1 | 79 | 6 | 64 | 26916 | 2h 40m |
| [yhavinga/t5-v1.1-large-dutch-cnn-test](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cnn-test) | t5-v1.1 | 1024 | 96 | 34,4 | 13,6 | 25,3 | 31,7 | 81 | 5 | 16 | 89720 | 11h |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was also
instrumental in many, if not all, parts of the training. The following repositories were helpful in setting up the TPU-VM,
and training the models:
* [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp)
* [HuggingFace Flax MLM examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling)
* [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch)
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/) |
ixa-ehu/roberta-eus-euscrawl-large-cased | 8762305040656446e41da360b0183d3f9e2f8262 | 2022-03-16T11:49:05.000Z | [
"pytorch",
"roberta",
"fill-mask",
"eu",
"arxiv:2203.08111",
"transformers",
"basque",
"license:cc-by-nc-4.0",
"autotrain_compatible"
] | fill-mask | false | ixa-ehu | null | ixa-ehu/roberta-eus-euscrawl-large-cased | 163 | 1 | transformers | 3,876 | ---
language: eu
license: cc-by-nc-4.0
tags:
- basque
- roberta
---
# Roberta-eus Euscrawl large cased
This is a RoBERTa model for Basque presented in [Does corpus quality really matter for low-resource languages?](https://arxiv.org/abs/2203.08111). There are several models for Basque using the RoBERTa architecture, trained on different corpora:
- roberta-eus-euscrawl-base-cased: Basque RoBERTa model trained on Euscrawl, a corpus created using tailored crawling from Basque sites. EusCrawl contains 12,528k documents and 423M tokens.
- roberta-eus-euscrawl-large-cased: RoBERTa large trained on EusCrawl.
- roberta-eus-mC4-base-cased: Basque RoBERTa model trained on the Basque portion of mc4 dataset.
- roberta-eus-CC100-base-cased: Basque RoBERTa model trained on Basque portion of cc100 dataset.
The models have been tested on five different downstream tasks for Basque: Topic classification, Sentiment analysis, Stance detection, Named Entity Recognition (NER), and Question Answering (refer to the [paper](https://arxiv.org/abs/2203.08111) for more details). See summary of results below:
| Model | Topic class. | Sentiment | Stance det. | NER | QA | Average |
|----------------------------------|--------------|-----------|-------------|----------|----------|----------|
| roberta-eus-euscrawl-base-cased | 76.2 | 77.7 | 57.4 | 86.8 | 34.6 | 66.5 |
| roberta-eus-euscrawl-large-cased | **77.6** | 78.8 | 62.9 | **87.2** | **38.3** | **69.0** |
| roberta-eus-mC4-base-cased | 75.3 | **80.4** | 59.1 | 86.0 | 35.2 | 67.2 |
| roberta-eus-CC100-base-cased | 76.2 | 78.8 | **63.4** | 85.2 | 35.8 | 67.9 |
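The large EusCrawl model can be used for masked token prediction with the standard fill-mask pipeline; a minimal sketch (the example sentence is only illustrative):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="ixa-ehu/roberta-eus-euscrawl-large-cased")

# "Donostia is the <mask> of Gipuzkoa."
sentence = f"Donostia Gipuzkoako {unmasker.tokenizer.mask_token} da."
print(unmasker(sentence))
```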
If you use any of these models, please cite the following paper:
```
@misc{artetxe2022euscrawl,
title={Does corpus quality really matter for low-resource languages?},
author={Mikel Artetxe and Itziar Aldabe and Rodrigo Agerri and
Olatz Perez-de-Viñaspre and Aitor Soroa},
year={2022},
eprint={2203.08111},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Felix92/doctr-dummy-torch-vgg16-bn-r | 760cb447ee57d0fc9cd76c3f5eeb5078d2e60cb5 | 2022-04-14T07:36:36.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-vgg16-bn-r | 163 | null | transformers | 3,877 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
malmarjeh/t5-arabic-text-summarization | fab697056620790cf5f5a4e80e54fe5b6c796d93 | 2022-06-29T14:14:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"ar",
"transformers",
"Arabic T5",
"T5",
"MSA",
"Arabic Text Summarization",
"Arabic News Title Generation",
"Arabic Paraphrasing",
"autotrain_compatible"
] | text2text-generation | false | malmarjeh | null | malmarjeh/t5-arabic-text-summarization | 163 | null | transformers | 3,878 | ---
language:
- ar
tags:
- Arabic T5
- T5
- MSA
- Arabic Text Summarization
- Arabic News Title Generation
- Arabic Paraphrasing
widget:
- text: "شهدت مدينة طرابلس، مساء أمس الأربعاء، احتجاجات شعبية وأعمال شغب لليوم الثالث على التوالي، وذلك بسبب تردي الوضع المعيشي والاقتصادي. واندلعت مواجهات عنيفة وعمليات كر وفر ما بين الجيش اللبناني والمحتجين استمرت لساعات، إثر محاولة فتح الطرقات المقطوعة، ما أدى إلى إصابة العشرات من الطرفين."
---
# An Arabic abstractive text summarization model
An AraT5 model fine-tuned on a dataset of 84,764 paragraph-summary pairs.
More details on the fine-tuning of this model will be released later.
The model can be used as follows:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
from arabert.preprocess import ArabertPreprocessor
model_name="malmarjeh/t5-arabic-text-summarization"
preprocessor = ArabertPreprocessor(model_name="")
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
pipeline = pipeline("text2text-generation",model=model,tokenizer=tokenizer)
text = "شهدت مدينة طرابلس، مساء أمس الأربعاء، احتجاجات شعبية وأعمال شغب لليوم الثالث على التوالي، وذلك بسبب تردي الوضع المعيشي والاقتصادي. واندلعت مواجهات عنيفة وعمليات كر وفر ما بين الجيش اللبناني والمحتجين استمرت لساعات، إثر محاولة فتح الطرقات المقطوعة، ما أدى إلى إصابة العشرات من الطرفين."
text = preprocessor.preprocess(text)
result = summarizer(text,
pad_token_id=tokenizer.eos_token_id,
num_beams=3,
repetition_penalty=3.0,
max_length=200,
length_penalty=1.0,
no_repeat_ngram_size = 3)[0]['generated_text']
result
>>> 'مواجهات عنيفة بين الجيش اللبناني ومحتجين في طرابلس'
```
## Contact:
**Mohammad Bani Almarjeh**: [Linkedin](https://www.linkedin.com/in/mohammad-bani-almarjeh/) | <[email protected]>
|
emilys/twitter-roberta-base-CoNLL | 740b982cf4186376bede6e9629e51ea45a4b6f97 | 2022-07-01T12:13:20.000Z | [
"pytorch",
"roberta",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | emilys | null | emilys/twitter-roberta-base-CoNLL | 163 | null | transformers | 3,879 | ---
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: twitter-roberta-base-CoNLL
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.953111963957951
- name: Recall
type: recall
value: 0.9612924941097274
- name: F1
type: f1
value: 0.9571847507331379
- name: Accuracy
type: accuracy
value: 0.9925820645613489
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-base-CoNLL
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0423
- Precision: 0.9531
- Recall: 0.9613
- F1: 0.9572
- Accuracy: 0.9926
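A minimal usage sketch for running the fine-tuned model as a NER tagger (the aggregation strategy below is an illustrative choice):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="emilys/twitter-roberta-base-CoNLL",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```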
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 64
- eval_batch_size: 1024
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.11 | 25 | 0.2063 | 0.6517 | 0.6659 | 0.6587 | 0.9386 |
| No log | 0.23 | 50 | 0.0810 | 0.8373 | 0.8766 | 0.8565 | 0.9771 |
| No log | 0.34 | 75 | 0.0651 | 0.8937 | 0.9058 | 0.8997 | 0.9827 |
| No log | 0.45 | 100 | 0.0537 | 0.9014 | 0.9135 | 0.9074 | 0.9849 |
| No log | 0.57 | 125 | 0.0464 | 0.9097 | 0.9244 | 0.9170 | 0.9867 |
| No log | 0.68 | 150 | 0.0423 | 0.9243 | 0.9350 | 0.9296 | 0.9885 |
| No log | 0.8 | 175 | 0.0381 | 0.9250 | 0.9438 | 0.9343 | 0.9900 |
| No log | 0.91 | 200 | 0.0388 | 0.9264 | 0.9446 | 0.9354 | 0.9896 |
| No log | 1.02 | 225 | 0.0394 | 0.9328 | 0.9441 | 0.9384 | 0.9898 |
| No log | 1.14 | 250 | 0.0423 | 0.9348 | 0.9458 | 0.9403 | 0.9896 |
| No log | 1.25 | 275 | 0.0432 | 0.9304 | 0.9406 | 0.9355 | 0.9892 |
| No log | 1.36 | 300 | 0.0382 | 0.9393 | 0.9473 | 0.9433 | 0.9901 |
| No log | 1.48 | 325 | 0.0381 | 0.9326 | 0.9504 | 0.9414 | 0.9901 |
| No log | 1.59 | 350 | 0.0387 | 0.9337 | 0.9524 | 0.9429 | 0.9902 |
| No log | 1.7 | 375 | 0.0365 | 0.9404 | 0.9475 | 0.9439 | 0.9901 |
| No log | 1.82 | 400 | 0.0382 | 0.9431 | 0.9517 | 0.9474 | 0.9905 |
| No log | 1.93 | 425 | 0.0373 | 0.9399 | 0.9524 | 0.9461 | 0.9903 |
| No log | 2.05 | 450 | 0.0367 | 0.9440 | 0.9556 | 0.9497 | 0.9910 |
| No log | 2.16 | 475 | 0.0396 | 0.9400 | 0.9551 | 0.9475 | 0.9907 |
| 0.0771 | 2.27 | 500 | 0.0353 | 0.9442 | 0.9574 | 0.9508 | 0.9912 |
| 0.0771 | 2.39 | 525 | 0.0394 | 0.9401 | 0.9507 | 0.9454 | 0.9906 |
| 0.0771 | 2.5 | 550 | 0.0370 | 0.9447 | 0.9522 | 0.9485 | 0.9910 |
| 0.0771 | 2.61 | 575 | 0.0352 | 0.9404 | 0.9541 | 0.9472 | 0.9908 |
| 0.0771 | 2.73 | 600 | 0.0386 | 0.9345 | 0.9554 | 0.9448 | 0.9908 |
| 0.0771 | 2.84 | 625 | 0.0366 | 0.9428 | 0.9576 | 0.9502 | 0.9916 |
| 0.0771 | 2.95 | 650 | 0.0353 | 0.9427 | 0.9546 | 0.9486 | 0.9913 |
| 0.0771 | 3.07 | 675 | 0.0359 | 0.9412 | 0.9544 | 0.9478 | 0.9911 |
| 0.0771 | 3.18 | 700 | 0.0356 | 0.9476 | 0.9593 | 0.9534 | 0.9920 |
| 0.0771 | 3.3 | 725 | 0.0345 | 0.9484 | 0.9586 | 0.9535 | 0.9918 |
| 0.0771 | 3.41 | 750 | 0.0345 | 0.9427 | 0.9557 | 0.9492 | 0.9916 |
| 0.0771 | 3.52 | 775 | 0.0364 | 0.9389 | 0.9569 | 0.9478 | 0.9914 |
| 0.0771 | 3.64 | 800 | 0.0360 | 0.9430 | 0.9584 | 0.9507 | 0.9915 |
| 0.0771 | 3.75 | 825 | 0.0387 | 0.9458 | 0.9552 | 0.9505 | 0.9915 |
| 0.0771 | 3.86 | 850 | 0.0347 | 0.9468 | 0.9576 | 0.9521 | 0.9917 |
| 0.0771 | 3.98 | 875 | 0.0357 | 0.9445 | 0.9574 | 0.9509 | 0.9915 |
| 0.0771 | 4.09 | 900 | 0.0382 | 0.9464 | 0.9578 | 0.9521 | 0.9918 |
| 0.0771 | 4.2 | 925 | 0.0391 | 0.9475 | 0.9562 | 0.9518 | 0.9918 |
| 0.0771 | 4.32 | 950 | 0.0428 | 0.9466 | 0.9547 | 0.9506 | 0.9912 |
| 0.0771 | 4.43 | 975 | 0.0404 | 0.9459 | 0.9554 | 0.9506 | 0.9913 |
| 0.0118 | 4.55 | 1000 | 0.0403 | 0.9375 | 0.9549 | 0.9461 | 0.9909 |
| 0.0118 | 4.66 | 1025 | 0.0369 | 0.9482 | 0.9586 | 0.9534 | 0.9919 |
| 0.0118 | 4.77 | 1050 | 0.0374 | 0.9457 | 0.9584 | 0.9520 | 0.9918 |
| 0.0118 | 4.89 | 1075 | 0.0359 | 0.9507 | 0.9571 | 0.9539 | 0.9923 |
| 0.0118 | 5.0 | 1100 | 0.0373 | 0.9453 | 0.9594 | 0.9523 | 0.9919 |
| 0.0118 | 5.11 | 1125 | 0.0370 | 0.9499 | 0.9594 | 0.9546 | 0.9924 |
| 0.0118 | 5.23 | 1150 | 0.0388 | 0.9510 | 0.9601 | 0.9555 | 0.9922 |
| 0.0118 | 5.34 | 1175 | 0.0395 | 0.9486 | 0.9559 | 0.9522 | 0.9920 |
| 0.0118 | 5.45 | 1200 | 0.0391 | 0.9495 | 0.9591 | 0.9543 | 0.9924 |
| 0.0118 | 5.57 | 1225 | 0.0378 | 0.9517 | 0.9588 | 0.9552 | 0.9923 |
| 0.0118 | 5.68 | 1250 | 0.0388 | 0.9515 | 0.9615 | 0.9565 | 0.9924 |
| 0.0118 | 5.8 | 1275 | 0.0384 | 0.9512 | 0.9610 | 0.9560 | 0.9924 |
| 0.0118 | 5.91 | 1300 | 0.0395 | 0.9530 | 0.9613 | 0.9571 | 0.9924 |
| 0.0118 | 6.02 | 1325 | 0.0408 | 0.9499 | 0.9569 | 0.9534 | 0.9919 |
| 0.0118 | 6.14 | 1350 | 0.0412 | 0.9481 | 0.9616 | 0.9548 | 0.9922 |
| 0.0118 | 6.25 | 1375 | 0.0413 | 0.9521 | 0.9591 | 0.9556 | 0.9924 |
| 0.0118 | 6.36 | 1400 | 0.0412 | 0.9466 | 0.9584 | 0.9525 | 0.9917 |
| 0.0118 | 6.48 | 1425 | 0.0405 | 0.9504 | 0.9608 | 0.9556 | 0.9921 |
| 0.0118 | 6.59 | 1450 | 0.0400 | 0.9517 | 0.9615 | 0.9566 | 0.9925 |
| 0.0118 | 6.7 | 1475 | 0.0398 | 0.9510 | 0.9594 | 0.9552 | 0.9923 |
| 0.0049 | 6.82 | 1500 | 0.0395 | 0.9523 | 0.9615 | 0.9569 | 0.9925 |
| 0.0049 | 6.93 | 1525 | 0.0392 | 0.9520 | 0.9623 | 0.9571 | 0.9927 |
| 0.0049 | 7.05 | 1550 | 0.0390 | 0.9511 | 0.9593 | 0.9552 | 0.9923 |
| 0.0049 | 7.16 | 1575 | 0.0393 | 0.9520 | 0.9611 | 0.9565 | 0.9925 |
| 0.0049 | 7.27 | 1600 | 0.0389 | 0.9512 | 0.9613 | 0.9562 | 0.9925 |
| 0.0049 | 7.39 | 1625 | 0.0405 | 0.9518 | 0.9613 | 0.9565 | 0.9924 |
| 0.0049 | 7.5 | 1650 | 0.0410 | 0.9512 | 0.9606 | 0.9559 | 0.9925 |
| 0.0049 | 7.61 | 1675 | 0.0408 | 0.9526 | 0.9613 | 0.9569 | 0.9925 |
| 0.0049 | 7.73 | 1700 | 0.0436 | 0.9482 | 0.9610 | 0.9545 | 0.9922 |
| 0.0049 | 7.84 | 1725 | 0.0419 | 0.9495 | 0.9625 | 0.9560 | 0.9924 |
| 0.0049 | 7.95 | 1750 | 0.0429 | 0.9525 | 0.9618 | 0.9571 | 0.9926 |
| 0.0049 | 8.07 | 1775 | 0.0419 | 0.9509 | 0.9615 | 0.9562 | 0.9924 |
| 0.0049 | 8.18 | 1800 | 0.0422 | 0.9510 | 0.9601 | 0.9555 | 0.9923 |
| 0.0049 | 8.3 | 1825 | 0.0417 | 0.9521 | 0.9603 | 0.9562 | 0.9924 |
| 0.0049 | 8.41 | 1850 | 0.0415 | 0.9529 | 0.9611 | 0.9570 | 0.9925 |
| 0.0049 | 8.52 | 1875 | 0.0416 | 0.9523 | 0.9611 | 0.9567 | 0.9924 |
| 0.0049 | 8.64 | 1900 | 0.0419 | 0.9504 | 0.9608 | 0.9556 | 0.9922 |
| 0.0049 | 8.75 | 1925 | 0.0417 | 0.9520 | 0.9610 | 0.9564 | 0.9924 |
| 0.0049 | 8.86 | 1950 | 0.0419 | 0.9535 | 0.9621 | 0.9578 | 0.9926 |
| 0.0049 | 8.98 | 1975 | 0.0422 | 0.9531 | 0.9620 | 0.9575 | 0.9927 |
| 0.0022 | 9.09 | 2000 | 0.0423 | 0.9531 | 0.9613 | 0.9572 | 0.9926 |
| 0.0022 | 9.2 | 2025 | 0.0426 | 0.9520 | 0.9615 | 0.9567 | 0.9925 |
| 0.0022 | 9.32 | 2050 | 0.0425 | 0.9515 | 0.9606 | 0.9560 | 0.9925 |
| 0.0022 | 9.43 | 2075 | 0.0422 | 0.9517 | 0.9613 | 0.9565 | 0.9925 |
| 0.0022 | 9.55 | 2100 | 0.0423 | 0.9513 | 0.9606 | 0.9560 | 0.9925 |
| 0.0022 | 9.66 | 2125 | 0.0424 | 0.9513 | 0.9605 | 0.9559 | 0.9925 |
| 0.0022 | 9.77 | 2150 | 0.0423 | 0.9522 | 0.9611 | 0.9566 | 0.9925 |
| 0.0022 | 9.89 | 2175 | 0.0423 | 0.9522 | 0.9613 | 0.9567 | 0.9925 |
| 0.0022 | 10.0 | 2200 | 0.0422 | 0.9525 | 0.9616 | 0.9570 | 0.9925 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
PrimeQA/squad-v1-xlm-roberta-large | dba97fb0d333fb5cddd8681c65be715448af2b90 | 2022-07-07T20:28:50.000Z | [
"pytorch",
"xlm-roberta",
"multilingual",
"arxiv:1606.05250",
"arxiv:1910.11856",
"arxiv:1911.02116",
"transformers",
"MRC",
"SQuAD 1.1",
"xlm-roberta-large",
"license:apache-2.0"
] | null | false | PrimeQA | null | PrimeQA/squad-v1-xlm-roberta-large | 163 | null | transformers | 3,880 |
---
tags:
- MRC
- SQuAD 1.1
- xlm-roberta-large
language:
- multilingual
license: apache-2.0
---
# Model description
An XLM-RoBERTa reading comprehension model for [SQuAD 1.1](https://aclanthology.org/D16-1264/).
The model is initialized with [xlm-roberta-large](https://huggingface.co/xlm-roberta-large/) and fine-tuned on the [SQuAD 1.1 train data](https://huggingface.co/datasets/squad).
## Intended uses & limitations
You can use the raw model for the reading comprehension task. Biases associated with the pre-existing language model, xlm-roberta-large, that we used may be present in our fine-tuned model, squad-v1-xlm-roberta-large. This model is used for zero-shot decoding of [MLQA](https://huggingface.co/datasets/mlqa) and [XQuAD](https://huggingface.co/datasets/xquad) datasets.
## Usage
You can use this model directly with the [PrimeQA](https://github.com/primeqa/primeqa) pipeline for reading comprehension [squad.ipynb](https://github.com/primeqa/primeqa/blob/main/notebooks/mrc/squad.ipynb).
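If the checkpoint exposes a standard extractive QA head, it should also work with the plain `transformers` question-answering pipeline; the sketch below is an assumption, and the PrimeQA notebook above is the reference usage:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="PrimeQA/squad-v1-xlm-roberta-large")

result = qa(
    question="Where do water droplets collide with ice crystals to form precipitation?",
    context=(
        "Precipitation forms as smaller droplets coalesce via collision with other "
        "rain drops or ice crystals within a cloud."
    ),
)
print(result)
```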
```bibtex
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
```bibtex
@article{lewis2019mlqa,
title={MLQA: Evaluating Cross-lingual Extractive Question Answering},
author={Lewis, Patrick and Oguz, Barlas and Rinott, Ruty and Riedel, Sebastian and Schwenk, Holger},
journal={arXiv preprint arXiv:1910.07475},
year={2019}
}
```
```bibtex
@article{Artetxe:etal:2019,
author = {Mikel Artetxe and Sebastian Ruder and Dani Yogatama},
title = {On the cross-lingual transferability of monolingual representations},
journal = {CoRR},
volume = {abs/1910.11856},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.11856}
}
```
```bibtex
@article{DBLP:journals/corr/abs-1911-02116,
author = {Alexis Conneau and
Kartikay Khandelwal and
Naman Goyal and
Vishrav Chaudhary and
Guillaume Wenzek and
Francisco Guzm{\'{a}}n and
Edouard Grave and
Myle Ott and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {Unsupervised Cross-lingual Representation Learning at Scale},
journal = {CoRR},
volume = {abs/1911.02116},
year = {2019},
url = {http://arxiv.org/abs/1911.02116},
eprinttype = {arXiv},
eprint = {1911.02116},
timestamp = {Mon, 11 Nov 2019 18:38:09 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi | cc503d0d5753536c644b157852932e825048635e | 2021-10-17T11:05:12.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi | 162 | null | transformers | 3,881 | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-Mix DID NADI Model
## Model description
**CAMeLBERT-Mix DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model.
For the fine-tuning, we used the [NADI Country-level](https://sites.google.com/view/nadi-shared-task) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-Mix DID NADI model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.920274019241333},
{'label': 'Saudi_Arabia', 'score': 0.26750022172927856}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
MutazYoune/Absa_AspectSentiment_hotels | be75f8d59f178f496fde1f16e95e70444d246e41 | 2021-05-18T21:42:54.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | MutazYoune | null | MutazYoune/Absa_AspectSentiment_hotels | 162 | null | transformers | 3,882 | Entry not found |
Narrativa/byt5-base-tweet-hate-detection | f064959ebf565c9a83e6fb6626574c177170186f | 2021-06-30T15:05:08.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"en",
"dataset:tweets_hate_speech_detection",
"arxiv:1907.06292",
"arxiv:1910.10683",
"transformers",
"hate",
"speech",
"autotrain_compatible"
] | text2text-generation | false | Narrativa | null | Narrativa/byt5-base-tweet-hate-detection | 162 | 5 | transformers | 3,883 | ---
language: en
datasets:
- tweets_hate_speech_detection
tags:
- hate
- speech
widget:
- text: "@user black lives really matter?"
---
# ByT5-base fine-tuned for Hate Speech Detection (on Tweets)
[ByT5](https://huggingface.co/google/byt5-base) base fine-tuned on [tweets hate speech detection](https://huggingface.co/datasets/tweets_hate_speech_detection) dataset for **Sequence Classification** downstream task.
# Details of ByT5 - Base 🧠
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
## Details of the downstream task (Sequence Classification as Text generation) - Dataset 📚
[tweets_hate_speech_detection](https://huggingface.co/datasets/tweets_hate_speech_detection)
The objective of this task is to detect hate speech in tweets. For the sake of simplicity, we say a tweet contains hate speech if it has a racist or sexist sentiment associated with it. So, the task is to classify racist or sexist tweets from other tweets.
Formally, given a training sample of tweets and labels, where label ‘1’ denotes the tweet is racist/sexist and label ‘0’ denotes the tweet is not racist/sexist, your objective is to predict the labels on the given test dataset.
- Data Instances:
The dataset contains a label denoting is the tweet a hate speech or not
```json
{'label': 0, # not a hate speech
'tweet': ' @user when a father is dysfunctional and is so selfish he drags his kids into his dysfunction. #run'}
```
- Data Fields:
**label**: 1 - it is a hate speech, 0 - not a hate speech
**tweet**: content of the tweet as a string
- Data Splits:
The data contains training data with **31962** entries
## Test set metrics 🧾
We created a representative test set with 5% of the entries.
The dataset is heavily imbalanced, and we got an **F1 score of 79.8**.
## Model in Action 🚀
```sh
git clone https://github.com/huggingface/transformers.git
pip install -q ./transformers
```
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
ckpt = 'Narrativa/byt5-base-tweet-hate-detection'
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = T5ForConditionalGeneration.from_pretrained(ckpt).to("cuda")
def classify_tweet(tweet):
inputs = tokenizer([tweet], padding='max_length', truncation=True, max_length=512, return_tensors='pt')
input_ids = inputs.input_ids.to('cuda')
attention_mask = inputs.attention_mask.to('cuda')
output = model.generate(input_ids, attention_mask=attention_mask)
return tokenizer.decode(output[0], skip_special_tokens=True)
classify_tweet('here goes your tweet...')
```
Created by: [Narrativa](https://www.narrativa.com/)
About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI |
Helsinki-NLP/opus-mt-tc-big-en-el | 26c3999f0b30223bbbbe826c5c89bdf726d7bd71 | 2022-06-01T13:04:21.000Z | [
"pytorch",
"marian",
"text2text-generation",
"el",
"en",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-en-el | 162 | null | transformers | 3,884 | ---
language:
- el
- en
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-el
results:
- task:
name: Translation eng-ell
type: translation
args: eng-ell
dataset:
name: flores101-devtest
type: flores_101
args: eng ell devtest
metrics:
- name: BLEU
type: bleu
value: 27.4
- task:
name: Translation eng-ell
type: translation
args: eng-ell
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-ell
metrics:
- name: BLEU
type: bleu
value: 55.4
---
# opus-mt-tc-big-en-el
Neural machine translation model for translating from English (en) to Modern Greek (1453-) (el).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-13
* source language(s): eng
* target language(s): ell
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ell/opusTCv20210807+bt_transformer-big_2022-03-13.zip)
* more information on released models: [OPUS-MT eng-ell README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ell/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"If I weren't broke, I'd buy it.",
"I received your telegram."
]
model_name = "pytorch-models/opus-mt-tc-big-en-el"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Αν δεν ήμουν άφραγκος, θα το αγόραζα.
# Έλαβα το τηλεγράφημα σου.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-el")
print(pipe("If I weren't broke, I'd buy it."))
# expected output: Αν δεν ήμουν άφραγκος, θα το αγόραζα.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ell/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ell/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-ell | tatoeba-test-v2021-08-07 | 0.73660 | 55.4 | 10899 | 66884 |
| eng-ell | flores101-devtest | 0.53952 | 27.4 | 1012 | 26615 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 16:52:58 EEST 2022
* port machine: LM0-400-22516.local
|
Felix92/doctr-dummy-torch-resnet18 | 1b46de7ced522d5fcfb49c6e6c635c3c26fc5170 | 2022-04-14T07:39:52.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-resnet18 | 162 | null | transformers | 3,885 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-resnet31 | a98d9d54751a725b80d4e88d9ff2bf2a777c1226 | 2022-04-14T07:42:21.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-resnet31 | 162 | null | transformers | 3,886 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-resnet34 | fe7dbf9c393ddee576d03938738bd8408a412ebc | 2022-04-14T07:48:34.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-resnet34 | 162 | null | transformers | 3,887 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-resnet34-wide | 799d196f8b345784140cdb4f95893a2e92f8c753 | 2022-04-14T07:51:35.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-resnet34-wide | 162 | null | transformers | 3,888 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-resnet50 | af9ab31ad823a7dd8a942778eb752cdc1dedee45 | 2022-04-14T08:06:25.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-resnet50 | 162 | null | transformers | 3,889 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-db-resnet50-rotation | 1dfc54b8c142f21337bae1ad0396e093c9639399 | 2022-04-14T08:59:04.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-db-resnet50-rotation | 162 | null | transformers | 3,890 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-linknet-resnet18 | c1c72e4ad92e29a79ced012722e2f05ff535a680 | 2022-04-14T09:02:14.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-linknet-resnet18 | 162 | null | transformers | 3,891 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-linknet-resnet34 | 9c16f162f7b750fd4683a557eed8132667041bff | 2022-04-14T09:17:24.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-linknet-resnet34 | 162 | null | transformers | 3,892 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-linknet-resnet50 | c3017c9a02a74620ac54204bfc8548a0f05eb58d | 2022-04-14T09:20:06.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-linknet-resnet50 | 162 | null | transformers | 3,893 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-crnn-mobilenet-v3-large | 949a3f701b962e09650fceea80e974d9c39b1f3f | 2022-04-14T09:27:19.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-crnn-mobilenet-v3-large | 162 | null | transformers | 3,894 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-fasterrcnn-mobilenet-v3-large-fpn | 7b14a117cd512cf28ebe92dbd53ce632117b4918 | 2022-04-14T09:28:24.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-fasterrcnn-mobilenet-v3-large-fpn | 162 | null | transformers | 3,895 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: obj_detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
dbmdz/bert-base-german-europeana-td-cased | 4385d7ebbbfa13baec7db3cc2cf9415944e607cd | 2022-04-29T13:29:25.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | dbmdz | null | dbmdz/bert-base-german-europeana-td-cased | 162 | null | transformers | 3,896 | ---
license: mit
---
|
adamlin/distilbert-base-cased-sgd_qa-step5000 | 593404c329812d8283e5dba10df64fabc4da9d60 | 2021-02-09T15:02:35.000Z | [
"pytorch",
"distilbert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | adamlin | null | adamlin/distilbert-base-cased-sgd_qa-step5000 | 161 | null | transformers | 3,897 | Entry not found |
asahi417/lmqg-t5-large-squad | 4c5c2f87963e691d076b61274fa7773efd0570cb | 2022-06-09T22:43:17.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:asahi417/qg_squad",
"transformers",
"question generation",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-t5-large-squad | 161 | 1 | transformers | 3,898 | ---
language: en
tags:
- question generation
license: cc-by-4.0
datasets:
- asahi417/qg_squad
metrics:
- bleu
- meteor
- rouge
- bertscore
- moverscore
widget:
- text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 1"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Question Generation Example 2"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Question Generation Example 3"
pipeline_tag: text2text-generation
---
# T5 LARGE fine-tuned for English Question Generation
T5 LARGE model fine-tuned on the English question generation dataset (SQuAD) with an extensive hyper-parameter search.
- [Online Demo](https://autoqg.net/)
- [Project Repository](https://github.com/asahi417/lm-question-generation)
## Overview
**Language model:** t5-large
**Language:** English (en)
**Downstream-task:** Question Generation
**Training data:** SQuAD
**Eval data:** SQuAD
**Code:** See [our repository](https://github.com/asahi417/lm-question-generation)
## Usage
### In Transformers
```python
from transformers import pipeline
model_path = 'asahi417/lmqg-t5-large-squad'
pipe = pipeline("text2text-generation", model_path)
paragraph = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
# highlight an answer in the paragraph to generate question
answer = 'Etta James'
highlight_token = '<hl>'
input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
input_text = 'generate question: {}'.format(input_text) # add task specific prefix
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': 'What is the name of the biopic that Beyonce starred in?'}]
```
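To generate questions for several answer spans in the same paragraph, the same highlighting convention can be applied in a loop. The sketch below is an illustration added here, not part of the original card; it reuses the `pipe` and `paragraph` objects from the example above, and the helper name `to_input` is ours.
```python
# Minimal sketch (not from the original card): one question per highlighted answer.
def to_input(paragraph, answer, token='<hl>'):
    highlighted = paragraph.replace(answer, '{0} {1} {0}'.format(token, answer))
    return 'generate question: {}'.format(highlighted)  # add the task-specific prefix

for answer in ['Beyonce', 'Etta James', 'Cadillac Records']:
    print(pipe(to_input(paragraph, answer))[0]['generated_text'])
```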
## Evaluations
Evaluation on the test set of the [SQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_squad).
The results are comparable to the [leaderboard](https://paperswithcode.com/sota/question-generation-on-squad11) and to previous work.
All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).
| BLEU 4 | ROUGE L | METEOR | BERTScore | MoverScore |
| ------ | -------- | ------ | --------- | ---------- |
| 27.21 | 54.13 | 27.69 | 90.99 | 65.29 |
- [metric file](https://huggingface.co/asahi417/lmqg-t5-large-squad/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_squad.default.json)
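The numbers above were produced with the project's own evaluation script linked earlier. Purely as a rough, independent illustration, corpus-level BLEU and ROUGE-L can be approximated with the Hugging Face `evaluate` library as sketched below; this is not the official script, so scores will not match the table exactly.
```python
# Illustrative sketch using the `evaluate` library (not the official evaluation script).
import evaluate

predictions = ['What is the name of the biopic that Beyonce starred in?']   # model outputs
references = ['What biopic did Beyonce star in as Etta James?']             # gold questions

sacrebleu = evaluate.load('sacrebleu')  # 4-gram BLEU by default
rouge = evaluate.load('rouge')          # includes ROUGE-L

print(sacrebleu.compute(predictions=predictions, references=[[r] for r in references]))
print(rouge.compute(predictions=predictions, references=references))
```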
## Fine-tuning Parameters
We ran a grid search to find the best hyper-parameters and continued fine-tuning until the validation metric decreased.
The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-t5-large-squad/raw/main/trainer_config.json), and the fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
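For programmatic inspection, one possible way (not shown in the original card) to read the linked `trainer_config.json` is via `huggingface_hub`:
```python
# Illustrative only: download and print the linked trainer_config.json.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id='asahi417/lmqg-t5-large-squad', filename='trainer_config.json')
with open(path) as f:
    print(json.dumps(json.load(f), indent=2))
```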
## Citation
TBA
|
cambridgeltl/trans-encoder-cross-simcse-bert-large | 893fab51dfd0e8da9ecd523aa856dc71af91b88a | 2021-11-26T18:28:18.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | cambridgeltl | null | cambridgeltl/trans-encoder-cross-simcse-bert-large | 161 | null | transformers | 3,899 | Entry not found |