modelId
string
author
string
last_modified
timestamp[us, tz=UTC]
downloads
int64
likes
int64
library_name
string
tags
list
pipeline_tag
string
createdAt
timestamp[us, tz=UTC]
card
string
lapcameraatp/cameragiamsat
lapcameraatp
2021-10-20T08:53:25Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://camerasaigon24h.com https://cameragiamsat360.com https://lapdatcameracongty.vn https://lapdatcamerawifi.vn https://lapcamerawifi.com https://giacameraquansat.com https://cameraquansatre.com https://cameraanninhwifi.com https://camerawifigiadinh.com/ https://lapcameratanphu.com http://camerathehemoi.com http://lapcameratanbinh.com http://lapcamerabinhtan.com http://lapcameraquan2giare.com http://cameraquan12.com http://cameraquan3giare.com http://lapdatcameraquan4.com http://lapdatcameraquan10.com http://lapdatcameraquan7.com http://camerabinhthanh.com http://lapcameraquan9giare.com http://lapdatcameraquan11.com http://lapcameragiarethuduc.com http://lapdatcameraquan6.com http://lapdatcameraquan5.com http://lapcameraquan1.com http://cameraquan8.com http://cameranhatranggiare.com http://lapcamerahocmon.com http://lapcameragiaregovap.com http://lapcameraphunhuan.com http://cameragiarebinhduong.com http://phanphoicameragiare.com http://camerawifigiadinh.com/ http://cameraphanthietgiare.com/
aditeyabaral/sentencetransformer-bert-hinglish-small
aditeyabaral
2021-10-20T06:28:16Z
9
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # aditeyabaral/sentencetransformer-bert-hinglish-small This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-small') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-small') model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-small') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-hinglish-small) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 4617 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Edomonndo/opus-mt-ja-en-finetuned-ja-to-en_test
Edomonndo
2021-10-20T06:22:41Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model_index: - name: opus-mt-ja-en-finetuned-ja-to-en_test results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation metric: name: Bleu type: bleu value: 80.2723 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ja-en-finetuned-ja-to-en_test This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unkown dataset. It achieves the following results on the evaluation set: - Loss: 0.4737 - Bleu: 80.2723 - Gen Len: 16.5492 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | 1.1237 | 1.0 | 247 | 0.6131 | 60.9383 | 16.4152 | | 0.5395 | 2.0 | 494 | 0.5274 | 67.5705 | 16.2883 | | 0.3584 | 3.0 | 741 | 0.5122 | 71.3098 | 16.3777 | | 0.2563 | 4.0 | 988 | 0.4887 | 73.6639 | 16.401 | | 0.138 | 5.0 | 1235 | 0.4796 | 76.7942 | 16.4873 | | 0.0979 | 6.0 | 1482 | 0.4849 | 76.9404 | 16.6162 | | 0.0792 | 7.0 | 1729 | 0.4806 | 78.9831 | 16.5442 | | 0.0569 | 8.0 | 1976 | 0.4765 | 79.3461 | 16.4873 | | 0.0299 | 9.0 | 2223 | 0.4751 | 79.7901 | 16.4863 | | 0.0204 | 10.0 | 2470 | 0.4737 | 80.2723 | 16.5492 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu111 - Datasets 1.10.2 - Tokenizers 0.10.3
Bagus/wav2vec2-xlsr-greek-speech-emotion-recognition
Bagus
2021-10-20T05:38:41Z
37
1
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio", "audio-classification", "speech", "el", "dataset:aesdd", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-02T23:29:04Z
--- language: el datasets: - aesdd tags: - audio - audio-classification - speech license: apache-2.0 --- ~~~ # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa !git clone https://github.com/m3hrdadfi/soxan cd soxan ~~~ # prediction ~~~ import torch import torch.nn as nn import torch.nn.functional as F import torchaudio from transformers import AutoConfig, Wav2Vec2FeatureExtractor import librosa import IPython.display as ipd import numpy as np import pandas as pd ~~~ ~~~ device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model_name_or_path = "Bagus/wav2vec2-xlsr-greek-speech-emotion-recognition" config = AutoConfig.from_pretrained(model_name_or_path) feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path) sampling_rate = feature_extractor.sampling_rate model = Wav2Vec2ForSpeechClassification.from_pretrained(model_name_or_path).to(device) ~~~ ~~~ def speech_file_to_array_fn(path, sampling_rate): speech_array, _sampling_rate = torchaudio.load(path) resampler = torchaudio.transforms.Resample(_sampling_rate) speech = resampler(speech_array).squeeze().numpy() return speech def predict(path, sampling_rate): speech = speech_file_to_array_fn(path, sampling_rate) inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True) inputs = {key: inputs[key].to(device) for key in inputs} with torch.no_grad(): logits = model(**inputs).logits scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0] outputs = [{"Emotion": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)] return outputs ~~~ # prediction ~~~ # path for a sample path = '/data/jtes_v1.1/wav/f01/ang/f01_ang_01.wav' outputs = predict(path, sampling_rate) ~~~ ~~~ [{'Emotion': 'anger', 'Score': '98.3%'}, {'Emotion': 'disgust', 'Score': '0.0%'}, {'Emotion': 'fear', 'Score': '0.4%'}, {'Emotion': 'happiness', 'Score': '0.7%'}, {'Emotion': 'sadness', 'Score': '0.5%'}] ~~~
Manishl7/xlm-roberta-large-language-detection
Manishl7
2021-10-20T05:20:44Z
20
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
Language detection model for Nepali, English, Hindi, and Spanish, fine-tuned on xlm-roberta-large.
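The card gives no usage snippet. A minimal sketch (the sample sentences and returned label names are assumptions, not documented on the card) of querying the classifier through the Transformers pipeline:

```python
from transformers import pipeline

# Hypothetical usage sketch: query the fine-tuned checkpoint through a text-classification pipeline.
classifier = pipeline("text-classification", model="Manishl7/xlm-roberta-large-language-detection")

samples = [
    "This is written in English.",          # placeholder inputs
    "Esto está escrito en español.",
]

# Each prediction carries a label (a language id from the model config) and a confidence score.
for text, pred in zip(samples, classifier(samples)):
    print(text, "->", pred["label"], round(pred["score"], 3))
```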
aditeyabaral/sentencetransformer-distilbert-hinglish-big
aditeyabaral
2021-10-20T01:24:00Z
153
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # aditeyabaral/sentencetransformer-distilbert-hinglish-big This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-hinglish-big') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-big') model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-big') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-hinglish-big) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 4617 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
huggingtweets/iamdevloper
huggingtweets
2021-10-19T20:59:40Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/iamdevloper/1634677176847/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1178631635606151168/yIlrcg4o_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">I Am Devloper</div> <div style="text-align: center; font-size: 14px;">@iamdevloper</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from I Am Devloper. | Data | I Am Devloper | | --- | --- | | Tweets downloaded | 3244 | | Retweets | 190 | | Short tweets | 233 | | Tweets kept | 2821 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2k1120ro/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iamdevloper's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wr63mia) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wr63mia/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/iamdevloper') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
aditeyabaral/sentencetransformer-bert-hinglish-big
aditeyabaral
2021-10-19T19:38:38Z
6
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # aditeyabaral/sentencetransformer-bert-hinglish-big This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-big') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-big') model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-big') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-hinglish-big) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 4617 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
hugggof/ConvTasNet-DAMP-Vocals
hugggof
2021-10-19T19:28:08Z
0
2
null
[ "audacity", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - audacity inference: false sample_rate: 8000 --- This is an Audacity wrapper for the model, forked from the repository `groadabike/ConvTasNet_DAMP-VSEP_enhboth`, This model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid. The following info was copied directly from `groadabike/ConvTasNet_DAMP-VSEP_enhboth`: ### Description: This model was trained by Gerardo Roa Dabike using Asteroid. It was trained on the enh_both task of the DAMP-VSEP dataset. ### Training config: ```yaml data: channels: 1 n_src: 2 root_path: data sample_rate: 16000 samples_per_track: 10 segment: 3.0 task: enh_both filterbank: kernel_size: 20 n_filters: 256 stride: 10 main_args: exp_dir: exp/train_convtasnet help: None masknet: bn_chan: 256 conv_kernel_size: 3 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 4 n_src: 2 norm_type: gLN skip_chan: 256 optim: lr: 0.0003 optimizer: adam weight_decay: 0.0 positional arguments: training: batch_size: 12 early_stop: True epochs: 50 half_lr: True num_workers: 12 ``` ### Results: ```yaml si_sdr: 14.018196157142519 si_sdr_imp: 14.017103133809577 sdr: 14.498517291333885 sdr_imp: 14.463389151567865 sir: 24.149634529133372 sir_imp: 24.11450638936735 sar: 15.338597389045935 sar_imp: -137.30634122401517 stoi: 0.7639416744417206 stoi_imp: 0.1843383526963759 ``` ### License notice: This work "ConvTasNet_DAMP-VSEP_enhboth" is a derivative of DAMP-VSEP: Smule Digital Archive of Mobile Performances - Vocal Separation (Version 1.0.1) by Smule, Inc, used under Smule's Research Data License Agreement (Research only). "ConvTasNet_DAMP-VSEP_enhboth" is licensed under Attribution-ShareAlike 3.0 Unported by Gerardo Roa Dabike.
hugggof/ConvTasNet_Libri3Mix_sepnoisy_16k
hugggof
2021-10-19T19:26:57Z
0
1
null
[ "audacity", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - audacity inference: false --- This is an Audacity wrapper for the model, forked from the repository `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`, This model was trained using the Asteroid library: https://github.com/asteroid-team/asteroid. The following info was copied directly from `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`: Description: This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `sep_noisy` task of the Libri3Mix dataset. Training config: ```yml data: n_src: 3 sample_rate: 16000 segment: 3 task: sep_noisy train_dir: data/wav16k/min/train-360 valid_dir: data/wav16k/min/dev filterbank: kernel_size: 32 n_filters: 512 stride: 16 masknet: bn_chan: 128 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 3 n_src: 3 skip_chan: 128 optim: lr: 0.001 optimizer: adam weight_decay: 0.0 training: batch_size: 8 early_stop: true epochs: 200 half_lr: true num_workers: 4 ``` Results: On Libri3Mix min test set : ```yml si_sdr: 5.926151147554517 si_sdr_imp: 10.282912158535625 sdr: 6.700975236867358 sdr_imp: 10.882972447337504 sir: 15.364110064569388 sir_imp: 18.574476587171688 sar: 7.918866830474568 sar_imp: -0.9638973409971135 stoi: 0.7713777027310713 stoi_imp: 0.2078696167973911 ``` License notice: This work "ConvTasNet_Libri3Mix_sepnoisy_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov, used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/). "ConvTasNet_Libri3Mix_sepnoisy_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino
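The card only reproduces the training config and metrics. A minimal sketch, assuming the upstream Asteroid checkpoint named above (`JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k`) loads via Asteroid's `from_pretrained` helper; the audio path is a placeholder:

```python
import torch
import soundfile as sf
from asteroid.models import ConvTasNet

# Assumption: the upstream checkpoint loads through Asteroid's Hugging Face Hub integration.
model = ConvTasNet.from_pretrained("JorisCos/ConvTasNet_Libri3Mix_sepnoisy_16k")

# "mixture.wav" is a placeholder; the model was trained on 16 kHz mono audio.
mixture, sr = sf.read("mixture.wav", dtype="float32")
wav = torch.from_numpy(mixture).unsqueeze(0)  # shape (batch, time)

with torch.no_grad():
    est_sources = model(wav)  # shape (batch, n_src, time): one waveform per estimated speaker
print(est_sources.shape)
```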
hugggof/demucs_extra
hugggof
2021-10-19T19:23:31Z
0
0
null
[ "audacity", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: audacity --- ## Music Source Separation in the Waveform Domain This is the Demucs model, serialized from Facebook Research's pretrained models. From Facebook Research: Demucs is based on a U-Net convolutional architecture inspired by Wave-U-Net and SING, with GLUs, a BiLSTM between the encoder and decoder, specific initialization of weights, and transposed convolutions in the decoder. This is the `demucs_extra` version, meaning that it was trained on the MusDB dataset along with 150 extra songs of data. See [facebookresearch's repository](https://github.com/facebookresearch/demucs) for more information on Demucs.
huggingface-course/albert-tokenizer-without-normalizer
huggingface-course
2021-10-19T18:38:58Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
The purpose of this repo is to show the usefulness of saving the normalization operation used during the tokenizer training ```python from transformers import AutoTokenizer text = "This is a text with àccënts and CAPITAL LETTERS" tokenizer = AutoTokenizer.from_pretrained("albert-large-v2") print(tokenizer.convert_ids_to_tokens(tokenizer.encode(text))) # ['[CLS]', '▁this', '▁is', '▁a', '▁text', '▁with', '▁accent', 's', '▁and', '▁capital', '▁letters', '[SEP]'] tokenizer = AutoTokenizer.from_pretrained("huggingface-course/albert-tokenizer-without-normalizer") print(tokenizer.convert_ids_to_tokens(tokenizer.encode(text))) # ['[CLS]', '▁', '<unk>', 'his', '▁is', '▁a', '▁text', '▁with', '▁', '<unk>', 'cc', '<unk>', 'nts', '▁and', '▁', '<unk>', '▁', '<unk>', '[SEP]'] ```
meghana/hitalmqa-finetuned-squad
meghana
2021-10-19T17:34:53Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: hitalmqa-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hitalmqa-finetuned-squad This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
patrickvonplaten/wav2vec2-large-xlsr-turkish-demo-colab
patrickvonplaten
2021-10-19T17:18:47Z
5
2
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xlsr-turkish-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-turkish-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4055 - Wer: 0.4800 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.0179 | 4.21 | 400 | 1.4935 | 1.0249 | | 0.7075 | 8.42 | 800 | 0.4546 | 0.6071 | | 0.3072 | 12.63 | 1200 | 0.3947 | 0.5401 | | 0.2145 | 16.84 | 1600 | 0.4049 | 0.5194 | | 0.1647 | 21.05 | 2000 | 0.4199 | 0.5003 | | 0.1338 | 25.26 | 2400 | 0.4144 | 0.4859 | | 0.116 | 29.47 | 2800 | 0.4055 | 0.4800 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
doc2query/stackexchange-t5-base-v1
doc2query
2021-10-19T16:26:19Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl", "arxiv:1904.08375", "arxiv:2104.08663", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl widget: - text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." license: apache-2.0 --- # doc2query/stackexchange-t5-base-v1 This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)). It can be used for: - **Document expansion**: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example how to use docT5query with Pyserini. - **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models. ## Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration model_name = 'doc2query/stackexchange-t5-base-v1' tokenizer = T5Tokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name) text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt') outputs = model.generate( input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=5) print("Text:") print(text) print("\nGenerated Queries:") for i in range(len(outputs)): query = tokenizer.decode(outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') ``` **Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it. ## Training This model fine-tuned [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 449k training steps. For the training script, see the `train_script.py` in this repository. The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. This model was trained on a (title, best_answer_pairs) from StackExchange.
patrickvonplaten/wav2vec2-large-xlsr-turkish-demo
patrickvonplaten
2021-10-19T14:00:49Z
9
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
## XLSR-Wav2Vec2 Fine-Tuned on the Turkish Common Voice Dataset The model was fine-tuned in a Google Colab notebook for demonstration purposes. Please refer to [this blog post](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for more information about the model.
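No usage code accompanies this card. A minimal transcription sketch, assuming the repository ships the matching processor files; the audio path is a placeholder:

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "patrickvonplaten/wav2vec2-large-xlsr-turkish-demo"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes the repo ships processor files
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder path; XLSR models expect 16 kHz mono audio.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```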
doc2query/all-t5-base-v1
doc2query
2021-10-19T12:54:25Z
85
9
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:sentence-transformers/reddit-title-body", "dataset:sentence-transformers/embedding-training-data", "arxiv:1904.08375", "arxiv:2104.08663", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - sentence-transformers/reddit-title-body - sentence-transformers/embedding-training-data widget: - text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." license: apache-2.0 --- # doc2query/all-t5-base-v1 This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)). It can be used for: - **Document expansion**: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example how to use docT5query with Pyserini. - **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models. ## Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration model_name = 'doc2query/all-t5-base-v1' tokenizer = T5Tokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name) text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." input_ids = tokenizer.encode(text, max_length=384, truncation=True, return_tensors='pt') outputs = model.generate( input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=5) print("Text:") print(text) print("\nGenerated Queries:") for i in range(len(outputs)): query = tokenizer.decode(outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') ``` **Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it. ## Training This model fine-tuned [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 570k training steps. For the training script, see the `train_script.py` in this repository. The input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces. This model was trained on a large collection of datasets. For the exact datasets names and weights see the `data_config.json` in this repository. Most of the datasets are available at [https://huggingface.co/sentence-transformers](https://huggingface.co/sentence-transformers). 
The datasets include, among others: - (title, body) pairs from [Reddit](https://huggingface.co/datasets/sentence-transformers/reddit-title-body) - (title, body) pairs and (title, answer) pairs from StackExchange and Yahoo Answers! - (title, review) pairs from Amazon reviews - (query, paragraph) pairs from MS MARCO, NQ, and GooAQ - (question, duplicate_question) pairs from Quora and WikiAnswers - (title, abstract) pairs from S2ORC ## Prefix This model was trained **without a prefix**. In contrast to [doc2query/all-with_prefix-t5-base-v1](https://huggingface.co/doc2query/all-with_prefix-t5-base-v1), you cannot specify which type of transformation (answer2question, review2title, etc.) you will get. This can lead to a mixture of output types.
doc2query/all-with_prefix-t5-base-v1
doc2query
2021-10-19T12:52:47Z
1,990
10
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:sentence-transformers/reddit-title-body", "dataset:sentence-transformers/embedding-training-data", "arxiv:1904.08375", "arxiv:2104.08663", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - sentence-transformers/reddit-title-body - sentence-transformers/embedding-training-data widget: - text: "text2reddit: Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." license: apache-2.0 --- # doc2query/all-with_prefix-t5-base-v1 This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)). It can be used for: - **Document expansion**: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example how to use docT5query with Pyserini. - **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models. ## Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration model_name = 'doc2query/all-with_prefix-t5-base-v1' tokenizer = T5Tokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name) prefix = "answer2question" text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects." text = prefix+": "+text input_ids = tokenizer.encode(text, max_length=384, truncation=True, return_tensors='pt') outputs = model.generate( input_ids=input_ids, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=5) print("Text:") print(text) print("\nGenerated Queries:") for i in range(len(outputs)): query = tokenizer.decode(outputs[i], skip_special_tokens=True) print(f'{i + 1}: {query}') ``` **Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it. ## Training This model fine-tuned [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 575k training steps. For the training script, see the `train_script.py` in this repository. The input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces. This model was trained on a large collection of datasets. For the exact datasets names and weights see the `data_config.json` in this repository. Most of the datasets are available at [https://huggingface.co/sentence-transformers](https://huggingface.co/sentence-transformers). 
The datasets include, among others: - (title, body) pairs from [Reddit](https://huggingface.co/datasets/sentence-transformers/reddit-title-body) - (title, body) pairs and (title, answer) pairs from StackExchange and Yahoo Answers! - (title, review) pairs from Amazon reviews - (query, paragraph) pairs from MS MARCO, NQ, and GooAQ - (question, duplicate_question) pairs from Quora and WikiAnswers - (title, abstract) pairs from S2ORC ## Prefix This model was trained **with a prefix**: you start the text with a specific prefix that defines what type of output text you would like to receive. Depending on the prefix, the output is different. E.g. the above text about Python produces the following output: | Prefix | Output | | --- | --- | | answer2question | Why should I use python in my business? ; What is the difference between Python and.NET? ; what is the python design philosophy? | | review2title | Python a powerful and useful language ; A new and improved programming language ; Object-oriented, practical and accessibl | | abstract2title | Python: A Software Development Platform ; A Research Guide for Python X: Conceptual Approach to Programming ; Python : Language and Approach | | text2query | is python a low level language? ; what is the primary idea of python? ; is python a programming language? | These are all the available prefixes: - text2reddit - question2title - answer2question - abstract2title - review2title - news2title - text2query - question2question For the datasets and weights for the different prefixes, see `data_config.json` in this repository.
maximedb/autonlp-vaccinchat-22134694
maximedb
2021-10-19T12:50:01Z
5
0
transformers
[ "transformers", "pytorch", "tf", "roberta", "text-classification", "autonlp", "nl", "dataset:maximedb/autonlp-data-vaccinchat", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: nl widget: - text: "I love AutoNLP 🤗" datasets: - maximedb/autonlp-data-vaccinchat co2_eq_emissions: 14.525955245648218 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 22134694 - CO2 Emissions (in grams): 14.525955245648218 ## Validation Metrics - Loss: 1.7039562463760376 - Accuracy: 0.6369376479873717 - Macro F1: 0.5363181342408181 - Micro F1: 0.6369376479873717 - Weighted F1: 0.6309793486221543 - Macro Precision: 0.5533353910494714 - Micro Precision: 0.6369376479873717 - Weighted Precision: 0.676981050732216 - Macro Recall: 0.5828723356986293 - Micro Recall: 0.6369376479873717 - Weighted Recall: 0.6369376479873717 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/maximedb/autonlp-vaccinchat-22134694 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("maximedb/autonlp-vaccinchat-22134694", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("maximedb/autonlp-vaccinchat-22134694", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
DeepESP/gpt2-spanish-medium
DeepESP
2021-10-19T08:53:15Z
289
9
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "GPT-2", "Spanish", "ebooks", "nlg", "es", "dataset:ebooks", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: es tags: - GPT-2 - Spanish - ebooks - nlg datasets: - ebooks widget: - text: "Quisiera saber que va a suceder" license: mit --- # GPT2-Spanish GPT2-Spanish is a language generation model trained from scratch with 11.5GB of Spanish texts and with a Byte Pair Encoding (BPE) tokenizer that was trained for this purpose. The parameters used are the same as the medium version of the original OpenAI GPT2 model. ## Corpus This model was trained with a corpus of 11.5GB of texts corresponding to 3.5GB of Wikipedia articles and 8GB of books (narrative, short stories, theater, poetry, essays, and popularization). ## Tokenizer The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for Unicode characters) and a vocabulary size of 50257. The inputs are sequences of 1024 consecutive tokens. This tokenizer was trained from scratch with the Spanish corpus, since it was evidenced that the tokenizer of the English models presented limitations to capture the semantic relations of Spanish, due to the morphosyntactic differences between both languages. Apart from the special token "<|endoftext|>" for text ending in the OpenAI GPT-2 models, the tokens "<|talk|>", "<|ax1|>", "<|ax2|>" (..)"<|ax9|>" were included so that they can serve as prompts in future training. ## Training The model and tokenizer were trained using the Hugging Face libraries with an Nvidia Tesla V100 GPU with 16GB memory on Google Colab servers. ## Authors The model was trained by Alejandro Oñate Latorre (Spain) and Jorge Ortiz Fuentes (Chile), members of -Deep ESP-, an open-source community on Natural Language Processing in Spanish (https://t.me/joinchat/VoEp1bPrDYEexc6h). Thanks to the members of the community who collaborated with funding for the initial tests. ## Cautions The model generates texts according to the patterns learned in the training corpus. These data were not filtered, therefore, the model could generate offensive or discriminatory content.
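The card describes training but shows no inference code. A minimal sketch using the widget prompt from the card; the generation settings are illustrative, not the authors':

```python
from transformers import pipeline

# Sketch: sample Spanish continuations of the prompt shown in the card's widget.
generator = pipeline("text-generation", model="DeepESP/gpt2-spanish-medium")

prompt = "Quisiera saber que va a suceder"  # prompt taken from the model card widget
for out in generator(prompt, max_length=50, do_sample=True, num_return_sequences=2):
    print(out["generated_text"])
```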
hiiamsid/autonlp-Summarization-20684328
hiiamsid
2021-10-19T05:09:38Z
4
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autonlp", "es", "dataset:hiiamsid/autonlp-data-Summarization", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: autonlp language: es widget: - text: "I love AutoNLP 🤗" datasets: - hiiamsid/autonlp-data-Summarization co2_eq_emissions: 1133.9679082840014 --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 20684328 - CO2 Emissions (in grams): 1133.9679082840014 ## Validation Metrics - Loss: nan - Rouge1: 9.4193 - Rouge2: 0.91 - RougeL: 7.9376 - RougeLsum: 8.0076 - Gen Len: 10.65 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/hiiamsid/autonlp-Summarization-20684328 ```
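Only a cURL call is shown above. A hedged local-usage sketch mirroring the Python snippets on the other AutoNLP cards in this dump; the Spanish input text is a placeholder:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "hiiamsid/autonlp-Summarization-20684328"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder Spanish input; the card does not document the expected text domain.
text = "El Gobierno anunció hoy un nuevo paquete de medidas económicas para apoyar a las familias."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

summary_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```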
SPGT/LiveSafe-DialoGPT
SPGT
2021-10-19T03:47:32Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- tags: - conversational license: mit --- ## LiveSafe chatbot response generation model based on DialoGPT
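The card contains no usage example. A minimal sketch of a single chat turn following the usual DialoGPT interaction pattern; the user message is a placeholder and the pattern itself is an assumption, not documented on the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SPGT/LiveSafe-DialoGPT")
model = AutoModelForCausalLM.from_pretrained("SPGT/LiveSafe-DialoGPT")

# Encode a placeholder user message followed by the end-of-sequence token, as DialoGPT-style models expect.
user_input = tokenizer.encode("Is it safe to walk here at night?" + tokenizer.eos_token, return_tensors="pt")

reply_ids = model.generate(user_input, max_length=100, pad_token_id=tokenizer.eos_token_id)
# Decode only the newly generated tokens (the bot's reply).
print(tokenizer.decode(reply_ids[:, user_input.shape[-1]:][0], skip_special_tokens=True))
```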
bdwjaya/t5-small-finetuned-xsum
bdwjaya
2021-10-19T03:34:18Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum model-index: - name: t5-small-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
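The auto-generated card lists only hyperparameters. A minimal inference sketch, assuming the checkpoint behaves like any XSum-fine-tuned seq2seq summarizer; the article text is a placeholder:

```python
from transformers import pipeline

# Sketch: treat the checkpoint as an XSum-style abstractive summarizer.
summarizer = pipeline("summarization", model="bdwjaya/t5-small-finetuned-xsum")

article = (  # placeholder article text
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and is the tallest structure in Paris, receiving millions of visitors every year."
)
print(summarizer(article, max_length=30, min_length=5, do_sample=False)[0]["summary_text"])
```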
tiennvcs/distilbert-base-uncased-finetuned-squad
tiennvcs
2021-10-19T02:41:19Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
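As with the other auto-generated trainer cards, no usage code is given. A minimal extractive question-answering sketch; the question and context are placeholders:

```python
from transformers import pipeline

# Sketch of extractive question answering with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="tiennvcs/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",   # placeholder question
    context="The model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```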
yazdipour/sparql-qald9-t5-base-2021-10-19_00-15
yazdipour
2021-10-19T00:37:58Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: sparql-qald9-t5-base-2021-10-19_00-15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sparql-qald9-t5-base-2021-10-19_00-15 This model is a fine-tuned version of [yazdipour/text-to-sparql-t5-base-2021-10-18_16-15](https://huggingface.co/yazdipour/text-to-sparql-t5-base-2021-10-18_16-15) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:-----------------------------------------------------------------------------:|:-------:| | No log | 1.0 | 51 | 1.8998 | 19.0 | 0.3634 | 0.0387 | 0.1963 | 9.9428 | [71.94645844952593, 49.30006086427267, 35.36503683858004, 28.145941921072225] | 0.2294 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
yazdipour/text-to-sparql-t5-small-2021-10-18_23-00
yazdipour
2021-10-19T00:01:17Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - null model-index: - name: text-to-sparql-t5-small-2021-10-18_23-00 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text-to-sparql-t5-small-2021-10-18_23-00 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2284 - Gen Len: 19.0 - Bertscorer-p: 0.5644 - Bertscorer-r: 0.0815 - Bertscorer-f1: 0.3120 - Sacrebleu-score: 5.5690 - Sacrebleu-precisions: [89.6746395837541, 79.06489438259324, 71.93407601726916, 67.21220306665607] - Bleu-bp: 0.0728 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:---------------------------------------------------------------------------:|:-------:| | 0.2808 | 1.0 | 4772 | 0.2284 | 19.0 | 0.5644 | 0.0815 | 0.3120 | 5.5690 | [89.6746395837541, 79.06489438259324, 71.93407601726916, 67.21220306665607] | 0.0728 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
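The card documents metrics and hyperparameters but no inference code. A hedged sketch of question-to-SPARQL generation; the exact input format used during fine-tuning is not documented, so the plain-question prompt is an assumption:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "yazdipour/text-to-sparql-t5-small-2021-10-18_23-00"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder question; whether the model expects a task prefix is not documented on the card.
question = "Who is the author of Le Petit Prince?"
inputs = tokenizer(question, return_tensors="pt")

outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected to be a SPARQL-like query
```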
castorini/tct_colbert-v2-msmarco-cqe
castorini
2021-10-18T23:34:32Z
9
2
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
This model reproduces the Contextualized Query Embeddings for Conversational Search approach described in the following paper: > Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. [Contextualized Query Embeddings for Conversational Search.](https://cs.uwaterloo.ca/~jimmylin/publications/Lin_etal_EMNLP2021.pdf) EMNLP, Nov 2021. Only the query encoder is fine-tuned; the passage encoder is kept frozen. The starting point is the [tct_colbert-msmarco](https://huggingface.co/castorini/tct_colbert-msmarco/tree/main) checkpoint. Detailed usage instructions will be available soon in [Chatty Goose](https://github.com/castorini/chatty-goose). You can also check the TensorFlow fine-tuning and inference code in our [CQE repo](https://github.com/castorini/CQE).
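Until the Chatty Goose integration is released, a rough feature-extraction sketch with plain `transformers`; mean pooling over non-padding tokens is an assumption here, and the exact query preprocessing used by CQE (e.g. how the conversation history is concatenated) follows the paper and is not reproduced:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "castorini/tct_colbert-v2-msmarco-cqe"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# A made-up conversational query; CQE's history concatenation is defined in the paper
query = "what are its side effects?"
enc = tokenizer(query, return_tensors="pt")

with torch.no_grad():
    hidden = model(**enc).last_hidden_state  # (1, seq_len, hidden_size)

# Mean-pool over non-padding tokens (an assumed, TCT-ColBERT-style pooling)
mask = enc["attention_mask"].unsqueeze(-1).float()
query_embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(query_embedding.shape)
```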
mmcquade11/autonlp-imdb-test-21134453
mmcquade11
2021-10-18T17:47:59Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "en", "dataset:mmcquade11/autonlp-data-imdb-test", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - mmcquade11/autonlp-data-imdb-test co2_eq_emissions: 38.102565360610484 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 21134453 - CO2 Emissions (in grams): 38.102565360610484 ## Validation Metrics - Loss: 0.172550767660141 - Accuracy: 0.9355 - Precision: 0.9362853135644159 - Recall: 0.9346 - AUC: 0.98267064 - F1: 0.9354418977079372 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/mmcquade11/autonlp-imdb-test-21134453 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("mmcquade11/autonlp-imdb-test-21134453", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("mmcquade11/autonlp-imdb-test-21134453", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
JonatanGk/roberta-base-ca-finetuned-hate-speech-offensive-catalan
JonatanGk
2021-10-18T17:10:50Z
6
1
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-base-ca-finetuned-mnli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-ca-finetuned-mnli This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4137 - Accuracy: 0.8778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3699 | 1.0 | 1255 | 0.3712 | 0.8669 | | 0.3082 | 2.0 | 2510 | 0.3401 | 0.8766 | | 0.2375 | 3.0 | 3765 | 0.4137 | 0.8778 | | 0.1889 | 4.0 | 5020 | 0.4671 | 0.8733 | | 0.1486 | 5.0 | 6275 | 0.5205 | 0.8749 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
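A minimal classification sketch with plain `transformers`; the Catalan example sentence is made up, and the label names are whatever this checkpoint's config defines (they are not documented in the card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "JonatanGk/roberta-base-ca-finetuned-hate-speech-offensive-catalan"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# A made-up Catalan sentence to classify
text = "M'agrada molt aquest llibre!"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(p.item(), 4))
```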
lewtun/results
lewtun
2021-10-18T13:16:42Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: results results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.925 - name: F1 type: f1 value: 0.9251012149383893 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2147 - Accuracy: 0.925 - F1: 0.9251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8221 | 1.0 | 250 | 0.3106 | 0.9125 | 0.9102 | | 0.2537 | 2.0 | 500 | 0.2147 | 0.925 | 0.9251 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.13.0 - Tokenizers 0.10.3
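A short pipeline sketch for trying the checkpoint; the input sentence is made up, and the label names depend on how the checkpoint maps the six emotion classes:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lewtun/results")

# Expect one of the emotion-dataset classes (e.g. joy, sadness, anger) as the top label
print(classifier("I can't wait to see my friends this weekend!"))
```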
yazdipour/text-to-sparql-t5-small-2021-10-18_09-32
yazdipour
2021-10-18T10:33:05Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - null metrics: - f1 model-index: - name: text-to-sparql-t5-small-2021-10-18_09-32 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation metrics: - name: F1 type: f1 value: 0.26458749175071716 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text-to-sparql-t5-small-2021-10-18_09-32 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5119 - Gen Len: 19.0 - P: 0.4884 - R: 0.0583 - F1: 0.2646 - Score: 3.5425 - Bleu-precisions: [82.80295919500207, 62.695879280325016, 50.2215675749897, 44.03052700138759] - Bleu-bp: 0.0609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:| | 0.7088 | 1.0 | 4772 | 0.5119 | 19.0 | 0.4884 | 0.0583 | 0.2646 | 3.5425 | [82.80295919500207, 62.695879280325016, 50.2215675749897, 44.03052700138759] | 0.0609 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy
CAMeL-Lab
2021-10-18T10:18:01Z
134
2
transformers
[ "transformers", "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: 'عامل ايه ؟' --- # CAMeLBERT-CA POS-EGY Model ## Model description **CAMeLBERT-CA POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model. For the fine-tuning, we used the ARZTB dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-CA POS-EGY model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy') >>> text = 'عامل ايه ؟' >>> pos(text) [{'entity': 'adj', 'score': 0.9990943, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.99863535, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.99990875, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
ahmetbagci/bert2bert-turkish-paraphrase-generation
ahmetbagci
2021-10-18T10:17:40Z
67
10
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "paraphrasing", "seq2seq", "bert", "tr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - tr tags: - paraphrasing - encoder-decoder - seq2seq - bert --- # Bert2Bert Turkish Paraphrase Generation ## INISTA 2021: Comparison of Turkish Paraphrase Generation Models ## Dataset The dataset used in model training was created by combining a translation of the QQP dataset with a manually generated dataset. Dataset [Link](https://drive.google.com/file/d/1-2l9EwIzXZ7fUkNW1vdeF3lzQp2pygp_/view?usp=sharing) ## How To Use ```python from transformers import BertTokenizerFast, EncoderDecoderModel tokenizer = BertTokenizerFast.from_pretrained("dbmdz/bert-base-turkish-cased") model = EncoderDecoderModel.from_pretrained("ahmetbagci/bert2bert-turkish-paraphrase-generation") text = "son model arabalar çevreye daha mı az zarar veriyor?" input_ids = tokenizer(text, return_tensors="pt").input_ids output_ids = model.generate(input_ids) print(tokenizer.decode(output_ids[0], skip_special_tokens=True)) # sample output # son model arabalar çevre için daha az zararlı mı? ``` ## Cite ```bibtex @INPROCEEDINGS{9548335, author={Bağcı, Ahmet and Amasyali, Mehmet Fatih}, booktitle={2021 International Conference on INnovations in Intelligent SysTems and Applications (INISTA)}, title={Comparison of Turkish Paraphrase Generation Models}, year={2021}, volume={}, number={}, pages={1-6}, doi={10.1109/INISTA52262.2021.9548335} } ```
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf
CAMeL-Lab
2021-10-18T10:16:30Z
20
1
transformers
[ "transformers", "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: 'شلونك ؟ شخبارك ؟' --- # CAMeLBERT-Mix POS-GLF Model ## Model description **CAMeLBERT-Mix POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model. For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset . Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-Mix POS-GLF model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf') >>> text = 'شلونك ؟ شخبارك ؟' >>> pos(text) [{'entity': 'pron_interrog', 'score': 0.82657206, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'prep', 'score': 0.9771731, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999568, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.9977217, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.99993783, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'prep', 'score': 0.5309442, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999575, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
CAMeL-Lab
2021-10-18T10:15:37Z
9
1
transformers
[ "transformers", "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: 'عامل ايه ؟' --- # CAMeLBERT-DA POS-EGY Model ## Model description **CAMeLBERT-DA POS-EGY Model** is an Egyptian Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-DA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model. For the fine-tuning, we used the ARZTB dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-DA POS-EGY model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy') >>> text = 'عامل ايه ؟' >>> pos(text) [{'entity': 'adj', 'score': 0.99843216, 'index': 1, 'word': 'عامل', 'start': 0, 'end': 4}, {'entity': 'pron_interrog', 'score': 0.9990083, 'index': 2, 'word': 'ايه', 'start': 5, 'end': 8}, {'entity': 'punc', 'score': 0.82973784, 'index': 3, 'word': '؟', 'start': 9, 'end': 10}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf
CAMeL-Lab
2021-10-18T10:13:34Z
10
1
transformers
[ "transformers", "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: 'شلونك ؟ شخبارك ؟' --- # CAMeLBERT-CA POS-GLF Model ## Model description **CAMeLBERT-CA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-CA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model. For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-CA POS-GLF model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf') >>> text = 'شلونك ؟ شخبارك ؟' >>> pos(text) [{'entity': 'noun', 'score': 0.99572617, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'noun', 'score': 0.9411187, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999661, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.99286526, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.9983397, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'noun', 'score': 0.9609381, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.9999668, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf
CAMeL-Lab
2021-10-18T09:57:26Z
5
0
transformers
[ "transformers", "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: 'شلونك ؟ شخبارك ؟' --- # CAMeLBERT-MSA POS-GLF Model ## Model description **CAMeLBERT-MSA POS-GLF Model** is a Gulf Arabic POS tagging model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model. For the fine-tuning, we used the [Gumar](https://camel.abudhabi.nyu.edu/annotated-gumar-corpus/) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-MSA POS-GLF model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf') >>> text = 'شلونك ؟ شخبارك ؟' >>> pos(text) [{'entity': 'adv_interrog', 'score': 0.5622676, 'index': 1, 'word': 'شلون', 'start': 0, 'end': 4}, {'entity': 'prep', 'score': 0.99969727, 'index': 2, 'word': '##ك', 'start': 4, 'end': 5}, {'entity': 'punc', 'score': 0.9999299, 'index': 3, 'word': '؟', 'start': 6, 'end': 7}, {'entity': 'noun', 'score': 0.9843815, 'index': 4, 'word': 'ش', 'start': 8, 'end': 9}, {'entity': 'noun', 'score': 0.9998467, 'index': 5, 'word': '##خبار', 'start': 9, 'end': 13}, {'entity': 'prep', 'score': 0.9993611, 'index': 6, 'word': '##ك', 'start': 13, 'end': 14}, {'entity': 'punc', 'score': 0.99993765, 'index': 7, 'word': '؟', 'start': 15, 'end': 16}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
tal-yifat/injury-report-distilgpt2-test
tal-yifat
2021-10-18T02:15:31Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: injury-report-distilgpt2-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # injury-report-distilgpt2-test This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.5243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 380 | 3.6525 | | 3.9116 | 2.0 | 760 | 3.5507 | | 3.6015 | 3.0 | 1140 | 3.5243 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
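A minimal generation sketch; the prompt and sampling settings are assumptions, not part of the card:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="tal-yifat/injury-report-distilgpt2-test")

# A made-up injury-report style prompt
prompt = "The employee was repairing the conveyor belt when"
result = generator(prompt, max_length=60, do_sample=True, top_p=0.95, num_return_sequences=1)
print(result[0]["generated_text"])
```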
airKlizz/bart-large-multi-fr-wiki-news
airKlizz
2021-10-17T20:10:41Z
4
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: fr license: mit ---
airKlizz/bert2bert-multi-fr-wiki-news
airKlizz
2021-10-17T20:10:30Z
7
0
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: fr license: mit ---
airKlizz/t5-base-multi-fr-wiki-news
airKlizz
2021-10-17T20:09:42Z
16
0
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "fr", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: fr license: mit ---
yazdipour/text-to-sparql-t5-small-2021-10-17_18-47
yazdipour
2021-10-17T19:48:35Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - null metrics: - f1 model-index: - name: text-to-sparql-t5-small-2021-10-17_18-47 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation metrics: - name: F1 type: f1 value: 0.2345714420080185 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text-to-sparql-t5-small-2021-10-17_18-47 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5258 - Gen Len: 19.0 - P: 0.4582 - R: 0.0278 - F1: 0.2346 - Score: 3.5848 - Bleu-precisions: [82.57739877107295, 62.13358857503344, 48.43062944877681, 41.90172321318059] - Bleu-bp: 0.0631 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:| | 0.7575 | 1.0 | 4807 | 0.5258 | 19.0 | 0.4582 | 0.0278 | 0.2346 | 3.5848 | [82.57739877107295, 62.13358857503344, 48.43062944877681, 41.90172321318059] | 0.0631 | ### Framework versions - Transformers 4.10.0 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
huggingartists/bushido-zho
huggingartists
2021-10-17T16:58:48Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/bushido-zho", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/bushido-zho tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/6e5b165de8561df37790229c26b25692.959x959x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">BUSHIDO ZHO</div> <a href="https://genius.com/artists/bushido-zho"> <div style="text-align: center; font-size: 14px;">@bushido-zho</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from BUSHIDO ZHO. Dataset is available [here](https://huggingface.co/datasets/huggingartists/bushido-zho). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/bushido-zho") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/vtfjc0qi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on BUSHIDO ZHO's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/iwclgqsj) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/iwclgqsj/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/bushido-zho') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/bushido-zho") model = AutoModelWithLMHead.from_pretrained("huggingartists/bushido-zho") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
ltrctelugu/ltrc-roberta
ltrctelugu
2021-10-17T16:45:03Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
RoBERTa trained on 8.8 million Telugu sentences.
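A minimal fill-mask sketch; the Telugu example sentence is made up, and the mask token is taken from the tokenizer rather than hard-coded, since the card does not document it:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="ltrctelugu/ltrc-roberta")

# Build the input around the checkpoint's own mask token
mask = fill.tokenizer.mask_token
for pred in fill(f"నేను నిన్న ఒక {mask} చదివాను."):
    print(pred["token_str"], round(pred["score"], 4))
```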
Anorak/nirvana
Anorak
2021-10-17T15:48:15Z
5
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "unk", "dataset:Anorak/autonlp-data-Niravana-test2", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - Anorak/autonlp-data-Niravana-test2 co2_eq_emissions: 4.214012748213151 --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 20384195 - CO2 Emissions (in grams): 4.214012748213151 ## Validation Metrics - Loss: 1.0120062828063965 - Rouge1: 41.1808 - Rouge2: 26.2564 - RougeL: 31.3106 - RougeLsum: 38.9991 - Gen Len: 58.45 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Anorak/autonlp-Niravana-test2-20384195 ```
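Mirroring the Python usage shown on other AutoNLP cards, a hedged seq2seq sketch for this summarization checkpoint; whether `use_auth_token` is required and the generation settings are assumptions:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("Anorak/nirvana", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Anorak/nirvana", use_auth_token=True)

# Summarize a made-up passage
text = "I love AutoNLP because it trains, evaluates and deploys models with almost no setup."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```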
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry
CAMeL-Lab
2021-10-17T12:09:56Z
5
0
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم' --- # CAMeLBERT-DA Poetry Classification Model ## Model description **CAMeLBERT-DA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model. For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-DA Poetry Classification model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-poetry') >>> # A list of verses where each verse consists of two parts. >>> verses = [ ['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'], ['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا'] ] >>> # A function that concatenates the halves of each verse by using the [SEP] token. >>> join_verse = lambda half: ' [SEP] '.join(half) >>> # Apply this to all the verses in the list. >>> verses = [join_verse(verse) for verse in verses] >>> poetry(verses) [{'label': 'البسيط', 'score': 0.9874765276908875}, {'label': 'السلسلة', 'score': 0.6877778172492981}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry
CAMeL-Lab
2021-10-17T12:09:38Z
13
4
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: 'الخيل والليل والبيداء تعرفني [SEP] والسيف والرمح والقرطاس والقلم' --- # CAMeLBERT-CA Poetry Classification Model ## Model description **CAMeLBERT-CA Poetry Classification Model** is a poetry classification model that was built by fine-tuning the [CAMeLBERT Classical Arabic (CA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca/) model. For the fine-tuning, we used the [APCD](https://arxiv.org/pdf/1905.05700.pdf) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-CA Poetry Classification model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> poetry = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry') >>> # A list of verses where each verse consists of two parts. >>> verses = [ ['الخيل والليل والبيداء تعرفني' ,'والسيف والرمح والقرطاس والقلم'], ['قم للمعلم وفه التبجيلا' ,'كاد المعلم ان يكون رسولا'] ] >>> # A function that concatenates the halves of each verse by using the [SEP] token. >>> join_verse = lambda half: ' [SEP] '.join(half) >>> # Apply this to all the verses in the list. >>> verses = [join_verse(verse) for verse in verses] >>> poetry(verses) [{'label': 'البسيط', 'score': 0.9845284819602966}, {'label': 'الكامل', 'score': 0.752918004989624}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
castorini/monot5-large-msmarco
castorini
2021-10-17T11:20:56Z
576
3
transformers
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
This model is a T5-large reranker fine-tuned on the MS MARCO passage dataset for 100k steps (or 10 epochs). For more details on how to use it, check the following links: - [A simple reranking example](https://github.com/castorini/pygaggle#a-simple-reranking-example) - [Rerank MS MARCO passages](https://github.com/castorini/pygaggle/blob/master/docs/experiments-msmarco-passage-subset.md) - [Rerank Robust04 documents](https://github.com/castorini/pygaggle/blob/master/docs/experiments-robust04-monot5-gpu.md) Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
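Beyond the PyGaggle links above, a bare-bones scoring sketch with plain `transformers` that follows the monoT5 input template from the paper (`Query: ... Document: ... Relevant:`) and compares the logits of the `true`/`false` target tokens; the query and passage are made up:

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_id = "castorini/monot5-large-msmarco"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

query = "what causes tides"  # made-up query
passage = "Tides are caused by the gravitational pull of the moon and the sun on the oceans."
prompt = f"Query: {query} Document: {passage} Relevant:"

inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

with torch.no_grad():
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, 0]

true_id = tokenizer.encode("true", add_special_tokens=False)[0]
false_id = tokenizer.encode("false", add_special_tokens=False)[0]
# Relevance score = log P("true") over the {true, false} pair, as in the monoT5 paper
score = torch.log_softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(score)
```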
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6
CAMeL-Lab
2021-10-17T11:17:53Z
30
1
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: "عامل ايه ؟" --- # CAMeLBERT-Mix DID MADAR Corpus6 Model ## Model description **CAMeLBERT-Mix DID MADAR Corpus6 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model. For the fine-tuning, we used the [MADAR Corpus 6](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 6 labels. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-Mix DID MADAR Corpus6 model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6') >>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟'] >>> did(sentences) [{'label': 'CAI', 'score': 0.9996405839920044}, {'label': 'DOH', 'score': 0.9997853636741638}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26
CAMeL-Lab
2021-10-17T11:17:23Z
29
3
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: "عامل ايه ؟" --- # CAMeLBERT-Mix DID Madar Corpus26 Model ## Model description **CAMeLBERT-Mix DID Madar Corpus26 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model. For the fine-tuning, we used the [MADAR Corpus 26](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 26 labels. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-Mix DID Madar Corpus26 model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26') >>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟'] >>> did(sentences) [{'label': 'CAI', 'score': 0.8751305937767029}, {'label': 'DOH', 'score': 0.9867215156555176}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment
CAMeL-Lab
2021-10-17T11:15:54Z
7,487
43
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: "أنا بخير" --- # CAMeLBERT-DA SA Model ## Model description **CAMeLBERT-DA SA Model** is a Sentiment Analysis (SA) model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model. For the fine-tuning, we used the [ASTD](https://aclanthology.org/D15-1299.pdf), [ArSAS](http://lrec-conf.org/workshops/lrec2018/W30/pdf/22_W30.pdf), and [SemEval](https://aclanthology.org/S17-2088.pdf) datasets. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)." * Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-DA SA model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component (*recommended*) or as part of the transformers pipeline. #### How to use To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) SA component: ```python >>> from camel_tools.sentiment import SentimentAnalyzer >>> sa = SentimentAnalyzer("CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment") >>> sentences = ['أنا بخير', 'أنا لست بخير'] >>> sa.predict(sentences) >>> ['positive', 'negative'] ``` You can also use the SA model directly with a transformers pipeline: ```python >>> from transformers import pipeline >>> sa = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-sentiment') >>> sentences = ['أنا بخير', 'أنا لست بخير'] >>> sa(sentences) [{'label': 'positive', 'score': 0.9616648554801941}, {'label': 'negative', 'score': 0.9779177904129028}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-da-ner
CAMeL-Lab
2021-10-17T11:13:27Z
49
0
transformers
[ "transformers", "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: "إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع" --- # CAMeLBERT-DA NER Model ## Model description **CAMeLBERT-DA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Dialectal Arabic (DA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model. For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)." * Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-DA NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline. #### How to use To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component: ```python >>> from camel_tools.ner import NERecognizer >>> from camel_tools.tokenizers.word import simple_word_tokenize >>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-da-ner') >>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع') >>> ner.predict_sentence(sentence) >>> ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O'] ``` You can also use the NER model directly with a transformers pipeline: ```python >>> from transformers import pipeline >>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-da-ner') >>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع") [{'word': 'أبوظبي', 'score': 0.9895730018615723, 'entity': 'B-LOC', 'index': 2, 'start': 6, 'end': 12}, {'word': 'الإمارات', 'score': 0.8156259655952454, 'entity': 'B-LOC', 'index': 8, 'start': 33, 'end': 41}, {'word': 'العربية', 'score': 0.890906810760498, 'entity': 'I-LOC', 'index': 9, 'start': 42, 'end': 49}, {'word': 'المتحدة', 'score': 0.8169114589691162, 'entity': 'I-LOC', 'index': 10, 'start': 50, 'end': 57}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a da of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. 
Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-msa-ner
CAMeL-Lab
2021-10-17T11:07:13Z
1,851
5
transformers
[ "transformers", "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: "إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع" --- # CAMeLBERT MSA NER Model ## Model description **CAMeLBERT MSA NER Model** is a Named Entity Recognition (NER) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model. For the fine-tuning, we used the [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678). "* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT MSA NER model directly as part of our [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component (*recommended*) or as part of the transformers pipeline. #### How to use To use the model with the [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) NER component: ```python >>> from camel_tools.ner import NERecognizer >>> from camel_tools.tokenizers.word import simple_word_tokenize >>> ner = NERecognizer('CAMeL-Lab/bert-base-arabic-camelbert-msa-ner') >>> sentence = simple_word_tokenize('إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع') >>> ner.predict_sentence(sentence) >>> ['O', 'B-LOC', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'O'] ``` You can also use the NER model directly with a transformers pipeline: ```python >>> from transformers import pipeline >>> ner = pipeline('ner', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-ner') >>> ner("إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع") [{'word': 'أبوظبي', 'score': 0.9895730018615723, 'entity': 'B-LOC', 'index': 2, 'start': 6, 'end': 12}, {'word': 'الإمارات', 'score': 0.8156259655952454, 'entity': 'B-LOC', 'index': 8, 'start': 33, 'end': 41}, {'word': 'العربية', 'score': 0.890906810760498, 'entity': 'I-LOC', 'index': 9, 'start': 42, 'end': 49}, {'word': 'المتحدة', 'score': 0.8169114589691162, 'entity': 'I-LOC', 'index': 10, 'start': 50, 'end': 57}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. 
Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi
CAMeL-Lab
2021-10-17T11:05:21Z
33
0
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: "عامل ايه ؟" --- # CAMeLBERT-MSA DID NADI Model ## Model description **CAMeLBERT-MSA DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT Modern Standard Arabic (MSA)](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model. For the fine-tuning, we used the [NADI Country-level](https://sites.google.com/view/nadi-shared-task) dataset, which includes 21 labels. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-MSA DID NADI model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi') >>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟'] >>> did(sentences) [{'label': 'Egypt', 'score': 0.9242768287658691}, {'label': 'Saudi_Arabia', 'score': 0.3400847613811493}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
CAMeL-Lab
2021-10-17T11:05:12Z
5
0
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: - ar license: apache-2.0 widget: - text: "عامل ايه ؟" --- # CAMeLBERT-Mix DID NADI Model ## Model description **CAMeLBERT-Mix DID NADI Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-Mix](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-mix/) model. For the fine-tuning, we used the [NADI Country-level](https://sites.google.com/view/nadi-shared-task) dataset, which includes 21 labels. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-Mix DID NADI model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi') >>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟'] >>> did(sentences) [{'label': 'Egypt', 'score': 0.920274019241333}, {'label': 'Saudi_Arabia', 'score': 0.26750022172927856}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
lucius/distilroberta-base-finetuned-wikitext2
lucius
2021-10-17T10:40:14Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0827 | 1.0 | 2406 | 1.9227 | | 1.9993 | 2.0 | 4812 | 1.8828 | | 1.9614 | 3.0 | 7218 | 1.8172 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
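The hyperparameters listed above can be expressed as `TrainingArguments` for the `transformers` `Trainer`. The sketch below mirrors the listed values; the output directory and evaluation strategy are assumptions, and the model/dataset/`Trainer` wiring is omitted.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir and evaluation_strategy are assumptions; Trainer/model/dataset wiring is omitted.
training_args = TrainingArguments(
    output_dir="distilroberta-base-finetuned-wikitext2",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    evaluation_strategy="epoch",  # assumed from the per-epoch validation losses above
)
```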
lucius/distilgpt2-finetuned-wikitext2
lucius
2021-10-17T09:45:49Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7608 | 1.0 | 2334 | 3.6655 | | 3.6335 | 2.0 | 4668 | 3.6455 | | 3.6066 | 3.0 | 7002 | 3.6424 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
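A minimal generation sketch with the `transformers` text-generation pipeline; the prompt and generation settings are illustrative and not taken from the training run.

```python
from transformers import pipeline

# Sketch: sample a continuation from the fine-tuned checkpoint.
generator = pipeline("text-generation", model="lucius/distilgpt2-finetuned-wikitext2")
print(generator("The history of natural language processing", max_length=50, num_return_sequences=1))
```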
gwima/ryan-sackmott
gwima
2021-10-17T03:15:08Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- tags: - conversational ---
amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061
amansolanki
2021-10-17T00:32:35Z
1,906
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "en", "dataset:amansolanki/autonlp-data-Tweet-Sentiment-Extraction", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - amansolanki/autonlp-data-Tweet-Sentiment-Extraction co2_eq_emissions: 3.651199395353127 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 20114061 - CO2 Emissions (in grams): 3.651199395353127 ## Validation Metrics - Loss: 0.5046541690826416 - Accuracy: 0.8036219581211093 - Macro F1: 0.807095210403678 - Micro F1: 0.8036219581211093 - Weighted F1: 0.8039634739225368 - Macro Precision: 0.8076842795233988 - Micro Precision: 0.8036219581211093 - Weighted Precision: 0.8052135235094771 - Macro Recall: 0.8075241470527056 - Micro Recall: 0.8036219581211093 - Weighted Recall: 0.8036219581211093 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
gagandeepkundi/latam-question-quality
gagandeepkundi
2021-10-16T16:32:19Z
5
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "es", "dataset:gagandeepkundi/autonlp-data-text-classification", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: es widget: - text: "I love AutoNLP 🤗" datasets: - gagandeepkundi/autonlp-data-text-classification co2_eq_emissions: 20.790169878009916 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 19984005 - CO2 Emissions (in grams): 20.790169878009916 ## Validation Metrics - Loss: 0.06693269312381744 - Accuracy: 0.9789 - Precision: 0.9843244336569579 - Recall: 0.9733 - AUC: 0.99695552 - F1: 0.9787811745776348 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/gagandeepkundi/autonlp-text-classification-19984005 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("gagandeepkundi/autonlp-text-classification-19984005", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("gagandeepkundi/autonlp-text-classification-19984005", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
lewtun/xlm-roberta-base-finetuned-marc-de
lewtun
2021-10-16T11:38:18Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc-de results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9934 - Mae: 0.4867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1514 | 1.0 | 308 | 1.0455 | 0.5221 | | 0.9997 | 2.0 | 616 | 0.9934 | 0.4867 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
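A minimal inference sketch with the `transformers` text-classification pipeline; the German example review is illustrative, and the label names depend on the configuration saved with the checkpoint (the card itself only reports loss and MAE).

```python
from transformers import pipeline

# Sketch: score a German product review with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="lewtun/xlm-roberta-base-finetuned-marc-de")
print(classifier("Der Artikel kam schnell an und funktioniert einwandfrei."))
```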
Yuri/xlm-roberta-base-finetuned-marc
Yuri
2021-10-16T11:36:47Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9825 - Mae: 0.4956 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1432 | 1.0 | 308 | 1.0559 | 0.5133 | | 0.9883 | 2.0 | 616 | 0.9825 | 0.4956 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
ashish-chouhan/xlm-roberta-base-finetuned-marc
ashish-chouhan
2021-10-16T11:34:29Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi model-index: - name: xlm-roberta-base-finetuned-marc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.0171 - Mae: 0.5310 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1404 | 1.0 | 308 | 1.0720 | 0.5398 | | 0.9805 | 2.0 | 616 | 1.0171 | 0.5310 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
DongHyoungLee/distilbert-base-uncased-finetuned-cola
DongHyoungLee
2021-10-16T11:30:42Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.535587402888147 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7335 - Matthews Correlation: 0.5356 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5309 | 1.0 | 535 | 0.5070 | 0.4239 | | 0.3568 | 2.0 | 1070 | 0.5132 | 0.4913 | | 0.24 | 3.0 | 1605 | 0.6081 | 0.4990 | | 0.1781 | 4.0 | 2140 | 0.7335 | 0.5356 | | 0.1243 | 5.0 | 2675 | 0.8705 | 0.5242 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
google/muril-large-cased
google
2021-10-16T03:28:16Z
5,437
17
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:1810.04805", "arxiv:1911.02116", "arxiv:2003.11080", "arxiv:2009.05166", "arxiv:2103.10730", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# MuRIL Large Multilingual Representations for Indian Languages : A BERT Large (24L) model pre-trained on 17 Indian languages, and their transliterated counterparts. ## Overview This model uses a BERT large architecture [1] pretrained from scratch using the Wikipedia [2], Common Crawl [3], PMINDIA [4] and Dakshina [5] corpora for 17 [6] Indian languages. We use a training paradigm similar to multilingual bert, with a few modifications as listed: * We include translation and transliteration segment pairs in training as well. * We keep an exponent value of 0.3 and not 0.7 for upsampling, shown to enhance low-resource performance. [7] See the Training section for more details. ## Training The MuRIL model is pre-trained on monolingual segments as well as parallel segments as detailed below : * Monolingual Data : We make use of publicly available corpora from Wikipedia and Common Crawl for 17 Indian languages. * Parallel Data : We have two types of parallel data : * Translated Data : We obtain translations of the above monolingual corpora using the Google NMT pipeline. We feed translated segment pairs as input. We also make use of the publicly available PMINDIA corpus. * Transliterated Data : We obtain transliterations of Wikipedia using the IndicTrans [8] library. We feed transliterated segment pairs as input. We also make use of the publicly available Dakshina dataset. We keep an exponent value of 0.3 to calculate duplication multiplier values for upsampling of lower resourced languages and set dupe factors accordingly. Note, we limit transliterated pairs to Wikipedia only. The model was trained using a self-supervised masked language modeling task. We do whole word masking with a maximum of 80 predictions. The model was trained for 1500K steps, with a batch size of 8192, and a max sequence length of 512. ### Trainable parameters All parameters in the module are trainable, and fine-tuning all parameters is the recommended practice. ## Uses & Limitations This model is intended to be used for a variety of downstream NLP tasks for Indian languages. This model is trained on transliterated data as well, a phenomenon commonly observed in the Indian context. This model is not expected to perform well on languages other than the ones used in pre-training, i.e. 17 Indian languages. ## Evaluation We provide the results of fine-tuning this model on a set of downstream tasks.<br/> We choose these tasks from the XTREME benchmark, with evaluation done on Indian language test-sets.<br/> All results are computed in a zero-shot setting, with English being the high resource training set language.<br/> The results for XLM-R (Large) are taken from the XTREME paper [9]. 
* Shown below are results on datasets from the XTREME benchmark (in %) <br/> PANX (F1) | bn | en | hi | ml | mr | ta | te | ur | Average :------------ | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ------: XLM-R (large) | 78.8 | 84.7 | 73.0 | 67.8 | 68.1 | 59.5 | 55.8 | 56.4 | 68.0 MuRIL (large) | 85.8 | 85.0 | 78.3 | 75.6 | 77.3 | 71.1 | 65.6 | 83.0 | 77.7 <br/> UDPOS (F1) | en | hi | mr | ta | te | ur | Average :------------ | ---: | ---: | ---: | ---: | ---: | ---: | ------: XLM-R (large) | 96.1 | 76.4 | 80.8 | 65.2 | 86.6 | 70.3 | 79.2 MuRIL (large) | 95.7 | 71.3 | 85.7 | 62.6 | 85.8 | 62.8 | 77.3 <br/> XNLI (Accuracy) | en | hi | ur | Average :-------------- | ---: | ---: | ---: | ------: XLM-R (large) | 88.7 | 75.6 | 71.7 | 78.7 MuRIL (large) | 88.4 | 75.8 | 71.7 | 78.6 <br/> XQUAD (F1/EM) | en | hi | Average :------------ | --------: | --------: | --------: XLM-R (large) | 86.5/75.7 | 76.7/59.7 | 81.6/67.7 MuRIL (large) | 88.2/77.8 | 78.4/62.4 | 83.3/70.1 <br/> MLQA (F1/EM) | en | hi | Average :------------ | --------: | --------: | --------: XLM-R (large) | 83.5/70.6 | 70.6/53.1 | 77.1/61.9 MuRIL (large) | 84.4/71.7 | 72.2/54.1 | 78.3/62.9 <br/> TyDiQA (F1/EM) | en | bn | te | Average :------------- | --------: | --------: | --------: | --------: XLM-R (large) | 71.5/56.8 | 64.0/47.8 | 70.1/43.6 | 68.5/49.4 MuRIL (large) | 75.9/66.8 | 67.1/53.1 | 71.5/49.8 | 71.5/56.6 <br/> The fine-tuning hyperparameters are as follows: Task | Batch Size | Learning Rate | Epochs | Warm-up Ratio :----- | ---------: | ------------: | -----: | ------------: PANX | 32 | 2e-5 | 10 | 0.1 UDPOS | 64 | 5e-6 | 10 | 0.1 XNLI | 128 | 2e-5 | 5 | 0.1 XQuAD | 32 | 3e-5 | 2 | 0.1 MLQA | 32 | 3e-5 | 2 | 0.1 TyDiQA | 32 | 3e-5 | 3 | 0.1 ## References \[1]: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805). arXiv preprint arXiv:1810.04805, 2018. \[2]: [Wikipedia](https://www.tensorflow.org/datasets/catalog/wikipedia) \[3]: [Common Crawl](http://commoncrawl.org/the-data/) \[4]: [PMINDIA](http://lotus.kuee.kyoto-u.ac.jp/WAT/indic-multilingual/index.html) \[5]: [Dakshina](https://github.com/google-research-datasets/dakshina) \[6]: Assamese (as), Bengali (bn), English (en), Gujarati (gu), Hindi (hi), Kannada (kn), Kashmiri (ks), Malayalam (ml), Marathi (mr), Nepali (ne), Oriya (or), Punjabi (pa), Sanskrit (sa), Sindhi (sd), Tamil (ta), Telugu (te) and Urdu (ur). \[7]: Conneau, Alexis, et al. [Unsupervised cross-lingual representation learning at scale](https://arxiv.org/pdf/1911.02116.pdf). arXiv preprint arXiv:1911.02116 (2019). \[8]: [IndicTrans](https://github.com/libindic/indic-trans) \[9]: Hu, J., Ruder, S., Siddhant, A., Neubig, G., Firat, O., & Johnson, M. (2020). [Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization.](https://arxiv.org/pdf/2003.11080.pdf) arXiv preprint arXiv:2003.11080. \[10]: Fang, Y., Wang, S., Gan, Z., Sun, S., & Liu, J. (2020). [FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding.](https://arxiv.org/pdf/2009.05166.pdf) arXiv preprint arXiv:2009.05166. 
## Citation If you find MuRIL useful in your applications, please cite the following paper: ``` @misc{khanuja2021muril, title={MuRIL: Multilingual Representations for Indian Languages}, author={Simran Khanuja and Diksha Bansal and Sarvesh Mehtani and Savya Khosla and Atreyee Dey and Balaji Gopalan and Dilip Kumar Margam and Pooja Aggarwal and Rajiv Teja Nagipogu and Shachi Dave and Shruti Gupta and Subhash Chandra Bose Gali and Vish Subramanian and Partha Talukdar}, year={2021}, eprint={2103.10730}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Contact Please mail your queries/feedback to [email protected].
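## Example usage

A minimal sketch for extracting sentence representations from this checkpoint with the standard `transformers` API. The example sentences and the mean-pooling step are illustrative choices, not something prescribed by this card.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Load the checkpoint.
tokenizer = AutoTokenizer.from_pretrained("google/muril-large-cased")
model = AutoModel.from_pretrained("google/muril-large-cased")

sentences = ["भारत एक विशाल देश है", "India is a large country"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded)

# Mean-pool the last hidden state over non-padding tokens (one common choice).
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (output.last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
print(embeddings.shape)  # (batch_size, hidden_size)
```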
sciarrilli/biobert-base-cased-v1.2-finetuned-ner
sciarrilli
2021-10-15T21:47:28Z
268
2
transformers
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:jnlpba", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - jnlpba metrics: - precision - recall - f1 - accuracy model-index: - name: biobert-base-cased-v1.2-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: jnlpba type: jnlpba args: jnlpba metrics: - name: Precision type: precision value: 0.7150627220423177 - name: Recall type: recall value: 0.8300729927007299 - name: F1 type: f1 value: 0.7682875335686659 - name: Accuracy type: accuracy value: 0.90497239665345 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-base-cased-v1.2-finetuned-ner This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the jnlpba dataset. It achieves the following results on the evaluation set: - Loss: 0.3655 - Precision: 0.7151 - Recall: 0.8301 - F1: 0.7683 - Accuracy: 0.9050 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.257 | 1.0 | 1160 | 0.2889 | 0.7091 | 0.8222 | 0.7615 | 0.9021 | | 0.1962 | 2.0 | 2320 | 0.3009 | 0.7154 | 0.8259 | 0.7667 | 0.9048 | | 0.158 | 3.0 | 3480 | 0.3214 | 0.7098 | 0.8228 | 0.7621 | 0.9031 | | 0.131 | 4.0 | 4640 | 0.3385 | 0.7174 | 0.8292 | 0.7692 | 0.9055 | | 0.1081 | 5.0 | 5800 | 0.3655 | 0.7151 | 0.8301 | 0.7683 | 0.9050 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu102 - Datasets 1.13.2 - Tokenizers 0.10.3
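A minimal tagging sketch with the `transformers` NER pipeline; the sentence is an illustrative biomedical example, and the entity label set follows the JNLPBA scheme stored in the model configuration.

```python
from transformers import pipeline

# Sketch: tag biomedical entities with the fine-tuned checkpoint.
ner = pipeline("ner", model="sciarrilli/biobert-base-cased-v1.2-finetuned-ner")
print(ner("IL-2 gene expression requires activation of NF-kappa B in T cells."))
```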
yazdipour/text-to-sparql-t5-small-2021-10-15_01-00
yazdipour
2021-10-15T15:19:59Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: text-to-sparql-t5-small-2021-10-15_01-00 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text-to-sparql-t5-small-2021-10-15_01-00 This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:------:|:----------:|:-----------------------------------------------------------------:|:-------:| | No log | 1.0 | 26 | 4.1488 | 19.0 | 0.2368 | -0.0304 | 0.1003 | 0.8868 | [56.84848484848485, 25.0, 8.88888888888889, 0.041666666666666664] | 0.1851 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.2 - Tokenizers 0.10.3
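A minimal inference sketch with the `transformers` text2text-generation pipeline; the question is illustrative, and any expected input prefix or formatting is not documented in the card.

```python
from transformers import pipeline

# Sketch: generate a SPARQL query from a natural-language question.
text2sparql = pipeline("text2text-generation", model="yazdipour/text-to-sparql-t5-small-2021-10-15_01-00")
print(text2sparql("What is the capital of France?"))
```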
adelgasmi/autonlp-kpmg_nlp-18833547
adelgasmi
2021-10-15T11:44:36Z
4
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "ar", "dataset:adelgasmi/autonlp-data-kpmg_nlp", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: ar widget: - text: "I love AutoNLP 🤗" datasets: - adelgasmi/autonlp-data-kpmg_nlp co2_eq_emissions: 64.58945483765274 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 18833547 - CO2 Emissions (in grams): 64.58945483765274 ## Validation Metrics - Loss: 0.14247722923755646 - Accuracy: 0.9586074193404036 - Macro F1: 0.9468339778730883 - Micro F1: 0.9586074193404036 - Weighted F1: 0.9585551117678807 - Macro Precision: 0.9445436604001405 - Micro Precision: 0.9586074193404036 - Weighted Precision: 0.9591405429662925 - Macro Recall: 0.9499427161888565 - Micro Recall: 0.9586074193404036 - Weighted Recall: 0.9586074193404036 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/adelgasmi/autonlp-kpmg_nlp-18833547 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("adelgasmi/autonlp-kpmg_nlp-18833547", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("adelgasmi/autonlp-kpmg_nlp-18833547", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
clips/mfaq
clips
2021-10-15T06:21:13Z
5,508
36
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "cs", "da", "de", "en", "es", "fi", "fr", "he", "hr", "hu", "id", "it", "nl", "no", "pl", "pt", "ro", "ru", "sv", "tr", "vi", "dataset:clips/mfaq", "arxiv:2109.12870", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity license: apache-2.0 language: - cs - da - de - en - es - fi - fr - he - hr - hu - id - it - nl - 'no' - pl - pt - ro - ru - sv - tr - vi tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - clips/mfaq widget: source_sentence: "<Q>How many models can I host on HuggingFace?" sentences: - "<A>All plans come with unlimited private models and datasets." - "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem." - "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job." --- # MFAQ We present a multilingual FAQ retrieval model trained on the [MFAQ dataset](https://huggingface.co/datasets/clips/mfaq); it ranks candidate answers according to a given question. ## Installation ``` pip install sentence-transformers transformers ``` ## Usage You can use MFAQ with sentence-transformers or directly with a HuggingFace model. In both cases, questions need to be prepended with `<Q>`, and answers with `<A>`. #### Sentence Transformers ```python from sentence_transformers import SentenceTransformer question = "<Q>How many models can I host on HuggingFace?" answer_1 = "<A>All plans come with unlimited private models and datasets." answer_2 = "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem." answer_3 = "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job." model = SentenceTransformer('clips/mfaq') embeddings = model.encode([question, answer_1, answer_2, answer_3]) print(embeddings) ``` #### HuggingFace Transformers ```python from transformers import AutoTokenizer, AutoModel import torch def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) question = "<Q>How many models can I host on HuggingFace?" answer_1 = "<A>All plans come with unlimited private models and datasets." answer_2 = "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem." answer_3 = "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job." tokenizer = AutoTokenizer.from_pretrained('clips/mfaq') model = AutoModel.from_pretrained('clips/mfaq') # Tokenize sentences encoded_input = tokenizer([question, answer_1, answer_2, answer_3], padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) ``` ## Training You can find the training script for the model [here](https://github.com/clips/mfaq). ## People This model was developed by [Maxime De Bruyn](https://www.linkedin.com/in/maximedebruyn/), Ehsan Lotfi, Jeska Buhmann and Walter Daelemans. 
## Citation information ``` @misc{debruyn2021mfaq, title={MFAQ: a Multilingual FAQ Dataset}, author={Maxime De Bruyn and Ehsan Lotfi and Jeska Buhmann and Walter Daelemans}, year={2021}, eprint={2109.12870}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
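The usage examples above stop at the embeddings; the sketch below adds the ranking step the model is meant for, scoring each candidate answer against the question by cosine similarity. The similarity measure and the sorting are assumptions for illustration, reusing the question and answers from the examples above.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("clips/mfaq")

question = "<Q>How many models can I host on HuggingFace?"
answers = [
    "<A>All plans come with unlimited private models and datasets.",
    "<A>AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem.",
    "<A>Based on how much training data and model variants are created, we send you a compute cost and payment link - as low as $10 per job.",
]

# Embed the question and candidate answers, then rank answers by cosine similarity.
embeddings = model.encode([question] + answers)
q, a = embeddings[0], embeddings[1:]
scores = a @ q / (np.linalg.norm(a, axis=1) * np.linalg.norm(q))
for score, answer in sorted(zip(scores, answers), reverse=True):
    print(f"{score:.3f}  {answer}")
```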
kbhugging/autonlp-text2sql-18413376
kbhugging
2021-10-15T02:36:42Z
7
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autonlp", "unk", "dataset:kbhugging/autonlp-data-text2sql", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - kbhugging/autonlp-data-text2sql co2_eq_emissions: 1.4091714704861447 --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 18413376 - CO2 Emissions (in grams): 1.4091714704861447 ## Validation Metrics - Loss: 0.26672711968421936 - Rouge1: 61.765 - Rouge2: 52.5778 - RougeL: 61.3222 - RougeLsum: 61.1905 - Gen Len: 18.7805 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/kbhugging/autonlp-text2sql-18413376 ```
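A Python sketch analogous to the cURL call above; the seq2seq model class and the decoding step are assumptions based on the model's T5/text2text tags.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("kbhugging/autonlp-text2sql-18413376", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("kbhugging/autonlp-text2sql-18413376", use_auth_token=True)

# Generate and decode the model's output for an illustrative input.
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```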
sontn122/xlm-roberta-large-finetuned-squad-v2_15102021
sontn122
2021-10-15T02:19:34Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: xlm-roberta-large-finetuned-squad-v2_15102021 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-finetuned-squad-v2_15102021 This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the squad_v2 dataset. It achieves the following results on the evaluation set: - eval_loss: 17.5548 - eval_runtime: 168.7788 - eval_samples_per_second: 23.368 - eval_steps_per_second: 5.842 - epoch: 8.0 - step: 7600 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.1 - Tokenizers 0.10.3
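A minimal extractive question-answering sketch with the `transformers` pipeline; the question/context pair is illustrative only.

```python
from transformers import pipeline

# Sketch: answer a question from a short context with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="sontn122/xlm-roberta-large-finetuned-squad-v2_15102021")
print(qa(question="Where is the Eiffel Tower located?",
         context="The Eiffel Tower is located in Paris, France."))
```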
gealexandri/greeksocialbert-base-greek-uncased-v1
gealexandri
2021-10-14T13:50:30Z
4
1
transformers
[ "transformers", "pytorch", "tf", "bert", "fill-mask", "el", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: el --- # GreekSocialBERT ## Model description A Greek language model based on [GreekBERT](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) ## Training data The training data is a corpus of 458,293 documents collected from Greek social media accounts. The training corpus has been collected and provided by [Palo LTD](http://www.paloservices.com/) ## Eval results ### BibTeX entry and citation info ```bibtex @Article{info12080331, AUTHOR = {Alexandridis, Georgios and Varlamis, Iraklis and Korovesis, Konstantinos and Caridakis, George and Tsantilas, Panagiotis}, TITLE = {A Survey on Sentiment Analysis and Opinion Mining in Greek Social Media}, JOURNAL = {Information}, VOLUME = {12}, YEAR = {2021}, NUMBER = {8}, ARTICLE-NUMBER = {331}, URL = {https://www.mdpi.com/2078-2489/12/8/331}, ISSN = {2078-2489}, DOI = {10.3390/info12080331} } ```
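A minimal fill-mask sketch with the `transformers` pipeline; the Greek example sentence is illustrative, and `tokenizer.mask_token` is used so the mask string is not hard-coded.

```python
from transformers import pipeline

# Sketch: predict the masked token with the Greek checkpoint.
fill = pipeline("fill-mask", model="gealexandri/greeksocialbert-base-greek-uncased-v1")
print(fill(f"Η Αθήνα είναι η πρωτεύουσα της {fill.tokenizer.mask_token}."))
```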
lincoln/flaubert-mlsum-topic-classification
lincoln
2021-10-14T13:26:57Z
61
11
transformers
[ "transformers", "pytorch", "tf", "flaubert", "text-classification", "fr", "dataset:MLSUM", "arxiv:2004.14900", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - fr license: mit datasets: - MLSUM pipeline_tag: "text-classification" widget: - text: La bourse de paris en forte baisse après que des canards ont envahit le parlement. tags: - text-classification - flaubert --- # Press article classification with Flaubert This model is based on the [`flaubert/flaubert_base_cased`](https://huggingface.co/flaubert/flaubert_base_cased) model and was fine-tuned on press articles from the MLSUM dataset. In their paper, the reciTAL and Sorbonne teams proposed, as an avenue for future work, building a topic detection model for press articles. The topics were extracted from the article URLs, and we performed a topic-grouping step to remove topics with too little volume and topics that appeared redundant. We finally used the following list of topics, with these groupings: * __Economie__: economie, argent, emploi, entreprises, economie-francaise, immobilier, crise-financiere, evasion-fiscale, economie-mondiale, m-voiture, smart-cities, automobile, logement, flottes-d-entreprise, import, crise-de-l-euro, guide-des-impots, le-club-de-l-economie, telephonie-mobile * __Opinion__: idees, les-decodeurs, tribunes * __Politique__: politique, election-presidentielle-2012, election-presidentielle-2017, elections-americaines, municipales, referendum-sur-le-brexit, elections-legislatives-2017, elections-regionales, donald-trump, elections-regionales-2015, europeennes-2014, elections-cantonales-2011, primaire-parti-socialiste, gouvernement-philippe, elections-departementales-2015, chroniques-de-la-presidence-trump, primaire-de-la-gauche, la-republique-en-marche, elections-americaines-mi-mandat-2018, elections, elections-italiennes, elections-senatoriales * __Societe__: societe, sante, attaques-a-paris, immigration-et-diversite, religions, medecine, francaises-francais, mobilite * __Culture__: televisions-radio, musiques, festival, arts, scenes, festival-de-cannes, mode, bande-dessinee, architecture, vins, photo, m-mode, fashion-week, les-recettes-du-monde, tele-zapping, critique-litteraire, festival-d-avignon, m-gastronomie-le-lieu, les-enfants-akira, gastronomie, culture, livres, cinema, actualite-medias, blog, m-gastronomie * __Sport__: sport, football, jeux-olympiques, ligue-1, tennis, coupe-du-monde, mondial-2018, rugby, euro-2016, jeux-olympiques-rio-2016, cyclisme, ligue-des-champions, basket, roland-garros, athletisme, tour-de-france, euro2012, jeux-olympiques-pyeongchang-2018, coupe-du-monde-rugby, formule-1, voile, top-14, ski, handball, sports-mecaniques, sports-de-combat, blog-du-tour-de-france, sport-et-societe, sports-de-glisse, tournoi-des-6-nations * __Environement__: planete, climat, biodiversite, pollution, energies, cop21 * __Technologie__: pixels, technologies, sciences, cosmos, la-france-connectee, trajectoires-digitales * __Education__: campus, education, bac-lycee, enseignement-superieur, ecole-primaire-et-secondaire, o21, orientation-scolaire, brevet-college * __Justice__: police-justice, panama-papers, affaire-penelope-fillon, documents-wikileaks, enquetes, paradise-papers Topics with fewer than 100 articles were not taken into account. We also set aside articles referring to geographic topics, which led to a separate classification model. After cleaning, the MLSUM corpus was reduced to 293,995 articles. An article body contains 694 tokens on average. 
We trained the model on 20% of the cleaned corpus. On average, there are ~4K articles per class. ## Training We benchmarked several models by training them on different parts of the articles (title, summary, body, and title+summary) and with training samples of different sizes. ![Performance](./assets/Accuracy_cat.png) The models were trained on the Azure cloud with Tesla V100 GPUs. ## Model The model shared on HF is the one that takes the body of an article as input. We trained it on 20% of the cleaned dataset. ## Results ![Confusion matrix](assets/confusion_cat_m_0.2.png) *Rows correspond to predicted labels and columns to the true topics. Percentages are computed over the columns.* _We do not guarantee results over the long term. This model was built as part of a POC._ ## Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification from transformers import TextClassificationPipeline model_name = 'lincoln/flaubert-mlsum-topic-classification' loaded_tokenizer = AutoTokenizer.from_pretrained(model_name) loaded_model = AutoModelForSequenceClassification.from_pretrained(model_name) nlp = TextClassificationPipeline(model=loaded_model, tokenizer=loaded_tokenizer) nlp("Le Bayern Munich prend la grenadine.", truncation=True) ``` ## Citation ```bibtex @article{scialom2020mlsum, title={MLSUM: The Multilingual Summarization Corpus}, author={Thomas Scialom and Paul-Alexis Dray and Sylvain Lamprier and Benjamin Piwowarski and Jacopo Staiano}, year={2020}, eprint={2004.14900}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
S34NtheGuy/DialoGPT-medium-Glass_Of_Water
S34NtheGuy
2021-10-14T12:28:04Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- tags: - conversational --- # DialoGPT chat bot model using discord messages as data
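A single-turn chat sketch in the usual DialoGPT style; the prompt and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("S34NtheGuy/DialoGPT-medium-Glass_Of_Water")
model = AutoModelForCausalLM.from_pretrained("S34NtheGuy/DialoGPT-medium-Glass_Of_Water")

# Encode the user message, generate a reply, and decode only the newly generated tokens.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```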
emekaboris/autonlp-txc-17923129
emekaboris
2021-10-14T12:19:07Z
6
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:emekaboris/autonlp-data-txc", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - emekaboris/autonlp-data-txc co2_eq_emissions: 610.861733873082 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 17923129 - CO2 Emissions (in grams): 610.861733873082 ## Validation Metrics - Loss: 0.2319454699754715 - Accuracy: 0.9264228741381642 - Macro F1: 0.6730537318152493 - Micro F1: 0.9264228741381642 - Weighted F1: 0.9251493598895151 - Macro Precision: 0.7767479491141245 - Micro Precision: 0.9264228741381642 - Weighted Precision: 0.9277971545757154 - Macro Recall: 0.6617262519071917 - Micro Recall: 0.9264228741381642 - Weighted Recall: 0.9264228741381642 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/emekaboris/autonlp-txc-17923129 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("emekaboris/autonlp-txc-17923129", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("emekaboris/autonlp-txc-17923129", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
huggingtweets/istfoundation-sciencebits
huggingtweets
2021-10-14T10:58:31Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/istfoundation-sciencebits/1634209108264/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1340996475472494593/yqCQjZ06_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1341001037142999041/h86Ch8TO_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Science Bits & International Science Teaching Foundation</div> <div style="text-align: center; font-size: 14px;">@istfoundation-sciencebits</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Science Bits & International Science Teaching Foundation. | Data | Science Bits | International Science Teaching Foundation | | --- | --- | --- | | Tweets downloaded | 2741 | 163 | | Retweets | 759 | 103 | | Short tweets | 47 | 1 | | Tweets kept | 1935 | 59 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/c9crff9r/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @istfoundation-sciencebits's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2c68vj42) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2c68vj42/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/istfoundation-sciencebits') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
joyebright/Top3-with-mixing
joyebright
2021-10-14T10:10:25Z
0
0
null
[ "translation", "en", "fr", "dataset:wmt", "dataset:iwslt2014", "license:apache-2.0", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - fr tags: - translation license: apache-2.0 datasets: - wmt - iwslt2014 metrics: - bleu - ter - chrf2 - sacrebleu ---
joyebright/Top4-with-mixing
joyebright
2021-10-14T10:09:56Z
0
0
null
[ "translation", "en", "fr", "dataset:wmt", "dataset:iwslt2014", "license:apache-2.0", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - fr tags: - translation license: apache-2.0 datasets: - wmt - iwslt2014 metrics: - bleu - ter - chrf2 - sacrebleu ---
joyebright/Top2-without-mixing
joyebright
2021-10-14T10:08:58Z
0
0
null
[ "translation", "en", "fr", "dataset:wmt", "dataset:iwslt2014", "license:apache-2.0", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - fr tags: - translation license: apache-2.0 datasets: - wmt - iwslt2014 metrics: - bleu - ter - chrf2 - sacrebleu ---
joyebright/Top4-without-mixing
joyebright
2021-10-14T10:08:37Z
0
0
null
[ "translation", "en", "fr", "dataset:wmt", "dataset:iwslt2014", "license:apache-2.0", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - fr tags: - translation license: apache-2.0 datasets: - wmt - iwslt2014 metrics: - bleu - ter - chrf2 - sacrebleu ---
joyebright/Top2-with-mixing
joyebright
2021-10-14T10:07:58Z
0
0
null
[ "translation", "en", "fr", "dataset:wmt", "dataset:iwslt2014", "license:apache-2.0", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - fr tags: - translation license: apache-2.0 datasets: - wmt - iwslt2014 metrics: - bleu - ter - chrf2 - sacrebleu ---
joyebright/Top1-with-without-mixing
joyebright
2021-10-14T08:56:42Z
0
0
null
[ "translation", "en", "fr", "dataset:wmt", "dataset:iwslt2014", "license:apache-2.0", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - fr tags: - translation license: apache-2.0 datasets: - wmt - iwslt2014 metrics: - bleu - ter - chrf2 - sacrebleu ---
huggingtweets/sciencebits
huggingtweets
2021-10-14T08:42:39Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/sciencebits/1634200955730/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1340996475472494593/yqCQjZ06_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Science Bits</div> <div style="text-align: center; font-size: 14px;">@sciencebits</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Science Bits. | Data | Science Bits | | --- | --- | | Tweets downloaded | 2741 | | Retweets | 759 | | Short tweets | 47 | | Tweets kept | 1935 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/22jxh8wi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sciencebits's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/h0qt4tsw) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/h0qt4tsw/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/sciencebits') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Langboat/mengzi-oscar-base-caption
Langboat
2021-10-14T02:17:06Z
13
2
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "zh", "arxiv:2110.06696", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - zh license: apache-2.0 --- # Mengzi-oscar-base-caption (Chinese Multi-modal Image Caption model) [Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696) Mengzi-oscar-base-caption is fine-tuned from the Chinese multi-modal pre-trained model [Mengzi-Oscar](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) on the AIC-ICC Chinese image caption dataset. ## Usage #### Installation Check [INSTALL.md](https://github.com/microsoft/Oscar/blob/master/INSTALL.md) for installation instructions. #### Pretrain & fine-tune See the [Mengzi-Oscar.md](https://github.com/Langboat/Mengzi/blob/main/Mengzi-Oscar.md) for details. ## Citation If you find the technical report or resource useful, please cite the following technical report in your paper. ``` @misc{zhang2021mengzi, title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese}, author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou}, year={2021}, eprint={2110.06696}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
jogonba2/bart-JES-cnn_dailymail
jogonba2
2021-10-14T02:00:37Z
7
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-JES-cnn_dailymail results: - task: name: Summarization type: summarization metrics: - name: Rouge1 type: rouge value: 43.9753 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-JES-cnn_dailymail This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1452 - Rouge1: 43.9753 - Rouge2: 19.7191 - Rougel: 33.6236 - Rougelsum: 41.1683 - Gen Len: 80.1767 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 1.2949 | 1.0 | 71779 | 1.2080 | 11.7171 | 3.3284 | 11.3209 | 11.4022 | 20.0 | | 1.191 | 2.0 | 143558 | 1.1615 | 11.8484 | 3.363 | 11.4175 | 11.5037 | 20.0 | | 1.0907 | 3.0 | 215337 | 1.1452 | 12.6221 | 3.773 | 12.1226 | 12.2359 | 20.0 | | 0.9798 | 4.0 | 287116 | 1.1670 | 12.4306 | 3.7329 | 11.9497 | 12.0617 | 20.0 | | 0.9112 | 5.0 | 358895 | 1.1667 | 12.5404 | 3.7842 | 12.0541 | 12.1643 | 20.0 | | 0.8358 | 6.0 | 430674 | 1.1997 | 12.5153 | 3.778 | 12.0382 | 12.1332 | 20.0 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1+cu110 - Datasets 1.11.0 - Tokenizers 0.10.3
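The card above does not include an inference snippet. A minimal usage sketch (not part of the original card) is shown below; it assumes the checkpoint loads as a standard BART sequence-to-sequence model through the `transformers` summarization pipeline, and the generation arguments are only illustrative.

```python
from transformers import pipeline

# Sketch only: assumes the checkpoint works with the standard summarization pipeline.
summarizer = pipeline("summarization", model="jogonba2/bart-JES-cnn_dailymail")

article = "Paste a long news article here ..."
result = summarizer(article, max_length=130, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```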
Craig/paraphrase-MiniLM-L6-v2
Craig
2021-10-13T15:01:15Z
1,174
3
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:04Z
--- pipeline_tag: feature-extraction license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/paraphrase-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. This is a clone of the original model, with `pipeline_tag` metadata changed to `feature-extraction`, so it can just return the embedded vector. Otherwise it is unchanged. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-MiniLM-L6-v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
GKLMIP/roberta-hindi-romanized
GKLMIP
2021-10-13T13:46:13Z
7
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
If you use our model, please consider citing our paper: ``` @InProceedings{, author="Huang, Xixuan and Lin, Nankai and Li, Kexin and Wang, Lianxi and Gan SuiFu", title="HinPLMs: Pre-trained Language Models for Hindi", booktitle="The International Conference on Asian Language Processing", year="2021", publisher="IEEE Xplore" } ```
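The card above only contains the citation request. A minimal fill-mask sketch (not part of the original card) is shown below; it assumes the checkpoint works with the standard `transformers` fill-mask pipeline and uses RoBERTa's default `<mask>` token, and the romanized-Hindi example sentence is hypothetical.

```python
from transformers import pipeline

# Sketch only: assumes the standard fill-mask pipeline and RoBERTa's default <mask> token.
unmasker = pipeline("fill-mask", model="GKLMIP/roberta-hindi-romanized")

# Hypothetical romanized-Hindi input with one masked token.
for prediction in unmasker("mujhe cricket khelna <mask> pasand hai"):
    print(prediction["token_str"], round(prediction["score"], 4))
```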
GKLMIP/roberta-hindi-devanagari
GKLMIP
2021-10-13T13:44:42Z
8
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
If you use our model, please consider citing our paper: ``` @InProceedings{, author="Huang, Xixuan and Lin, Nankai and Li, Kexin and Wang, Lianxi and Gan SuiFu", title="HinPLMs: Pre-trained Language Models for Hindi", booktitle="The International Conference on Asian Language Processing", year="2021", publisher="IEEE Xplore" } ```
pucpr/clinicalnerpt-chemical
pucpr
2021-10-13T09:33:30Z
5
5
transformers
[ "transformers", "pytorch", "jax", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "Dispneia venoso central em subclavia D duplolumen recebendo solução salina e glicosada em BI." - text: "Paciente com Sepse pulmonar em D8 tazocin (paciente não recebeu por 2 dias Atb)." - text: "FOI REALIZADO CURSO DE ATB COM LEVOFLOXACINA POR 7 DIAS." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Chemical & Drugs The Chemical&Drugs NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
pucpr/clinicalnerpt-diagnostic
pucpr
2021-10-13T09:33:19Z
200
5
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "Uretrocistografia miccional, residuo pos miccional significativo." - text: "No exame, apresentou apenas leve hiperemia no local do choque." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Diagnostic The Diagnostic NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
pucpr/clinicalnerpt-healthcare
pucpr
2021-10-13T09:32:28Z
6
6
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "Acompanhamento da diabetes, paciente encaminhado da unidade de saúde." - text: "Paciente encaminhado por alteração na função renal." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - HealthCare The HealthCare NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
pucpr/clinicalnerpt-procedure
pucpr
2021-10-13T09:32:04Z
96
4
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "Dispneia venoso central em subclavia D duplolumen recebendo solução salina e glicosada em BI." - text: "FOI REALIZADO CURSO DE ATB COM LEVOFLOXACINA POR 7 DIAS." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Procedure The Procedure NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
pucpr/clinicalnerpt-sign
pucpr
2021-10-13T09:31:19Z
5
4
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "Há 15 anos relata dor lombar com irradiação para coxa direita." - text: "Paciente segue internado, sem presença de edema." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Sign The Sign NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
pucpr/clinicalnerpt-medical
pucpr
2021-10-13T09:28:28Z
150
6
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "Hoje realizou avaliacao de mp-cdi, com eletrodos atrial e ventricular." - text: "Paciente encaminhado a câmera hiperbárica no período da tarde." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Medical The Medical NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
Fujitsu/pytorrent
Fujitsu
2021-10-12T18:37:18Z
14
2
transformers
[ "transformers", "pytorch", "jax", "roberta", "feature-extraction", "en", "dataset:pytorrent", "arxiv:2110.01710", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:04Z
--- license: mit widget: language: - en datasets: - pytorrent --- # 🔥 RoBERTa-MLM-based PyTorrent 1M 🔥 Pretrained weights based on the [PyTorrent Dataset](https://github.com/fla-sil/PyTorrent), a curated dataset built from a large set of official Python packages. We use the PyTorrent dataset to train a preliminary DistilBERT masked language modeling (MLM) model from scratch. The trained model, along with the dataset, aims to help researchers work easily and efficiently with a large dataset of Python packages, using only 5 lines of code to load the transformer-based model. We train the model on 1M raw Python scripts from PyTorrent, comprising 12,350,000 lines of code (LOC). We also train a byte-level byte-pair encoding (BPE) tokenizer with a vocabulary of 56,000 tokens; lines of code are truncated to a length of 50 to save computational resources. ### Training Objective This model is trained with a Masked Language Model (MLM) objective. ## How to use the model? ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Fujitsu/pytorrent") model = AutoModel.from_pretrained("Fujitsu/pytorrent") ``` ## Citation Preprint: [https://arxiv.org/pdf/2110.01710.pdf](https://arxiv.org/pdf/2110.01710.pdf) ``` @misc{bahrami2021pytorrent, title={PyTorrent: A Python Library Corpus for Large-scale Language Models}, author={Mehdi Bahrami and N. C. Shrikanth and Shade Ruangwan and Lei Liu and Yuji Mizobuchi and Masahiro Fukuyori and Wei-Peng Chen and Kazuki Munakata and Tim Menzies}, year={2021}, eprint={2110.01710}, archivePrefix={arXiv}, primaryClass={cs.SE}, howpublished={https://arxiv.org/pdf/2110.01710}, } ```
S34NtheGuy/DialoGPT-small-Harry282
S34NtheGuy
2021-10-12T17:21:19Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- tags: - conversational --- # DialoGPT chat bot model using Discord messages as data
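A short interaction sketch (not part of the original card) follows; it uses the generic DialoGPT chat-history recipe rather than anything documented for this specific checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: generic DialoGPT-style chat loop applied to this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("S34NtheGuy/DialoGPT-small-Harry282")
model = AutoModelForCausalLM.from_pretrained("S34NtheGuy/DialoGPT-small-Harry282")

chat_history_ids = None
for _ in range(3):
    user_input = input(">> User: ")
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(
        bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id
    )
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```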
m3hrdadfi/xlmr-large-qa-sv
m3hrdadfi
2021-10-12T13:50:27Z
9
2
transformers
[ "transformers", "pytorch", "tf", "xlm-roberta", "question-answering", "roberta", "squad", "sv", "multilingual", "model-index", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: - sv - multilingual tags: - question-answering - xlm-roberta - roberta - squad metrics: - squad_v2 widget: - text: Vilket datum är den svenska nationaldagen? context: >- Sveriges nationaldag och svenska flaggans dag firas den 6 juni varje år och är en helgdag i Sverige. Tidigare firades 6 juni enbart som "svenska flaggans dag" och det var först 1983 som dagen även fick status som nationaldag. - text: Vad innebär helgdag i Sverige? context: >- Sveriges nationaldag och svenska flaggans dag firas den 6 juni varje år och är en helgdag i Sverige. Tidigare firades 6 juni enbart som "svenska flaggans dag" och det var först 1983 som dagen även fick status som nationaldag. - text: Vilket år tillkom Sveriges nationaldag? context: >- Sveriges nationaldag och svenska flaggans dag firas den 6 juni varje år och är en helgdag i Sverige. Tidigare firades 6 juni enbart som "svenska flaggans dag" och det var först 1983 som dagen även fick status som nationaldag. model-index: - name: "XLM-RoBERTa large for QA (SwedishQA - \U0001F1F8\U0001F1EA)" results: - task: type: question-answering name: Question Answering dataset: type: swedish_qa name: SwedishQA args: sv metrics: - type: squad_v2 value: 87.97 name: Eval F1 args: max_order - type: squad_v2 value: 78.79 name: Eval Exact args: max_order --- # XLM-RoBERTa large for QA (SwedishQA - 🇸🇪) This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the [SwedishQA](https://github.com/Vottivott/building-a-swedish-qa-model) dataset. ## Hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2.0 - mixed_precision_training: Native AMP ## Performance Evaluation results on the eval set with the official [eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). ### Evalset ```text "exact": 78.79554655870446, "f1": 87.97339064752278, "total": 5928 ``` ## Usage ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name_or_path = "m3hrdadfi/xlmr-large-qa-sv" nlp = pipeline('question-answering', model=model_name_or_path, tokenizer=model_name_or_path) context = """ Sveriges nationaldag och svenska flaggans dag firas den 6 juni varje år och är en helgdag i Sverige. Tidigare firades 6 juni enbart som "svenska flaggans dag" och det var först 1983 som dagen även fick status som nationaldag. """ questions = [ "Vilket datum är den svenska nationaldagen?", "Vad innebär helgdag i Sverige?", "Vilket år tillkom Sveriges nationaldag?" ] kwargs = {} for question in questions: r = nlp(question=question, context=context, **kwargs) answer = " ".join([token.strip() for token in r["answer"].strip().split() if token.strip()]) print(f"{question} {answer}") ``` **Output** ```text Vilket datum är den svenska nationaldagen? 6 juni Vad innebär helgdag i Sverige? svenska flaggans dag Vilket år tillkom Sveriges nationaldag? 1983 ``` ## Authors - [Mehrdad Farahani](https://github.com/m3hrdadfi) ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
mamlong34/t5_base_race_cosmos_qa
mamlong34
2021-10-12T07:17:10Z
12
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:race", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - race metrics: - accuracy model-index: - name: t5_base_race_cosmos_qa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5_base_race_cosmos_qa This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the race dataset. It achieves the following results on the evaluation set: - Loss: 0.4414 - Accuracy: 0.7424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.4355 | 1.0 | 10984 | 0.3910 | 0.7072 | | 0.3233 | 2.0 | 21968 | 0.3833 | 0.7321 | | 0.229 | 3.0 | 32952 | 0.4414 | 0.7424 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
geckos/distilbert-base-uncased-fine-tuned-ner
geckos
2021-10-12T05:59:22Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9303228669699323 - name: Recall type: recall value: 0.9380243875153821 - name: F1 type: f1 value: 0.9341577540106952 - name: Accuracy type: accuracy value: 0.9842407104389407 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0606 - Precision: 0.9303 - Recall: 0.9380 - F1: 0.9342 - Accuracy: 0.9842 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2459 | 1.0 | 878 | 0.0696 | 0.9117 | 0.9195 | 0.9156 | 0.9808 | | 0.0513 | 2.0 | 1756 | 0.0602 | 0.9223 | 0.9376 | 0.9299 | 0.9835 | | 0.0304 | 3.0 | 2634 | 0.0606 | 0.9303 | 0.9380 | 0.9342 | 0.9842 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
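A minimal inference sketch (not part of the original card) follows; the entity labels are expected to follow the CoNLL-2003 scheme (PER, ORG, LOC, MISC), which is inferred from the dataset rather than stated in the card.

```python
from transformers import pipeline

# Sketch only: standard token-classification pipeline over the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="geckos/distilbert-base-uncased-fine-tuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("Hugging Face is based in New York City and Paris."))
```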
gauravtripathy/distilbert-base-uncased-finetuned-cola
gauravtripathy
2021-10-12T05:57:36Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5264763891845121 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7550 - Matthews Correlation: 0.5265 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5296 | 1.0 | 535 | 0.5144 | 0.4215 | | 0.3504 | 2.0 | 1070 | 0.4903 | 0.5046 | | 0.2393 | 3.0 | 1605 | 0.6339 | 0.5058 | | 0.175 | 4.0 | 2140 | 0.7550 | 0.5265 | | 0.1259 | 5.0 | 2675 | 0.8688 | 0.5259 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
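A minimal inference sketch (not part of the original card) follows; the returned label names (e.g. LABEL_0 / LABEL_1 for unacceptable / acceptable) are an assumption, since the card does not document the id-to-label mapping.

```python
from transformers import pipeline

# Sketch only: binary acceptability classification (CoLA); label names are assumed.
classifier = pipeline(
    "text-classification",
    model="gauravtripathy/distilbert-base-uncased-finetuned-cola",
)

print(classifier("The boys was playing outside."))   # likely flagged as unacceptable
print(classifier("The boys were playing outside."))  # likely flagged as acceptable
```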