| Column | Type | Values |
|---|---|---|
| modelId | stringlengths | 4–112 |
| sha | stringlengths | 40–40 |
| lastModified | stringlengths | 24–24 |
| tags | sequence | |
| pipeline_tag | stringclasses | 29 values |
| private | bool | 1 class |
| author | stringlengths | 2–38 |
| config | null | |
| id | stringlengths | 4–112 |
| downloads | float64 | 0–36.8M |
| likes | float64 | 0–712 |
| library_name | stringclasses | 17 values |
| `__index_level_0__` | int64 | 0–38.5k |
| readme | stringlengths | 0–186k |
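For working with rows in this shape, here is a minimal sketch, assuming the dump has been exported as JSON Lines; the filename `models.jsonl` is hypothetical and should point at wherever the data is stored locally:

```python
# Minimal sketch: inspect rows matching the schema above.
# "models.jsonl" is a hypothetical local export of this dump.
import pandas as pd

df = pd.read_json("models.jsonl", lines=True)

# Column dtypes should line up with the schema table above
print(df.dtypes)

# Example query: the single most-downloaded model per library
top_per_library = (
    df.sort_values("downloads", ascending=False)
      .groupby("library_name")
      .head(1)[["modelId", "library_name", "downloads", "likes"]]
)
print(top_per_library)
```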
Evelyn18/legalectra-small-spanish-becasv3-5
4a14fa0c3939dd56e182ffb7e6d52cfca86f3b58
2022-07-12T04:45:36.000Z
[ "pytorch", "tensorboard", "electra", "question-answering", "dataset:becasv2", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
Evelyn18
null
Evelyn18/legalectra-small-spanish-becasv3-5
235
null
transformers
3,400
---
tags:
- generated_from_trainer
datasets:
- becasv2
model-index:
- name: legalectra-small-spanish-becasv3-5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# legalectra-small-spanish-becasv3-5

This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7020

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 5.7715 |
| No log | 2.0 | 10 | 5.7001 |
| No log | 3.0 | 15 | 5.6206 |
| No log | 4.0 | 20 | 5.5463 |
| No log | 5.0 | 25 | 5.4866 |
| No log | 6.0 | 30 | 5.4369 |
| No log | 7.0 | 35 | 5.3939 |
| No log | 8.0 | 40 | 5.3545 |
| No log | 9.0 | 45 | 5.3168 |
| No log | 10.0 | 50 | 5.2824 |
| No log | 11.0 | 55 | 5.2504 |
| No log | 12.0 | 60 | 5.2193 |
| No log | 13.0 | 65 | 5.1864 |
| No log | 14.0 | 70 | 5.1515 |
| No log | 15.0 | 75 | 5.1174 |
| No log | 16.0 | 80 | 5.0839 |
| No log | 17.0 | 85 | 5.0497 |
| No log | 18.0 | 90 | 5.0188 |
| No log | 19.0 | 95 | 4.9937 |
| No log | 20.0 | 100 | 4.9726 |
| No log | 21.0 | 105 | 4.9483 |
| No log | 22.0 | 110 | 4.9205 |
| No log | 23.0 | 115 | 4.8993 |
| No log | 24.0 | 120 | 4.8802 |
| No log | 25.0 | 125 | 4.8612 |
| No log | 26.0 | 130 | 4.8498 |
| No log | 27.0 | 135 | 4.8294 |
| No log | 28.0 | 140 | 4.8176 |
| No log | 29.0 | 145 | 4.8144 |
| No log | 30.0 | 150 | 4.8012 |
| No log | 31.0 | 155 | 4.7890 |
| No log | 32.0 | 160 | 4.7745 |
| No log | 33.0 | 165 | 4.7641 |
| No log | 34.0 | 170 | 4.7558 |
| No log | 35.0 | 175 | 4.7474 |
| No log | 36.0 | 180 | 4.7384 |
| No log | 37.0 | 185 | 4.7319 |
| No log | 38.0 | 190 | 4.7262 |
| No log | 39.0 | 195 | 4.7225 |
| No log | 40.0 | 200 | 4.7201 |
| No log | 41.0 | 205 | 4.7165 |
| No log | 42.0 | 210 | 4.7129 |
| No log | 43.0 | 215 | 4.7111 |
| No log | 44.0 | 220 | 4.7086 |
| No log | 45.0 | 225 | 4.7060 |
| No log | 46.0 | 230 | 4.7049 |
| No log | 47.0 | 235 | 4.7036 |
| No log | 48.0 | 240 | 4.7028 |
| No log | 49.0 | 245 | 4.7023 |
| No log | 50.0 | 250 | 4.7020 |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
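Since the card's usage sections are still blank, here is a minimal sketch of querying the model, assuming the standard transformers question-answering pipeline; the Spanish question and context are illustrative only:

```python
# Minimal sketch: extractive QA with the fine-tuned ELECTRA model.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Evelyn18/legalectra-small-spanish-becasv3-5",
)

# Illustrative question/context; replace with real becasv2-style inputs.
result = qa(
    question="¿Quién puede solicitar la beca?",
    context="Los estudiantes matriculados pueden solicitar la beca.",
)
print(result["answer"], result["score"])
```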
google/ncsnpp-celebahq-256
17d28fd936ebceba39284f4d5b28946317325269
2022-07-21T15:00:03.000Z
[ "diffusers", "arxiv:2011.13456", "pytorch", "unconditional-image-generation", "license:apache-2.0" ]
unconditional-image-generation
false
google
null
google/ncsnpp-celebahq-256
235
null
diffusers
3,401
---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---

# Score-Based Generative Modeling through Stochastic Differential Equations (SDE)

**Paper**: [Score-Based Generative Modeling through Stochastic Differential Equations](https://arxiv.org/abs/2011.13456)

**Authors**: Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole

**Abstract**:

*Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a. score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.*

## Inference

*SDE* models can use **continuous** noise schedulers such as [scheduling_sde_ve](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py) for inference. See the following code:

```python
# !pip install diffusers
from diffusers import DiffusionPipeline

model_id = "google/ncsnpp-celebahq-256"

# load model and scheduler
sde_ve = DiffusionPipeline.from_pretrained(model_id)

# run pipeline in inference (sample random noise and denoise)
image = sde_ve()["sample"]

# save image
image[0].save("sde_ve_generated_image.png")
```

Please take a look at [pipeline_score_sde_ve](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py) for more details on how to write your own denoising loop.

For more general information on how to use `diffusers` for inference, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb).

## Samples

1. ![sample_1](https://huggingface.co/google/ncsnpp-celebahq-256/resolve/main/images/generated_image_0.png)
2. ![sample_2](https://huggingface.co/google/ncsnpp-celebahq-256/resolve/main/images/generated_image_1.png)
3. ![sample_3](https://huggingface.co/google/ncsnpp-celebahq-256/resolve/main/images/generated_image_2.png)
4. ![sample_4](https://huggingface.co/google/ncsnpp-celebahq-256/resolve/main/images/generated_image_3.png)
Helsinki-NLP/opus-mt-de-pl
67458bb97566391315397d8e0aa5f14f774bd238
2021-09-09T21:32:59.000Z
[ "pytorch", "marian", "text2text-generation", "de", "pl", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-de-pl
234
null
transformers
3,402
---
tags:
- translation
license: apache-2.0
---

### opus-mt-de-pl

* source languages: de
* target languages: pl
* OPUS readme: [de-pl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.pl | 41.2 | 0.631 |
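The card itself ships no usage snippet; here is a minimal sketch, assuming the standard transformers translation pipeline (the German sample sentence is illustrative):

```python
# Minimal sketch: German-to-Polish translation with the Marian model.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-pl")

# Illustrative input sentence
print(translator("Ich habe keine Ahnung.")[0]["translation_text"])
```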
allenai/unifiedqa-v2-t5-large-1251000
5b84e7f94d0a24806d08dbb04ee872a351f83404
2022-02-22T00:36:48.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
allenai
null
allenai/unifiedqa-v2-t5-large-1251000
234
null
transformers
3,403
# Further details: https://github.com/allenai/unifiedqa
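The card defers everything to the linked repo; as a convenience, here is a minimal sketch, assuming the usual T5 text2text interface and the `question \n context` input convention described in the linked UnifiedQA repository:

```python
# Minimal sketch: running UnifiedQA-v2 as a T5 text2text model.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "allenai/unifiedqa-v2-t5-large-1251000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

def run_model(input_string, **generator_args):
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    res = model.generate(input_ids, **generator_args)
    return tokenizer.batch_decode(res, skip_special_tokens=True)

# UnifiedQA expects a literal "\n" between question and options/context,
# hence the escaped backslash below.
print(run_model("which is best conductor? \\n (a) iron (b) feather"))
```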
ionite/DialoGPT-medium-mohnjilesAI
bf581ec9b06e5fc6ded6e63aed6b2530be601732
2021-11-20T23:21:32.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ionite
null
ionite/DialoGPT-medium-mohnjilesAI
234
null
transformers
3,404
---
tags:
- conversational
---

# mohnjilesAI DialoGPT Model
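Since the card gives no usage example, here is a minimal sketch of chatting with the model, following the usual DialoGPT generation recipe; the prompts are illustrative only:

```python
# Minimal sketch: multi-turn chat with a DialoGPT-style model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ionite/DialoGPT-medium-mohnjilesAI"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat_history_ids = None
for text in ["Hello!", "How are you?"]:  # illustrative user turns
    # Encode the new user turn, terminated by the EOS token
    new_ids = tokenizer.encode(text + tokenizer.eos_token, return_tensors="pt")
    bot_input = (
        torch.cat([chat_history_ids, new_ids], dim=-1)
        if chat_history_ids is not None
        else new_ids
    )
    # Generate a response conditioned on the whole chat history
    chat_history_ids = model.generate(
        bot_input, max_length=200, pad_token_id=tokenizer.eos_token_id
    )
    reply = tokenizer.decode(
        chat_history_ids[:, bot_input.shape[-1]:][0], skip_special_tokens=True
    )
    print("Bot:", reply)
```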
jonatasgrosman/wav2vec2-large-xlsr-53-persian
ce183fdf22d071e80806023335ca7db222c3d86b
2022-07-27T23:34:50.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "fa", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/wav2vec2-large-xlsr-53-persian
234
3
transformers
3,405
---
language: fa
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Persian by Jonatas Grosman
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice fa
      type: common_voice
      args: fa
    metrics:
    - name: Test WER
      type: wer
      value: 30.12
    - name: Test CER
      type: cer
      value: 7.37
---

# Fine-tuned XLSR-53 large model for speech recognition in Persian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Persian using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)

The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint

## Usage

The model can be used directly (without a language model) as follows...

Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-persian")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```

Writing your own inference script:

```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "fa"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-persian"
SAMPLES = 5

test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)

for i, predicted_sentence in enumerate(predicted_sentences):
    print("-" * 100)
    print("Reference:", test_dataset[i]["sentence"])
    print("Prediction:", predicted_sentence)
```

| Reference | Prediction |
| ------------- | ------------- |
| از مهمونداری کنار بکشم | از مهمانداری کنار بکشم |
| برو از مهرداد بپرس. | برو از ماقدعاد به پرس |
| خب ، تو چیكار می كنی؟ | خوب تو چیکار می کنی |
| مسقط پایتخت عمان در عربی به معنای محل سقوط است | مسقط پایتخت عمان در عربی به بعنای محل سقوط است |
| آه، نه اصلاُ! | اهنه اصلا |
| توانست | توانست |
| قصیده فن شعر میگوید ای دوستان | قصیده فن شعر میگوید ایدوستون |
| دو استایل متفاوت دارین | دوبوست داریل و متفاوت بری |
| دو روز قبل از کریسمس ؟ | اون مفتود پش پشش |
| ساعت های کاری چیست؟ | این توری که موشیکل خب |

## Evaluation

The model can be evaluated as follows on the Persian test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

LANG_ID = "fa"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-persian"
DEVICE = "cuda"

CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞", "؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]", "{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。", "、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽", "『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]

test_dataset = load_dataset("common_voice", LANG_ID, split="test")

wer = load_metric("wer.py")  # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py")  # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py

chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and collect the predicted transcriptions
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]

print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```

**Test Result**:

In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-22). Note that the table below may show results that differ from those reported elsewhere; this may be due to specifics of the other evaluation scripts used.

| Model | WER | CER |
| ------------- | ------------- | ------------- |
| jonatasgrosman/wav2vec2-large-xlsr-53-persian | **30.12%** | **7.37%** |
| m3hrdadfi/wav2vec2-large-xlsr-persian-v2 | 33.85% | 8.79% |
| m3hrdadfi/wav2vec2-large-xlsr-persian | 34.37% | 8.98% |

## Citation

If you want to cite this model you can use this:

```bibtex
@misc{grosman2021xlsr53-large-persian,
  title={Fine-tuned {XLSR}-53 large model for speech recognition in {P}ersian},
  author={Grosman, Jonatas},
  howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-persian}},
  year={2021}
}
```
knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM
461cc35437ed5a10a43a5556c6b71a212db652f2
2022-06-27T15:28:20.000Z
[ "pytorch", "tf", "bart", "text2text-generation", "en", "dataset:cnndaily/newyorkdaily/xsum/samsum/dialogsum", "transformers", "seq2seq", "summarization", "license:apache-2.0", "model-index", "autotrain_compatible" ]
summarization
false
knkarthick
null
knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM
234
1
transformers
3,406
--- language: en tags: - bart - seq2seq - summarization license: apache-2.0 datasets: - cnndaily/newyorkdaily/xsum/samsum/dialogsum metrics: - rouge widget: - text: |- Hi, I'm David and I'm supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That's about it, didn't get anything else. Did you get the same thing? Cool. There's too much gear. Okay. Can't draw. Um. Yeah. Um, well anyway, I don't know, it's just the first animal I can think off the top of my head. Um. Yes. Big reason is 'cause I'm allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they're quite harmless and mild and interesting. Tail's a bit big, I think. It's an after dinner dog then. Hmm. It does make sense from maybe the design point of view 'cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. Finding them is really a pain, you know. I mean it's usually quite small, or when you want it right, it slipped behind the couch or it's kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there's a cap there, so um depends on how much you can cram into that price. Um. I think that that's the main factor. Cool. Okay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we're gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we're designing a new remote control and um Oh I have to record who's here actually. So that's David, Andrew and Craig, isn't it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it's supposed to be original, trendy and user friendly. Um so that's kind of our our brief, as it were. Um and so there are three different stages to the design. Um I'm not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we're gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven't got an awful lot to discuss. Ok oh we do we do. Don't feel like you're in a rush, anyway. Ach why not We might have to get you up again then. I don't know what mine is. I'm gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don't know what I'm gonna write about. Um. I was gonna choose a dog as well. But I'll just draw a different kind of dog. M my favourite animal is my own dog at home. Um That doesn't really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that's very good of you. Uh. Um he's a mixture of uh various things. Um and what do I like about him, um That's just to suggest that his tail wags. 
Um he's very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he's quite quite wee as well so you know he can doesn't take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he's had his dinner and um he'll just all of a sudden just get up and start chasing his tail 'round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we're gonna be selling this remote control for twenty five Euro, um and we're aiming to make fifty million Euro. Um so we're gonna be selling this on an international scale. And uh we don't want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That's a good question. I imagine it probably is our sale actually because it's probably up to the the um the retailer to uh sell it for whatever price they want. Um. But I I don't know, I mean do you think the fact that it's going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it's depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that's um that's about like eighteen pounds or something, isn't it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. Um, I dunno, I've never bought a remote control, so I don't know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn't it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We're a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that's a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I'd wel we're gonna have to wrap up pretty quickly in the next couple of minutes. Um I'll just check we've nothing else. Okay. Um so anything else anybody wants to add about what they don't like about remote controls they've used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we'd want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. 
Well I guess that's up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting's gonna be in thirty minutes. So that's about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you're gonna be working on you know the actual working design of it so y you know what you're doing there. Um for user interface, technical functions, I guess that's you know like what we've been talking about, what it'll actually do. Um and uh marketing executive, you'll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you'll all get instructions emailed to you, I guess. Um. Yeah, so it's th the functional design stage is next, I guess. And uh and that's the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly 'cause this we're supposed to finish now. Um I guess that's up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we'll that's that's the end of the meeting, then. Um. So, uh thank you all for coming. Um I'm Craig and I'm User Interface. Yeah. Well, my favourite animal would be a monkey. Then they're small cute and furry, and uh when planet of the apes becomes real, I'm gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh. Mm-hmm. Great. And I'm Andrew and I'm uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that's that's it. Yeah. I will go. That's fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family's beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it's his own cha tail he's chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. 'Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I'm wondering if there's um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don't know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. I'm thinking the price might might appeal to a certain market in one region, whereas in another it'll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. 
Right away I'm making some kind of assumptions about what what information we're given here, thinking, 'kay trendy probably means something other than just basic, something other than just standard. Um so I'm wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I'd say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don't think of remote controls as somethin something people consciously assess in their purchasing habits. It's just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They're gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I've I've combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. So it's sort of ironic that that then they're in there um you know, the sound and everything it's just one system. But each one's got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That's just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it's better, but actually it's still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. 'Cause it could b it could it could be that f it could be that functionally that doesn't make it any better, but that just the appeal of of not having You know, these days there's a r pe things in people's homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we're all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don't know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. Alright. 
model-index: - name: bart-large-meeting-summary-xsum-samsum-dialogsum results: - task: name: Abstractive Text Summarization type: abstractive-text-summarization dataset: name: "cnndaily/newyorkdaily/xsum/samsum/dialogsum" type: cnndaily/newyorkdaily/xsum/samsum/dialogsum metrics: - name: Validation ROUGE-1 type: rouge-1 value: NA - name: Validation ROUGE-2 type: rouge-2 value: NA - name: Validation ROUGE-L type: rouge-L value: NA - name: Validation ROUGE-Lsum type: rouge-Lsum value: NA - name: Test ROUGE-1 type: rouge-1 value: NA - name: Test ROUGE-2 type: rouge-2 value: NA - name: Test ROUGE-L type: rouge-L value: NA - name: Test ROUGE-Lsum type: rouge-Lsum value: NA --- Model obtained by fine-tuning 'facebook/bart-large-xsum' ## Usage # Example 1 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM") text = '''The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct. ''' summarizer(text) ``` # Example 2 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM") text = '''Bangalore is the capital and the largest city of the Indian state of Karnataka. It has a population of more than 8 million and a metropolitan population of around 11 million, making it the third most populous city and fifth most populous urban agglomeration in India. Located in southern India on the Deccan Plateau, at a height of over 900 m (3,000 ft) above sea level, Bangalore is known for its pleasant climate throughout the year. Its elevation is the highest among the major cities of India.The city's history dates back to around 890 CE, in a stone inscription found at the Nageshwara Temple in Begur, Bangalore. The Begur inscription is written in Halegannada (ancient Kannada), mentions 'Bengaluru Kalaga' (battle of Bengaluru). It was a significant turning point in the history of Bangalore as it bears the earliest reference to the name 'Bengaluru'. In 1537 CE, Kempé Gowdā – a feudal ruler under the Vijayanagara Empire – established a mud fort considered to be the foundation of modern Bangalore and its oldest areas, or petes, which exist to the present day. After the fall of Vijayanagar empire in 16th century, the Mughals sold Bangalore to Chikkadevaraja Wodeyar (1673–1704), the then ruler of the Kingdom of Mysore for three lakh rupees. When Haider Ali seized control of the Kingdom of Mysore, the administration of Bangalore passed into his hands. The city was captured by the British East India Company after victory in the Fourth Anglo-Mysore War (1799), who returned administrative control of the city to the Maharaja of Mysore. 
The old city developed in the dominions of the Maharaja of Mysore and was made capital of the Princely State of Mysore, which existed as a nominally sovereign entity of the British Raj. In 1809, the British shifted their cantonment to Bangalore, outside the old city, and a town grew up around it, which was governed as part of British India. Following India's independence in 1947, Bangalore became the capital of Mysore State, and remained capital when the new Indian state of Karnataka was formed in 1956. The two urban settlements of Bangalore – city and cantonment – which had developed as independent entities merged into a single urban centre in 1949. The existing Kannada name, Bengalūru, was declared the official name of the city in 2006. Bangalore is widely regarded as the "Silicon Valley of India" (or "IT capital of India") because of its role as the nation's leading information technology (IT) exporter. Indian technological organisations are headquartered in the city. A demographically diverse city, Bangalore is the second fastest-growing major metropolis in India. Recent estimates of the metro economy of its urban area have ranked Bangalore either the fourth- or fifth-most productive metro area of India. As of 2017, Bangalore was home to 7,700 millionaires and 8 billionaires with a total wealth of $320 billion. It is home to many educational and research institutions. Numerous state-owned aerospace and defence organisations are located in the city. The city also houses the Kannada film industry. It was ranked the most liveable Indian city with a population of over a million under the Ease of Living Index 2020. ''' summarizer(text) ``` # Example 3 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM") text = '''Hi, I'm David and I'm supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That's about it, didn't get anything else. Did you get the same thing? Cool. There's too much gear. Okay. Can't draw. Um. Yeah. Um, well anyway, I don't know, it's just the first animal I can think off the top of my head. Um. Yes. Big reason is 'cause I'm allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they're quite harmless and mild and interesting. Tail's a bit big, I think. It's an after dinner dog then. Hmm. It does make sense from maybe the design point of view 'cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. Finding them is really a pain, you know. I mean it's usually quite small, or when you want it right, it slipped behind the couch or it's kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there's a cap there, so um depends on how much you can cram into that price. Um. I think that that's the main factor. Cool. Okay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we're gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we're designing a new remote control and um Oh I have to record who's here actually. 
So that's David, Andrew and Craig, isn't it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it's supposed to be original, trendy and user friendly. Um so that's kind of our our brief, as it were. Um and so there are three different stages to the design. Um I'm not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we're gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven't got an awful lot to discuss. Ok oh we do we do. Don't feel like you're in a rush, anyway. Ach why not We might have to get you up again then. I don't know what mine is. I'm gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don't know what I'm gonna write about. Um. I was gonna choose a dog as well. But I'll just draw a different kind of dog. M my favourite animal is my own dog at home. Um That doesn't really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that's very good of you. Uh. Um he's a mixture of uh various things. Um and what do I like about him, um That's just to suggest that his tail wags. Um he's very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he's quite quite wee as well so you know he can doesn't take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he's had his dinner and um he'll just all of a sudden just get up and start chasing his tail 'round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we're gonna be selling this remote control for twenty five Euro, um and we're aiming to make fifty million Euro. Um so we're gonna be selling this on an international scale. And uh we don't want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That's a good question. I imagine it probably is our sale actually because it's probably up to the the um the retailer to uh sell it for whatever price they want. Um. But I I don't know, I mean do you think the fact that it's going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it's depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that's um that's about like eighteen pounds or something, isn't it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. 
Um, I dunno, I've never bought a remote control, so I don't know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn't it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We're a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that's a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I'd wel we're gonna have to wrap up pretty quickly in the next couple of minutes. Um I'll just check we've nothing else. Okay. Um so anything else anybody wants to add about what they don't like about remote controls they've used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we'd want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. Well I guess that's up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting's gonna be in thirty minutes. So that's about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you're gonna be working on you know the actual working design of it so y you know what you're doing there. Um for user interface, technical functions, I guess that's you know like what we've been talking about, what it'll actually do. Um and uh marketing executive, you'll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you'll all get instructions emailed to you, I guess. Um. Yeah, so it's th the functional design stage is next, I guess. And uh and that's the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly 'cause this we're supposed to finish now. Um I guess that's up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we'll that's that's the end of the meeting, then. Um. So, uh thank you all for coming. Um I'm Craig and I'm User Interface. Yeah. Well, my favourite animal would be a monkey. Then they're small cute and furry, and uh when planet of the apes becomes real, I'm gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh. Mm-hmm. Great. And I'm Andrew and I'm uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that's that's it. Yeah. I will go. That's fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? 
Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family's beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it's his own cha tail he's chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. 'Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I'm wondering if there's um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don't know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. I'm thinking the price might might appeal to a certain market in one region, whereas in another it'll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. Right away I'm making some kind of assumptions about what what information we're given here, thinking, 'kay trendy probably means something other than just basic, something other than just standard. Um so I'm wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I'd say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don't think of remote controls as somethin something people consciously assess in their purchasing habits. It's just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They're gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I've I've combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. 
So it's sort of ironic that that then they're in there um you know, the sound and everything it's just one system. But each one's got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That's just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it's better, but actually it's still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. 'Cause it could b it could it could be that f it could be that functionally that doesn't make it any better, but that just the appeal of of not having You know, these days there's a r pe things in people's homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we're all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don't know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. Alright. ''' summarizer(text) ``` # Example 4 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM") text = ''' Das : Hi and welcome to the a16z podcast. I’m Das, and in this episode, I talk SaaS go-to-market with David Ulevitch and our newest enterprise general partner Kristina Shen. The first half of the podcast looks at how remote work impacts the SaaS go-to-market and what the smartest founders are doing to survive the current crisis. The second half covers pricing approaches and strategy, including how to think about free versus paid trials and navigating the transition to larger accounts. But we start with why it’s easier to move upmarket than down… and the advantage that gives a SaaS startup against incumbents. David : If you have a cohort of customers that are paying you $10,000 a year for your product, you’re going to find a customer that self-selects and is willing to pay $100,000 a year. Once you get one of those, your organization will figure out how you sell to, how you satisfy and support, customers at that price point and that size. But it’s really hard for a company that sells up market to move down market, because they’ve already baked in all that expensive, heavy lifting sales motion. And so as you go down market with a lower price point, usually, you can’t actually support it. Das : Does that mean that it’s easier for a company to do this go-to-market if they’re a new startup as opposed to if they’re a pre-existing SaaS? Kristina : It’s culturally very, very hard to give a product away for free that you’re already charging for. It feels like you’re eating away at your own potential revenue when you do it. So most people who try it end up pulling back very quickly. David : This is actually one of the key reasons why the bottoms up SaaS motion is just so competitive, and compelling, and so destructive against the traditional sales-driven test motion. 
If you have that great product and people are choosing to use it, it’s very hard for somebody with a sales-driven motion, and all the cost that’s loaded into that, to be able to compete against it. There are so many markets where initially, we would look at companies and say, “Oh, well, this couldn’t possibly be bottoms up. It has to be sold to the CIO. It has to be sold to the CSO or the CFO.” But in almost every case we’ve been wrong, and there has been a bottoms up motion. The canonical example is Slack. It’s crazy that Slack is a bottoms up company, because you’re talking about corporate messaging, and how could you ever have a messaging solution that only a few people might be using, that only a team might be using? But now it’s just, “Oh, yeah, some people started using it, and then more people started using it, and then everyone had Slack.” Kristina : I think another classic example is Dropbox versus Box. Both started as bottoms up businesses, try before you buy. But Box quickly found, “Hey, I’d rather sell to IT.” And Dropbox said, “Hey, we’ve got a great freemium motion going.” And they catalyzed their business around referrals and giving away free storage and shared storage in a way that really helped drive their bottoms up business. Das : It’s a big leap to go from selling to smaller customers to larger customers. How have you seen SaaS companies know or get the timing right on that? Especially since it does seem like that’s really related to scaling your sales force? Kristina : Don’t try to go from a 100-person company to a 20,000-person company. Start targeting early adopters, maybe they’re late stage pre-IPO companies, then newly IPO’d companies. Starting in tech tends to be a little bit easier because they tend to be early adopters. Going vertical by vertical can be a great strategy as well. Targeting one customer who might be branded in that space, can help brand yourself in that category. And then all their competitors will also want your product if you do a good job. A lot of times people will dedicate a sales rep to each vertical, so that they become really, really knowledgeable in that space, and also build their own brand and reputation and know who are the right customers to target. Das : So right now, you’ve got a lot more people working remote. Does this move to remote work mean that on-premise software is dying? And is it accelerating the move to software as a service? Kristina : This remote work and working from home is only going to catalyze more of the conversion from on-premise over to cloud and SaaS. In general, software spend declines 20% during an economic downturn. This happened in ’08, this happened in ’01. But when we look at the last downturn in ’08, SaaS spend actually, for public companies, increased, on average, 10%, which means there’s a 30% spread, which really shows us that there was a huge catalyst from people moving on-premise to SaaS. David : And as people work remote, the ability to use SaaS tools is much easier than having to VPN back into your corporate network. We’ve been seeing that, inside sales teams have been doing larger and larger deals, essentially moving up market on the inside, without having to engage with field sales teams. In fact, a lot of the new SaaS companies today rather than building out a field team, they have a hybrid team, where people are working and closing deals on the inside and if they had to go out and meet with a customer, they would do that. 
But by and large, most of it was happening over the phone, over email, and over videoconferencing. And all the deals now, by definition, are gonna be done remote because people can’t go visit their customers in person. Das : So with bottoms up, did user behavior and buyer behavior change, so the go-to-market evolved? Or did the go-to-market evolve and then you saw user and buyer behavior change? I’m curious with this move to remote work. Is that going to trigger more changes or has the go-to-market enabled that change in user behavior, even though we see that change coming because of a lot of forces outside of the market? Kristina : I definitely think they are interrelated. But I do think it was a user change that catalyzed everything. We decided that we preferred better software, and we tried a couple products. We were able to purchase off our credit card. And then IT and procurement eventually said, “Wow, everyone’s buying these already, I might as well get a company license and a company deal so I’m not paying as much.” While obviously software vendors had to offer the products that could be self-served, users started to realize they had the power, they wanted to use better software, they paid with their credit cards. And now software vendors are forced to change their go-to-market to actually suit that use case. Das : If that’s the case that when user behavior has changed, it’s tended to be the catalyzing force of bigger changes in the go-to-market, what are some of the changes you foresee for SaaS because the world has changed to this new reality of remote work and more distributed teams? David : We’re in a very uncertain economic environment right now. And a couple of things will become very clear over the next 3 to 9 to 15 months — you’re going to find out which SaaS products are absolutely essential to helping a business operate and run, and which ones were just nice to have and may not get renewed. I think on the customer, buying side, you’re very likely to see people push back on big annual commitments and prefer to go month-to-month where they can. Or you’ll see more incentives from SaaS startups to offer discounts for annual contracts. You’re going to see people that might sign an annual contract, but they may not want to pay upfront. They may prefer to meter the cash out ratably over the term of the contract. And as companies had empowered and allowed budget authority to be pushed down in organizations, you’re gonna see that budget authority get pulled back, more scrutiny on spending, and likely a lot of SaaS products not get renewed that turned out to not be essential. Kristina : I think the smartest founders are making sure they have the runway to continue to exist. And they’re doing that in a couple of ways. They’re preserving cash, and they are making sure that their existing customers are super, super happy, because retaining your customers is so important in this environment. And they’re making sure that they have efficient or profitable customer acquisition. Don’t spend valuable dollars acquiring customers. But acquire customers efficiently that will add to a great existing customer base. Das : To go into pricing and packaging for SaaS for a moment, what are some of the different pricing approaches that you see SaaS companies taking? Kristina : The old school way of doing SaaS go-to-market is bundle everything together, make the pricing super complex, so you don’t actually understand what you’re paying for. 
You’re forced to purchase it because you need one component of the product. New modern SaaS pricing is keep it simple, keep it tied to value, and make sure you’re solving one thing really, really well. David : You want to make it easy for your customers to give you money. And if your customers don’t understand your pricing, that’s a huge red flag. Sometimes founders will try to over engineer their pricing model. Kristina : We talk a lot about how everything has to be 10X better than the alternatives. But it’s much easier to be 10X better when you solve one thing very, very well, and then have simple pricing around it. I think the most common model that most people know about is PEPM or per employee per month, where you’re charging basically for every single seat. Another really common model is the freemium model. So, think about a Dropbox, or an Asana, or a Skype, where it’s trigger based. You try the product for free, but when you hit a certain amount of storage, or a certain amount of users, then it converts over to paid. And then you also have a time trial, where you get the full experience of the product for some limited time period. And then you’re asked to pay if you want to continue using the product. And then there’s pay as you go, particularly as a usage model. So, Slack will say, “Hey, if your users aren’t actually using the product this month, we won’t actually charge you for it.” David : The example that Kristina made about Slack and users, everybody understands what a user is, and if they’re using the product, they pay for it, and if they’re not using it, they don’t pay for it. That’s a very friendly way to make it easy for your customers to give you money. If Slack came up with a pricing model that was like based on number of messages, or number of API integration calls, the customer would have no idea what that means. Kristina : There’s also the consumption model. So Twilio only charges you for every SMS text or phone call that you make on the platform any given month. And so they make money or lose money as your usage goes up or down. The pricing is very aligned to your productivity. David : Generally, those are for products where the usage only goes in one direction. If you think of a company like Databricks, where they’re charging for storage, or Amazon’s S3 service, it is very aligned with the customer, but it also strategically aligns with the business because they know the switching cost is very high, the churn is very low. And generally, in those businesses, you’re only going to store more data, so they can charge based on usage or volume of data. Kristina : Recently, there’s been a huge trend of payments as revenue. It’s particularly common in vertical markets where SaaS companies are adding payments as revenue in addition to their employee or subscription revenue. If you look at Shopify, for example, more than 50% of their revenue is actually payment revenue. They’re making money every single time you purchase something off one of their shopping cart websites. Das : When you’re working with a founder or a SaaS startup, how have you seen them find the right pricing model for their product, for their market? Kristina : Step one is just talk to a lot of customers. Try to figure out what is the market pricing for possible alternatives or competitors, understand their pain points and their willingness to pay. And just throw a price out there, because you have to have a starting point in order to actually test and iterate. 
Particularly in the SMB, or the bottoms up business, you can test and iterate pretty quickly because you have so many data points. David : I always tell founders, step one is to just go out there and talk to customers. Step two is just double your prices. I don’t think there’s ever been a great company with a great product that’s fallen apart because their pricing was wrong. But a lot of SaaS startup founders really under price, and you don’t want to find out two or three years later that you were 200% underpriced. A very common thing that SaaS companies do, they’ll have the basic package that either is free or low cost, that you can just sign up online for. They’ll have a middle package where they share some pricing, and then they’ll have the enterprise package where you have to contact sales to find out more. And that way they don’t actually have to show the pricing for that third package. And that gives the salespeople the flexibility to adjust pricing on a per deal basis. Das : When you’re working with companies, why are they underpricing their products? David : I think it’s psychological. People need to price on value, and they don’t know how much value they’re delivering relative to “Oh, it only cost me $100 a month to provide this service, so I just need to charge $200.” But if it turns out you’re saving your customer $50,000 a year, then you’re wildly underpriced. You have to remember that SaaS is essentially a proxy for outsourced IT. You’re spending money on a SaaS service to not pay to develop something internally, or to have to pay IT to support something that’s more complex on-prem. Software is much cheaper than people, and so generally, the price point can be much higher. Kristina : And the other thing is your value increases over time. You’re delivering more features, more products, you understand the customer better. It’s the beauty of the SaaS model and cloud model that you can iterate and push code immediately, and the customer immediately sees value. A lot of times people have the same price point from the first customer sold to three years later and the 200th customer. Quite frankly, you’ve delivered so much value along the way that your price point should have gone up. The other thing I’ll say is a lot of people discount per seat pricing a lot as they move up market. We tend to tell people that the best validation of your product having great product market fit is your ability to hold your price point. So while there is some natural discounting on a per seat basis because people do deserve some volume discounting, I would say try to resist that as much as possible. Das : Especially for a technical founder, it’s so tempting to get in there and fiddle with these knobs. How do you know when it is time to experiment with your pricing and packaging? David : If you’re looking at your business and you see that you are doing more deals, and they’re closing faster, you should raise your pricing. And you pay attention to how long it takes to close deals and whether the number of deals is staying consistent as you do that. And, at some point, you’re going to find out when you’re losing deals on price. I think a moment where companies have to plan ahead to avoid having to course correct is after they roll out massive pricing and packaging changes, which are pretty natural as companies move up market. 
But how they navigate that transition to larger accounts, and how they either bring along or move away from those smaller, earlier customers who got them to where they are, tends to be really important because they can get a lot of noise on Twitter, they can get a lot of blowback from their customers. So Zendesk is a company where they rolled out a major packaging change. And when they rolled it out, they hadn’t planned on grandfathering in their early customers. They got a lot of pushback, and very quickly, they put out a blog post and said, “We hear what you’re saying, we appreciate you building the business that we’ve become today. We do need to have a package for the future. But all the people that have been customers so far will be grandfathered in for at least a period of time into the old model.” Kristina : If you iterate pricing constantly, you don’t really have this problem because your customers will be used to pricing changes. You normally pair them with new features, and it all kind of works out. But if you have to go through a big grandfather change, I tend to lean towards treating your early customers really, really well. They adopted when you weren’t a big company yet. They probably co-built the product with you in many ways. And so, it’s great to get more dollars out of your customer base, but treat your early customers well. Das : Are there any other failure modes that you see startups really falling into around pricing and packaging or any common mistakes that they make? David : I think a lot of founders don’t always map out the cost or model of their pricing and their product relative to their cost of actually doing sales and marketing and customer acquisition. Kristina : Inside sales is so popular in Silicon Valley. When you’re selling more to an SMB or mid-market type customer, the expectation is that you’re educating and helping the prospective customer over the phone. And so, you’re not expected to be as high touch. But $5K is almost the minimum price point you need to sell to the SMB with an inside sales team in order to pay for the outbound costs and all the conversions, because there is typically a team that sits around the quota-carrying rep. And so, price matching — how much your price point is compared to what your go-to-market motion is — matters a lot. Other big failure modes that I see: people guess the ramp time of a sales rep wrong. And ramp time really ties to the segment of customer you’re selling into. It tends to be that if you’re selling into the enterprise, the ramp time for sales reps, because sales cycles are so long, tends to be much longer as well. They could be six months plus, could be a year. While if you’re selling more into SMB or mid-market, the ramp time to get a rep up and running can be much shorter, three to six months. Because the sales cycles are shorter, they just iterate much faster, and they ramp up much more quickly. David : The other thing that people have to understand is that sales velocity is a really important component to figuring out how many reps you should be hiring, whether they should be inside reps or field reps. If it takes you 90 days to close a deal, that can’t be a $5,000 a year deal, that has to be a $50,000 or even $150,000 a year deal. Das : Kristina, I know you’ve done a lot of work with metrics. So how do those play in? Kristina : Probably the one way to sum it all together is how many months does it take to pay back customer acquisition cost. 
Very commonly within the SaaS world, we talk about a 12-month CAC payback. We typically want to see for every dollar you spend on sales and marketing, you get a dollar back within a year. That means you can tweak the inputs any way you want. Let’s say that doing paid acquisition is really effective for you. Then, you can spend proportionally more on paid acquisition and less on sales reps. Vice versa, if you have a great inbound engine, you actually can hire a lot more sales reps and spend more on sales headcount. With all formulas, it’s a guide rail, so if you have customers that retain really, really well, let’s say you’re selling to the enterprise, and you’ve got a 90% or 95% annual retention rate, then your CAC payback could be between 12 and 24 months. But let’s say you’re selling to the SMB and churn is 2% or 3% monthly, which ends up being like 80% to 90% annual retention. Then, because your customer is less sticky, I would recommend looking at a CAC payback of 6 to 12 months. Das : How should you think about doing a free trial versus a paid trial? David : On the one hand, the bottoms up motion where people can try essentially a full version of a product before they buy it is extremely powerful. On the other hand, I’ve started to try to think about how I advise companies, when they are thinking about a free trial for something that might cost $100,000 or $200,000 a year? Do we do a paid pilot that has some sort of contractual obligation that if we meet then turns into a commercial engagement? Kristina : I do think the beauty of the bottoms up business is that you can get people to try the entire experience of the product for free, and they fall in love with it, and a certain percentage will convert. And that works really, really well for products that can self-serve. When you start moving up market to more complex products, the challenge with trials is it takes work to actually implement the product, whether it be integrations, IT has to give access, etc. You lose that self-serve ability, which is so amazing in the trial. And so, I tend to be more in the camp of paid trials, if it costs you money to actually deploy the trial. And when you’re selling to bigger customers, they associate value when they have to pay. Once a customer has to pay you, then they feel a need to make the project successful and thus they will onboard, schedule things, give you data and access. David : If you can get to a point where you get the customer to do that paid pilot, such that the only difference between a pilot and an actual customer is just the signing of a contract, that’s very powerful. Now, that does force you to have a really good pre-sales motion to make sure that you can deliver on the promise you’ve made your customers. When companies don’t have a great product, and they paper over it with professional services and sales engineering and post-sales support, that paid pilot thing doesn’t work because the experience isn’t good enough. So, it really is incumbent on the SaaS company that does a paid pilot to make sure that they are able to deliver on that experience. Kristina : And one emerging trend recently is people signing an annual contract with a one or three month out, as a replacement to the paid pilot. Because it’s the best of both worlds, the SaaS company that’s selling the product gets a higher level of commitment. And the customer gets the optionality of opting out in the same way as a trial without any clawback. It really comes down to where procurement falls. 
Sometimes procurement is at the beginning of that decision, which makes it more like an annual contract. Sometimes procurement is at the one or three month opt-out period, which means the customer already has a great experience, loves the product, and it is an easier way to convert procurements to actually sign on… David : And that is a really good segue into renewals. I always tell founders, you might have this subscription business, but it’s not a recurring revenue business until the second year when the revenue actually recurs. I think you really have the first three months to get a customer up and running and happy. And if they’re not, you then have about three months to fix it. And if all that works out, then the remaining six months of the contract can be focused on upsell and expansion. Das : Awesome. Thank you, Kristina. Thank you, David. Kristina : Thanks so much for having us. This was fun. David : Yeah, a lot of fun, great topics, and our favorite thing to talk about. ''' summarizer(text) ```
razent/SciFive-base-Pubmed
7ecd3e2966a97aa898461113a2dbb8da1acac625
2022-03-20T17:47:16.000Z
[ "pytorch", "tf", "t5", "text2text-generation", "en", "dataset:pubmed", "arxiv:2106.03598", "transformers", "token-classification", "text-classification", "question-answering", "text-generation", "autotrain_compatible" ]
text-classification
false
razent
null
razent/SciFive-base-Pubmed
234
1
transformers
3,407
---
language:
- en
tags:
- token-classification
- text-classification
- question-answering
- text2text-generation
- text-generation
datasets:
- pubmed
---

# SciFive Pubmed Base

## Introduction

Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)

Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_

## How to use

For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-base-Pubmed")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-base-Pubmed")
model.to("cuda")  # the inputs below are moved to CUDA, so the model must be too

sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ."
text = "ncbi_ner: " + sentence + " </s>"

encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")

outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    max_length=256,
    early_stopping=True
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```
textattack/xlnet-base-cased-rotten-tomatoes
d82af55dad548dfb89119b4664309e7cfa9e2053
2020-07-06T16:36:38.000Z
[ "pytorch", "xlnet", "text-generation", "transformers" ]
text-generation
false
textattack
null
textattack/xlnet-base-cased-rotten-tomatoes
234
null
transformers
3,408
## TextAttack Model Card This `xlnet-base-cased` model was fine-tuned for sequence classification using TextAttack and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 128. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9071294559099438, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
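For inference outside of TextAttack, here is a minimal sketch. It assumes the checkpoint loads as a standard two-label sequence-classification model; the label order below follows the rotten_tomatoes convention and is an assumption — check `model.config.id2label` before relying on it.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/xlnet-base-cased-rotten-tomatoes")
model = AutoModelForSequenceClassification.from_pretrained("textattack/xlnet-base-cased-rotten-tomatoes")

inputs = tokenizer("A gripping, beautifully shot film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# rotten_tomatoes labels: 0 = negative, 1 = positive (assumed mapping)
print("positive" if logits.argmax(dim=-1).item() == 1 else "negative")
```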
csebuetnlp/mT5_m2o_hindi_crossSum
4246a0fc5df90077090cdb30f088ace8cecc3aaa
2022-04-22T15:03:33.000Z
[ "pytorch", "mt5", "text2text-generation", "am", "ar", "az", "bn", "my", "zh", "en", "fr", "gu", "ha", "hi", "ig", "id", "ja", "rn", "ko", "ky", "mr", "ne", "om", "ps", "fa", "pcm", "pt", "pa", "ru", "gd", "sr", "si", "so", "es", "sw", "ta", "te", "th", "ti", "tr", "uk", "ur", "uz", "vi", "cy", "yo", "arxiv:2112.08804", "transformers", "summarization", "mT5", "autotrain_compatible" ]
summarization
false
csebuetnlp
null
csebuetnlp/mT5_m2o_hindi_crossSum
234
null
transformers
3,409
--- tags: - summarization - mT5 language: - am - ar - az - bn - my - zh - en - fr - gu - ha - hi - ig - id - ja - rn - ko - ky - mr - ne - om - ps - fa - pcm - pt - pa - ru - gd - sr - si - so - es - sw - ta - te - th - ti - tr - uk - ur - uz - vi - cy - yo licenses: - cc-by-nc-sa-4.0 widget: - text: "Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs \"spill over into misinformation about vaccines in general\". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. \"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO,\" the post said, referring to the World Health Organization." --- # mT5-m2o-hindi-CrossSum This repository contains the many-to-one (m2o) mT5 checkpoint finetuned on all cross-lingual pairs of the [CrossSum](https://huggingface.co/datasets/csebuetnlp/CrossSum) dataset, where the target summary was in **hindi**, i.e. this model tries to **summarize text written in any language in Hindi.** For finetuning details and scripts, see the [paper](https://arxiv.org/abs/2112.08804) and the [official repository](https://github.com/csebuetnlp/CrossSum). ## Using this model in `transformers` (tested on 4.11.0.dev0) ```python import re from transformers import AutoTokenizer, AutoModelForSeq2SeqLM WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip())) article_text = """Videos that say approved vaccines are dangerous and cause autism, cancer or infertility are among those that will be taken down, the company said. The policy includes the termination of accounts of anti-vaccine influencers. Tech giants have been criticised for not doing more to counter false health information on their sites. In July, US President Joe Biden said social media platforms were largely responsible for people's scepticism in getting vaccinated by spreading misinformation, and appealed for them to address the issue. YouTube, which is owned by Google, said 130,000 videos were removed from its platform since last year, when it implemented a ban on content spreading misinformation about Covid vaccines. In a blog post, the company said it had seen false claims about Covid jabs "spill over into misinformation about vaccines in general". The new policy covers long-approved vaccines, such as those against measles or hepatitis B. 
"We're expanding our medical misinformation policies on YouTube with new guidelines on currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and the WHO," the post said, referring to the World Health Organization.""" model_name = "csebuetnlp/mT5_m2o_hindi_crossSum" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) input_ids = tokenizer( [WHITESPACE_HANDLER(article_text)], return_tensors="pt", padding="max_length", truncation=True, max_length=512 )["input_ids"] output_ids = model.generate( input_ids=input_ids, max_length=84, no_repeat_ngram_size=2, num_beams=4 )[0] summary = tokenizer.decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summary) ``` ## Citation If you use this model, please cite the following paper: ``` @article{hasan2021crosssum, author = {Tahmid Hasan and Abhik Bhattacharjee and Wasi Uddin Ahmad and Yuan-Fang Li and Yong-bin Kang and Rifat Shahriyar}, title = {CrossSum: Beyond English-Centric Cross-Lingual Abstractive Text Summarization for 1500+ Language Pairs}, journal = {CoRR}, volume = {abs/2112.08804}, year = {2021}, url = {https://arxiv.org/abs/2112.08804}, eprinttype = {arXiv}, eprint = {2112.08804} } ```
t8oo/DialoGPT-small-zenigata
43446068845204bf072d65420cf79021798ef7f6
2022-05-23T08:02:15.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
t8oo
null
t8oo/DialoGPT-small-zenigata
234
null
transformers
3,410
---
tags:
- conversational
---

# Zenigata DialoGPT Model
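A minimal chat sketch following the standard DialoGPT usage pattern (single-turn; the reply quality and persona are not documented by this card):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("t8oo/DialoGPT-small-zenigata")
model = AutoModelForCausalLM.from_pretrained("t8oo/DialoGPT-small-zenigata")

# encode the user message plus the end-of-string token, then generate a reply
input_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```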
ck46/t5-base-hotpot-qa-qg
f74aba4b96c41f84ecadb68ed23824045b4647be
2022-01-11T09:52:49.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
ck46
null
ck46/t5-base-hotpot-qa-qg
233
null
transformers
3,411
Entry not found
flax-community/gpt-neo-125M-code-clippy-dedup-2048
dcaced278779587969abf2780b49734cef1dcd1e
2021-07-18T17:30:41.000Z
[ "pytorch", "jax", "tensorboard", "gpt_neo", "text-generation", "transformers" ]
text-generation
false
flax-community
null
flax-community/gpt-neo-125M-code-clippy-dedup-2048
233
4
transformers
3,412
Entry not found
healx/gpt-2-pubmed-medium
6495202861edc7ea631c08d0892917d91255290c
2020-12-11T21:43:41.000Z
[ "pytorch", "arxiv:2004.13845", "transformers" ]
null
false
healx
null
healx/gpt-2-pubmed-medium
233
null
transformers
3,413
GPT-2 (355M model) finetuned on 0.5m PubMed abstracts. Used in [writemeanabstract.com](https://writemeanabstract.com) and the following preprint: [Papanikolaou, Yannis, and Andrea Pierleoni. "DARE: Data Augmented Relation Extraction with GPT-2." arXiv preprint arXiv:2004.13845 (2020).](https://arxiv.org/abs/2004.13845)
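A minimal generation sketch, assuming the checkpoint loads with the standard GPT-2 classes (the prompt below is illustrative, not taken from the training data):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("healx/gpt-2-pubmed-medium")
model = GPT2LMHeadModel.from_pretrained("healx/gpt-2-pubmed-medium")

# sample an abstract-style continuation from a short prompt
input_ids = tokenizer.encode("We investigated the effect of", return_tensors="pt")
output = model.generate(input_ids, max_length=60, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```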
moussaKam/frugalscore_medium_bert-base_mover-score
e4d050062a4188e213ca57bae5e22e2d689a5470
2022-05-11T11:07:21.000Z
[ "pytorch", "bert", "text-classification", "arxiv:2110.08559", "transformers" ]
text-classification
false
moussaKam
null
moussaKam/frugalscore_medium_bert-base_mover-score
233
null
transformers
3,414
# FrugalScore

FrugalScore is an approach to learning a fixed, low-cost version of any expensive NLG metric, while retaining most of its original performance.

Paper: https://arxiv.org/abs/2110.08559?context=cs

Project github: https://github.com/moussaKam/FrugalScore

The pretrained checkpoints presented in the paper:

| FrugalScore | Student | Teacher | Method |
|----------------------------------------------------|-------------|----------------|------------|
| [moussaKam/frugalscore_tiny_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_bert-score) | BERT-tiny | BERT-Base | BERTScore |
| [moussaKam/frugalscore_small_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_bert-score) | BERT-small | BERT-Base | BERTScore |
| [moussaKam/frugalscore_medium_bert-base_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_bert-score) | BERT-medium | BERT-Base | BERTScore |
| [moussaKam/frugalscore_tiny_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_roberta_bert-score) | BERT-tiny | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_small_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_roberta_bert-score) | BERT-small | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_medium_roberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_roberta_bert-score) | BERT-medium | RoBERTa-Large | BERTScore |
| [moussaKam/frugalscore_tiny_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_tiny_deberta_bert-score) | BERT-tiny | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_small_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_small_deberta_bert-score) | BERT-small | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_medium_deberta_bert-score](https://huggingface.co/moussaKam/frugalscore_medium_deberta_bert-score) | BERT-medium | DeBERTa-XLarge | BERTScore |
| [moussaKam/frugalscore_tiny_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_tiny_bert-base_mover-score) | BERT-tiny | BERT-Base | MoverScore |
| [moussaKam/frugalscore_small_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_small_bert-base_mover-score) | BERT-small | BERT-Base | MoverScore |
| [moussaKam/frugalscore_medium_bert-base_mover-score](https://huggingface.co/moussaKam/frugalscore_medium_bert-base_mover-score) | BERT-medium | BERT-Base | MoverScore |
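A minimal scoring sketch, assuming each checkpoint is a single-logit (regression) sequence-classification model over (reference, candidate) sentence pairs — the evaluation code in the project repository is the authoritative reference:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "moussaKam/frugalscore_medium_bert-base_mover-score"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# score a candidate sentence against a reference
inputs = tokenizer("The cat sat on the mat.", "A cat was sitting on the mat.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # one logit = predicted metric score
print(score)
```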
mrm8488/distilroberta-finetuned-age_news-classification
2c7aff917a107ea45621627217f3e63adb8ce6b7
2021-05-20T18:23:35.000Z
[ "pytorch", "jax", "roberta", "text-classification", "en", "dataset:ag_news", "transformers", "news", "classification" ]
text-classification
false
mrm8488
null
mrm8488/distilroberta-finetuned-age_news-classification
233
1
transformers
3,415
---
language: en
tags:
- news
- classification
datasets:
- ag_news
widget:
- text: "Venezuela Prepares for Chavez Recall Vote Supporters and rivals warn of possible fraud; government says Chavez's defeat could produce turmoil in world oil market."
---

# distilroberta-base fine-tuned on the ag_news dataset for news classification

Test set accuracy: 0.94
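A minimal sketch with the text-classification pipeline (label names come from the model config):

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="mrm8488/distilroberta-finetuned-age_news-classification")
print(classifier("Oil prices rise as OPEC announces production cuts."))
```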
patrickvonplaten/unispeech-large-1500h-cv-timit
084bb18d5c0ae406b34156887764c43d19db33aa
2021-10-27T10:50:16.000Z
[ "pytorch", "tensorboard", "unispeech", "automatic-speech-recognition", "dataset:timit_asr", "transformers", "timit_asr", "generated_from_trainer", "model-index" ]
automatic-speech-recognition
false
patrickvonplaten
null
patrickvonplaten/unispeech-large-1500h-cv-timit
233
null
transformers
3,416
--- tags: - automatic-speech-recognition - timit_asr - generated_from_trainer datasets: - timit_asr model-index: - name: unispeech-large-1500h-cv-timit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # unispeech-large-1500h-cv-timit This model is a fine-tuned version of [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.3099 - Wer: 0.2196 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.64 | 0.69 | 100 | 3.9717 | 0.9981 | | 2.6793 | 1.38 | 200 | 2.6264 | 1.0 | | 1.2221 | 2.07 | 300 | 0.9999 | 0.7167 | | 0.9009 | 2.76 | 400 | 0.6509 | 0.5570 | | 0.4352 | 3.45 | 500 | 0.4682 | 0.4332 | | 0.227 | 4.14 | 600 | 0.3661 | 0.3565 | | 0.2169 | 4.83 | 700 | 0.3244 | 0.3203 | | 0.2687 | 5.52 | 800 | 0.3137 | 0.2981 | | 0.127 | 6.21 | 900 | 0.3220 | 0.2828 | | 0.0922 | 6.9 | 1000 | 0.3075 | 0.2708 | | 0.0965 | 7.59 | 1100 | 0.2779 | 0.2576 | | 0.1298 | 8.28 | 1200 | 0.3111 | 0.2480 | | 0.0855 | 8.97 | 1300 | 0.3021 | 0.2421 | | 0.0629 | 9.66 | 1400 | 0.3122 | 0.2511 | | 0.0471 | 10.34 | 1500 | 0.2965 | 0.2368 | | 0.0871 | 11.03 | 1600 | 0.3247 | 0.2387 | | 0.0503 | 11.72 | 1700 | 0.3359 | 0.2363 | | 0.0402 | 12.41 | 1800 | 0.2976 | 0.2332 | | 0.0336 | 13.1 | 1900 | 0.3139 | 0.2321 | | 0.0634 | 13.79 | 2000 | 0.3188 | 0.2309 | | 0.0429 | 14.48 | 2100 | 0.3145 | 0.2335 | | 0.028 | 15.17 | 2200 | 0.3244 | 0.2242 | | 0.0255 | 15.86 | 2300 | 0.2914 | 0.2196 | | 0.0406 | 16.55 | 2400 | 0.3249 | 0.2202 | | 0.0512 | 17.24 | 2500 | 0.3037 | 0.2198 | | 0.0269 | 17.93 | 2600 | 0.3218 | 0.2242 | | 0.0287 | 18.62 | 2700 | 0.3106 | 0.2185 | | 0.0319 | 19.31 | 2800 | 0.3124 | 0.2217 | | 0.0494 | 20.0 | 2900 | 0.3099 | 0.2196 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
smilesandtea/DialoGPT-medium-Rick
5438cc37710aaa5fe9f6523bb4f63a59eea18c99
2021-12-06T19:27:03.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
smilesandtea
null
smilesandtea/DialoGPT-medium-Rick
233
null
transformers
3,417
---
tags:
- conversational
---

# Rick DialoGPT Model
tau/splinter-large-qass
317a7d0f7432d4bbae0e4187257f20e425ff154b
2021-09-03T08:47:23.000Z
[ "pytorch", "splinter", "question-answering", "en", "arxiv:2108.05857", "transformers", "SplinterModel", "license:apache-2.0", "autotrain_compatible" ]
question-answering
false
tau
null
tau/splinter-large-qass
233
0
transformers
3,418
---
language: en
tags:
- splinter
- SplinterModel
license: apache-2.0
---

# Splinter large model (with pretrained QASS-layer weights)

Splinter-large is the pretrained model discussed in the paper [Few-Shot Question Answering by Pretraining Span Selection](https://aclanthology.org/2021.acl-long.239/) (at ACL 2021). Its original repository can be found [here](https://github.com/oriram/splinter). The model is case-sensitive.

Note (1): This model **does** contain the pretrained weights for the QASS layer (see paper for details). For the model **without** those weights, see [tau/splinter-large](https://huggingface.co/tau/splinter-large).

Note (2): Splinter-large was trained after the paper was released, so its results are not reported there. However, this model outperforms the base model by large margins. For example, on SQuAD, the model is able to reach 80% F1 given only 128 examples, whereas the base model obtains only ~73%. See the results for Splinter-large in the Appendix of [this paper](https://arxiv.org/pdf/2108.05857.pdf).

## Model description

Splinter is a model that is pretrained in a self-supervised fashion for few-shot question answering. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts.

More precisely, it was pretrained with the Recurring Span Selection (RSS) objective, which emulates the span selection process involved in extractive question answering. Given a text, clusters of recurring spans (n-grams that appear more than once in the text) are first identified. For each such cluster, all of its instances but one are replaced with a special `[QUESTION]` token, and the model should select the correct (i.e., unmasked) span for each masked one. The model also defines the Question-Aware Span selection (QASS) layer, which selects spans conditioned on a specific question (in order to perform multiple predictions).

## Intended uses & limitations

The prime use for this model is few-shot extractive QA.

## Pretraining

The model was pretrained on a v3-32 TPU for 2.4M steps. The training data is based on **Wikipedia** and **BookCorpus**. See the paper for more details.

### BibTeX entry and citation info

```bibtex
@inproceedings{ram-etal-2021-shot,
    title = "Few-Shot Question Answering by Pretraining Span Selection",
    author = "Ram, Ori and Kirstain, Yuval and Berant, Jonathan and Globerson, Amir and Levy, Omer",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.239",
    doi = "10.18653/v1/2021.acl-long.239",
    pages = "3066--3079",
}
```
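A minimal extractive-QA sketch, assuming a `transformers` version that ships the Splinter classes (v4.11+); the exact input convention (placement of the `[QUESTION]` token) should be checked against the official repository:

```python
import torch
from transformers import SplinterForQuestionAnswering, SplinterTokenizer

tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-large-qass")
model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-large-qass")

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare between 1599 and 1601."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# take the most likely start/end positions and decode the answer span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```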
Intel/bert-base-uncased-mrpc
014e870f64e3c1376952bf518a8cdb9e95df20f7
2022-04-06T08:13:30.000Z
[ "pytorch", "bert", "text-classification", "en", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
Intel
null
Intel/bert-base-uncased-mrpc
233
null
transformers
3,419
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: bert-base-uncased-mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8602941176470589 - name: F1 type: f1 value: 0.9042016806722689 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.6978 - Accuracy: 0.8603 - F1: 0.9042 - Combined Score: 0.8822 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu102 - Datasets 1.14.0 - Tokenizers 0.11.6
alibaba-pai/pai-bert-tiny-zh
4acdb9757ebe593f0e65f829339a8818a90094e1
2022-06-10T02:34:43.000Z
[ "pytorch", "bert", "zh", "arxiv:2205.00258", "transformers", "license:apache-2.0", "fill-mask" ]
fill-mask
false
alibaba-pai
null
alibaba-pai/pai-bert-tiny-zh
233
1
transformers
3,420
---
language: zh
pipeline_tag: fill-mask
widget:
- text: "中国的首都是北[MASK]。"
- text: "牛奶是[MASK]色的。"
tags:
- bert
license: apache-2.0
---

## Alibaba PAI BERT Tiny Chinese

This project provides Chinese pre-trained language models and various types of NLP tools. The models are pre-trained on large-scale corpora hosted by the Alibaba PAI team, and are developed on top of the EasyNLP framework (https://github.com/alibaba/EasyNLP).

## Citation

If you find this resource useful, please cite the following paper in your work:

```
@article{easynlp,
  title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing},
  publisher = {arXiv},
  author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei},
  url = {https://arxiv.org/abs/2205.00258},
  year = {2022}
}
```
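A minimal fill-mask sketch using one of the widget examples above:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="alibaba-pai/pai-bert-tiny-zh")
print(unmasker("中国的首都是北[MASK]。"))
```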
joaoalvarenga/bloom-8bit
8d1adb1b9642666dfe80d87440e690b3f974ca20
2022-07-14T00:12:48.000Z
[ "pytorch", "bloom", "text-generation", "ak", "ar", "as", "bm", "bn", "ca", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zu", "arxiv:2106.09685", "transformers", "license:bigscience-bloom-rail-1.0" ]
text-generation
false
joaoalvarenga
null
joaoalvarenga/bloom-8bit
233
41
transformers
3,421
--- inference: false license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zu pipeline_tag: text-generation --- ### Quantized bigscience/bloom with 8-bit weights Heavily inspired by [Hivemind's GPT-J-6B with 8-bit weights](https://huggingface.co/hivemind/gpt-j-6B-8bit), this is a version of [bigscience/bloom](https://huggingface.co/bigscience/bloom) a ~176 billion parameters language model that you run and fine-tune with less memory. Here, we also apply [LoRA (Low Rank Adaptation)](https://arxiv.org/abs/2106.09685) to reduce model size. The original version takes \~353GB memory, this version takes **\~180GB**. Our main goal is to generate a model compressed enough to be deployed in a traditional Kubernetes cluster. ### How to fine-tune In this [notebook](https://nbviewer.org/urls/huggingface.co/joaoalvarenga/bloom-8bit/raw/main/fine-tuning-example.ipynb) you can find an adaptation from [Hivemind's GPT-J 8-bit fine-tuning notebook](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es) to fine-tune Bloom 8-bit with a 3x NVIDIA A100 instance. ### How to use This model can be used by adapting Bloom original implementation. This is an adaptation from [Hivemind's GPT-J 8-bit](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb): ```python import transformers import torch import torch.nn as nn import torch.nn.functional as F from bitsandbytes.functional import quantize_blockwise, dequantize_blockwise from typing import Tuple from torch.cuda.amp import custom_fwd, custom_bwd class FrozenBNBLinear(nn.Module): def __init__(self, weight, absmax, code, bias=None): assert isinstance(bias, nn.Parameter) or bias is None super().__init__() self.out_features, self.in_features = weight.shape self.register_buffer("weight", weight.requires_grad_(False)) self.register_buffer("absmax", absmax.requires_grad_(False)) self.register_buffer("code", code.requires_grad_(False)) self.adapter = None self.bias = bias def forward(self, input): output = DequantizeAndLinear.apply(input, self.weight, self.absmax, self.code, self.bias) if self.adapter: output += self.adapter(input) return output @classmethod def from_linear(cls, linear: nn.Linear) -> "FrozenBNBLinear": weights_int8, state = quantize_blockise_lowmemory(linear.weight) return cls(weights_int8, *state, linear.bias) def __repr__(self): return f"{self.__class__.__name__}({self.in_features}, {self.out_features})" class DequantizeAndLinear(torch.autograd.Function): @staticmethod @custom_fwd def forward(ctx, input: torch.Tensor, weights_quantized: torch.ByteTensor, absmax: torch.FloatTensor, code: torch.FloatTensor, bias: torch.FloatTensor): weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code) ctx.save_for_backward(input, weights_quantized, absmax, code) ctx._has_bias = bias is not None return F.linear(input, weights_deq, bias) @staticmethod @custom_bwd def backward(ctx, grad_output: torch.Tensor): assert not ctx.needs_input_grad[1] and not ctx.needs_input_grad[2] and not ctx.needs_input_grad[3] input, weights_quantized, absmax, code = ctx.saved_tensors # grad_output: [*batch, out_features] weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code) grad_input = grad_output @ weights_deq grad_bias = grad_output.flatten(0, -2).sum(dim=0) if 
ctx._has_bias else None return grad_input, None, None, None, grad_bias class FrozenBNBEmbedding(nn.Module): def __init__(self, weight, absmax, code): super().__init__() self.num_embeddings, self.embedding_dim = weight.shape self.register_buffer("weight", weight.requires_grad_(False)) self.register_buffer("absmax", absmax.requires_grad_(False)) self.register_buffer("code", code.requires_grad_(False)) self.adapter = None def forward(self, input, **kwargs): with torch.no_grad(): # note: both quantized weights and input indices are *not* differentiable weight_deq = dequantize_blockwise(self.weight, absmax=self.absmax, code=self.code) output = F.embedding(input, weight_deq, **kwargs) if self.adapter: output += self.adapter(input) return output @classmethod def from_embedding(cls, embedding: nn.Embedding) -> "FrozenBNBEmbedding": weights_int8, state = quantize_blockise_lowmemory(embedding.weight) return cls(weights_int8, *state) def __repr__(self): return f"{self.__class__.__name__}({self.num_embeddings}, {self.embedding_dim})" def quantize_blockise_lowmemory(matrix: torch.Tensor, chunk_size: int = 2 ** 20): assert chunk_size % 4096 == 0 code = None chunks = [] absmaxes = [] flat_tensor = matrix.view(-1) for i in range((matrix.numel() - 1) // chunk_size + 1): input_chunk = flat_tensor[i * chunk_size: (i + 1) * chunk_size].clone() quantized_chunk, (absmax_chunk, code) = quantize_blockwise(input_chunk, code=code) chunks.append(quantized_chunk) absmaxes.append(absmax_chunk) matrix_i8 = torch.cat(chunks).reshape_as(matrix) absmax = torch.cat(absmaxes) return matrix_i8, (absmax, code) def convert_to_int8(model): """Convert linear and embedding modules to 8-bit with optional adapters""" for module in list(model.modules()): for name, child in module.named_children(): if isinstance(child, nn.Linear): print(name, child) setattr( module, name, FrozenBNBLinear( weight=torch.zeros(child.out_features, child.in_features, dtype=torch.uint8), absmax=torch.zeros((child.weight.numel() - 1) // 4096 + 1), code=torch.zeros(256), bias=child.bias, ), ) elif isinstance(child, nn.Embedding): setattr( module, name, FrozenBNBEmbedding( weight=torch.zeros(child.num_embeddings, child.embedding_dim, dtype=torch.uint8), absmax=torch.zeros((child.weight.numel() - 1) // 4096 + 1), code=torch.zeros(256), ) ) class BloomBlock(transformers.models.bloom.modeling_bloom.BloomBlock): def __init__(self, config, layer_number=None): super().__init__(config, layer_number) convert_to_int8(self.self_attention) convert_to_int8(self.mlp) class BloomModel(transformers.models.bloom.modeling_bloom.BloomModel): def __init__(self, config): super().__init__(config) convert_to_int8(self) class BloomForCausalLM(transformers.models.bloom.modeling_bloom.BloomForCausalLM): def __init__(self, config): super().__init__(config) convert_to_int8(self) transformers.models.bloom.modeling_bloom.BloomBlock = BloomBlock model = BloomForCausalLM.from_pretrained('joaoalvarenga/bloom-8bit', low_cpu_mem_usage=True) tokenizer = transformers.BloomTokenizerFast.from_pretrained('joaoalvarenga/bloom-8bit') prompt = tokenizer("Given a table named salaries and columns id, created_at, salary, age. Creates a SQL to answer What is the average salary for 22 years old:", return_tensors='pt') out = model.generate(**prompt, min_length=10, do_sample=True) tokenizer.decode(out[0]) ```
helpmefindaname/mini-sequence-tagger-conll03
d8fbd8898a209e1264fb2abef3af852ad3a56a4b
2022-07-19T00:53:03.000Z
[ "pytorch", "en", "dataset:conll2003", "flair", "token-classification", "sequence-tagger-model" ]
token-classification
false
helpmefindaname
null
helpmefindaname/mini-sequence-tagger-conll03
233
null
flair
3,422
--- tags: - flair - token-classification - sequence-tagger-model language: en datasets: - conll2003 widget: - text: "George Washington went to Washington" --- This is a very small model I use for testing my [ner eval dashboard](https://github.com/helpmefindaname/ner-eval-dashboard) F1-Score: **48,73** (CoNLL-03) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on huggingface minimal testing embeddings --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("helpmefindaname/mini-sequence-tagger-conll03") # make example sentence sentence = Sentence("George Washington went to Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (1.0)] Span [5]: "Washington" [− Labels: LOC (1.0)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington went to Washington*". --- ### Training: Script to train this model The following command was used to train this model: where `examples\ner\run_ner.py` refers to [this script](https://github.com/flairNLP/flair/blob/master/examples/ner/run_ner.py) ``` python examples\ner\run_ner.py --model_name_or_path hf-internal-testing/tiny-random-bert --dataset_name CONLL_03 --learning_rate 0.002 --mini_batch_chunk_size 1024 --batch_size 64 --num_epochs 100 ``` ---
bespin-global/klue-sentence-roberta-base
6cc0ac3cdf46e4ebeaae46e385b6dda316548a6d
2022-02-07T07:14:05.000Z
[ "pytorch", "roberta", "feature-extraction", "dataset:klue", "sentence-transformers", "sentence-similarity", "transformers", "license:cc-by-nc-4.0" ]
sentence-similarity
false
bespin-global
null
bespin-global/klue-sentence-roberta-base
232
null
sentence-transformers
3,423
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - klue license: cc-by-nc-4.0 --- # bespin-global/klue-sentence-roberta-base This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('bespin-global/klue-sentence-roberta-base') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('bespin-global/klue-sentence-roberta-base') model = AutoModel.from_pretrained('bespin-global/klue-sentence-roberta-base') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bespin-global/klue-sentence-roberta-base) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 365 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 6, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 219, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information --> [Jaehyeong](https://huggingface.co/jaehyeong) at [Bespin Global](https://www.bespinglobal.com/)
hfl/chinese-electra-small-generator
dd271ca037299a9b0d2d389c9c65c3e28c2d8f49
2021-03-03T01:38:55.000Z
[ "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "transformers", "license:apache-2.0", "fill-mask" ]
fill-mask
false
hfl
null
hfl/chinese-electra-small-generator
232
null
transformers
3,424
---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---

**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**

## Chinese ELECTRA

Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.

This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)

You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
  title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
  author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
  month = nov,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
  pages = "657--668",
}
```
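Following the note above, a minimal sketch that loads the generator with `ElectraForMaskedLM` (a small generator's mask-filling quality is limited; this only illustrates which class to use):

```python
from transformers import ElectraForMaskedLM, ElectraTokenizer, pipeline

tokenizer = ElectraTokenizer.from_pretrained("hfl/chinese-electra-small-generator")
model = ElectraForMaskedLM.from_pretrained("hfl/chinese-electra-small-generator")

fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill("北京是中国的首[MASK]。"))
```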
sibyl/BART-commongen
5993c052a6432e12c319069fad44bfd45b1d02a0
2021-08-09T22:24:43.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "dataset:gem", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
false
sibyl
null
sibyl/BART-commongen
232
null
transformers
3,425
--- tags: - generated_from_trainer datasets: - gem model_index: - name: BART-commongen results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: gem type: gem args: common_gen --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BART-commongen This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the gem dataset. It achieves the following results on the evaluation set: - Loss: 1.1263 - Spice: 0.4178 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 6317 ### Training results | Training Loss | Epoch | Step | Validation Loss | Spice | |:-------------:|:-----:|:----:|:---------------:|:------:| | 9.0971 | 0.05 | 100 | 4.1336 | 0.3218 | | 3.5348 | 0.09 | 200 | 1.5467 | 0.3678 | | 1.5099 | 0.14 | 300 | 1.1280 | 0.3821 | | 1.2395 | 0.19 | 400 | 1.1178 | 0.3917 | | 1.1827 | 0.24 | 500 | 1.0919 | 0.4086 | | 1.1545 | 0.28 | 600 | 1.1028 | 0.4035 | | 1.1363 | 0.33 | 700 | 1.1021 | 0.4187 | | 1.1156 | 0.38 | 800 | 1.1231 | 0.4103 | | 1.1077 | 0.43 | 900 | 1.1221 | 0.4117 | | 1.0964 | 0.47 | 1000 | 1.1169 | 0.4088 | | 1.0704 | 0.52 | 1100 | 1.1143 | 0.4133 | | 1.0483 | 0.57 | 1200 | 1.1085 | 0.4058 | | 1.0556 | 0.62 | 1300 | 1.1059 | 0.4249 | | 1.0343 | 0.66 | 1400 | 1.0992 | 0.4102 | | 1.0123 | 0.71 | 1500 | 1.1126 | 0.4104 | | 1.0108 | 0.76 | 1600 | 1.1140 | 0.4177 | | 1.005 | 0.81 | 1700 | 1.1264 | 0.4078 | | 0.9822 | 0.85 | 1800 | 1.1256 | 0.4158 | | 0.9918 | 0.9 | 1900 | 1.1345 | 0.4118 | | 0.9664 | 0.95 | 2000 | 1.1087 | 0.4073 | | 0.9532 | 1.0 | 2100 | 1.1217 | 0.4063 | | 0.8799 | 1.04 | 2200 | 1.1229 | 0.4115 | | 0.8665 | 1.09 | 2300 | 1.1263 | 0.4178 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.1.dev0 - Tokenizers 0.10.3
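A minimal generation sketch; the input format below (a plain space-separated concept list, as in GEM's `common_gen` concepts field) is an assumption — check the training preprocessing before relying on it:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("sibyl/BART-commongen")
model = BartForConditionalGeneration.from_pretrained("sibyl/BART-commongen")

# concepts -> sentence; the space-separated input format is assumed
input_ids = tokenizer("dog frisbee catch throw", return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=32, num_beams=4)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```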
TheBakerCat/2chan_ruGPT3_small
ae88dc55e1e0f80876e0478bb5ac90699324066c
2021-05-21T11:26:24.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
TheBakerCat
null
TheBakerCat/2chan_ruGPT3_small
231
null
transformers
3,426
ruGPT3-small model, trained on some 2chan posts
codistai/codeBERT-small-v2
01695bc17a6157b5e24cb003c8d0b0ce88c87894
2021-05-20T15:35:42.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
codistai
null
codistai/codeBERT-small-v2
231
null
transformers
3,427
Entry not found
fgaim/tiroberta-base
4e81446260c169a8cf3ff7f1a9e4f5c04e5f8e9c
2021-10-08T00:07:07.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "ti", "transformers", "autotrain_compatible" ]
fill-mask
false
fgaim
null
fgaim/tiroberta-base
231
1
transformers
3,428
---
language: ti
widget:
- text: "ዓቕሚ መንእሰይ ኤርትራ <mask> ተራእዩ"
---

# RoBERTa Pretrained for the Tigrinya Language

We pretrain a RoBERTa base model for Tigrinya on a dataset of 40 million tokens, trained for 40 epochs. Contained in this repo are the original pretrained Flax model that was trained on a TPU v3-8 and its corresponding PyTorch version.

## Hyperparameters

The hyperparameters corresponding to the model size mentioned above are as follows:

| Model Size | L | AH | HS | FFN | P | Seq |
|------------|----|----|-----|------|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M | 512 |

(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.)
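A minimal fill-mask sketch using the widget example above:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="fgaim/tiroberta-base")
print(unmasker("ዓቕሚ መንእሰይ ኤርትራ <mask> ተራእዩ"))
```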
google/bert_uncased_L-4_H-128_A-2
c29bee83fc7f003ac8c5e6e135529da4ecddb7c3
2021-05-19T17:30:08.000Z
[ "pytorch", "jax", "bert", "arxiv:1908.08962", "transformers", "license:apache-2.0" ]
null
false
google
null
google/bert_uncased_L-4_H-128_A-2
231
null
transformers
3,429
---
thumbnail: https://huggingface.co/front/thumbnails/google.png
license: apache-2.0
---

BERT Miniatures
===

This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking).

We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.

Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.

You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below:

|   |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2**  |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4**  |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6**  |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8**  |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|

Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.

Here are the corresponding GLUE scores on the test set:

|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|

For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5

If you use these models, please cite the following paper:

```
@article{turc2019,
  title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
  author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
  journal={arXiv preprint arXiv:1908.08962v2},
  year={2019}
}
```

[2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2
[2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4
[2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8
[2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12
[4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2
[4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4
[4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8
[4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12
[6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2
[6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4
[6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8
[6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12
[8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2
[8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4
[8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8
[8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12
[10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2
[10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4
[10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8
[10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12
[12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2
[12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4
[12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8
[12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
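The card links the 24 checkpoints but does not show how to load one. A minimal sketch for the checkpoint this entry corresponds to (google/bert_uncased_L-4_H-128_A-2, i.e. L=4, H=128, A=2); fine-tuning or distillation would be layered on top of this:

```python
from transformers import AutoTokenizer, AutoModel

model_id = "google/bert_uncased_L-4_H-128_A-2"  # 4 layers, hidden size 128, 2 attention heads
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Compact BERT models can be fine-tuned like the original.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 128)
```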
liam168/trans-opus-mt-zh-en
85f60aa282af51009c10912996c377ec4f68385c
2021-07-16T03:34:38.000Z
[ "pytorch", "marian", "text2text-generation", "en", "zh", "transformers", "translation", "autotrain_compatible" ]
translation
false
liam168
null
liam168/trans-opus-mt-zh-en
231
null
transformers
3,430
---
language:
- en
- zh
tags:
- translation
widget:
- text: "我喜欢学习数据科学和机器学习。"
---

# liam168/trans-opus-mt-zh-en

## Model description

* source group: Chinese
* target group: English
* model: transformer
* source language(s): zho

## How to use

```python
>>> from transformers import AutoModelWithLMHead, AutoTokenizer, pipeline
>>> model_name = 'liam168/trans-opus-mt-zh-en'
>>> model = AutoModelWithLMHead.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> translation = pipeline("translation_zh_to_en", model=model, tokenizer=tokenizer)
>>> translation('我喜欢学习数据科学和机器学习。', max_length=400)
[{'translation_text': 'I like to study data science and machine learning.'}]
```

## Contact

[email protected]
lonewanderer27/DialoGPT-small-Joshua
7de3318f53e928b825cda8e67171f1f7507d1b09
2021-08-23T15:15:43.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
lonewanderer27
null
lonewanderer27/DialoGPT-small-Joshua
231
null
transformers
3,431
---
tags:
- conversational
---

# Joshua DialoGPT Model
pucpr/gpt2-bio-pt
28356b33732dbe98a5eb1f81bec7a01b0062d035
2021-07-22T21:30:05.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "pt", "dataset:biomedical literature from Scielo and Pubmed", "transformers" ]
text-generation
false
pucpr
null
pucpr/gpt2-bio-pt
231
4
transformers
3,432
--- language: "pt" widget: - text: "O paciente recebeu " - text: "A cardiologia provou que " - text: "O paciente chegou no hospital " - text: "Cientistas descobriram que " - text: "O nível de atividade biológica " - text: "O DNA e o RNA " datasets: - biomedical literature from Scielo and Pubmed thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/gpt2-bio-pt/main/img/logo-gpt2-bio-pt.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/gpt2-bio-pt/main/img/logo-gpt2-bio-pt.png" alt="Logo GPt2-Bio-Pt"> # GPT2-BioPT - a Language Model for Portuguese Biomedical text generation ## Introduction GPT2-BioPT (Portuguese Biomedical GPT-2 small) is a language model for Portuguese based on the OpenAI GPT-2 model, trained from the [GPorTuguese-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese/) with biomedical literature. We used **Transfer Learning and Fine-tuning techniques** with 110MB of training data, corresponding to 16,209,373 tokens and 729,654 sentences. ## GPT-2 *Note: information copied/pasted from [Model: gpt2 >> GPT-2](https://huggingface.co/gpt2#gpt-2)* Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in this [paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at this [page](https://openai.com/blog/better-language-models/) (February 14, 2019). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description *Note: information copied/pasted from [Model: gpt2 >> Model description](https://huggingface.co/gpt2#model-description)* GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## How to use GPT2-BioPT with HuggingFace ``` from transformers import pipeline chef = pipeline('text-generation',model="pucpr/gpt2-bio-pt", tokenizer="pucpr/gpt2-bio-pt",config={'max_length':800}) result = chef('O paciente chegou no hospital')[0]['generated_text'] print(result) ``` Resultado: *```O paciente chegou no hospital três meses após a operação, não houve complicações graves. Entre os grupos que apresentaram maior número de lesões, o exame da cavidade pélvica estava significantemente associado à ausência de complicações. 
Foi encontrada uma maior incidência de fraturas (...)```* ## Citation ``` @INPROCEEDINGS{9474713, author={Schneider, Elisa Terumi Rubel and de Souza, João Vitor Andrioli and Gumiel, Yohan Bonescki and Moro, Claudia and Paraiso, Emerson Cabrera}, booktitle={2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS)}, title={A GPT-2 Language Model for Biomedical Texts in Portuguese}, year={2021}, volume={}, number={}, pages={474-479}, doi={10.1109/CBMS52027.2021.00056}} ``` ## Questions? Post a Github issue on the [GPT2-Bio-Pt repo](https://github.com/HAILab-PUCPR/gpt2-bio-pt/).
tprincessazula/Dialog-GPT-small-AANG
6437f08ecf00ba83af90008ccc017e886c8eca82
2021-12-19T10:13:07.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
tprincessazula
null
tprincessazula/Dialog-GPT-small-AANG
231
1
transformers
3,433
---
tags:
- conversational
---

# Aang DialoGPT Model
Helsinki-NLP/opus-mt-de-it
cd2319a082a7be0dd471fe62701ae557a71833c2
2021-09-09T21:32:05.000Z
[ "pytorch", "marian", "text2text-generation", "de", "it", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-de-it
230
null
transformers
3,434
---
tags:
- translation
license: apache-2.0
---

### opus-mt-de-it

* source languages: de
* target languages: it
* OPUS readme: [de-it](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-it/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-it/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.de.it | 45.3 | 0.671 |
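The card gives training details and benchmarks but no usage snippet. A minimal sketch with the transformers pipeline; the German example sentence is mine, not from the card:

```python
from transformers import pipeline

# MarianMT checkpoint for German -> Italian
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-de-it")
print(translate("Maschinelle Übersetzung ist nützlich.")[0]["translation_text"])
```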
MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c
29f90c4b7bbbaec52e99d1ee1f6f0aa3301d1d61
2022-07-28T16:23:48.000Z
[ "pytorch", "deberta-v2", "text-classification", "en", "arxiv:2104.07179", "arxiv:2106.09449", "arxiv:2006.03654", "arxiv:2111.09543", "transformers", "zero-shot-classification", "license:mit" ]
text-classification
false
MoritzLaurer
null
MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c
230
3
transformers
3,435
---
language:
- en
license: mit
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I first thought that I liked the movie, but upon second thought it was actually disappointing. [SEP] The movie was good."
---

# DeBERTa-v3-base-mnli-fever-docnli-ling-2c

## Model description

This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).

It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts to learn long-range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment". DocNLI merges the classes "neutral" and "contradiction" into "not-entailment", which enables the inclusion of the DocNLI dataset.

The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model by including a different pre-training objective, see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf) as well as the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543).

For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.

## Intended uses & limitations

#### How to use the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.to(device)  # move the model to the same device as the inputs

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))  # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data

This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).

### Training procedure

DeBERTa-v3-base-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # ratio of warmup steps for learning rate scheduler
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```

### Eval results

The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.

| mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c | lingnli-2c |
|---------|----------|---------|----------|----------|------|
| 0.935 | 0.933 | 0.897 | 0.710 | 0.678 | 0.895 |

## Limitations and bias

Please consult the original DeBERTa paper and literature on different NLI datasets for potential biases.

## Citation

If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.

### Ideas for cooperation or questions?

If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/)

### Debugging and issues

Note that DeBERTa-v3 was released on 06.12.21 and older versions of HF Transformers seem to have issues running the model (e.g. resulting in an issue with the tokenizer). Using Transformers>=4.13 might solve some issues.
jb2k/bert-base-multilingual-cased-language-detection
aa4473be53d456ad2ae216a2048f002dae00c920
2021-11-24T01:36:01.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
jb2k
null
jb2k/bert-base-multilingual-cased-language-detection
230
2
transformers
3,436
# bert-base-multilingual-cased-language-detection

A model for language detection with support for 45 languages

## Model description

This model was created by fine-tuning [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the [common language](https://huggingface.co/datasets/common_language) dataset. This dataset has support for 45 languages, which are listed below:

```
Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh
```

## Evaluation

This model was evaluated on the test split of the [common language](https://huggingface.co/datasets/common_language) dataset, and achieved the following metrics:
* Accuracy: 97.8%
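The card describes the dataset and evaluation but not inference. A minimal sketch follows; note that the card does not document the id2label mapping, so the raw labels (e.g. LABEL_11) may need to be mapped by hand onto the 45-language list above:

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="jb2k/bert-base-multilingual-cased-language-detection",
)

# returns the predicted class label and its score; the label index refers
# to one of the 45 languages listed in the model card
print(detector("Bonjour tout le monde"))
```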
miguelvictor/multilingual-gpt2-large
d3f3a185b1c31018552090c6881e7b10581d5953
2021-05-23T09:24:27.000Z
[ "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
false
miguelvictor
null
miguelvictor/multilingual-gpt2-large
230
1
transformers
3,437
Entry not found
sonoisa/byt5-small-japanese
851ce1d7642798766fa1a053178f9080b1fe275d
2021-09-23T16:29:53.000Z
[ "pytorch", "mt5", "ja", "dataset:wikipedia", "dataset:oscar", "dataset:cc100", "transformers", "byt5", "t5", "text2text-generation", "seq2seq", "license:cc-by-sa-4.0" ]
text2text-generation
false
sonoisa
null
sonoisa/byt5-small-japanese
230
3
transformers
3,438
---
language: ja
tags:
- byt5
- t5
- text2text-generation
- seq2seq
license: cc-by-sa-4.0
datasets:
- wikipedia
- oscar
- cc100
---

# Japanese ByT5 Pretrained Model

This is a [ByT5 (a tokenizer-free extension of the Text-to-Text Transfer Transformer)](https://github.com/google-research/byt5/) model pretrained on Japanese corpora.

The model was pretrained on the following Japanese corpora (about 100GB in total):

* The Japanese dump of [Wikipedia](https://ja.wikipedia.org) (as of July 6, 2020)
* The Japanese portion of [OSCAR](https://oscar-corpus.com)
* The Japanese portion of [CC-100](http://data.statmt.org/cc-100/)

This model has only been pretrained; it must be fine-tuned before it can be used for a specific task.

Like other language models trained on large corpora, this model can potentially produce skewed (unethical, harmful, or otherwise biased) outputs stemming from biases in the content of the training data. Please keep this risk in mind and use the model only for applications where no harm can result.

# Sample code for transfer learning

In preparation.

# Benchmark

Accuracy on a news-article genre prediction task over the livedoor news corpus is as follows.

Japanese ByT5 ([byt5-small-japanese](https://huggingface.co/sonoisa/byt5-small-japanese), 299M parameters)

| label | precision | recall | f1-score | support |
| ----------- | ----------- | ------- | -------- | ------- |
| 0 | 0.94 | 0.89 | 0.91 | 130 |
| 1 | 0.93 | 0.94 | 0.93 | 121 |
| 2 | 0.88 | 0.93 | 0.90 | 123 |
| 3 | 0.90 | 0.87 | 0.88 | 82 |
| 4 | 0.95 | 0.95 | 0.95 | 129 |
| 5 | 0.94 | 0.95 | 0.94 | 141 |
| 6 | 0.98 | 0.96 | 0.97 | 127 |
| 7 | 0.98 | 0.98 | 0.98 | 127 |
| 8 | 0.97 | 0.97 | 0.97 | 120 |
| accuracy | | | 0.94 | 1100 |
| macro avg | 0.94 | 0.94 | 0.94 | 1100 |
| weighted avg | 0.94 | 0.94 | 0.94 | 1100 |

For comparison: multilingual ByT5 ([google/byt5-small](https://huggingface.co/google/byt5-small), 299M parameters)

| label | precision | recall | f1-score | support |
| ----------- | ----------- | ------- | -------- | ------- |
| 0 | 0.93 | 0.88 | 0.91 | 130 |
| 1 | 0.90 | 0.79 | 0.84 | 121 |
| 2 | 0.75 | 0.86 | 0.80 | 123 |
| 3 | 0.87 | 0.79 | 0.83 | 82 |
| 4 | 0.93 | 0.96 | 0.94 | 129 |
| 5 | 0.87 | 0.95 | 0.91 | 141 |
| 6 | 0.98 | 0.93 | 0.96 | 127 |
| 7 | 0.97 | 0.91 | 0.94 | 127 |
| 8 | 0.89 | 0.94 | 0.91 | 120 |
| accuracy | | | 0.90 | 1100 |
| macro avg | 0.90 | 0.89 | 0.89 | 1100 |
| weighted avg | 0.90 | 0.90 | 0.90 | 1100 |

## Disclaimer

The author of this model took great care regarding its content and functionality when creating it, but does not guarantee that the model's outputs are accurate or safe, and assumes no responsibility for them. Even if a user suffers some inconvenience or damage through use of this model, the authors of the model and datasets and their affiliated organizations assume no responsibility. Users are obligated to make clear that the authors and their organizations bear no such responsibility.

## License

[CC-BY SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/deed.ja)

Please also take care to comply with the [Common Crawl terms of use](http://commoncrawl.org/terms-of-use/).
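Until the official transfer-learning sample code is published, here is a minimal loading sketch. It assumes the repository ships a ByT5-style tokenizer config that AutoTokenizer can resolve; the checkpoint is pretrained only, so this is a starting point for fine-tuning, not a ready-to-use generator:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sonoisa/byt5-small-japanese")
model = AutoModelForSeq2SeqLM.from_pretrained("sonoisa/byt5-small-japanese")

# ByT5 operates on UTF-8 bytes, so a Japanese string tokenizes to roughly
# one id per byte (plus special tokens) rather than per word or subword
inputs = tokenizer("ニュース記事のジャンルを予測する", return_tensors="pt")
print(inputs.input_ids.shape)
```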
toyfreak/DialoGPT-small-addy
9effa898d8b2fed24c03a5f65c0ad6ce0320ec5e
2022-01-11T00:48:27.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
toyfreak
null
toyfreak/DialoGPT-small-addy
230
null
transformers
3,439
---
tags:
- conversational
---

# Addy DialoGPT Model
wolfrage89/company_segment_ner
1e9e906d37778502136c50531084a76d3376ffcf
2022-01-27T16:56:23.000Z
[ "pytorch", "roberta", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
wolfrage89
null
wolfrage89/company_segment_ner
230
null
transformers
3,440
## Roberta based NER

This model will take in a news article and label 3 entities [ORGS, SEGNUM, NUM]. This model is trained on Reuters news articles.

## Try it out on Hugging Face Spaces

https://huggingface.co/spaces/wolfrage89/company_segments_ner

## Colab sample notebook

https://colab.research.google.com/drive/165utMQzYVAX7-aQjWjpmPHwHpdKTaHBa?usp=sharing

## How to use

```python
from transformers import pipeline

# Minimum code
sentence = """Exxon Mobil Corporation is engaged in energy business. The Company is engaged in the exploration, production, trade, transportation and sale of crude oil and natural gas, and the manufacture, transportation and sale of crude oil, natural gas, petroleum products, petrochemicals and a range of specialty products. The Company's segments include Upstream, Downstream, Chemical, and Corporate and Financing. The Upstream segment operates to explore for and produce crude oil and natural gas. The Downstream manufactures, trades and sells petroleum products. The refining and supply operations consists of a global network of manufacturing plants, transportation systems, and distribution centers that provide a range of fuels, lubricants and other products and feedstocks to its customers around the world. The Chemical segment manufactures and sells petrochemicals. The Chemical business supplies olefins, polyolefins, aromatics, and a variety of other petrochemicals."""

model = pipeline('ner', "wolfrage89/company_segment_ner")
model_output = model(sentence)
print(model_output)
# [{'entity': 'B-ORG', 'score': 0.99996805, 'index': 1, 'word': 'Ex', 'start': 0, 'end': 2}, {'entity': 'I-ORG', 'score': 0.99971646, 'index': 2, 'word': 'xon', 'start': 2, 'end': 5}, ....]

# Sample helper function if you want to use
def ner_prediction(model, sentence):
    entity_map = {
        "B-ORG": "ORG",
        "B-SEG": "SEG",
        "B-SEGNUM": "SEGNUM",
    }
    results = []
    model_output = model(sentence)
    accumulate = ""
    current_class = None
    start = 0
    end = 0
    for item in model_output:
        if item['entity'].startswith("B"):
            if len(accumulate) > 0:
                results.append((current_class, accumulate, start, end))
            accumulate = item['word'].lstrip("Ġ")
            current_class = entity_map[item['entity']]
            start = item['start']
            end = item['end']
        else:
            if item['word'].startswith("Ġ"):
                accumulate += " " + item['word'].lstrip("Ġ")
            else:
                accumulate += item['word']
            end = item['end']

    # clear last cache
    if len(accumulate) > 0:
        results.append((current_class, accumulate, start, end))

    return results
```
datarpit/distilbert-base-uncased-finetuned-natural-questions
28400e5824c250ea3fac5f53da0fee11e03dfd4d
2022-03-16T07:52:09.000Z
[ "pytorch", "distilbert", "question-answering", "dataset:natural_questions", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
datarpit
null
datarpit/distilbert-base-uncased-finetuned-natural-questions
230
1
transformers
3,441
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- natural_questions
model-index:
- name: distilbert-base-uncased-finetuned-natural-questions
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-natural-questions

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the natural_questions dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6267

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40

### Training results

| Training Loss | Epoch | Step   | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.0532        | 1.0   | 5104   | 0.2393          |
| 1.8912        | 2.0   | 10208  | 0.2284          |
| 1.7854        | 3.0   | 15312  | 0.2357          |
| 1.6856        | 4.0   | 20416  | 0.2487          |
| 1.5918        | 5.0   | 25520  | 0.2743          |
| 1.5067        | 6.0   | 30624  | 0.2586          |
| 1.4323        | 7.0   | 35728  | 0.2763          |
| 1.365         | 8.0   | 40832  | 0.2753          |
| 1.3162        | 9.0   | 45936  | 0.3200          |
| 1.281         | 10.0  | 51040  | 0.3127          |
| 1.308         | 11.0  | 57104  | 0.2947          |
| 1.241         | 12.0  | 62208  | 0.2941          |
| 1.1391        | 13.0  | 67312  | 0.3103          |
| 1.0334        | 14.0  | 72416  | 0.3694          |
| 0.9538        | 15.0  | 77520  | 0.3658          |
| 0.8749        | 16.0  | 82624  | 0.4009          |
| 0.8154        | 17.0  | 87728  | 0.3672          |
| 0.7533        | 18.0  | 92832  | 0.3675          |
| 0.7079        | 19.0  | 97936  | 0.4611          |
| 0.6658        | 20.0  | 103040 | 0.4222          |
| 0.595         | 21.0  | 108144 | 0.4095          |
| 0.5765        | 22.0  | 113248 | 0.4400          |
| 0.5259        | 23.0  | 118352 | 0.5109          |
| 0.4804        | 24.0  | 123456 | 0.4711          |
| 0.4389        | 25.0  | 128560 | 0.5072          |
| 0.4034        | 26.0  | 133664 | 0.5363          |
| 0.374         | 27.0  | 138768 | 0.5460          |
| 0.3434        | 28.0  | 143872 | 0.5627          |
| 0.3181        | 29.0  | 148976 | 0.5657          |
| 0.2971        | 30.0  | 154080 | 0.5819          |
| 0.275         | 31.0  | 159184 | 0.5649          |
| 0.2564        | 32.0  | 164288 | 0.6087          |
| 0.2431        | 33.0  | 169392 | 0.6137          |
| 0.2289        | 34.0  | 174496 | 0.6123          |
| 0.2151        | 35.0  | 179600 | 0.5979          |
| 0.2041        | 36.0  | 184704 | 0.6196          |
| 0.1922        | 37.0  | 189808 | 0.6191          |
| 0.1852        | 38.0  | 194912 | 0.6313          |
| 0.1718        | 39.0  | 200016 | 0.6234          |
| 0.1718        | 39.81 | 204160 | 0.6267          |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0
- Datasets 1.18.4
- Tokenizers 0.11.6
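The card lists only the trainer output. A minimal question-answering sketch for this checkpoint; the question/context pair is an arbitrary example of mine, not from the card:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="datarpit/distilbert-base-uncased-finetuned-natural-questions",
)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased "
            "on the natural_questions dataset.",
)
print(result["answer"], round(result["score"], 3))
```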
luohy/rgx-qa-v2
d4e1af690cc4be3d52fa7d9f06002d918e23eb1e
2022-06-29T13:05:16.000Z
[ "pytorch", "electra", "question-answering", "transformers", "license:afl-3.0", "autotrain_compatible" ]
question-answering
false
luohy
null
luohy/rgx-qa-v2
230
null
transformers
3,442
---
license: afl-3.0
---
MCFeli/new-booru-t5
7ab30857fc801ca428c69066d889c30eadfb0ba2
2022-07-10T13:53:20.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
MCFeli
null
MCFeli/new-booru-t5
230
null
transformers
3,443
Entry not found
Rajan/NepaliBERT
996c3b86b779a63225b473221678447c9d9185d0
2021-06-07T14:36:58.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
Rajan
null
Rajan/NepaliBERT
229
null
transformers
3,444
# NepaliBERT (Phase 1)

NepaliBERT is a state-of-the-art language model for Nepali based on the BERT model. The model is trained using a masked language modeling (MLM) objective.

# Loading the model and tokenizer

1. Clone the model repo:

```
git lfs install
git clone https://huggingface.co/Rajan/NepaliBERT
```

2. Load the tokenizer:

```
from transformers import BertTokenizer
vocab_file_dir = './NepaliBERT/'
tokenizer = BertTokenizer.from_pretrained(vocab_file_dir,
                                          strip_accents=False,
                                          clean_text=False)
```

3. Load the model:

```
from transformers import BertForMaskedLM
model = BertForMaskedLM.from_pretrained('./NepaliBERT')
```

The easiest way to check whether our language model is learning anything interesting is via the ```FillMaskPipeline```.

Pipelines are simple wrappers around tokenizers and models. The 'fill-mask' pipeline lets you input a sequence containing a masked token (here, `[MASK]`) and returns a list of the most probable filled sequences, with their probabilities.

```
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer
)
```

For more info visit the [GITHUB🤗](https://github.com/R4j4n/NepaliBERT)
allenai/dsp_roberta_base_dapt_cs_tapt_citation_intent_1688
326cb08451b42ab268e57ee0e62a78558b435a0e
2021-05-20T13:08:32.000Z
[ "pytorch", "jax", "roberta", "transformers" ]
null
false
allenai
null
allenai/dsp_roberta_base_dapt_cs_tapt_citation_intent_1688
229
null
transformers
3,445
Entry not found
dandelin/vilt-b32-finetuned-nlvr2
d72f414aeb17ccbc50114a64346b3ce4bb6954b1
2022-01-23T09:43:30.000Z
[ "pytorch", "vilt", "arxiv:2102.03334", "transformers", "license:apache-2.0" ]
null
false
dandelin
null
dandelin/vilt-b32-finetuned-nlvr2
229
1
transformers
3,446
---
license: apache-2.0
---

# Vision-and-Language Transformer (ViLT), fine-tuned on NLVR2

Vision-and-Language Transformer (ViLT) model fine-tuned on [NLVR2](https://lil.nlp.cornell.edu/nlvr/). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).

Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.

## Intended uses & limitations

You can use the model to determine whether a sentence is true or false given 2 images.

### How to use

Here is how to use the model in PyTorch:

```
from transformers import ViltProcessor, ViltForImagesAndTextClassification
import requests
from PIL import Image

image1 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_0.jpg", stream=True).raw)
image2 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_1.jpg", stream=True).raw)
text = "The left image contains twice the number of dogs as the right image."

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2")
model = ViltForImagesAndTextClassification.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2")

# prepare inputs
encoding = processor([image1, image2], text, return_tensors="pt")

# forward pass
outputs = model(input_ids=encoding.input_ids, pixel_values=encoding.pixel_values.unsqueeze(0))
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```

## Training data

(to do)

## Training procedure

### Preprocessing

(to do)

### Pretraining

(to do)

## Evaluation results

(to do)

### BibTeX entry and citation info

```bibtex
@misc{kim2021vilt,
      title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
      author={Wonjae Kim and Bokyung Son and Ildoo Kim},
      year={2021},
      eprint={2102.03334},
      archivePrefix={arXiv},
      primaryClass={stat.ML}
}
```
google/roberta2roberta_L-24_discofuse
6e27093f8bae22e876a8a8a9f2857babaecb33f4
2020-12-11T21:43:12.000Z
[ "pytorch", "encoder-decoder", "text2text-generation", "en", "dataset:discofuse", "arxiv:1907.12461", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
google
null
google/roberta2roberta_L-24_discofuse
229
null
transformers
3,447
---
language: en
license: apache-2.0
datasets:
- discofuse
---

# Roberta2Roberta_L-24_discofuse EncoderDecoder model

The model was introduced in [this paper](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn and first released in [this repository](https://tfhub.dev/google/bertseq2seq/roberta24_discofuse/1).

The model is an encoder-decoder model that was initialized on the `roberta-large` checkpoints for both the encoder and decoder and fine-tuned on sentence fusion on the discofuse dataset, which is linked above.

Disclaimer: The model card has been written by the Hugging Face team.

## How to use

You can use this model for sentence fusion, *e.g.*

IMPORTANT: The model was not trained on the `"` (double quotation mark) character, so before tokenizing the text, it is advised to replace all `"` (double quotation marks) with a single `` ` `` (single back tick).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_discofuse")

discofuse = """As a run-blocker, Zeitler moves relatively well. Zeitler often struggles at the point of contact in space."""

input_ids = tokenizer(discofuse, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# As a run-blocker, Zeitler moves relatively well. However, Zeitler often struggles at the point of contact in space.
```
pere/norwegian-gpt2-social
31b9c1fe79e7eab73d0f466cdf60952c3c1a49f0
2021-11-01T11:01:55.000Z
[ "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "no", "transformers", "norwegian", "GPT2", "casual language modeling", "license:cc-by-4.0" ]
text-generation
false
pere
null
pere/norwegian-gpt2-social
229
null
transformers
3,448
---
language: no
license: cc-by-4.0
tags:
- norwegian
- GPT2
- casual language modeling
---

# Norwegian GPT-2 - Social

## Description

Experimental Norwegian GPT-2 model trained on a 37GB mainly social corpus.

The following sub-corpora are used:

```bash
wikipedia_download_nb.jsonl
wikipedia_download_nn.jsonl
newspapers_online_nb.jsonl
newspapers_online_nn.jsonl
twitter_2016_2018_no.jsonl
twitter_news_2016_2018_no.jsonl
open_subtitles_no.jsonl
facebook_no.jsonl
reddit_no.jsonl
vgdebatt_no.jsonl
```
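The card stops at the corpus list. A minimal text-generation sketch; the Norwegian prompt is an arbitrary example of mine, and the sampling settings are illustrative only:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="pere/norwegian-gpt2-social")
print(generator("Hei, i dag", max_length=40, do_sample=True)[0]["generated_text"])
```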
rovai/chatbotmedium3
2831d18c91f89213f4079ebeed92c30ac73fb68a
2021-12-01T16:19:29.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
rovai
null
rovai/chatbotmedium3
229
null
transformers
3,449
---
tags:
- conversational
---

# chatbotmedium3
Salesforce/codegen-350M-nl
170f13a3699e3bde3bdb61970dcb1c9c2954c5c1
2022-06-28T17:47:41.000Z
[ "pytorch", "codegen", "text-generation", "arxiv:2203.13474", "transformers", "license:bsd-3-clause" ]
text-generation
false
Salesforce
null
Salesforce/codegen-350M-nl
229
null
transformers
3,450
---
license: bsd-3-clause
---

# CodeGen (CodeGen-NL 350M)

## Model description

CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).

The checkpoint included in this repository is denoted as **CodeGen-NL 350M** in the paper, where "NL" means it is pre-trained on the Pile and "350M" refers to the number of trainable parameters.

## Training data

This checkpoint (CodeGen-NL 350M) was pre-trained on [the Pile](https://github.com/EleutherAI/the-pile), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai/). Parts of the dataset include code data.

## Training procedure

CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs. The family of models is trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism. See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.

## Evaluation results

We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.

## Intended Use and Limitations

As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them. However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.

## How to use

This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-nl")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-nl")

text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids

generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

## BibTeX entry and citation info

```bibtex
@article{Nijkamp2022ACP,
  title={A Conversational Paradigm for Program Synthesis},
  author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
  journal={arXiv preprint},
  year={2022}
}
```
AnnaWegmann/Style-Embedding
c098de52c64898eaf32d1eeb36fc19ed27695525
2022-05-20T07:46:47.000Z
[ "pytorch", "roberta", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
AnnaWegmann
null
AnnaWegmann/Style-Embedding
229
null
sentence-transformers
3,451
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# Style Embedding

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

For more info see [Style-Embeddings](https://github.com/nlpsoc/Style-Embeddings)

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('AnnaWegmann/Style-Embedding')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AnnaWegmann/Style-Embedding')
model = AutoModel.from_pretrained('AnnaWegmann/Style-Embedding')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=AnnaWegmann/Style-Embedding)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 26250 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.COSINE', 'triplet_margin': 0.5}
```

Parameters of the fit()-Method:
```
{
    "epochs": 4,
    "evaluation_steps": 0,
    "evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "correct_bias": true,
        "eps": 1e-08,
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 10500,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
speechbrain/sepformer-wham16k-enhancement
30f979f4814a0a401e7558994cec647e537e505d
2022-07-01T01:03:12.000Z
[ "en", "dataset:WHAM!", "arxiv:2010.13154", "arxiv:2106.04624", "speechbrain", "audio-to-audio", "Speech Enhancement", "WHAM!", "SepFormer", "Transformer", "pytorch", "license:apache-2.0" ]
audio-to-audio
false
speechbrain
null
speechbrain/sepformer-wham16k-enhancement
229
1
speechbrain
3,452
--- language: "en" thumbnail: tags: - audio-to-audio - Speech Enhancement - WHAM! - SepFormer - Transformer - pytorch - speechbrain license: "apache-2.0" datasets: - WHAM! metrics: - SI-SNR - PESQ --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # SepFormer trained on WHAM! for speech enhancement (16k sampling frequency) This repository provides all the necessary tools to perform speech enhancement (denoising) with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain, and pretrained on [WHAM!](http://wham.whisper.ai/) dataset with 16k sampling frequency, which is basically a version of WSJ0-Mix dataset with environmental noise and reverberation in 8k. For a better experience we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The given model performance is 14.3 dB SI-SNR on the test set of WHAM! dataset. | Release | Test-Set SI-SNR | Test-Set PESQ | |:-------------:|:--------------:|:--------------:| | 06-30-22 | 13.8 | 2.20 | ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform speech enhancement on your own audio file ```python from speechbrain.pretrained import SepformerSeparation as separator import torchaudio model = separator.from_hparams(source="speechbrain/sepformer-wham16k-enhancement", savedir='pretrained_models/sepformer-wham16k-enhancement') # for custom file, change path est_sources = model.separate_file(path='speechbrain/sepformer-wham16k-enhancement/example_wham16k.wav') torchaudio.save("enhanced_wham16k.wav", est_sources[:, :, 0].detach().cpu(), 16000) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The training script is currently being worked on an ongoing pull-request. We will update the model card as soon as the PR is merged. You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1bbQvaiN-R79M697NnekA7Rr0jIYtO6e3). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. #### Referencing SpeechBrain ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` #### Referencing SepFormer ```bibtex @inproceedings{subakan2021attention, title={Attention is All You Need in Speech Separation}, author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong}, year={2021}, booktitle={ICASSP 2021} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/
wbbbbb/wav2vec2-large-chinese-zh-cn
369f73139f85a98570ff74e641dc93d421a3860e
2022-07-18T10:12:44.000Z
[ "pytorch", "tensorboard", "wav2vec2", "pretraining", "zh", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
wbbbbb
null
wbbbbb/wav2vec2-large-chinese-zh-cn
229
1
transformers
3,453
---
language: zh
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Chinese (zh-CN) by wbbbbb
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice zh-CN
      type: common_voice
      args: zh-CN
    metrics:
    - name: Test WER
      type: wer
      value: 70.47
    - name: Test CER
      type: cer
      value: 12.30
---

# Fine-tuned XLSR-53 large model for speech recognition in Chinese

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chinese using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice), [CSS10](https://github.com/Kyubyong/css10) and [ST-CMDS](http://www.openslr.org/38/).

When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned on an RTX3090 for 50h.

The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint

## Usage

The model can be used directly (without a language model) as follows...

Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("wbbbbb/wav2vec2-large-chinese-zh-cn")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```

## Evaluation

The model can be evaluated as follows on the Chinese (zh-CN) test data of Common Voice.

```python
import torch
import re
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import warnings
import os

os.environ["KMP_AFFINITY"] = ""

LANG_ID = "zh-CN"
MODEL_ID = "zh-CN-output-aishell"
DEVICE = "cuda"

test_dataset = load_dataset("common_voice", LANG_ID, split="test")

wer = load_metric("wer")
cer = load_metric("cer")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = (
        re.sub("([^\u4e00-\u9fa5\u0030-\u0039])", "", batch["sentence"]).lower() + " "
    )
    return batch

test_dataset = test_dataset.map(
    speech_file_to_array_fn,
    num_proc=15,
    remove_columns=['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'],
)

# Run inference batch by batch and collect the predicted strings
def evaluate(batch):
    inputs = processor(
        batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True
    )
    with torch.no_grad():
        logits = model(
            inputs.input_values.to(DEVICE),
            attention_mask=inputs.attention_mask.to(DEVICE),
        ).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

predictions = [x.lower() for x in result["pred_strings"]]
references = [x.lower() for x in result["sentence"]]

print(
    f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}"
)
print(f"CER: {cer.compute(predictions=predictions, references=references) * 100}")
```

**Test Result**:

In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2022-07-18). Note that the table below may show different results from those already reported, this may have been caused due to some specificity of the other evaluation scripts used.

| Model | WER | CER |
| ------------- | ------------- | ------------- |
| wbbbbb/wav2vec2-large-chinese-zh-cn | **70.47%** | **12.30%** |
| jonatasgrosman/wav2vec2-large-xlsr-53-chinese-zh-cn | **82.37%** | **19.03%** |
| ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt | 84.01% | 20.95% |

## Citation

If you want to cite this model you can use this:

```bibtex
@misc{grosman2021xlsr53-large-chinese,
  title={Fine-tuned {XLSR}-53 large model for speech recognition in {C}hinese},
  author={Grosman, Jonatas},
  howpublished={\url{https://huggingface.co/wbbbbb/wav2vec2-large-chinese-zh-cn}},
  year={2021}
}
```
jeanconstantin/distilcausal_bert_fr
3967575923e6805ef04f14449a92ebe21c869ea1
2022-07-21T20:18:57.000Z
[ "pytorch", "camembert", "text-classification", "transformers" ]
text-classification
false
jeanconstantin
null
jeanconstantin/distilcausal_bert_fr
229
null
transformers
3,454
Entry not found
SEBIS/code_trans_t5_base_commit_generation_multitask_finetune
5d4f07a9c2ab6564a5461cda17ac167423880e92
2021-06-23T05:00:29.000Z
[ "pytorch", "jax", "t5", "feature-extraction", "transformers", "summarization" ]
summarization
false
SEBIS
null
SEBIS/code_trans_t5_base_commit_generation_multitask_finetune
228
null
transformers
3,455
---
tags:
- summarization
widget:
- text: "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
---

# CodeTrans model for git commit message generation

Pretrained model on git commits using the t5-base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits.

## Model description

This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the git commit message generation task for the Java commit changes.

## Intended uses & limitations

The model could be used to generate the git commit message for the git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate a git commit message using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_commit_generation_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_commit_generation_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
pipeline([tokenized_code])
```

Run this example in a [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/commit%20generation/base_model.ipynb).

## Training data

The supervised training task datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 16,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing commit changes.

## Evaluation results

For the git commit message generation task, the different models achieve the following results on different programming languages (in BLEU score):

Test results:

| Language / Model      | Java      |
| --------------------- | :-------: |
| CodeTrans-ST-Small    | 39.61     |
| CodeTrans-ST-Base     | 38.67     |
| CodeTrans-TF-Small    | 44.22     |
| CodeTrans-TF-Base     | 44.17     |
| CodeTrans-TF-Large    | **44.41** |
| CodeTrans-MT-Small    | 36.17     |
| CodeTrans-MT-Base     | 39.25     |
| CodeTrans-MT-Large    | 41.18     |
| CodeTrans-MT-TF-Small | 43.96     |
| CodeTrans-MT-TF-Base  | 44.19     |
| CodeTrans-MT-TF-Large | 44.34     |
| State of the art      | 32.81     |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
ftnvir/DialoGPT-medium-bullyMaguire
9717913bb04393e7d7852965814e427b5eea6726
2022-01-25T14:09:02.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ftnvir
null
ftnvir/DialoGPT-medium-bullyMaguire
228
null
transformers
3,456
---
tags:
- conversational
---

# Bully Maguire demo bot
lserinol/bert-turkish-question-answering
791f71680be796c2785d23eb29baeb805d1ec16c
2021-05-19T22:06:55.000Z
[ "pytorch", "jax", "bert", "question-answering", "tr", "transformers", "autotrain_compatible" ]
question-answering
false
lserinol
null
lserinol/bert-turkish-question-answering
228
1
transformers
3,457
---
language: tr
---

# bert-turkish-question-answering

## Usage

```python
from transformers import pipeline

nlp = pipeline('question-answering',
               model='lserinol/bert-turkish-question-answering',
               tokenizer='lserinol/bert-turkish-question-answering')
nlp({
    'question': "Ankara'da kaç ilçe vardır?",
    'context': r"""Türkiye'nin başkenti Ankara'dır. Ülkenin en büyük idari birimleri illerdir ve 81 il vardır. Bu iller ilçelere ayrılmıştır, toplamda 973 ilçe mevcuttur."""
})
```

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch

tokenizer = AutoTokenizer.from_pretrained("lserinol/bert-turkish-question-answering")
model = AutoModelForQuestionAnswering.from_pretrained("lserinol/bert-turkish-question-answering")

text = r"""
Ankara'nın başkent ilan edilmesinin ardından (13 Ekim 1923) şehir hızla gelişmiş ve Türkiye'nin ikinci en kalabalık ili olmuştur. Türkiye Cumhuriyeti'nin ilk yıllarında ekonomisi tarım ve hayvancılığa dayanan ilin topraklarının yarısı hâlâ tarım amaçlı kullanılmaktadır. Ekonomik etkinlik büyük oranda ticaret ve sanayiye dayalıdır. Tarım ve hayvancılığın ağırlığı ise giderek azalmaktadır. Ankara ve civarındaki gerek kamu sektörü gerek özel sektör yatırımları, başka illerden büyük bir nüfus göçünü teşvik etmiştir. Cumhuriyetin kuruluşundan günümüze, nüfusu ülke nüfusunun iki katı hızda artmıştır. Nüfusun yaklaşık dörtte üçü hizmet sektörü olarak tanımlanabilecek memuriyet, ulaşım, haberleşme ve ticaret benzeri işlerde, dörtte biri sanayide, %2'si ise tarım alanında çalışır. Sanayi, özellikle tekstil, gıda ve inşaat sektörlerinde yoğunlaşmıştır. Günümüzde ise en çok savunma, metal ve motor sektörlerinde yatırım yapılmaktadır. Türkiye'nin en çok sayıda üniversiteye sahip ili olan Ankara'da ayrıca, üniversite diplomalı kişi oranı ülke ortalamasının iki katıdır. Bu eğitimli nüfus, teknoloji ağırlıklı yatırımların gereksinim duyduğu iş gücünü oluşturur. Ankara'dan otoyollar, demir yolu ve hava yoluyla Türkiye'nin diğer şehirlerine ulaşılır. Ankara aynı zamanda başkent olarak Türkiye Büyük Millet Meclisi (TBMM)'ye de ev sahipliği yapmaktadır.
"""

questions = [
    "Ankara kaç yılında başkent oldu?",
    "Ankara ne zaman başkent oldu?",
    "Ankara'dan başka şehirlere nasıl ulaşılır?",
    "TBMM neyin kısaltmasıdır?"
]

for question in questions:
    inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt")
    input_ids = inputs["input_ids"].tolist()[0]
    text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    answer_start_scores, answer_end_scores = model(**inputs)

    # Get the most likely beginning of the answer with the argmax of the score
    answer_start = torch.argmax(answer_start_scores)
    # Get the most likely end of the answer with the argmax of the score
    answer_end = torch.argmax(answer_end_scores) + 1

    answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    print(f"Question: {question}")
    print(f"Answer: {answer}\n")
```
thesamuelpena/Dialog-medium-masterchief
48be93ebdf3679158a219727e03af58c022bbf95
2021-11-14T01:18:14.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
thesamuelpena
null
thesamuelpena/Dialog-medium-masterchief
228
null
transformers
3,458
--- tags: - conversational --- # Master Chief DialoGPT Model
Shakerlicious/DialoGPT-small-descentbot
1593849331f7159103fbf3e2e1b562d460005dcb
2022-05-03T04:40:13.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Shakerlicious
null
Shakerlicious/DialoGPT-small-descentbot
228
null
transformers
3,459
--- tags: - conversational --- # Sergio bot DialoGPT Model
kakife3586/Ekastestest
42429f8ee7d163bf0bbcc9b925f59fcf2f0bbff0
2022-07-09T03:21:49.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
kakife3586
null
kakife3586/Ekastestest
228
null
transformers
3,460
Entry not found
Backedman/DialoGPT-small-Anika
c9bdba4e72530104497b5686bdd0bd11bd8c00c3
2021-11-18T15:16:27.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Backedman
null
Backedman/DialoGPT-small-Anika
227
null
transformers
3,461
---
tags:
- conversational
---

# Anika Bot
KoichiYasuoka/roberta-large-english-upos
c0e2ec7cc128a0a18e974307380f3b3ddf4e7494
2022-02-16T03:16:33.000Z
[ "pytorch", "roberta", "token-classification", "en", "dataset:universal_dependencies", "transformers", "english", "pos", "dependency-parsing", "license:cc-by-sa-4.0", "autotrain_compatible" ]
token-classification
false
KoichiYasuoka
null
KoichiYasuoka/roberta-large-english-upos
227
0
transformers
3,462
--- language: - "en" tags: - "english" - "token-classification" - "pos" - "dependency-parsing" datasets: - "universal_dependencies" license: "cc-by-sa-4.0" pipeline_tag: "token-classification" --- # roberta-large-english-upos ## Model Description This is a RoBERTa model pre-trained with [UD_English](https://universaldependencies.org/en/) for POS-tagging and dependency-parsing, derived from [roberta-large](https://huggingface.co/roberta-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-english-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-english-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/roberta-large-english-upos") ``` ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
armheb/DNA_bert_3
ed28178e378645f8582810a667e3a152960bb847
2021-10-10T22:26:24.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
armheb
null
armheb/DNA_bert_3
227
null
transformers
3,463
Entry not found
cahya/gpt2-large-indonesian-522M
9d01a8304f15c1f0d2216b64eb6f8ec5e9f0f7c3
2021-05-21T14:39:08.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
cahya
null
cahya/gpt2-large-indonesian-522M
227
null
transformers
3,464
Entry not found
cross-encoder/nli-deberta-v3-xsmall
d922ed00ccc227f499505cc6207fcc1e58938cb3
2021-12-27T22:27:20.000Z
[ "pytorch", "deberta-v2", "text-classification", "en", "dataset:multi_nli", "dataset:snli", "transformers", "microsoft/deberta-v3-xsmall", "license:apache-2.0", "zero-shot-classification" ]
zero-shot-classification
false
cross-encoder
null
cross-encoder/nli-deberta-v3-xsmall
227
2
transformers
3,465
---
language: en
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-xsmall
datasets:
- multi_nli
- snli
metrics:
- accuracy
license: apache-2.0
---
# Cross-Encoder for Natural Language Inference

This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. It is based on [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall).

## Training Data

The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.

## Performance

- Accuracy on SNLI-test dataset: 91.64
- Accuracy on MNLI mismatched set: 87.77

For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).

## Usage

Pre-trained models can be used like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/nli-deberta-v3-xsmall')
scores = model.predict([('A man is eating pizza', 'A man eats something'),
                        ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])

# Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```

## Usage with Transformers AutoModel

You can also use the model directly with the Transformers library (without the SentenceTransformers library):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-xsmall')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-xsmall')

features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'],
                     ['A man eats something', 'A man is driving down a lonely road.'],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    label_mapping = ['contradiction', 'entailment', 'neutral']
    labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
    print(labels)
```

## Zero-Shot Classification

This model can also be used for zero-shot classification:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-xsmall')

sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
```
ethzanalytics/distilgpt2-tiny-conversational
374785cb7942780b7f3fcd8cc28dd972630aa189
2022-07-21T06:33:55.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "chatbot", "dialogue", "distilgpt2", "ai-msgbot", "license:apache-2.0" ]
text-generation
false
ethzanalytics
null
ethzanalytics/distilgpt2-tiny-conversational
227
null
transformers
3,466
--- license: apache-2.0 tags: - text-generation - chatbot - dialogue - distilgpt2 - gpt2 - ai-msgbot widget: - text: "I know you're tired, but can we go for another walk this evening?\nperson beta:\n\n" example_title: "walk" - text: "Have you done anything exciting lately?\nperson beta:\n\n" example_title: "activities" - text: "hey - do you have a favorite grocery store around here?\nperson beta:\n\n" example_title: "grocery" - text: "Can you take me for dinner somewhere nice this time?\nperson beta:\n\n" example_title: "dinner" - text: "What's your favorite form of social media?\nperson beta:\n\n" example_title: "social media" - text: "Hi, how are you?\nperson beta:\n\n" example_title: "greeting" - text: "I am the best; my sister is the worst. What am I?\nperson beta:\n\n" example_title: "sister" - text: "What do you call an alligator who's just had surgery to remove his left arm?\nperson beta:\n\n" example_title: "alligator" - text: "A man walks into a bar and asks for a drink. The bartender asks for $10, and he pays him $1. What did he pay him with?\nperson beta:\n\n" example_title: "dollar" - text: "What did I say was in the mailbox when it was actually in the cabinet?\nperson beta:\n\n" example_title: "mailbox" - text: "My friend says that she knows every language, but she doesn't speak any of them.. what's wrong with her?\nperson beta:\n\n" example_title: "language" inference: parameters: min_length: 2 max_length: 64 length_penalty: 0.7 no_repeat_ngram_size: 2 do_sample: True top_p: 0.95 top_k: 20 temperature: 0.3 repetition_penalty: 3.5 --- # distilgpt2-tiny-conversational This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a parsed version of Wizard of Wikipedia. Persona alpha/beta framework designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot). It achieves the following results on the evaluation set: - Loss: 2.2461 ## Model description - a basic dialogue model for conversation. It can be used as a chatbot. - check out a [simple demo here](https://huggingface.co/spaces/ethzanalytics/dialogue-demo) ## Intended uses & limitations - usage is designed for integrating with this repo: [ai-msgbot](https://github.com/pszemraj/ai-msgbot) - the main specific information to know is that the model generates whole conversations between two entities, `person alpha` and `person beta`. These entity names are used functionally as custom `<bos>` tokens to extract when one response ends and another begins. 
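Below is a minimal sketch of that extraction logic (the generation settings here are illustrative, not the tuned values from the widget config):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ethzanalytics/distilgpt2-tiny-conversational")

prompt = "Have you done anything exciting lately?\nperson beta:\n\n"
raw = generator(prompt, max_length=64, do_sample=True, top_p=0.95)[0]["generated_text"]

# Everything between the prompt and the next speaker tag is person beta's reply
reply = raw[len(prompt):].split("person alpha:")[0].strip()
print(reply)
```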
## Training and evaluation data - [wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) parsed, from parlAI ## Training procedure - deepspeed + huggingface trainer, an example notebook is in [ai-msgbot](https://github.com/pszemraj/ai-msgbot) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | No log | 1.0 | 418 | 2.7793 | | 2.9952 | 2.0 | 836 | 2.6914 | | 2.7684 | 3.0 | 1254 | 2.6348 | | 2.685 | 4.0 | 1672 | 2.5938 | | 2.6243 | 5.0 | 2090 | 2.5625 | | 2.5816 | 6.0 | 2508 | 2.5332 | | 2.5816 | 7.0 | 2926 | 2.5098 | | 2.545 | 8.0 | 3344 | 2.4902 | | 2.5083 | 9.0 | 3762 | 2.4707 | | 2.4793 | 10.0 | 4180 | 2.4551 | | 2.4531 | 11.0 | 4598 | 2.4395 | | 2.4269 | 12.0 | 5016 | 2.4238 | | 2.4269 | 13.0 | 5434 | 2.4102 | | 2.4051 | 14.0 | 5852 | 2.3945 | | 2.3777 | 15.0 | 6270 | 2.3848 | | 2.3603 | 16.0 | 6688 | 2.3711 | | 2.3394 | 17.0 | 7106 | 2.3613 | | 2.3206 | 18.0 | 7524 | 2.3516 | | 2.3206 | 19.0 | 7942 | 2.3398 | | 2.3026 | 20.0 | 8360 | 2.3301 | | 2.2823 | 21.0 | 8778 | 2.3203 | | 2.2669 | 22.0 | 9196 | 2.3105 | | 2.2493 | 23.0 | 9614 | 2.3027 | | 2.2334 | 24.0 | 10032 | 2.2930 | | 2.2334 | 25.0 | 10450 | 2.2852 | | 2.2194 | 26.0 | 10868 | 2.2754 | | 2.2014 | 27.0 | 11286 | 2.2695 | | 2.1868 | 28.0 | 11704 | 2.2598 | | 2.171 | 29.0 | 12122 | 2.2539 | | 2.1597 | 30.0 | 12540 | 2.2461 | ### Framework versions - Transformers 4.16.1 - Pytorch 1.10.0+cu111 - Tokenizers 0.11.0
nvidia/segformer-b0-finetuned-cityscapes-768-768
c837d6d06664132b0c9a98e25c4459ee3807643d
2022-07-20T09:54:23.000Z
[ "pytorch", "tf", "segformer", "dataset:cityscapes", "arxiv:2105.15203", "transformers", "vision", "image-segmentation", "license:apache-2.0" ]
image-segmentation
false
nvidia
null
nvidia/segformer-b0-finetuned-cityscapes-768-768
227
null
transformers
3,467
---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- cityscapes
widget:
- src: https://www.researchgate.net/profile/Anurag-Arnab/publication/315881952/figure/fig5/AS:667673876779033@1536197265755/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.jpg
  example_title: Road
---

# SegFormer (b0-sized) model fine-tuned on Cityscapes

SegFormer model fine-tuned on Cityscapes at resolution 768x768. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).

Disclaimer: The team releasing SegFormer did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.

## Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to segment an image from the COCO 2017 dataset:

```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests

feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-768-768")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-768-768")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
  author    = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo},
  title     = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers},
  journal   = {CoRR},
  volume    = {abs/2105.15203},
  year      = {2021},
  url       = {https://arxiv.org/abs/2105.15203},
  eprinttype = {arXiv},
  eprint    = {2105.15203},
  timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
solfer/DialoGPT-small-ryuji
16aab55c1797a1040dbabc07c5943952cf16dcc0
2021-08-30T04:36:57.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
solfer
null
solfer/DialoGPT-small-ryuji
227
null
transformers
3,468
--- tags: - conversational --- # Ryuji DialoGPT Model
ssspider/DialoGPT-medium-harrypotter
df8cb343ad8edf8b1cc6065c020604d9e3d20c7c
2021-12-25T17:09:42.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ssspider
null
ssspider/DialoGPT-medium-harrypotter
226
null
transformers
3,469
--- tags: - conversational --- # Harry Potter DialoGPT Model
thu-coai/LongLM-large
461383ecc756769255023947ddc22001e5bd3656
2022-01-10T15:44:33.000Z
[ "pytorch", "t5", "text2text-generation", "zh", "arxiv:2108.12960", "transformers", "lm-head", "autotrain_compatible" ]
text2text-generation
false
thu-coai
null
thu-coai/LongLM-large
226
4
transformers
3,470
---
language:
- zh
thumbnail: http://coai.cs.tsinghua.edu.cn/coai/img/logo.png?v=13923
tags:
- pytorch
- lm-head
- zh
widget:
- text: "小咕噜对靳司寒完全是个自来熟,小家伙爬进他怀里小手搂着他的脖子,奶声奶气的要求:“靳蜀黎,你给咕噜讲故事好不好?”讲故事?童话故事吗?“我不会。”小家伙明显不信。嘟着小嘴大眼汪汪的盯着他,“哼。”小家伙轻轻哼了一声,靳司寒默了半晌,<extra_id_1>"
- text: "美女亲自打招呼,这可是破天荒第一次,之前不管他献多少次殷勤,美女<extra_id_1>甩他,难道今天真是老天<extra_id_2>不敢<extra_id_3>的兄连滚带爬的来到<extra_id_4>身边队友都带着艳<extra_id_5>他,<extra_id_6>连计算机系的那票球友都在那儿不住地偷看MAGGIE,这种感觉真<extra_id_7>毙了!"
inference:
  parameters:
    top_p: 0.9
---

## LongLM

### 1. Parameters

| Versions | $d_m$ | $d_{ff}$ | $d_{kv}$ | $n_h$ | $n_e/n_d$ | \# P |
| ------------ | ----- | -------- | -------- | ----- | --------- | ---- |
| LongLM-small | 512 | 2,048 | 64 | 8 | 6/6 | 60M |
| LongLM-base | 768 | 3,072 | 64 | 12 | 12/12 | 223M |
| LongLM-large | 1,536 | 3,072 | 64 | 12 | 24/32 | 1B |

- $d_m$: the dimension of hidden states
- $d_{ff}$: the dimension of feed forward layers
- $d_{kv}$: the dimension of the keys/values in the self-attention layers
- $n_h$: the number of attention heads
- $n_e$: the number of hidden layers of the encoder
- $n_d$: the number of hidden layers of the decoder
- \#P: the number of parameters

### 2. Pretraining Tasks

Encoder-decoder models are typically trained by maximizing the likelihood of the target output given an input. To improve the capacities of both the encoder and decoder, we propose to train LongLM with two pretraining tasks: text infilling (Raffel et al., 2020) and conditional continuation (Radford et al., 2019). For the first task, the input is a text where a number of spans are sampled and replaced by special tokens with unique IDs, while the output is the spans delimited by the special tokens used in the input. The lengths of masked spans are drawn from a Poisson distribution with λ=3, and the masked tokens comprise 15% of the original texts. As for the second task, the input and output are respectively the front and back half of a text, which is split into two parts at random.

### 3. Pretraining Data

We collect 120G of novels as the pretraining data for LongLM.

### 4. Checkpoints

1. **Model Loading:**

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('thu-coai/LongLM-large')
model = T5ForConditionalGeneration.from_pretrained('thu-coai/LongLM-large')
```

2. **Generation:**

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)

input_ids = tokenizer("小咕噜对,<extra_id_1>", return_tensors="pt", padding=True, truncation=True, max_length=512).input_ids.to(device)

gen = model.generate(input_ids, do_sample=True, decoder_start_token_id=1, top_p=0.9, max_length=512)
print(tokenizer.batch_decode(gen, skip_special_tokens=True))
```

### 5. Dependencies

```
datasets           1.6.2
deepspeed          0.3.16
huggingface-hub    0.0.8
jieba              0.42.1
jsonlines          2.0.0
nltk               3.5
numpy              1.19.5
pytorch-lightning  1.2.0
regex              2020.11.13
rouge              1.0.1
rouge-score        0.0.4
sacrebleu          1.5.0
scipy              1.5.4
sentencepiece      0.1.95
tokenizers         0.10.1
torch              1.8.1
torchaudio         0.8.0
torchmetrics       0.2.0
torchvision        0.9.0
transformers       4.6.1
```

### 6. Contributors

[Jian Guan](https://jianguanthu.github.io/) at [thu-coai](http://coai.cs.tsinghua.edu.cn/)

## Citation

```txt
@misc{guan2021lot,
  title={LOT: A Benchmark for Evaluating Chinese Long Text Understanding and Generation},
  author={Jian Guan and Zhuoer Feng and Yamei Chen and Ruilin He and Xiaoxi Mao and Changjie Fan and Minlie Huang},
  year={2021},
  eprint={2108.12960},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
SkolkovoInstitute/Mutual_Implication_Score
3d2a208cf3bbca5cc26a150d95d20c48ab1081eb
2022-07-11T12:36:45.000Z
[ "pytorch", "roberta", "en", "transformers", "paraphrase detection", "paraphrase", "paraphrasing" ]
null
false
SkolkovoInstitute
null
SkolkovoInstitute/Mutual_Implication_Score
226
null
transformers
3,471
---
language:
- en
tags:
- paraphrase detection
- paraphrase
- paraphrasing
licenses:
- cc-by-nc-sa
---
## Model overview

Mutual Implication Score is a symmetric measure of text semantic similarity based on a RoBERTa model pretrained for natural language inference and fine-tuned on a paraphrase detection dataset.

The code for inference and evaluation of the model is available [here](https://github.com/skoltech-nlp/mutual_implication_score).

This measure is **particularly useful for paraphrase detection**, but can also be applied to other semantic similarity tasks, such as content similarity scoring in text style transfer.

## How to use

The following snippet illustrates code usage:

```python
!pip install mutual-implication-score

from mutual_implication_score import MIS

mis = MIS(device='cpu')  # use 'cuda:0' to run on a specific GPU
source_texts = ['I want to leave this room', 'Hello world, my name is Nick']
paraphrases = ['I want to go out of this room', 'Hello world, my surname is Petrov']
scores = mis.compute(source_texts, paraphrases)
print(scores)
# expected output: [0.9748, 0.0545]
```

## Model details

We slightly modify the [RoBERTa-Large NLI](https://huggingface.co/ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli) model architecture (see the scheme below) and fine-tune it on the [QQP](https://www.kaggle.com/c/quora-question-pairs) paraphrase dataset.

![alt text](https://huggingface.co/SkolkovoInstitute/Mutual_Implication_Score/raw/main/MIS.jpg)

## Performance on Text Style Transfer and Paraphrase Detection tasks

This measure was developed as part of a large-scale comparison of different measures on text style transfer and paraphrase datasets.

<img src="https://huggingface.co/SkolkovoInstitute/Mutual_Implication_Score/raw/main/corr_main.jpg" alt="drawing" width="1000"/>

The scheme above shows the correlations of measures of different classes with human judgments on paraphrase and text style transfer datasets. The text above each dataset indicates the best-performing measure. The rightmost columns show the mean performance of measures across the datasets.

MIS outperforms all measures on the paraphrase detection task and performs on par with top measures on the text style transfer task.

To learn more, refer to our article: [A large-scale computational study of content preservation measures for text style transfer and paraphrase generation](https://aclanthology.org/2022.acl-srw.23/)

## Citations

If you find this repository helpful, feel free to cite our publication:

```
@inproceedings{babakov-etal-2022-large,
    title = "A large-scale computational study of content preservation measures for text style transfer and paraphrase generation",
    author = "Babakov, Nikolay and Dale, David and Logacheva, Varvara and Panchenko, Alexander",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-srw.23",
    pages = "300--321",
    abstract = "Text style transfer and paraphrasing of texts are actively growing areas of NLP, dozens of methods for solving these tasks have been recently introduced. In both tasks, the system is supposed to generate a text which should be semantically similar to the input text. Therefore, these tasks are dependent on methods of measuring textual semantic similarity. 
However, it is still unclear which measures are the best to automatically evaluate content preservation between original and generated text. According to our observations, many researchers still use BLEU-like measures, while there exist more advanced measures including neural-based that significantly outperform classic approaches. The current problem is the lack of a thorough evaluation of the available measures. We close this gap by conducting a large-scale computational study by comparing 57 measures based on different principles on 19 annotated datasets. We show that measures based on cross-encoder models outperform alternative approaches in almost all cases.We also introduce the Mutual Implication Score (MIS), a measure that uses the idea of paraphrasing as a bidirectional entailment and outperforms all other measures on the paraphrase detection task and performs on par with the best measures in the text style transfer task.", } ``` ## Licensing Information [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png
surrey-nlp/roberta-base-finetuned-abbr
a905959f197275955f52eef71c452c6355bd0f91
2022-04-30T12:17:39.000Z
[ "pytorch", "tf", "roberta", "token-classification", "dataset:surrey-nlp/PLOD-filtered", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
false
surrey-nlp
null
surrey-nlp/roberta-base-finetuned-abbr
226
1
transformers
3,472
---
model_creators:
- Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Orasan
license: mit
tags:
- generated_from_trainer
datasets:
- surrey-nlp/PLOD-filtered
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: "Light dissolved inorganic carbon (DIC) resulting from the oxidation of hydrocarbons."
- text: "RAFs are plotted for a selection of neurons in the dorsal zone (DZ) of auditory cortex in Figure 1."
- text: "Images were acquired using a GE 3.0T MRI scanner with an upgrade for echo-planar imaging (EPI)."
model-index:
- name: roberta-base-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: surrey-nlp/PLOD-filtered
      type: token-classification
      args: PLODfiltered
    metrics:
    - name: Precision
      type: precision
      value: 0.9644756447594547
    - name: Recall
      type: recall
      value: 0.9583209148378798
    - name: F1
      type: f1
      value: 0.9613884293804785
    - name: Accuracy
      type: accuracy
      value: 0.9575894768204436
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-finetuned-ner

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the [PLOD-filtered](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) dataset. It achieves the following results on the evaluation set:
- Loss: 0.1148
- Precision: 0.9645
- Recall: 0.9583
- F1: 0.9614
- Accuracy: 0.9576

## Model description

RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts.

More precisely, it was pretrained with the Masked Language Modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the RoBERTa model as inputs.

## Intended uses & limitations

More information needed

## Training and evaluation data

The model is fine-tuned using the [PLOD-Filtered](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) dataset, which is used for both training and evaluation. The PLOD dataset was published at LREC 2022 and can help build sequence labeling models for the task of Abbreviation Detection.
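For a quick sanity check, the fine-tuned checkpoint can be queried with the token-classification pipeline; this minimal sketch (not from the original card) uses one of the widget examples above:

```python
from transformers import pipeline

# Tags abbreviations (e.g. "DIC") and their long forms in the input sentence
abbr_detector = pipeline("token-classification", model="surrey-nlp/roberta-base-finetuned-abbr")
print(abbr_detector("Light dissolved inorganic carbon (DIC) resulting from the oxidation of hydrocarbons."))
```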
## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1179 | 1.99 | 7000 | 0.1130 | 0.9602 | 0.9517 | 0.9559 | 0.9522 | | 0.0878 | 3.98 | 14000 | 0.1106 | 0.9647 | 0.9564 | 0.9606 | 0.9567 | | 0.0724 | 5.96 | 21000 | 0.1149 | 0.9646 | 0.9582 | 0.9614 | 0.9576 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.1+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
markofhope/DialoGPT-medium-HarringtonBot
161b8bdd6768190bde819abe69208bd26180aa02
2022-06-14T07:04:17.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
markofhope
null
markofhope/DialoGPT-medium-HarringtonBot
226
null
transformers
3,473
---
tags:
- conversational
---

# HarringtonBot dialogue model
JdThe65th/GPT2-Glitchfur-Zenith-JD
9b3e2d9767959526803f6403670eb11930ff6756
2022-06-23T00:21:20.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
JdThe65th
null
JdThe65th/GPT2-Glitchfur-Zenith-JD
226
null
transformers
3,474
--- language: en thumbnail: http://www.huggingtweets.com/glitchfur-jdthe65th-zenitho_o/1655941045991/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1536036266818555907/0Mq-Q1NY_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1516278022135029761/snP1qGDO_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1516504502442016773/iEfei2hf_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Glitch 💻😺 & The 65th JD & zenith</div> <div style="text-align: center; font-size: 14px;">@glitchfur-jdthe65th-zenitho_o</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Glitch 💻😺 & The 65th JD & zenith. | Data | Glitch 💻😺 | The 65th JD | zenith | | --- | --- | --- | --- | | Tweets downloaded | 3206 | 3231 | 3245 | | Retweets | 663 | 328 | 205 | | Short tweets | 551 | 645 | 717 | | Tweets kept | 1992 | 2258 | 2323 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pavbr60/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @glitchfur-jdthe65th-zenitho_o's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/pg3exi1g) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/pg3exi1g/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='JdThe65th/GPT2-Glitchfur-Zenith-JD') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Helsinki-NLP/opus-mt-am-sv
4ee236b77a2559c6c94f3fbef5228dc28c7929fe
2021-09-09T21:26:12.000Z
[ "pytorch", "marian", "text2text-generation", "am", "sv", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-am-sv
225
null
transformers
3,475
--- tags: - translation license: apache-2.0 --- ### opus-mt-am-sv * source languages: am * target languages: sv * OPUS readme: [am-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/am-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.am.sv | 21.0 | 0.377 |
Helsinki-NLP/opus-mt-en-af
c6a79302395db2b59af8b15f4016081a66095ace
2021-09-09T21:34:05.000Z
[ "pytorch", "marian", "text2text-generation", "en", "af", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-af
225
null
transformers
3,476
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-af * source languages: en * target languages: af * OPUS readme: [en-af](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-af/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-af/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.af | 56.1 | 0.741 |
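## Example

A minimal usage sketch (not part of the original OPUS readme); the translation pipeline wraps the Marian checkpoint directly:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-af")
print(translator("The weather is beautiful today.")[0]["translation_text"])
```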
MingZhong/DialogLED-large-5120
b6c71369f2fee0cbc99178f7ab681dfbf9d8d09f
2022-01-05T07:36:41.000Z
[ "pytorch", "led", "text2text-generation", "arxiv:2109.02492", "transformers", "autotrain_compatible" ]
text2text-generation
false
MingZhong
null
MingZhong/DialogLED-large-5120
225
2
transformers
3,477
[DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization](https://arxiv.org/abs/2109.02492).

## Introduction

DialogLED is a pre-trained model for long dialogue understanding and summarization. It builds on the Longformer-Encoder-Decoder (LED) architecture and uses window-based denoising as the pre-training task on a large amount of long dialogue data for further training. This is the large version of DialogLED; the input length was limited to 5,120 tokens in the pre-training phase.

## Finetuning for Downstream Tasks

Please refer to [our GitHub page](https://github.com/microsoft/DialogLM).
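## Example

A minimal loading sketch (note: the pre-trained checkpoint is intended for downstream fine-tuning, so the raw `generate` call below is only a smoke test, and the speaker-tagged input format is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("MingZhong/DialogLED-large-5120")
model = AutoModelForSeq2SeqLM.from_pretrained("MingZhong/DialogLED-large-5120")

# Illustrative input; real inputs can be up to 5,120 tokens long
dialogue = "A: How was the meeting? B: It ran long, but we agreed on a plan."
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True, max_length=5120)
ids = model.generate(**inputs, max_length=64)
print(tokenizer.batch_decode(ids, skip_special_tokens=True))
```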
clue/roberta_chinese_3L312_clue_tiny
5c87eb26d6ca701f3badaacbeebb37b878bbd9aa
2021-05-20T15:22:48.000Z
[ "pytorch", "jax", "roberta", "zh", "arxiv:2003.01355", "transformers" ]
null
false
clue
null
clue/roberta_chinese_3L312_clue_tiny
225
1
transformers
3,478
---
language: zh
---
# Introduction

This model was trained on TPU and the details are as follows:

## Model

| Model_name | params | size | Training_corpus | Vocab |
| :------------------------------------------ | :----- | :------- | :----------------- | :-----------: |
| **`RoBERTa-tiny-clue`** <br/>Super_small_model | 7.5M | 28.3M | **CLUECorpus2020** | **CLUEVocab** |
| **`RoBERTa-tiny-pair`** <br/>Super_small_sentence_pair_model | 7.5M | 28.3M | **CLUECorpus2020** | **CLUEVocab** |
| **`RoBERTa-tiny3L768-clue`** <br/>small_model | 38M | 110M | **CLUECorpus2020** | **CLUEVocab** |
| **`RoBERTa-tiny3L312-clue`** <br/>small_model | <7.5M | 24M | **CLUECorpus2020** | **CLUEVocab** |
| **`RoBERTa-large-clue`** <br/>Large_model | 290M | 1.20G | **CLUECorpus2020** | **CLUEVocab** |
| **`RoBERTa-large-pair`** <br/>Large_sentence_pair_model | 290M | 1.20G | **CLUECorpus2020** | **CLUEVocab** |

### Usage

With the help of [Huggingface-Transformers 2.5.1](https://github.com/huggingface/transformers), you can use these models as follows:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MODEL_NAME")
model = BertModel.from_pretrained("MODEL_NAME")
```

`MODEL_NAME`:

| Model_NAME | MODEL_LINK |
| -------------------------- | ------------------------------------------------------------ |
| **RoBERTa-tiny-clue** | [`clue/roberta_chinese_clue_tiny`](https://huggingface.co/clue/roberta_chinese_clue_tiny) |
| **RoBERTa-tiny-pair** | [`clue/roberta_chinese_pair_tiny`](https://huggingface.co/clue/roberta_chinese_pair_tiny) |
| **RoBERTa-tiny3L768-clue** | [`clue/roberta_chinese_3L768_clue_tiny`](https://huggingface.co/clue/roberta_chinese_3L768_clue_tiny) |
| **RoBERTa-tiny3L312-clue** | [`clue/roberta_chinese_3L312_clue_tiny`](https://huggingface.co/clue/roberta_chinese_3L312_clue_tiny) |
| **RoBERTa-large-clue** | [`clue/roberta_chinese_clue_large`](https://huggingface.co/clue/roberta_chinese_clue_large) |
| **RoBERTa-large-pair** | [`clue/roberta_chinese_pair_large`](https://huggingface.co/clue/roberta_chinese_pair_large) |

## Details

Please read [the paper](https://arxiv.org/pdf/2003.01355) for more details, and visit our repository: https://github.com/CLUEbenchmark/CLUEPretrainedModels.git
google/bert_uncased_L-12_H-512_A-8
58975ac76f4442555b5cd68848df3e0838a832bb
2021-05-19T17:26:55.000Z
[ "pytorch", "jax", "bert", "arxiv:1908.08962", "transformers", "license:apache-2.0" ]
null
false
google
null
google/bert_uncased_L-12_H-512_A-8
225
null
transformers
3,479
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
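For reference, a minimal loading sketch for this checkpoint (any of the miniatures above can be swapped in by repository name; the example sentence is illustrative):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("google/bert_uncased_L-12_H-512_A-8")
model = AutoModel.from_pretrained("google/bert_uncased_L-12_H-512_A-8")

inputs = tokenizer("Compact models can be fine-tuned like BERT-Base.", return_tensors="pt")
hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (1, sequence_length, 512) for the H=512 models
```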
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
rovai/AI
0e8e5055c2254b438256d250e3351bcd3fe8faad
2021-12-01T23:52:04.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
rovai
null
rovai/AI
225
null
transformers
3,480
---
tags:
- conversational
- gpt2
---

# MIHO
sberbank-ai/bert-base-NER-reptile-5-datasets
feb2dcd088bf24fde96b3f53f720ac148fb678ef
2022-02-04T10:51:07.000Z
[ "pytorch", "bert", "token-classification", "en", "dataset:conll2003", "dataset:wnut_17", "dataset:jnlpba", "dataset:conll2012", "dataset:BTC", "dataset:dfki-nlp/few-nerd", "arxiv:2010.02405", "transformers", "PyTorch", "model-index", "autotrain_compatible" ]
token-classification
false
sberbank-ai
null
sberbank-ai/bert-base-NER-reptile-5-datasets
225
3
transformers
3,481
--- language: - en inference: false pipeline_tag: false datasets: - conll2003 - wnut_17 - jnlpba - conll2012 - BTC - dfki-nlp/few-nerd tags: - PyTorch model-index: - name: "bert-base-NER-reptile-5-datasets" results: - task: name: few-shot-ner type: named-entity-recognition dataset: name: few-nerd-inter type: named-entity-recognition metrics: - name: 5 way 1~2 shot type: f1 value: 56.12 - name: 5-way 5~10-shot type: f1 value: 62.7 - name: 10-way 1~2-shot type: f1 value: 50.3 - name: 10-way 5~10-shot type: f1 value: 58.82 --- # BERT base uncased model pre-trained on 5 NER datasets Model was trained by _SberIDP_. The pretraining process and technical details are described [in this article](https://habr.com/ru/company/sberbank/blog/649609/). * Task: Named Entity Recognition * Base model: [bert-base-uncased](https://huggingface.co/bert-base-uncased) * Training Data is 5 datasets: [CoNLL-2003](https://aclanthology.org/W03-0419.pdf), [WNUT17](http://noisy-text.github.io/2017/emerging-rare-entities.html), [JNLPBA](http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004), [CoNLL-2012 (OntoNotes)](https://aclanthology.org/W12-4501.pdf), [BTC](https://www.derczynski.com/papers/btc.pdf) * Testing was made in Few-Shot scenario on [Few-NERD dataset](https://github.com/thunlp/Few-NERD) using the model as a backbone for [StructShot](https://arxiv.org/abs/2010.02405) The model is pretrained for NER task using [Reptile](https://openai.com/blog/reptile/) and can be finetuned for new entities with only a small amount of samples.
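A minimal loading sketch (loading the encoder as a plain backbone is an assumption on our side; the few-shot wrapping itself lives in StructShot and is not reproduced here):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sberbank-ai/bert-base-NER-reptile-5-datasets")
backbone = AutoModel.from_pretrained("sberbank-ai/bert-base-NER-reptile-5-datasets")

inputs = tokenizer("Barack Obama visited Paris.", return_tensors="pt")
token_states = backbone(**inputs).last_hidden_state  # per-token features for a few-shot NER head
print(token_states.shape)
```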
youscan/ukr-roberta-base
f8689ddd740b7a3277f5205cd1d5dc5481699bb5
2021-05-20T23:23:40.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "uk", "transformers", "autotrain_compatible" ]
fill-mask
false
youscan
null
youscan/ukr-roberta-base
225
5
transformers
3,482
---
language:
- uk
---
# ukr-roberta-base

## Pre-training corpora

Below is the list of corpora used, along with the output of the `wc` command (counting lines, words and characters). These corpora were concatenated and tokenized with the HuggingFace Roberta Tokenizer.

| Corpus | Lines | Words | Characters |
| ------------- |--------------:| -----:| -----:|
| [Ukrainian Wikipedia - May 2020](https://dumps.wikimedia.org/ukwiki/latest/ukwiki-latest-pages-articles.xml.bz2) | 18 001 466 | 201 207 739 | 2 647 891 947 |
| [Ukrainian OSCAR deduplicated dataset](https://oscar-public.huma-num.fr/shuffled/uk_dedup.txt.gz) | 56 560 011 | 2 250 210 650 | 29 705 050 592 |
| Sampled mentions from social networks | 11 245 710 | 128 461 796 | 1 632 567 763 |
| Total | 85 807 187 | 2 579 880 185 | 33 985 510 302 |

## Pre-training details

* Ukrainian Roberta was trained with the code provided in the [HuggingFace tutorial](https://huggingface.co/blog/how-to-train)
* The currently released model follows the roberta-base-cased model architecture (12-layer, 768-hidden, 12-heads, 125M parameters)
* The model was trained on 4xV100 (85 hours)
* The training configuration can be found in the [original repository](https://github.com/youscan/language-models)

## Author

Vitalii Radchenko - contact me on Twitter [@vitaliradchenko](https://twitter.com/vitaliradchenko)
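## Example

A minimal masked-prediction sketch (illustrative, not from the original card; RoBERTa models use `<mask>` as the mask token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="youscan/ukr-roberta-base")
# "I live in the city of <mask>."
print(fill_mask("Я живу в місті <mask>."))
```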
GanjinZero/coder_eng
eb359315c38881c03d445e08614101ac9b214f1e
2022-04-25T02:19:42.000Z
[ "pytorch", "bert", "feature-extraction", "en", "transformers", "biomedical", "license:apache-2.0" ]
feature-extraction
false
GanjinZero
null
GanjinZero/coder_eng
224
1
transformers
3,483
--- language: - en license: apache-2.0 tags: - bert - biomedical --- CODER: Knowledge infused cross-lingual medical term embedding for term normalization. English Version. ``` @article{YUAN2022103983, title = {CODER: Knowledge-infused cross-lingual medical term embedding for term normalization}, journal = {Journal of Biomedical Informatics}, pages = {103983}, year = {2022}, issn = {1532-0464}, doi = {https://doi.org/10.1016/j.jbi.2021.103983}, url = {https://www.sciencedirect.com/science/article/pii/S1532046421003129}, author = {Zheng Yuan and Zhengyun Zhao and Haixia Sun and Jiao Li and Fei Wang and Sheng Yu}, keywords = {medical term normalization, cross-lingual, medical term representation, knowledge graph embedding, contrastive learning} } ```
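A minimal embedding sketch (the CLS pooling below is an assumption; see the paper and repository for the exact pooling used in evaluation):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("GanjinZero/coder_eng")
model = AutoModel.from_pretrained("GanjinZero/coder_eng")

terms = ["myocardial infarction", "heart attack"]
inputs = tokenizer(terms, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

embeddings = outputs.last_hidden_state[:, 0]  # [CLS] vector as the term embedding (assumption)
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(float(similarity))
```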
lgris/bp400-xlsr
22bd0ba76c39569ce00a2133870d641d367fbee9
2022-04-01T20:31:02.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:common_voice", "dataset:mls", "dataset:cetuc", "dataset:lapsbm", "dataset:voxforge", "dataset:tedx", "dataset:sid", "arxiv:2107.11414", "arxiv:2012.03411", "transformers", "audio", "speech", "portuguese-speech-corpus", "PyTorch", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
lgris
null
lgris/bp400-xlsr
224
2
transformers
3,484
---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- PyTorch
- hf-asr-leaderboard
model-index:
- name: bp400-xlsr
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 7.0
      type: mozilla-foundation/common_voice_7_0
      args: pt
    metrics:
    - name: Test WER
      type: wer
      value: 14.0
license: apache-2.0
---

# bp400-xlsr: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset

**Paper:** https://arxiv.org/abs/2107.11414

This is a demonstration of a fine-tuned Wav2vec 2.0 model for Brazilian Portuguese using the following datasets:

- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus.
- [Common Voice 7.0](https://commonvoice.mozilla.org/pt): a project proposed by the Mozilla Foundation with the goal of creating a large open dataset in different languages. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt).
- [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. It contains 35 speakers (10 female), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded at 22.05 kHz without environment control.
- [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in the public domain, such as [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The Portuguese set [used in this work](http://www.openslr.org/94/) (mostly the Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
- [Multilingual TEDx](http://www.openslr.org/100): a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly the Brazilian Portuguese variant) contains 164 hours of transcribed speech.
- [Sidney](https://igormq.github.io/datasets/) (SID): contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old, with fields such as place of birth, age, gender, education, and occupation.
- [VoxForge](http://www.voxforge.org/): a project with the goal of building open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz.

These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation and test, respectively. We also made test sets for all the gathered datasets.
| Dataset | Train | Valid | Test | |--------------------------------|-------:|------:|------:| | CETUC | 93.9h | -- | 5.4h | | Common Voice | 37.6h | 8.9h | 9.5h | | LaPS BM | 0.8h | -- | 0.1h | | MLS | 161.0h | -- | 3.7h | | Multilingual TEDx (Portuguese) | 144.2h | -- | 1.8h | | SID | 5.0h | -- | 1.0h | | VoxForge | 2.8h | -- | 0.1h | | Total | 437.2h | 8.9h | 21.6h | The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/drive/folders/1eRUExXRF2XK8JxUjIzbLBkLa5wuR3nig?usp=sharing). #### Summary | | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG | |----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------| | bp\_400 (demonstration below) | 0.052 | 0.140 | 0.074 | 0.117 | 0.121 | 0.245 | 0.118 | 0.124 | | bp\_400 + 3-gram | 0.033 | 0.095 | 0.046 | 0.123 | 0.112 | 0.212 | 0.123 | 0.106 | | bp\_400 + 4-gram (demonstration below) | **0.030** | 0.096 | 0.043 | **0.106** | 0.118 | 0.229 | **0.117** | **0.105** | | bp\_400 + 5-gram | 0.033 | 0.094 | 0.043 | 0.123 | **0.111** | **0.210** | 0.123 | **0.105** | | bp\_400 + Transf. | 0.032 | **0.092** | **0.036** | 0.130 | 0.115 | 0.215 | 0.125 | 0.106 | #### Transcription examples | Text | Transcription | |------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------| |alguém sabe a que horas começa o jantar | alguém sabe a que horas **começo** jantar | |lila covas ainda não sabe o que vai fazer no fundo|**lilacovas** ainda não sabe o que vai fazer no fundo| |que tal um pouco desse bom spaghetti|**quetá** um pouco **deste** bom **ispaguete**| |hong kong em cantonês significa porto perfumado|**rongkong** **en** **cantones** significa porto perfumado| |vamos hackear esse problema|vamos **rackar** esse problema| |apenas a poucos metros há uma estação de ônibus|apenas **ha** poucos metros **á** uma estação de ônibus| |relâmpago e trovão sempre andam juntos|**relampagotrevão** sempre andam juntos| ## Demonstration ```python MODEL_NAME = "lgris/bp400-xlsr" ``` ### Imports and dependencies ```python %%capture !pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html !pip install datasets !pip install jiwer !pip install transformers !pip install soundfile !pip install pyctcdecode !pip install https://github.com/kpu/kenlm/archive/master.zip ``` ```python import jiwer import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) from pyctcdecode import build_ctcdecoder import torch import re import sys ``` ### Helpers ```python chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605 def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = speech.squeeze(0).numpy() batch["sampling_rate"] = 16_000 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") batch["target"] = batch["sentence"] return batch ``` ```python def calc_metrics(truths, hypos): wers = [] mers = [] wils = [] for t, h in zip(truths, hypos): try: wers.append(jiwer.wer(t, h)) mers.append(jiwer.mer(t, h)) wils.append(jiwer.wil(t, h)) except: # Empty string? 
pass wer = sum(wers)/len(wers) mer = sum(mers)/len(mers) wil = sum(wils)/len(wils) return wer, mer, wil ``` ```python def load_data(dataset): data_files = {'test': f'{dataset}/test.csv'} dataset = load_dataset('csv', data_files=data_files)["test"] return dataset.map(map_to_array) ``` ### Model ```python class STT: def __init__(self, model_name, device='cuda' if torch.cuda.is_available() else 'cpu', lm=None): self.model_name = model_name self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) self.processor = Wav2Vec2Processor.from_pretrained(model_name) self.vocab_dict = self.processor.tokenizer.get_vocab() self.sorted_dict = { k.lower(): v for k, v in sorted(self.vocab_dict.items(), key=lambda item: item[1]) } self.device = device self.lm = lm if self.lm: self.lm_decoder = build_ctcdecoder( list(self.sorted_dict.keys()), self.lm ) def batch_predict(self, batch): features = self.processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(self.device) attention_mask = features.attention_mask.to(self.device) with torch.no_grad(): logits = self.model(input_values, attention_mask=attention_mask).logits if self.lm: logits = logits.cpu().numpy() batch["predicted"] = [] for sample_logits in logits: batch["predicted"].append(self.lm_decoder.decode(sample_logits)) else: pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = self.processor.batch_decode(pred_ids) return batch ``` ### Download datasets ```python %%capture !gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI !mkdir bp_dataset !unzip bp_dataset -d bp_dataset/ ``` ### Tests ```python stt = STT(MODEL_NAME) ``` #### CETUC ```python ds = load_data('cetuc_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CETUC WER:", wer) ``` CETUC WER: 0.05159104708285062 #### Common Voice ```python ds = load_data('commonvoice_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("CV WER:", wer) ``` CV WER: 0.14031426198658084 #### LaPS ```python ds = load_data('lapsbm_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Laps WER:", wer) ``` Laps WER: 0.07432133838383838 #### MLS ```python ds = load_data('mls_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("MLS WER:", wer) ``` MLS WER: 0.11678793514817509 #### SID ```python ds = load_data('sid_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("Sid WER:", wer) ``` Sid WER: 0.12152357273433984 #### TEDx ```python ds = load_data('tedx_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("TEDx WER:", wer) ``` TEDx WER: 0.24666815906766504 #### VoxForge ```python ds = load_data('voxforge_dataset') result = ds.map(stt.batch_predict, batched=True, batch_size=8) wer, mer, wil = calc_metrics(result["sentence"], result["predicted"]) print("VoxForge WER:", wer) ``` VoxForge WER: 0.11873106060606062 ### Tests with LM ```python !rm -rf ~/.cache !gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia stt = STT(MODEL_NAME, 
lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```

#### CETUC

```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.030266462438593742

#### Common Voice

```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.09577710237417715

#### LaPS

```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.043617424242424235

#### MLS

```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.10642133314350002

#### SID

```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.11839021001747055

#### TEDx

```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.22929952467810416

#### VoxForge

```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.11716314935064935
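
### Single-file inference

The demonstration above evaluates whole test sets; the snippet below is a minimal single-file inference sketch. The checkpoint name comes from this card, but the audio path (`audio.wav`) is a hypothetical placeholder and the input is assumed to be mono speech that can be resampled to 16 kHz.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("lgris/bp400-xlsr")
model = Wav2Vec2ForCTC.from_pretrained("lgris/bp400-xlsr")

speech, sr = torchaudio.load("audio.wav")  # hypothetical input file
speech = torchaudio.transforms.Resample(sr, 16_000)(speech).squeeze(0)  # the model expects 16 kHz audio

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding (no language model).
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```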
pritamdeka/S-Bluebert-snli-multinli-stsb
a823c84f078a8323c58d0e8bd0fd3d311c508738
2022-01-28T16:23:12.000Z
[ "pytorch", "bert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers" ]
sentence-similarity
false
pritamdeka
null
pritamdeka/S-Bluebert-snli-multinli-stsb
224
1
sentence-transformers
3,485
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# pritamdeka/S-Bluebert-snli-multinli-stsb

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('pritamdeka/S-Bluebert-snli-multinli-stsb')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('pritamdeka/S-Bluebert-snli-multinli-stsb')
model = AutoModel.from_pretrained('pritamdeka/S-Bluebert-snli-multinli-stsb')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=pritamdeka/S-Bluebert-snli-multinli-stsb)

## Training
The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 90 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:
```
{
    "callback": null,
    "epochs": 4,
    "evaluation_steps": 1000,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 36,
    "weight_decay": 0.01
}
```

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
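
## Example: Semantic Similarity

Since the embeddings are meant for tasks like clustering and semantic search, here is a short sketch of scoring two sentences with cosine similarity. The sentences are illustrative only, and `util.cos_sim` assumes a reasonably recent sentence-transformers release.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('pritamdeka/S-Bluebert-snli-multinli-stsb')

# Encode two sentences and compare them with cosine similarity.
embeddings = model.encode(
    ["The patient presented with a high fever.",
     "The subject showed an elevated body temperature."],
    convert_to_tensor=True,
)
print(util.cos_sim(embeddings[0], embeddings[1]))
```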
swtx/ernie-3.0-base-chinese
22c12393ee7a5cf78bf22c1a1c8704baff06b77d
2022-07-26T14:58:41.000Z
[ "pytorch", "arxiv:2106.02241", "arxiv:2112.12731", "transformers", "license:apache-2.0" ]
null
false
swtx
null
swtx/ernie-3.0-base-chinese
224
1
transformers
3,486
--- license: apache-2.0 --- # ERNIE 3.0 轻量级模型 **目录** * [模型介绍](#模型介绍) * [在线蒸馏技术](#在线蒸馏技术) * [模型效果](#模型效果) * [微调](#微调) * [模型压缩](#模型压缩) * [环境依赖](#环境依赖) * [模型压缩 API 使用](#模型压缩API使用) * [压缩效果](#压缩效果) * [精度测试](#精度测试) * [性能测试](#性能测试) * [CPU 性能](#CPU性能) * [GPU 性能](#CPU性能) * [使用 FasterTokenizer 加速](#使用FasterTokenizer加速) * [部署](#部署) * [Python 部署](#Python部署) * [服务化部署](#服务化部署) * [Paddle2ONNX 部署](#Paddle2ONNX部署) * [Notebook教程](#Notebook教程) * [参考文献](#参考文献) <a name="模型介绍"></a> ## 模型介绍 本次开源的模型是在文心大模型ERNIE 3.0 基础上通过**在线蒸馏技术**得到的轻量级模型,模型结构与 ERNIE 2.0 保持一致,相比 ERNIE 2.0 具有更强的中文效果。 相关技术详解可参考文章[《解析全球最大中文单体模型鹏城-百度·文心技术细节》](https://www.jiqizhixin.com/articles/2021-12-08-9) <a name="在线蒸馏技术"></a> ### 在线蒸馏技术 在线蒸馏技术在模型学习的过程中周期性地将知识信号传递给若干个学生模型同时训练,从而在蒸馏阶段一次性产出多种尺寸的学生模型。相对传统蒸馏技术,该技术极大节省了因大模型额外蒸馏计算以及多个学生的重复知识传递带来的算力消耗。 这种新颖的蒸馏方式利用了文心大模型的规模优势,在蒸馏完成后保证了学生模型的效果和尺寸丰富性,方便不同性能需求的应用场景使用。此外,由于文心大模型的模型尺寸与学生模型差距巨大,模型蒸馏难度极大甚至容易失效。为此,通过引入了助教模型进行蒸馏的技术,利用助教作为知识传递的桥梁以缩短学生模型和大模型表达空间相距过大的问题,从而促进蒸馏效率的提升。 更多技术细节可以参考论文: - [ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression](https://arxiv.org/abs/2106.02241) - [ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation](https://arxiv.org/abs/2112.12731) <p align="center"> <img width="644" alt="image" src="https://user-images.githubusercontent.com/1371212/168516904-3fff73e0-010d-4bef-adc1-4d7c97a9c6ff.png" title="ERNIE 3.0 Online Distillation"> </p> <a name="模型效果"></a> ## 模型效果 本项目开源 **ERNIE 3.0 _Base_** 、**ERNIE 3.0 _Medium_** 、 **ERNIE 3.0 _Mini_** 、 **ERNIE 3.0 _Micro_** 、 **ERNIE 3.0 _Nano_** 五个模型: - [**ERNIE 3.0-_Base_**](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_base_zh.pdparams) (_12-layer, 768-hidden, 12-heads_) - [**ERNIE 3.0-_Medium_**](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_medium_zh.pdparams) (_6-layer, 768-hidden, 12-heads_) - [**ERNIE 3.0-_Mini_**](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_mini_zh.pdparams) (_6-layer, 384-hidden, 12-heads_) - [**ERNIE 3.0-_Micro_**](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_micro_zh.pdparams) (_4-layer, 384-hidden, 12-heads_) - [**ERNIE 3.0-_Nano_**](https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_nano_zh.pdparams) (_4-layer, 312-hidden, 12-heads_) 下面是 PaddleNLP 中轻量级中文模型的**效果-时延图**。横坐标表示在 IFLYTEK 数据集 (最大序列长度设置为 128) 上测试的延迟(latency,单位:ms),纵坐标是 CLUE 10 个任务上的平均精度(包含文本分类、文本匹配、自然语言推理、代词消歧、阅读理解等任务),其中 CMRC2018 阅读理解任务的评价指标是 Exact Match(EM),其他任务的评价指标均是 Accuracy。图中越靠**左上**的模型,精度和性能水平越高。 图中模型名下方标注了模型的参数量,测试环境见[性能测试](#性能测试)。 batch_size=32 时,CPU 下的效果-时延图(线程数 1 和 8): <table> <tr> <td><a><img src="https://user-images.githubusercontent.com/26483581/175852121-2798b5c9-d122-4ac0-b4c8-da46b89b5512.png"></a></td> <td><a><img src="https://user-images.githubusercontent.com/26483581/175852129-bbe58835-8eec-45d5-a4a9-cc2cf9a3db6a.png"></a></td> </tr> </table> batch_size=1 时,CPU 下的效果-时延图(线程数 1 和 8): <table> <tr> <td><a><img src="https://user-images.githubusercontent.com/26483581/175852106-658e18e7-705b-4f53-bad0-027281163ae3.png"></a></td> <td><a><img src="https://user-images.githubusercontent.com/26483581/175852112-4b89d675-7c95-4d75-84b6-db5a6ea95e2c.png"></a></td> </tr> </table> batch_size=32 和 1,预测精度为 FP16 时,GPU 下的效果-时延图: <table> <tr> <td><a><img src="https://user-images.githubusercontent.com/26483581/175854679-3247f42e-8716-4a36-b5c6-9ce4661b36c7.png"></a></td> <td><a><img 
src="https://user-images.githubusercontent.com/26483581/175854670-57878b34-c213-47ac-b620-aaaec082f435.png"></a></td> </tr> </table> 从图上可看出,ERNIE 3.0 系列轻量级模型在精度和性能上的综合表现已全面领先于 UER-py、Huawei-Noah 以及 HFL 的中文模型。且当 batch_size=1、预测精度为 FP16 时,在 GPU 上宽且浅的模型的推理性能更有优势。 在 CLUE **验证集**上评测指标如下表所示: <table style="width:100%;" cellpadding="2" cellspacing="0" border="1" bordercolor="#000000"> <tbody> <tr> <td style="text-align:center;vertical-align:middle"> <span style="font-size:18px;">Arch</span> </td> <td style="text-align:center"> <span style="font-size:18px;">Model</span> </td> <td style="text-align:center"> <span style="font-size:18px;">AVG</span> </td> <td style="text-align:center"> <span style="font-size:18px;">AFQMC</span> </td> <td style="text-align:center;"> <span style="font-size:18px;">TNEWS</span> </td> <td style="text-align:center;"> <span style="font-size:18px;">IFLYTEK</span> </td> <td style="text-align:center;"> <span style="font-size:18px;">CMNLI</span> </td> <td style="text-align:center;"> <span style="font-size:18px;">OCNLI</span> </td> <td style="text-align:center;"> <span style="font-size:18px;">CLUEWSC2020</span> </td> <td style="text-align:center;"> <span style="font-size:18px;">CSL</span> </td> <td style="text-align:center;"> <span style="font-size:18px;">CMRC2018</span> </td> <td style="text-align:center;"> <span style="font-size:18px;">CHID</span> </td> <td style="text-align:center;"> <span style="font-size:18px;">C<sup>3</sup></span> </td> </tr> <tr> <td rowspan=2 align=center> 24L1024H </td> <td style="text-align:center"> <span style="font-size:18px"><b>ERNIE 2.0-Large-zh</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>77.03</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>76.41</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>59.67</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>62.29</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">83.82</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>79.69</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">89.14</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>84.10</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>71.48/90.35</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">85.52</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>78.12</b></span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">RoBERTa-wwm-ext-large</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.61</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.00</span> </td> <td style="text-align:center"> <span style="font-size:18px">59.33</span> </td> <td style="text-align:center"> <span style="font-size:18px">62.02</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>83.88</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">78.81</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>90.79</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">83.67</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.58/89.82</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>85.72</b></span> </td> <td style="text-align:center"> <span 
style="font-size:18px">75.26</span> </td> </tr> <tr> <td rowspan=1 align=center> 20L1024H </td> <td style="text-align:center"> <span style="font-size:18px"><b>ERNIE 3.0-Xbase-zh</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>78.71</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>76.85</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>59.89</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>62.41</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>84.76</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>82.51</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>89.80</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>84.47</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>75.49/92.67</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>86.36</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>84.59</b></span> </td> </tr> <tr> <td rowspan=8 align=center> 12L768H </td> <td style="text-align:center"> <span style="font-size:18px"> <a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_base_zh.pdparams"> ERNIE 3.0-Base-zh </a> </span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>76.05</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>75.93</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">58.26</span> </td> <td style="text-align:center"> <span style="font-size:18px">61.56</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>83.02</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>80.10</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">86.18</span> </td> <td style="text-align:center"> <span style="font-size:18px">82.63</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.71/90.41</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>84.26</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>77.88</b></span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">ERNIE-Gram-zh</span> </td> <td style="text-align:center"> <span style="font-size:18px">75.72</span> </td> <td style="text-align:center"> <span style="font-size:18px">75.28</span> </td> <td style="text-align:center"> <span style="font-size:18px">57.88</span> </td> <td style="text-align:center"> <span style="font-size:18px">60.87</span> </td> <td style="text-align:center"> <span style="font-size:18px">82.90</span> </td> <td style="text-align:center"> <span style="font-size:18px">79.08</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>88.82</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>82.83</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>71.82/90.38</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">84.04</span> </td> <td style="text-align:center"> <span style="font-size:18px">73.69</span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">ERNIE 2.0-Base-zh</span> </td> <td style="text-align:center"> <span style="font-size:18px">74.95</span> </td> <td style="text-align:center"> <span 
style="font-size:18px">76.25</span> </td> <td style="text-align:center"> <span style="font-size:18px">58.53</span> </td> <td style="text-align:center"> <span style="font-size:18px">61.72</span> </td> <td style="text-align:center"> <span style="font-size:18px">83.07</span> </td> <td style="text-align:center"> <span style="font-size:18px">78.81</span> </td> <td style="text-align:center"> <span style="font-size:18px">84.21</span> </td> <td style="text-align:center"> <span style="font-size:18px">82.77</span> </td> <td style="text-align:center"> <span style="font-size:18px">68.22/88.71</span> </td> <td style="text-align:center"> <span style="font-size:18px">82.78</span> </td> <td style="text-align:center"> <span style="font-size:18px">73.19</span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">Langboat/Mengzi-BERT-Base</span> </td> <td style="text-align:center"> <span style="font-size:18px">74.69</span> </td> <td style="text-align:center"> <span style="font-size:18px">75.35</span> </td> <td style="text-align:center"> <span style="font-size:18px">57.76</span> </td> <td style="text-align:center"> <span style="font-size:18px">61.64</span> </td> <td style="text-align:center"> <span style="font-size:18px">82.41</span> </td> <td style="text-align:center"> <span style="font-size:18px">77.93</span> </td> <td style="text-align:center"> <span style="font-size:18px">88.16</span> </td> <td style="text-align:center"> <span style="font-size:18px">82.20</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.04/88.35</span> </td> <td style="text-align:center"> <span style="font-size:18px">83.74</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.70</span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">ERNIE 1.0-Base-zh</span> </td> <td style="text-align:center"> <span style="font-size:18px">74.17</span> </td> <td style="text-align:center"> <span style="font-size:18px">74.84</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>58.91</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>62.25</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">81.68</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.58</span> </td> <td style="text-align:center"> <span style="font-size:18px">85.20</span> </td> <td style="text-align:center"> <span style="font-size:18px">82.77</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.32/87.83</span> </td> <td style="text-align:center"> <span style="font-size:18px">82.47</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.68</span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">RoBERTa-wwm-ext</span> </td> <td style="text-align:center"> <span style="font-size:18px">74.11</span> </td> <td style="text-align:center"> <span style="font-size:18px">74.60</span> </td> <td style="text-align:center"> <span style="font-size:18px">58.08</span> </td> <td style="text-align:center"> <span style="font-size:18px">61.23</span> </td> <td style="text-align:center"> <span style="font-size:18px">81.11</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.92</span> </td> <td style="text-align:center"> <span style="font-size:18px">88.49</span> </td> <td style="text-align:center"> <span style="font-size:18px">80.77</span> </td> <td style="text-align:center"> <span style="font-size:18px">68.39/88.50</span> </td> 
<td style="text-align:center"> <span style="font-size:18px">83.43</span> </td> <td style="text-align:center"> <span style="font-size:18px">68.03</span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">BERT-Base-Chinese</span> </td> <td style="text-align:center"> <span style="font-size:18px">72.57</span> </td> <td style="text-align:center"> <span style="font-size:18px">74.63</span> </td> <td style="text-align:center"> <span style="font-size:18px">57.13</span> </td> <td style="text-align:center"> <span style="font-size:18px">61.29</span> </td> <td style="text-align:center"> <span style="font-size:18px">80.97</span> </td> <td style="text-align:center"> <span style="font-size:18px">75.22</span> </td> <td style="text-align:center"> <span style="font-size:18px">81.91</span> </td> <td style="text-align:center"> <span style="font-size:18px">81.90</span> </td> <td style="text-align:center"> <span style="font-size:18px">65.30/86.53</span> </td> <td style="text-align:center"> <span style="font-size:18px">82.01</span> </td> <td style="text-align:center"> <span style="font-size:18px">65.38</span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">UER/Chinese-RoBERTa-Base</span> </td> <td style="text-align:center"> <span style="font-size:18px">71.78</span> </td> <td style="text-align:center"> <span style="font-size:18px">72.89</span> </td> <td style="text-align:center"> <span style="font-size:18px">57.62</span> </td> <td style="text-align:center"> <span style="font-size:18px">61.14</span> </td> <td style="text-align:center"> <span style="font-size:18px">80.01</span> </td> <td style="text-align:center"> <span style="font-size:18px">75.56</span> </td> <td style="text-align:center"> <span style="font-size:18px">81.58</span> </td> <td style="text-align:center"> <span style="font-size:18px">80.80</span> </td> <td style="text-align:center"> <span style="font-size:18px">63.87/84.95</span> </td> <td style="text-align:center"> <span style="font-size:18px">81.52</span> </td> <td style="text-align:center"> <span style="font-size:18px">62.76</span> </td> </tr> <tr> <td rowspan=1 align=center> 8L512H </td> <td style="text-align:center"> <span style="font-size:18px">UER/Chinese-RoBERTa-Medium</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.06</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.64</span> </td> <td style="text-align:center"> <span style="font-size:18px">56.10</span> </td> <td style="text-align:center"> <span style="font-size:18px">58.29</span> </td> <td style="text-align:center"> <span style="font-size:18px">77.35</span> </td> <td style="text-align:center"> <span style="font-size:18px">71.90</span> </td> <td style="text-align:center"> <span style="font-size:18px">68.09</span> </td> <td style="text-align:center"> <span style="font-size:18px">78.63</span> </td> <td style="text-align:center"> <span style="font-size:18px">57.63/78.91</span> </td> <td style="text-align:center"> <span style="font-size:18px">75.13</span> </td> <td style="text-align:center"> <span style="font-size:18px">56.84</span> </td> </tr> <tr> <td rowspan=5 align=center> 6L768H </td> <td style="text-align:center"> <span style="font-size:18px"> <a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_medium_zh.pdparams"> ERNIE 3.0-Medium-zh </a> </span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>72.49</b></span> </td> <td style="text-align:center"> <span 
style="font-size:18px"><b>73.37</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>57.00</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">60.67</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>80.64</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>76.88</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>79.28</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>81.60</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>65.83/87.30</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>79.91</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>69.73</b></span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">HLF/RBT6, Chinese</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.06</span> </td> <td style="text-align:center"> <span style="font-size:18px">73.45</span> </td> <td style="text-align:center"> <span style="font-size:18px">56.82</span> </td> <td style="text-align:center"> <span style="font-size:18px">59.64</span> </td> <td style="text-align:center"> <span style="font-size:18px">79.36</span> </td> <td style="text-align:center"> <span style="font-size:18px">73.32</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.64</span> </td> <td style="text-align:center"> <span style="font-size:18px">80.67</span> </td> <td style="text-align:center"> <span style="font-size:18px">62.72/84.77</span> </td> <td style="text-align:center"> <span style="font-size:18px">78.17</span> </td> <td style="text-align:center"> <span style="font-size:18px">59.85</span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">TinyBERT<sub>6</sub>, Chinese</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.62</span> </td> <td style="text-align:center"> <span style="font-size:18px">72.22</span> </td> <td style="text-align:center"> <span style="font-size:18px">55.70</span> </td> <td style="text-align:center"> <span style="font-size:18px">54.48</span> </td> <td style="text-align:center"> <span style="font-size:18px">79.12</span> </td> <td style="text-align:center"> <span style="font-size:18px">74.07</span> </td> <td style="text-align:center"> <span style="font-size:18px">77.63</span> </td> <td style="text-align:center"> <span style="font-size:18px">80.17</span> </td> <td style="text-align:center"> <span style="font-size:18px">63.03/83.75</span> </td> <td style="text-align:center"> <span style="font-size:18px">77.64</span> </td> <td style="text-align:center"> <span style="font-size:18px">62.11</span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">RoFormerV2 Small</span> </td> <td style="text-align:center"> <span style="font-size:18px">68.52</span> </td> <td style="text-align:center"> <span style="font-size:18px">72.47</span> </td> <td style="text-align:center"> <span style="font-size:18px">56.53</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>60.72</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">76.37</span> </td> <td style="text-align:center"> <span style="font-size:18px">72.95</span> </td> <td style="text-align:center"> <span style="font-size:18px">75.00</span> </td> <td style="text-align:center"> <span style="font-size:18px">81.07</span> </td> <td 
style="text-align:center"> <span style="font-size:18px">62.97/83.64</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.66</span> </td> <td style="text-align:center"> <span style="font-size:18px">59.41</span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">UER/Chinese-RoBERTa-L6-H768</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.09</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.13</span> </td> <td style="text-align:center"> <span style="font-size:18px">56.54</span> </td> <td style="text-align:center"> <span style="font-size:18px">60.48</span> </td> <td style="text-align:center"> <span style="font-size:18px">77.49</span> </td> <td style="text-align:center"> <span style="font-size:18px">72.00</span> </td> <td style="text-align:center"> <span style="font-size:18px">72.04</span> </td> <td style="text-align:center"> <span style="font-size:18px">77.33</span> </td> <td style="text-align:center"> <span style="font-size:18px">53.74/75.52</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.73</span> </td> <td style="text-align:center"> <span style="font-size:18px">54.40</span> </td> </tr> <tr> <td rowspan=1 align=center> 6L384H </td> <td style="text-align:center"> <span style="font-size:18px"> <a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_mini_zh.pdparams"> ERNIE 3.0-Mini-zh </a> </span> </td> <td style="text-align:center"> <span style="font-size:18px">66.90</span> </td> <td style="text-align:center"> <span style="font-size:18px">71.85</span> </td> <td style="text-align:center"> <span style="font-size:18px">55.24</span> </td> <td style="text-align:center"> <span style="font-size:18px">54.48</span> </td> <td style="text-align:center"> <span style="font-size:18px">77.19</span> </td> <td style="text-align:center"> <span style="font-size:18px">73.08</span> </td> <td style="text-align:center"> <span style="font-size:18px">71.05</span> </td> <td style="text-align:center"> <span style="font-size:18px">79.30</span> </td> <td style="text-align:center"> <span style="font-size:18px">58.53/81.97</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.71</span> </td> <td style="text-align:center"> <span style="font-size:18px">58.60</span> </td> </tr> <tr> <td rowspan=1 align=center> 4L768H </td> <td style="text-align:center"> <span style="font-size:18px">HFL/RBT4, Chinese</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.42</span> </td> <td style="text-align:center"> <span style="font-size:18px">72.41</span> </td> <td style="text-align:center"> <span style="font-size:18px">56.50</span> </td> <td style="text-align:center"> <span style="font-size:18px">58.95</span> </td> <td style="text-align:center"> <span style="font-size:18px">77.34</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.78</span> </td> <td style="text-align:center"> <span style="font-size:18px">71.05</span> </td> <td style="text-align:center"> <span style="font-size:18px">78.23</span> </td> <td style="text-align:center"> <span style="font-size:18px">59.30/81.93</span> </td> <td style="text-align:center"> <span style="font-size:18px">73.18</span> </td> <td style="text-align:center"> <span style="font-size:18px">56.45</span> </td> </tr> <tr> <td rowspan=1 align=center> 4L512H </td> <td style="text-align:center"> <span style="font-size:18px">UER/Chinese-RoBERTa-Small</span> </td> <td style="text-align:center"> 
<span style="font-size:18px">63.25</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.21</span> </td> <td style="text-align:center"> <span style="font-size:18px">55.41</span> </td> <td style="text-align:center"> <span style="font-size:18px">57.552</span> </td> <td style="text-align:center"> <span style="font-size:18px">73.64</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.80</span> </td> <td style="text-align:center"> <span style="font-size:18px">66.78</span> </td> <td style="text-align:center"> <span style="font-size:18px">74.83</span> </td> <td style="text-align:center"> <span style="font-size:18px">46.75/69.69</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.59</span> </td> <td style="text-align:center"> <span style="font-size:18px">50.92</span> </td> </tr> <tr> <td rowspan=1 align=center> 4L384H </td> <td style="text-align:center"> <span style="font-size:18px"> <a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_micro_zh.pdparams"> ERNIE 3.0-Micro-zh </a> </span> </td> <td style="text-align:center"> <span style="font-size:18px">64.21</span> </td> <td style="text-align:center"> <span style="font-size:18px">71.15</span> </td> <td style="text-align:center"> <span style="font-size:18px">55.05</span> </td> <td style="text-align:center"> <span style="font-size:18px">53.83</span> </td> <td style="text-align:center"> <span style="font-size:18px">74.81</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.41</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.08</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.50</span> </td> <td style="text-align:center"> <span style="font-size:18px">53.77/77.82</span> </td> <td style="text-align:center"> <span style="font-size:18px">62.26</span> </td> <td style="text-align:center"> <span style="font-size:18px">55.53</span> </td> </tr> <tr> <td rowspan=2 align=center> 4L312H </td> <td style="text-align:center"> <span style="font-size:18px"> <a href="https://bj.bcebos.com/paddlenlp/models/transformers/ernie_3.0/ernie_3.0_nano_zh.pdparams"> ERNIE 3.0-Nano-zh </a> </span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>62.97</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>70.51</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>54.57</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>48.36</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>74.97</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>70.61</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">68.75</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>75.93</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>52.00/76.35</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>58.91</b></span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>55.11</b></span> </td> </tr> <tr> <td style="text-align:center"> <span style="font-size:18px">TinyBERT<sub>4</sub>, Chinese</span> </td> <td style="text-align:center"> <span style="font-size:18px">60.82</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.07</span> </td> <td style="text-align:center"> <span style="font-size:18px">54.02</span> </td> <td style="text-align:center"> <span 
style="font-size:18px">39.71</span> </td> <td style="text-align:center"> <span style="font-size:18px">73.94</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.59</span> </td> <td style="text-align:center"> <span style="font-size:18px"><b>70.07</b></span> </td> <td style="text-align:center"> <span style="font-size:18px">75.07</span> </td> <td style="text-align:center"> <span style="font-size:18px">46.04/69.34</span> </td> <td style="text-align:center"> <span style="font-size:18px">58.53</span> </td> <td style="text-align:center"> <span style="font-size:18px">52.18</span> </td> </tr> <tr> <td rowspan=1 align=center> 4L256H </td> <td style="text-align:center"> <span style="font-size:18px">UER/Chinese-RoBERTa-Mini</span> </td> <td style="text-align:center"> <span style="font-size:18px">53.40</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.32</span> </td> <td style="text-align:center"> <span style="font-size:18px">54.22</span> </td> <td style="text-align:center"> <span style="font-size:18px">41.63</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.40</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.36</span> </td> <td style="text-align:center"> <span style="font-size:18px">65.13</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.07</span> </td> <td style="text-align:center"> <span style="font-size:18px">5.96/17.13</span> </td> <td style="text-align:center"> <span style="font-size:18px">51.19</span> </td> <td style="text-align:center"> <span style="font-size:18px">39.68</span> </td> </tr> <tr> <td rowspan=1 align=center> 3L1024H </td> <td style="text-align:center"> <span style="font-size:18px">HFL/RBTL3, Chinese</span> </td> <td style="text-align:center"> <span style="font-size:18px">66.63</span> </td> <td style="text-align:center"> <span style="font-size:18px">71.11</span> </td> <td style="text-align:center"> <span style="font-size:18px">56.14</span> </td> <td style="text-align:center"> <span style="font-size:18px">59.56</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.41</span> </td> <td style="text-align:center"> <span style="font-size:18px">71.29</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.74</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.93</span> </td> <td style="text-align:center"> <span style="font-size:18px">58.50/80.90</span> </td> <td style="text-align:center"> <span style="font-size:18px">71.03</span> </td> <td style="text-align:center"> <span style="font-size:18px">55.56</span> </td> </tr> <tr> <td rowspan=1 align=center> 3L768H </td> <td style="text-align:center"> <span style="font-size:18px">HFL/RBT3, Chinese</span> </td> <td style="text-align:center"> <span style="font-size:18px">65.72</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.95</span> </td> <td style="text-align:center"> <span style="font-size:18px">55.53</span> </td> <td style="text-align:center"> <span style="font-size:18px">59.18</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.20</span> </td> <td style="text-align:center"> <span style="font-size:18px">70.71</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.11</span> </td> <td style="text-align:center"> <span style="font-size:18px">76.63</span> </td> <td style="text-align:center"> <span style="font-size:18px">55.73/78.63</span> </td> <td style="text-align:center"> <span 
style="font-size:18px">70.26</span> </td> <td style="text-align:center"> <span style="font-size:18px">54.93</span> </td> </tr> <tr> <td rowspan=1 align=center> 2L128H </td> <td style="text-align:center"> <span style="font-size:18px">UER/Chinese-RoBERTa-Tiny</span> </td> <td style="text-align:center"> <span style="font-size:18px">44.45</span> </td> <td style="text-align:center"> <span style="font-size:18px">69.02</span> </td> <td style="text-align:center"> <span style="font-size:18px">51.47</span> </td> <td style="text-align:center"> <span style="font-size:18px">20.28</span> </td> <td style="text-align:center"> <span style="font-size:18px">59.95</span> </td> <td style="text-align:center"> <span style="font-size:18px">57.73</span> </td> <td style="text-align:center"> <span style="font-size:18px">63.82</span> </td> <td style="text-align:center"> <span style="font-size:18px">67.43</span> </td> <td style="text-align:center"> <span style="font-size:18px">3.08/14.33</span> </td> <td style="text-align:center"> <span style="font-size:18px">23.57</span> </td> <td style="text-align:center"> <span style="font-size:18px">28.12</span> </td> </tr> <tbody> </table> <br /> 以下是本项目目录结构及说明: ```shell . ├── run_seq_cls.py # 分类任务的微调脚本 ├── run_token_cls.py # 序列标注任务的微调脚本 ├── run_qa.py # 阅读理解任务的微调脚本 ├── compress_seq_cls.py # 分类任务的压缩脚本 ├── compress_token_cls.py # 序列标注任务的压缩脚本 ├── compress_qa.py # 阅读理解任务的压缩脚本 ├── config.yml # 压缩配置文件 ├── infer.py # 支持 CLUE 分类、CLUE CMRC2018、MSRA_NER 任务的预测脚本 ├── deploy # 部署目录 │ └── python │ └── ernie_predictor.py │ └── infer_cpu.py │ └── infer_gpu.py │ └── README.md │ └── serving │ └── seq_cls_rpc_client.py │ └── seq_cls_service.py │ └── seq_cls_config.yml │ └── token_cls_rpc_client.py │ └── token_cls_service.py │ └── token_cls_config.yml │ └── README.md │ └── paddle2onnx │ └── ernie_predictor.py │ └── infer.py │ └── README.md └── README.md # 文档,本文件 ``` <a name="微调"></a> ## 微调 ERNIE 3.0 发布的预训练模型还不能直接在下游任务上直接使用,需要使用具体任务上的数据对预训练模型进行微调。 使用 PaddleNLP 只需要一行代码可以拿到 ERNIE 3.0 系列模型,之后可以在自己的下游数据下进行微调,从而获得具体任务上效果更好的模型。 ```python from paddlenlp.transformers import * tokenizer = AutoTokenizer.from_pretrained("ernie-3.0-medium-zh") # 用于分类任务 seq_cls_model = AutoModelForSequenceClassification.from_pretrained("ernie-3.0-medium-zh") # 用于序列标注任务 token_cls_model = AutoModelForTokenClassification.from_pretrained("ernie-3.0-medium-zh") # 用于阅读理解任务 qa_model = AutoModelForQuestionAnswering.from_pretrained("ernie-3.0-medium-zh") ``` 本项目提供了针对分类(包含文本分类、文本匹配、自然语言推理、代词消歧等任务)、序列标注、阅读理解三大场景下微调的示例脚本,可分别参考 `run_seq_cls.py` 、`run_token_cls.py`、`run_qa.py` 三个脚本,启动方式如下: ```shell # 分类任务 python run_seq_cls.py --task_name tnews --model_name_or_path ernie-3.0-medium-zh --do_train # 序列标注任务 python run_token_cls.py --task_name msra_ner --model_name_or_path ernie-3.0-medium-zh --do_train # 阅读理解任务 python run_qa.py --model_name_or_path ernie-3.0-medium-zh --do_train ``` <a name="模型压缩"></a> ## 模型压缩 尽管 ERNIE 3.0 已提供了效果不错的 6 层、4 层轻量级模型可以微调后直接使用,但如果有模型部署上线的需求,则需要进一步压缩模型体积,可以使用这里提供的一套模型压缩方案及 API 对上一步微调后的模型进行压缩。 <a name="环境依赖"></a> ### 环境依赖 使用裁剪功能需要安装 paddleslim 包 ```shell pip install paddleslim ``` <a name="模型压缩API使用"></a> ### 模型压缩 API 使用 本项目基于 PaddleNLP 的 Trainer API 发布提供了模型压缩 API。压缩 API 支持用户对 ERNIE、BERT 等 Transformers 类下游任务微调模型进行裁剪、量化。用户只需要简单地调用 `compress()` 即可一键启动裁剪和量化,并自动保存压缩后的模型。 可以这样使用压缩 API (示例代码只提供了核心调用,如需跑通完整的例子可参考下方完整样例脚本): ```python trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset, tokenizer=tokenizer) output_dir = 
os.path.join(model_args.model_name_or_path, "compress")

compress_config = CompressConfig(
    quantization_config=PTQConfig(
        algo_list=['hist', 'mse'], batch_size_list=[4, 8, 16]),
    prune_config=DynabertConfig(width_mult_list=[3/4]))  # 关键字参数名请以所装 PaddleNLP 版本的 CompressConfig 为准

trainer.compress(
    output_dir,
    pruning=True,  # 开启裁剪
    quantization=True,  # 开启量化
    compress_config=compress_config)
```

由于压缩 API 基于 Trainer,所以首先需要初始化一个 Trainer 实例,对于模型压缩来说必要传入的参数如下:

- `model`:ERNIE、BERT 等模型,是在下游任务中微调后的模型。以分类模型为例,可通过`AutoModelForSequenceClassification.from_pretrained(model_name_or_path)` 来获取
- `data_collator`:三类任务均可使用 PaddleNLP 预定义好的[DataCollator 类](../../paddlenlp/data/data_collator.py),`data_collator` 可对数据进行 `Pad` 等操作。使用方法参考本项目中代码即可
- `train_dataset`:裁剪训练需要使用的训练集
- `eval_dataset`:裁剪训练使用的评估集,也是量化使用的校准数据
- `tokenizer`:模型`model`对应的 `tokenizer`,可使用 `AutoTokenizer.from_pretrained(model_name_or_path)` 来获取

然后可以直接调用 `compress` 启动压缩,其中 `compress` 的参数释义如下:

- `output_dir`:裁剪、量化后的模型保存目录
- `pruning`:是否裁剪,默认为`True`
- `quantization`:是否量化,默认为 `True`
- `compress_config`:压缩配置,需要分别传入裁剪和量化的配置实例。目前裁剪和量化分别仅支持`DynabertConfig`和`PTQConfig`类。当默认参数不满足需求时,可通过传入参数对压缩过程进行特殊配置:

其中,`DynabertConfig`中可以传的参数有:

- `width_mult_list`:裁剪宽度保留的比例list,对 6 层模型推荐 `3/4` ,对 12 层模型推荐 `2/3`,表示对 `q`、`k`、`v` 以及 `ffn` 权重宽度的保留比例。默认是 `[3/4]`
- `output_filename_prefix`:裁剪导出模型的文件名前缀,默认是`"float32"`

`PTQConfig`中可以传的参数有:

- `algo_list`:量化策略列表,目前支持 `KL`, `abs_max`, `min_max`, `avg`, `hist`和`mse`,不同的策略计算量化比例因子的方法不同。建议传入多种策略,可批量得到由多种策略产出的多个量化模型,从中选择最优模型。推荐`hist`, `mse`, `KL`,默认是`["hist"]`
- `batch_size_list`:校准样本数,默认是 `[4]`。并非越大越好,也是一个超参数,建议传入多种校准样本数,可从多个量化模型中选择最优模型。
- `input_dir`:待量化模型的目录。如果是 `None`,当不启用裁剪时,表示待量化的模型是 `Trainer` 初始化的模型;当启用裁剪时,表示待量化的模型是裁剪后导出的模型。默认是`None`
- `input_filename_prefix`:待量化模型文件名前缀,默认是 `"float32"`
- `output_filename_prefix`:导出的量化模型文件名后缀,默认是`"int8"`

本项目还提供了压缩 API 在分类(包含文本分类、文本匹配、自然语言推理、代词消歧等任务)、序列标注、阅读理解三大场景下的使用样例,可以分别参考 `compress_seq_cls.py` 、`compress_token_cls.py`、`compress_qa.py`,启动方式如下:

```shell
# --model_name_or_path 参数传入的是上面微调过程后得到的模型所在目录,压缩后的模型也会在该目录下

# 分类任务
python compress_seq_cls.py --dataset "clue tnews"  --model_name_or_path best_models/TNEWS  --output_dir ./

# 序列标注任务
python compress_token_cls.py --dataset "msra_ner"  --model_name_or_path best_models/MSRA_NER --output_dir ./

# 阅读理解任务
python compress_qa.py --dataset "clue cmrc2018" --model_name_or_path best_models/CMRC2018  --output_dir ./
```

一行代码验证上面模型压缩后模型的精度:

```shell
# 原模型
python infer.py --task_name tnews --model_path best_models/TNEWS/compress/inference/infer --use_trt
# 裁剪后
python infer.py --task_name tnews --model_path best_models/TNEWS/compress/0.75/float --use_trt
# 量化后
python infer.py --task_name tnews --model_path best_models/TNEWS/compress/0.75/hist16/int8 --use_trt --precision int8
```

其中 --model_path 参数需要传入静态图模型的路径和前缀名。

**压缩 API 使用 TIPS:**

1. 模型压缩主要用于加速推理部署,因此压缩后的模型都是静态图模型,不能再通过 `from_pretrained()` API 导入继续训练。

2. 压缩 API `compress()` 默认会启动裁剪和量化,但用户也可以通过在 `compress()` 中设置 pruning=False 或者 quantization=False 来关掉裁剪或者量化过程。目前裁剪策略有额外的训练的过程,需要下游任务的数据,其训练时间视下游任务数据量而定,且和微调的训练时间是一个量级。量化则不需要额外的训练,更快,量化的加速比比裁剪更明显,但是单独量化精度下降可能也更多;

3.
裁剪类似蒸馏过程,方便起见,可以直接使用微调时的超参。如果想要进一步提升精度,可以对 `batch_size`、`learning_rate`、`epoch` 等超参进行 Grid Search; <a name="压缩效果"></a> ### 压缩效果 <a name="精度测试"></a> #### 精度测试 本案例中我们对 ERNIE 3.0-Medium 模型在三类任务上微调后的模型使用压缩 API 进行压缩。压缩后精度如下: | Model | AVG | AFQMC | TNEWS | IFLYTEK | CMNLI | OCNLI | CLUEWSC2020 | CSL | CMRC2018 | MSRA_NER | | ------------------------------- | ----- | ----- | ----- | ------- | ----- | ----- | ----------- | ----- | ----------- | ----------------- | | ERNIE 3.0-Medium | 74.87 | 75.35 | 57.45 | 60.18 | 81.16 | 77.19 | 80.59 | 81.93 | 66.95/87.15 | 92.65/93.43/93.04 | | ERNIE 3.0-Medium+FP16 | 74.87 | 75.32 | 57.45 | 60.22 | 81.16 | 77.22 | 80.59 | 81.90 | 66.95/87.16 | 92.65/93.45/93.05 | | ERNIE 3.0-Medium+裁剪+FP32 | 74.70 | 75.14 | 57.31 | 60.29 | 81.25 | 77.46 | 79.93 | 81.70 | 65.92/86.43 | 93.10/93.43/93.27 | | ERNIE 3.0-Medium+裁剪+FP16 | 74.71 | 75.21 | 57.27 | 60.29 | 81.24 | 77.56 | 79.93 | 81.73 | 65.89/86.44 | 93.10/93.43/93.27 | | ERNIE 3.0-Medium+裁剪+量化+INT8 | 74.44 | 75.02 | 57.26 | 60.37 | 81.03 | 77.25 | 77.96 | 81.67 | 66.17/86.55 | 93.17/93.23/93.20 | | ERNIE 3.0-Medium+量化+INT8 | 74.10 | 74.67 | 56.99 | 59.91 | 81.03 | 75.05 | 78.62 | 81.60 | 66.32/86.82 | 93.10/92.90/92.70 | **评价指标说明:** 其中 CLUE 分类任务(AFQMC 语义相似度、TNEWS 文本分类、IFLYTEK 长文本分类、CMNLI 自然语言推理、OCNLI 自然语言推理、CLUEWSC2020 代词消歧、CSL 论文关键词识别)的评价指标是 Accuracy,阅读理解任务 CLUE CMRC2018 的评价指标是 EM (Exact Match) / F1-Score,计算平均值时取 EM,序列标注任务 MSRA_NER 的评价指标是 Precision/Recall/F1-Score,计算平均值时取 F1-Score。 由表可知,`ERNIE 3.0-Medium` 模型经过裁剪和量化后,精度平均下降 0.46,其中裁剪后下降了 0.17,单独量化精度平均下降 0.77。 <a name="性能测试"></a> #### 性能测试 性能测试的配置如下: 1. 数据集:TNEWS(文本分类)、MSRA_NER(序列标注)、CLUE CMRC2018(阅读理解) 2. 计算卡:T4、CUDA11.2、CuDNN8.2 3. CPU 信息:Intel(R) Xeon(R) Gold 6271C CPU 4. PaddlePaddle 版本:2.3 5. PaddleNLP 版本:2.3 6. 性能数据单位是 QPS。QPS 测试方法:固定 batch size 为 32,测试运行时间 total_time,计算 QPS = total_samples / total_time 7. 
精度数据单位:文本分类是 Accuracy,序列标注是 F1-Score,阅读理解是 EM (Exact Match) <a name="CPU性能"></a> ##### CPU 性能 测试环境及说明如上,测试 CPU 性能时,线程数设置为12。 | | TNEWS 性能 | TNEWS 精度 | MSRA_NER 性能 | MSRA_NER 精度 | CMRC2018 性能 | CMRC2018 精度 | | -------------------------- | ------------ | ------------ | ------------- | ------------- | ------------- | ------------- | | ERNIE 3.0-Medium+FP32 | 311.95(1.0X) | 57.45 | 90.91(1.0x) | 93.04 | 33.74(1.0x) | 66.95 | | ERNIE 3.0-Medium+INT8 | 600.35(1.9x) | 56.57(-0.88) | 141.00(1.6x) | 92.64(-0.40) | 56.51(1.7x) | 66.23(-0.72) | | ERNIE 3.0-Medium+裁剪+FP32 | 408.65(1.3x) | 57.31(-0.14) | 122.13(1.3x) | 93.27(+0.23) | 48.47(1.4x) | 65.55(-1.40) | | ERNIE 3.0-Medium+裁剪+INT8 | 704.42(2.3x) | 56.69(-0.76) | 215.58(2.4x) | 92.39(-0.65) | 75.23(2.2x) | 63.47(-3.48) | 三类任务(分类、序列标注、阅读理解)经过相同压缩过程后,加速比达到 2.3 左右。 <a name="GPU性能"></a> ##### GPU 性能 | | TNEWS 性能 | TNEWS 精度 | MSRA_NER 性能 | MSRA_NER 精度 | CMRC2018 性能 | CMRC2018 精度 | | -------------------------- | ------------- | ------------ | ------------- | ------------- | ------------- | ------------- | | ERNIE 3.0-Medium+FP32 | 1123.85(1.0x) | 57.45 | 366.75(1.0x) | 93.04 | 146.84(1.0x) | 66.95 | | ERNIE 3.0-Medium+FP16 | 2672.41(2.4x) | 57.45(0.00) | 840.11(2.3x) | 93.05(0.01) | 303.43(2.1x) | 66.95(0.00) | | ERNIE 3.0-Medium+INT8 | 3226.26(2.9x) | 56.99(-0.46) | 889.33(2.4x) | 92.70(-0.34) | 348.84(2.4x) | 66.32(-0.63 | | ERNIE 3.0-Medium+裁剪+FP32 | 1424.01(1.3x) | 57.31(-0.14) | 454.27(1.2x) | 93.27(+0.23) | 183.77(1.3x) | 65.92(-1.03) | | ERNIE 3.0-Medium+裁剪+FP16 | 3577.62(3.2x) | 57.27(-0.18) | 1138.77(3.1x) | 93.27(+0.23) | 445.71(3.0x) | 65.89(-1.06) | | ERNIE 3.0-Medium+裁剪+INT8 | 3635.48(3.2x) | 57.26(-0.19) | 1105.26(3.0x) | 93.20(+0.16) | 444.27(3.0x) | 66.17(-0.78) | 三类任务(分类、序列标注、阅读理解)经过裁剪 + 量化后加速比均达到 3 倍左右,所有任务上平均精度损失可控制在 0.5 以内(0.46)。 <a name="使用FasterTokenizer加速"></a> ### 使用 FasterTokenizer 加速 FasterTokenizer 是飞桨提供的速度领先的文本处理算子库,集成了 Google 于 2021 年底发布的 LinMaxMatch 算法,该算法引入 Aho-Corasick 将 WordPiece 的时间复杂度从 O(N<sup>2</sup>) 优化到 O(N),已在 Google 搜索业务中大规模上线。FasterTokenizer 速度显著领先,且呈现 batch_size 越大,优势越突出。例如,设置 batch_size = 64 时,FasterTokenizer 切词速度比 HuggingFace 快 28 倍。 在 ERNIE 3.0 轻量级模型裁剪、量化基础上,当设置切词线程数为 4 时,使用 FasterTokenizer 在 NVIDIA Tesla T4 环境下在 IFLYTEK (长文本分类数据集,最大序列长度为 128)数据集上性能提升了 2.39 倍,相比 BERT-Base 性能提升了 7.09 倍,在 Intel(R) Xeon(R) Gold 6271C CPU @ 2.60GHz、线程数为 8 的情况下性能提升了 1.27 倍,相比 BERT-Base 性能提升了 5.13 倍。加速效果如下图所示: <table> <tr> <td><a><img src="https://user-images.githubusercontent.com/26483581/175452331-bc5ff646-90ee-4377-85a5-d5b073a8e7f9.png"></a></td> <td><a><img src="https://user-images.githubusercontent.com/26483581/175452337-e0eff0d3-ed5f-42e7-b06b-caad61f37978.png"></a></td> </tr> </table> 使用 FasterTokenizer 的方式非常简单,在安装 faster_tokenizer 包之后,仅需在 tokenizer 实例化时直接传入 `use_faster=True` 即可。目前已在 Linux 系统下支持 BERT、ERNIE、TinyBERT 等模型。 安装 faster_tokenizer 包的命令: ```shell pip install faster_tokenizer ``` 如需设置切词线程数,需要运行前先设置环境变量 `OMP_NUM_THREADS` : ```shell # 设置切词线程数为 4 export OMP_NUM_THREADS=4 ``` 调用 `from_pretrained` 时只需轻松传入一个参数 `use_faster=True`: ```python from paddlenlp.transformers import AutoTokenizer AutoTokenizer.from_pretrained("ernie-3.0-medium-zh", use_faster=True) ``` <a name="部署"></a> ## 部署 我们为 ERNIE 3.0 提供了多种部署方案,可以满足不同场景下的部署需求,请根据实际情况进行选择。 <p align="center"> <img width="700" alt="image" src="https://user-images.githubusercontent.com/26483581/175260618-610a160c-270c-469a-842c-96871243c4ed.png"> </p> <a name="Python部署"></a> ### Python 部署 Python部署请参考:[Python部署指南](./deploy/python/README.md) <a name="服务化部署"></a> ### 服务化部署 - [Triton 
Inference Server服务化部署指南](./deploy/triton/README.md)
- [Paddle Serving服务化部署指南](./deploy/serving/README.md)

<a name="Paddle2ONNX部署"></a>

### Paddle2ONNX 部署

ONNX 导出及 ONNXRuntime 部署请参考:[ONNX导出及ONNXRuntime部署指南](./deploy/paddle2onnx/README.md)

### Paddle Lite 移动端部署

即将支持,敬请期待

<a name="Notebook教程"></a>

## Notebook教程

- [【快速上手ERNIE 3.0】中文情感分析实战](https://aistudio.baidu.com/aistudio/projectdetail/3955163)
- [【快速上手ERNIE 3.0】法律文本多标签分类实战](https://aistudio.baidu.com/aistudio/projectdetail/3996601)
- [【快速上手ERNIE 3.0】中文语义匹配实战](https://aistudio.baidu.com/aistudio/projectdetail/3986803)
- [【快速上手ERNIE 3.0】MSRA序列标注实战](https://aistudio.baidu.com/aistudio/projectdetail/3989073)
- [【快速上手ERNIE 3.0】机器阅读理解实战](https://aistudio.baidu.com/aistudio/projectdetail/2017189)
- [【快速上手ERNIE 3.0】对话意图识别](https://aistudio.baidu.com/aistudio/projectdetail/2017202?contributionType=1)

<a name="参考文献"></a>

## 参考文献

* Sun Y, Wang S, Feng S, et al. ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation[J]. arXiv preprint arXiv:2107.02137, 2021.
* Su W, Chen X, Feng S, et al. ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression[J]. arXiv preprint arXiv:2106.02241, 2021.
* Wang S, Sun Y, Xiang Y, et al. ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation[J]. arXiv preprint arXiv:2112.12731, 2021.
lisaterumi/postagger-portuguese
3f35db993e5ceffbc56a85f11fd608ab30c0f44e
2022-07-25T21:40:35.000Z
[ "pytorch", "bert", "token-classification", "pt", "dataset:MacMorpho", "transformers", "autotrain_compatible" ]
token-classification
false
lisaterumi
null
lisaterumi/postagger-portuguese
224
1
transformers
3,487
---
language: "pt"
widget:
- text: "Tinha uma pedra no meio do caminho."
- text: "Vamos tomar um café quentinho?"
- text: "Como você se chama?"
datasets:
- MacMorpho
---

# POS-Tagger Portuguese

We fine-tuned the [BERTimbau](https://github.com/neuralmind-ai/portuguese-bert/) model on the [MacMorpho](http://nilc.icmc.usp.br/macmorpho/) corpus for the POS-tagging task (10 epochs), achieving an overall F1-score of 0.9826.

Metrics:

```
              precision    recall  f1-score   support

    accuracy                           0.98     33729
   macro avg       0.96      0.95      0.95     33729
weighted avg       0.98      0.98      0.98     33729

F1: 0.9826
Accuracy: 0.9826
```

Parameters:

```
nclasses = 27
nepochs = 30
batch_size = 32
batch_status = 32
learning_rate = 1e-5
early_stop = 3
max_length = 200
```

Tags:

| Tag | Meaning |
| ------------------- | ------------------- |
| ADJ | Adjetivo |
| ADV | Advérbio |
| ADV-KS | Advérbio conjuntivo subordinado |
| ADV-KS-REL | Advérbio relativo subordinado |
| ART | Artigo |
| CUR | Moeda |
| IN | Interjeição |
| KC | Conjunção coordenativa |
| KS | Conjunção subordinativa |
| N | Substantivo |
| NPROP | Substantivo próprio |
| NUM | Número |
| PCP | Particípio |
| PDEN | Palavra denotativa |
| PREP | Preposição |
| PROADJ | Pronome Adjetivo |
| PRO-KS | Pronome conjuntivo subordinado |
| PRO-KS-REL | Pronome relativo conectivo subordinado |
| PROPESS | Pronome pessoal |
| PROSUB | Pronome nominal |
| V | Verbo |
| VAUX | Verbo auxiliar |

## Questions?

Please post a GitHub issue on the [NLP Portuguese POS-Tagger](https://github.com/lisaterumi/nlp-portuguese-postagger) repository.
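
## Usage

A minimal inference sketch with the 🤗 Transformers `pipeline` API; the checkpoint name comes from this card and the example sentence is one of the widget inputs above:

```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned BERTimbau POS-tagger.
tagger = pipeline("token-classification", model="lisaterumi/postagger-portuguese")

# Print each token with its predicted MacMorpho tag.
for token in tagger("Tinha uma pedra no meio do caminho."):
    print(token["word"], token["entity"])
```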
ionite/DialoGPT-medium-NakaAI
d3906a0dfa6f54491862af0249e5fb15d317aa3b
2022-07-20T07:12:36.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ionite
null
ionite/DialoGPT-medium-NakaAI
224
null
transformers
3,488
--- tags: - conversational --- # NakaAI DialoGPT Model
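
A hedged single-turn chat sketch using the standard DialoGPT generation pattern (the prompt text is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ionite/DialoGPT-medium-NakaAI")
model = AutoModelForCausalLM.from_pretrained("ionite/DialoGPT-medium-NakaAI")

# Encode one user turn, terminated by the EOS token as DialoGPT expects.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
output_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated reply tokens.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```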
Helsinki-NLP/opus-mt-en-trk
54a2a1aa579bd6b91d0f97dac094c0ae81c75902
2021-01-18T08:18:08.000Z
[ "pytorch", "marian", "text2text-generation", "en", "tt", "cv", "tk", "tr", "ba", "trk", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-trk
223
0
transformers
3,489
--- language: - en - tt - cv - tk - tr - ba - trk tags: - translation license: apache-2.0 --- ### eng-trk * source group: English * target group: Turkic languages * OPUS readme: [eng-trk](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-trk/README.md) * model: transformer * source language(s): eng * target language(s): aze_Latn bak chv crh crh_Latn kaz_Cyrl kaz_Latn kir_Cyrl kjh kum ota_Arab ota_Latn sah tat tat_Arab tat_Latn tuk tuk_Latn tur tyv uig_Arab uig_Cyrl uzb_Cyrl uzb_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2016-entr-engtur.eng.tur | 10.1 | 0.437 | | newstest2016-entr-engtur.eng.tur | 9.2 | 0.410 | | newstest2017-entr-engtur.eng.tur | 9.0 | 0.410 | | newstest2018-entr-engtur.eng.tur | 9.2 | 0.413 | | Tatoeba-test.eng-aze.eng.aze | 26.8 | 0.577 | | Tatoeba-test.eng-bak.eng.bak | 7.6 | 0.308 | | Tatoeba-test.eng-chv.eng.chv | 4.3 | 0.270 | | Tatoeba-test.eng-crh.eng.crh | 8.1 | 0.330 | | Tatoeba-test.eng-kaz.eng.kaz | 11.1 | 0.359 | | Tatoeba-test.eng-kir.eng.kir | 28.6 | 0.524 | | Tatoeba-test.eng-kjh.eng.kjh | 1.0 | 0.041 | | Tatoeba-test.eng-kum.eng.kum | 2.2 | 0.075 | | Tatoeba-test.eng.multi | 19.9 | 0.455 | | Tatoeba-test.eng-ota.eng.ota | 0.5 | 0.065 | | Tatoeba-test.eng-sah.eng.sah | 0.7 | 0.030 | | Tatoeba-test.eng-tat.eng.tat | 9.7 | 0.316 | | Tatoeba-test.eng-tuk.eng.tuk | 5.9 | 0.317 | | Tatoeba-test.eng-tur.eng.tur | 34.6 | 0.623 | | Tatoeba-test.eng-tyv.eng.tyv | 5.4 | 0.210 | | Tatoeba-test.eng-uig.eng.uig | 0.1 | 0.155 | | Tatoeba-test.eng-uzb.eng.uzb | 3.4 | 0.275 | ### System Info: - hf_name: eng-trk - source_languages: eng - target_languages: trk - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-trk/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'tt', 'cv', 'tk', 'tr', 'ba', 'trk'] - src_constituents: {'eng'} - tgt_constituents: {'kir_Cyrl', 'tat_Latn', 'tat', 'chv', 'uzb_Cyrl', 'kaz_Latn', 'aze_Latn', 'crh', 'kjh', 'uzb_Latn', 'ota_Arab', 'tuk_Latn', 'tuk', 'tat_Arab', 'sah', 'tyv', 'tur', 'uig_Arab', 'crh_Latn', 'kaz_Cyrl', 'uig_Cyrl', 'kum', 'ota_Latn', 'bak'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: trk - short_pair: en-trk - chrF2_score: 0.455 - bleu: 19.9 - brevity_penalty: 1.0 - ref_len: 57072.0 - src_name: English - tgt_name: Turkic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: trk - prefer_old: False - long_pair: eng-trk - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
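
### Usage

As noted above, a sentence-initial `>>id<<` token selects the target language. A minimal translation sketch (the English input is illustrative; `>>tur<<` picks Turkish from the target-language list above):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-trk"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>tur<< prefix selects Turkish as the target language.
batch = tokenizer([">>tur<< How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```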
SEBIS/legal_t5_small_summ_it
1811eb0c9d38453c2cc244ab9b53dd2d4d32c637
2021-06-23T11:23:40.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "Italian", "dataset:jrc-acquis", "transformers", "summarization Italian model", "autotrain_compatible" ]
text2text-generation
false
SEBIS
null
SEBIS/legal_t5_small_summ_it
222
null
transformers
3,490
--- language: Italian tags: - summarization Italian model datasets: - jrc-acquis widget: - text: "LA COMMISSIONE DELLE COMUNITÀ EUROPEE, visto il trattato che istituisce la Comunità europea, visto il regolamento (CEE) n. 2082/92 del Consiglio, del 14 luglio 1992, relativo alle attestazioni di specificità dei prodotti agricoli ed alimentari(1), in particolare l'articolo 9, paragrafo 1, considerando quanto segue: (1) A norma dell'articolo 7 del regolamento (CEE) n. 2082/92, la Finlandia ha trasmesso alla Commissione una domanda di registrazione della denominazione %quot%Kalakukko%quot% quale attestazione di specificità. (2) La dicitura %quot%specialità tradizionale garantita%quot% può applicarsi soltanto a denominazioni figuranti nel summenzionato albo. (3) Nessuna dichiarazione di opposizione, ai sensi dell'articolo 8 del summenzionato regolamento, è stata trasmessa alla Commissione a seguito della pubblicazione nella Gazzetta ufficiale delle Comunità europee(2) della denominazione figurante nell'allegato del presente regolamento. (4) Di conseguenza, la denominazione di cui all'allegato può essere iscritta nell'albo delle attestazioni di specificità e beneficiare pertanto della protezione a livello comunitario quale specialità tradizionale garantita nella Comunità in virtù dell'articolo 13, paragrafo 2, del regolamento (CEE) n. 2082/92. (5) L'allegato del presente regolamento completa l'allegato del regolamento (CE) n. 2301/97 della Commissione(3), modificato da ultimo dal regolamento (CE) n. 688/2002(4), HA ADOTTATO IL PRESENTE REGOLAMENTO: Articolo 1 La denominazione di cui all'allegato del presente regolamento è aggiunta all'allegato del regolamento (CE) n. 2301/97 e iscritta nell'albo delle attestazioni di specificità, conformemente all'articolo 9, paragrafo 1, del regolamento (CEE) n. 2082/92. Tale denominazione è protetta ai sensi dell'articolo 13, paragrafo 2, del summenzionato regolamento. Articolo 2 Il presente regolamento entra in vigore il ventesimo giorno successivo alla pubblicazione nella Gazzetta ufficiale delle Comunità europee. Il presente regolamento è obbligatorio in tutti i suoi elementi e direttamente applicabile in ciascuno degli Stati membri. Fatto a Bruxelles, il 15 luglio 2002. Per la Commissione Franz Fischler Membro della Commissione (1) GU L 208 del 24.7.1992, pag. 9. (2) GU C 235 del 21.8.2001, pag. 12. (3) GU L 319 del 21.11.1997, pag. 8. (4) GU L 106 del 23.4.2002, pag. 7. ALLEGATO Prodotti della panetteria, della pasticceria, della confetteria o della biscotteria - Kalakukko " --- # legal_t5_small_summ_it model Model for summarization of legal text written in Italian. It was first released in [this repository](https://github.com/agemagician/LegalTrans). This model was trained on the jrc-acquis parallel corpus. ## Model description legal_t5_small_summ_it is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model that scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters. ## Intended uses & limitations The model can be used for summarization of legal texts written in Italian.
### How to use Here is how to use this model to summarize legal text written in Italian in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline pipeline = TranslationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_it"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/legal_t5_small_summ_it", do_lower_case=False), device=0, # set device=-1 to run on CPU ) it_text = "LA COMMISSIONE DELLE COMUNITÀ EUROPEE, visto il trattato che istituisce la Comunità europea, visto il regolamento (CEE) n. 2082/92 del Consiglio, del 14 luglio 1992, relativo alle attestazioni di specificità dei prodotti agricoli ed alimentari(1), in particolare l'articolo 9, paragrafo 1, considerando quanto segue: (1) A norma dell'articolo 7 del regolamento (CEE) n. 2082/92, la Finlandia ha trasmesso alla Commissione una domanda di registrazione della denominazione %quot%Kalakukko%quot% quale attestazione di specificità. (2) La dicitura %quot%specialità tradizionale garantita%quot% può applicarsi soltanto a denominazioni figuranti nel summenzionato albo. (3) Nessuna dichiarazione di opposizione, ai sensi dell'articolo 8 del summenzionato regolamento, è stata trasmessa alla Commissione a seguito della pubblicazione nella Gazzetta ufficiale delle Comunità europee(2) della denominazione figurante nell'allegato del presente regolamento. (4) Di conseguenza, la denominazione di cui all'allegato può essere iscritta nell'albo delle attestazioni di specificità e beneficiare pertanto della protezione a livello comunitario quale specialità tradizionale garantita nella Comunità in virtù dell'articolo 13, paragrafo 2, del regolamento (CEE) n. 2082/92. (5) L'allegato del presente regolamento completa l'allegato del regolamento (CE) n. 2301/97 della Commissione(3), modificato da ultimo dal regolamento (CE) n. 688/2002(4), HA ADOTTATO IL PRESENTE REGOLAMENTO: Articolo 1 La denominazione di cui all'allegato del presente regolamento è aggiunta all'allegato del regolamento (CE) n. 2301/97 e iscritta nell'albo delle attestazioni di specificità, conformemente all'articolo 9, paragrafo 1, del regolamento (CEE) n. 2082/92. Tale denominazione è protetta ai sensi dell'articolo 13, paragrafo 2, del summenzionato regolamento. Articolo 2 Il presente regolamento entra in vigore il ventesimo giorno successivo alla pubblicazione nella Gazzetta ufficiale delle Comunità europee. Il presente regolamento è obbligatorio in tutti i suoi elementi e direttamente applicabile in ciascuno degli Stati membri. Fatto a Bruxelles, il 15 luglio 2002. Per la Commissione Franz Fischler Membro della Commissione (1) GU L 208 del 24.7.1992, pag. 9. (2) GU C 235 del 21.8.2001, pag. 12. (3) GU L 319 del 21.11.1997, pag. 8. (4) GU L 106 del 23.4.2002, pag. 7. ALLEGATO Prodotti della panetteria, della pasticceria, della confetteria o della biscotteria - Kalakukko " pipeline([it_text], max_length=512) ``` ## Training data The legal_t5_small_summ_it model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset consisting of 22 thousand texts. ## Training procedure The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has approximately 60M parameters (consistent with the t5-small configuration described above) and uses an encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to build the vocabulary (with byte-pair encoding) that is used with this model. ## Evaluation results When the model is used on the summarization test dataset, it achieves the following results: Test results: | Model | Rouge1 | Rouge2 | Rouge Lsum | |:-----:|:-----:|:-----:|:-----:| | legal_t5_small_summ_it | 75.07 | 65.53 | 73.85 | ### BibTeX entry and citation info > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
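For reference, Rouge numbers like those in the evaluation table above can be computed with the `rouge_score` package; a minimal sketch (the reference and candidate strings are illustrative placeholders, not data from this evaluation):

```python
from rouge_score import rouge_scorer

# rouge1 / rouge2 / rougeLsum correspond to the columns reported above
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=False)
reference = "La denominazione Kalakukko è iscritta nell'albo delle attestazioni di specificità."
candidate = "La denominazione Kalakukko viene aggiunta all'albo delle attestazioni di specificità."
print(scorer.score(reference, candidate))
```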
ethanyt/guwen-sent
2bd35fb055cf935093e63a0534ab042668451f21
2021-06-18T04:51:54.000Z
[ "pytorch", "roberta", "text-classification", "zh", "transformers", "chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "sentiment classificatio", "license:apache-2.0" ]
text-classification
false
ethanyt
null
ethanyt/guwen-sent
222
2
transformers
3,491
--- language: - "zh" thumbnail: "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png" tags: - "chinese" - "classical chinese" - "literary chinese" - "ancient chinese" - "bert" - "pytorch" - "sentiment classificatio" license: "apache-2.0" pipeline_tag: "text-classification" widget: - text: "滚滚长江东逝水,浪花淘尽英雄" - text: "寻寻觅觅,冷冷清清,凄凄惨惨戚戚" - text: "执手相看泪眼,竟无语凝噎,念去去,千里烟波,暮霭沉沉楚天阔。" - text: "忽如一夜春风来,干树万树梨花开" --- # Guwen Sent A Classical Chinese Poem Sentiment Classifier. See also: <a href="https://github.com/ethan-yt/guwen-models"> <img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" /> </a> <a href="https://github.com/ethan-yt/cclue/"> <img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" /> </a> <a href="https://github.com/ethan-yt/guwenbert/"> <img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" /> </a>
cynthiachan/finetuned-10pct-cti
5b950cbf07e4799a806b5a85b66f4c249b55d6a1
2022-07-15T08:23:34.000Z
[ "pytorch", "bert", "token-classification", "dataset:cynthiachan/FeedRef_10pct", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
token-classification
false
cynthiachan
null
cynthiachan/finetuned-10pct-cti
222
null
transformers
3,492
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cynthiachan/FeedRef_10pct metrics: - precision - recall - f1 - accuracy model-index: - name: training_3 results: - task: name: Token Classification type: token-classification dataset: name: cynthiachan/FeedRef_10pct type: cynthiachan/FeedRef_10pct args: FeedRef_10pct metrics: - name: Precision type: precision value: 0.6786570743405276 - name: Recall type: recall value: 0.8372781065088757 - name: F1 type: f1 value: 0.7496688741721853 - name: Accuracy type: accuracy value: 0.9678051829961355 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # training_3 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the cynthiachan/FeedRef_10pct dataset. It achieves the following results on the evaluation set: - Loss: 0.1302 - Precision: 0.6787 - Recall: 0.8373 - F1: 0.7497 - Accuracy: 0.9678 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.3927 | 0.37 | 500 | 0.2362 | 0.2854 | 0.4112 | 0.3370 | 0.9431 | | 0.1994 | 0.75 | 1000 | 0.1656 | 0.4762 | 0.6509 | 0.55 | 0.9550 | | 0.1541 | 1.12 | 1500 | 0.1437 | 0.5608 | 0.7367 | 0.6368 | 0.9602 | | 0.1275 | 1.5 | 2000 | 0.1497 | 0.6090 | 0.7604 | 0.6763 | 0.9638 | | 0.1184 | 1.87 | 2500 | 0.1302 | 0.6787 | 0.8373 | 0.7497 | 0.9678 | | 0.0753 | 2.25 | 3000 | 0.1375 | 0.7454 | 0.8314 | 0.7860 | 0.9698 | | 0.0613 | 2.62 | 3500 | 0.1465 | 0.8254 | 0.8669 | 0.8456 | 0.9736 | | 0.0577 | 3.0 | 4000 | 0.1334 | 0.8144 | 0.8698 | 0.8412 | 0.9746 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
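A minimal inference sketch with the `token-classification` pipeline (the CTI entity label set comes from the FeedRef dataset and is not documented in this card; the sample sentence is illustrative only):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cynthiachan/finetuned-10pct-cti",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("The dropper wrote svchost.exe to the temp folder before contacting its C2 server at 192.0.2.10."))
```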
bigscience/distill-bloom-1b3
44d410716eb93a1cc796e5ac45fc8d7e57a81b1d
2022-07-18T09:01:28.000Z
[ "pytorch", "bloom", "feature-extraction", "ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zhs", "zht", "zu", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "transformers", "license:bigscience-bloom-rail-1.0", "text-generation" ]
text-generation
false
bigscience
null
bigscience/distill-bloom-1b3
222
null
transformers
3,493
--- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu pipeline_tag: text-generation --- # <span style="color:red"><b>WARNING:</b> This is an <b>intermediary checkpoint</b> and WIP project. It is not fully trained yet. You might want to use [Bloom-1B3](https://huggingface.co/bigscience/bloom-1b3) if you want a model that has completed training. This model is a distilled version of [Bloom-1B3](https://huggingface.co/bigscience/bloom-1b3) </span> <h1 style='text-align: center '>BLOOM LM</h1> <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Version 1.0 / 18.Jul.2022 ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Data](#training-data) 4. [Risks and Limitations](#risks-and-limitations) 5. [Evaluation](#evaluation) 6. [Recommendations](#recommendations) 7. [Glossary and Calculations](#glossary-and-calculations) 8. [More Information](#more-information) 9. [Model Card Authors](#model-card-authors) ## Model Details ### Basics *This section provides information for anyone who wants to know about the model.* <details> <summary>Click to expand</summary> <br/> **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* **Model Type:** Transformer-based Language Model **Version:** 1.0.0 **Languages:** Multiple; see [training data](#training-data) **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) **Release Date Estimate:** Monday, 11.July.2022 **Send Questions to:** [email protected] **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022 **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* </details> ### Technical Specifications *This section provides information for people who work on model development.* <details> <summary>Click to expand</summary><br/> Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. 
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)): * Decoder-only architecture * Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf)) * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions * 288 million parameters: * 12 layers, 8 attention heads * Hidden layers are 1024-dimensional * Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization)) **Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)). **Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)). * Hardware: 384 A100 80GB GPUs (48 nodes): * Additional 32 A100 80GB GPUs (4 nodes) in reserve * 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links * CPU: AMD * CPU memory: 512GB per node * GPU memory: 640GB per node * Inter-node connect: Omni-Path Architecture (OPA) * NCCL-communications network: a fully dedicated subnet * Disc IO network: shared network with other types of nodes * Software: * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed)) * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed)) * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch)) * apex ([Github link](https://github.com/NVIDIA/apex)) #### **Training** _In progress._ Current training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11-176B-ml-logs/) - Checkpoint size: - Bf16 weights: 329GB - Full checkpoint with optimizer states: 2.3TB - Training throughput: About 150 TFLOP per GPU per second - Number of epochs: 1 (*current target*) - Dates: - Started 11th March, 2022 11:42am PST - Estimated end: 5th July, 2022 - Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments) - Server training location: Île-de-France, France #### **Tokenization** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. </details> ### Environmental Impact <details> <summary>Click to expand</summary><br/> The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. 
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)* **Estimated electricity usage:** *(Forthcoming upon completion of training.)* </details> <p>&nbsp;</p> ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* <details> <summary>Click to expand</summary><br/> ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### **Direct Use** - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings #### **Downstream Use** - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### **Out-of-scope Uses** Using the model in [high-stakes](#high-stakes) settings is out of scope for this model.  The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### **Misuse** Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ### Intended Users #### **Direct Users** - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups #### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) #### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM </details> <p>&nbsp;</p> ## Training Data *This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.* <details> <summary>Click to expand</summary><br/> Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus). Training data includes: - 45 natural languages - 12 programming languages - In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.) #### **Languages** The pie chart shows the distribution of languages in training data. ![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true) The following table shows the further distribution of Niger-Congo and Indic languages in the training data. <details> <summary>Click to expand</summary><br/> | Niger Congo | Percentage | | Indic | Percentage | |----------------|------------ |------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Northern Sotho | 0.0002 | | Malayalam | 0.10 | | Fon | 0.0002 | | Urdu | 0.10 | | Kirundi | 0.0003 | | Tamil | 0.20 | | Wolof | 0.0004 | | Bengali | 0.50 | | Kuganda | 0.0004 | | Hindi | 0.70 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | </details> The following table shows the distribution of programming languages. <details> <summary>Click to expand</summary><br/> | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 678,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | </details> </details> <p>&nbsp;</p> ## Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* <details> <summary>Click to expand</summary><br/> Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs </details> <p>&nbsp;</p> ## Evaluation *This section describes the evaluation protocols and provides the results.* <details> <summary>Click to expand</summary><br/> ### Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying 
model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ### Factors *This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ### Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Train-time Evaluation:** As of 25.May.2022, 15:00 PST: - Training Loss: 2.0 - Validation Loss: 2.2 - Perplexity: 8.9 (More evaluation scores forthcoming at the end of model training.) </details> <p>&nbsp;</p> ## Recommendations *This section provides information on warnings and potential mitigations.* <details> <summary>Click to expand</summary><br/> - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models pretrained with the LLM should include an updated Model Card. - Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. </details> <p>&nbsp;</p> ## Glossary and Calculations *This section defines common terms and how metrics are calculated.* <details> <summary>Click to expand</summary><br/> - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). - <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). 
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. </details> <p>&nbsp;</p> ## More Information <details> <summary>Click to expand</summary><br/> ### Dataset Creation Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md ### Initial Results Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book </details> <p>&nbsp;</p> ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay
dbmdz/electra-base-ukrainian-cased-generator
2f854462bb1b3da0a2b033a2dd8280906f62c164
2020-11-10T21:15:17.000Z
[ "pytorch", "electra", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
dbmdz
null
dbmdz/electra-base-ukrainian-cased-generator
221
null
transformers
3,494
Entry not found
ionite/DialoGPT-medium-orangeAI
70f8eea3121cb0768be68531ec999377de8bd55c
2021-11-07T18:01:49.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
ionite
null
ionite/DialoGPT-medium-orangeAI
221
1
transformers
3,495
--- tags: - conversational --- # orangeAI DialoGPT Model
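A minimal chat sketch using the usual DialoGPT pattern (single turn; generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ionite/DialoGPT-medium-orangeAI")
model = AutoModelForCausalLM.from_pretrained("ionite/DialoGPT-medium-orangeAI")

# DialoGPT expects each utterance to end with the EOS token
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```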
marefa-nlp/marefa-ner
97150023f089d776bf025950d1e4506625c71c34
2021-12-04T05:21:57.000Z
[ "pytorch", "xlm-roberta", "token-classification", "ar", "dataset:Marefa-NER", "transformers", "autotrain_compatible" ]
token-classification
false
marefa-nlp
null
marefa-nlp/marefa-ner
221
2
transformers
3,496
--- language: ar datasets: - Marefa-NER widget: - text: "في استاد القاهرة، بدأ حفل افتتاح بطولة كأس الأمم الأفريقية بحضور رئيس الجمهورية و رئيس الاتحاد الدولي لكرة القدم" --- # Tebyan تبيـان ## Marefa Arabic Named Entity Recognition Model ## نموذج المعرفة لتصنيف أجزاء النص <p align="center"> <img src="https://huggingface.co/marefa-nlp/marefa-ner/resolve/main/assets/marefa-tebyan-banner.png" alt="Marefa Arabic NER Model" width="600"/> </p> --------- **Version**: 1.3 **Last Update:** 3-12-2021 ## Model description **Marefa-NER** is a large Arabic Named Entity Recognition (NER) model built on a completely new dataset that targets extracting up to 9 different types of entities ``` Person, Location, Organization, Nationality, Job, Product, Event, Time, Art-Work ``` نموذج المعرفة لتصنيف أجزاء النص. نموذج جديد كليا من حيث البيانات المستخدمة في تدريب النموذج. كذلك يستهدف النموذج تصنيف حتى 9 أنواع مختلفة من أجزاء النص ``` شخص - مكان - منظمة - جنسية - وظيفة - منتج - حدث - توقيت - عمل إبداعي ``` ## How to use كيف تستخدم النموذج *You can test the model quickly by checking this [Colab notebook](https://colab.research.google.com/drive/1OGp9Wgm-oBM5BBhTLx6Qow4dNRSJZ-F5?usp=sharing)* ---- Install the following Python packages `$ pip3 install transformers==4.8.0 nltk==3.5 protobuf==3.15.3 torch==1.9.0 ` > If you are using `Google Colab`, please restart your runtime after installing the packages. ----------- ```python from transformers import AutoTokenizer, AutoModelForTokenClassification import torch import numpy as np import nltk nltk.download('punkt') from nltk.tokenize import word_tokenize custom_labels = ["O", "B-job", "I-job", "B-nationality", "B-person", "I-person", "B-location","B-time", "I-time", "B-event", "I-event", "B-organization", "I-organization", "I-location", "I-nationality", "B-product", "I-product", "B-artwork", "I-artwork"] def _extract_ner(text: str, model: AutoModelForTokenClassification, tokenizer: AutoTokenizer, start_token: str="▁"): tokenized_sentence = tokenizer([text], padding=True, truncation=True, return_tensors="pt") tokenized_sentences = tokenized_sentence['input_ids'].numpy() with torch.no_grad(): output = model(**tokenized_sentence) last_hidden_states = output[0].numpy() label_indices = np.argmax(last_hidden_states[0], axis=1) tokens = tokenizer.convert_ids_to_tokens(tokenized_sentences[0]) special_tags = set(tokenizer.special_tokens_map.values()) grouped_tokens = [] for token, label_idx in zip(tokens, label_indices): if token not in special_tags: if not token.startswith(start_token) and len(token.replace(start_token,"").strip()) > 0: grouped_tokens[-1]["token"] += token else: grouped_tokens.append({"token": token, "label": custom_labels[label_idx]}) # extract entities ents = [] prev_label = "O" for token in grouped_tokens: label = token["label"].replace("I-","").replace("B-","") if token["label"] != "O": if label != prev_label: ents.append({"token": [token["token"]], "label": label}) else: ents[-1]["token"].append(token["token"]) prev_label = label # group tokens ents = [{"token": "".join(rec["token"]).replace(start_token," ").strip(), "label": rec["label"]} for rec in ents ] return ents model_cp = "marefa-nlp/marefa-ner" tokenizer = AutoTokenizer.from_pretrained(model_cp) model = AutoModelForTokenClassification.from_pretrained(model_cp, num_labels=len(custom_labels)) samples = [ "تلقى تعليمه في الكتاب ثم انضم الى الأزهر عام 1873م. 
تعلم على يد السيد جمال الدين الأفغاني والشيخ محمد عبده", "بعد عودته إلى القاهرة، التحق نجيب الريحاني فرقة جورج أبيض، الذي كان قد ضمَّ - قُبيل ذلك - فرقته إلى فرقة سلامة حجازي . و منها ذاع صيته", "في استاد القاهرة، قام حفل افتتاح بطولة كأس الأمم الأفريقية بحضور رئيس الجمهورية و رئيس الاتحاد الدولي لكرة القدم", "من فضلك أرسل هذا البريد الى صديقي جلال الدين في تمام الساعة الخامسة صباحا في يوم الثلاثاء القادم", "امبارح اتفرجت على مباراة مانشستر يونايتد مع ريال مدريد في غياب الدون كرستيانو رونالدو", "لا تنسى تصحيني الساعة سبعة, و ضيف في الجدول اني احضر مباراة نادي النصر غدا", ] # [optional] samples = [ " ".join(word_tokenize(sample.strip())) for sample in samples if sample.strip() != "" ] for sample in samples: ents = _extract_ner(text=sample, model=model, tokenizer=tokenizer, start_token="▁") print(sample) for ent in ents: print("\t",ent["token"],"==>",ent["label"]) print("========\n") ``` Output ``` تلقى تعليمه في الكتاب ثم انضم الى الأزهر عام 1873م . تعلم على يد السيد جمال الدين الأفغاني والشيخ محمد عبده الأزهر ==> organization عام 1873م ==> time السيد جمال الدين الأفغاني ==> person محمد عبده ==> person ======== بعد عودته إلى القاهرة، التحق نجيب الريحاني فرقة جورج أبيض، الذي كان قد ضمَّ - قُبيل ذلك - فرقته إلى فرقة سلامة حجازي . و منها ذاع صيته القاهرة، ==> location نجيب الريحاني ==> person فرقة جورج أبيض، ==> organization فرقة سلامة حجازي ==> organization ======== في استاد القاهرة، قام حفل افتتاح بطولة كأس الأمم الأفريقية بحضور رئيس الجمهورية و رئيس الاتحاد الدولي لكرة القدم استاد القاهرة، ==> location بطولة كأس الأمم الأفريقية ==> event رئيس الجمهورية ==> job رئيس ==> job الاتحاد الدولي لكرة القدم ==> organization ======== من فضلك أرسل هذا البريد الى صديقي جلال الدين في تمام الساعة الخامسة صباحا في يوم الثلاثاء القادم جلال الدين ==> person الساعة الخامسة صباحا ==> time يوم الثلاثاء القادم ==> time ======== امبارح اتفرجت على مباراة مانشستر يونايتد مع ريال مدريد في غياب الدون كرستيانو رونالدو مانشستر يونايتد ==> organization ريال مدريد ==> organization كرستيانو رونالدو ==> person ======== لا تنسى تصحيني الساعة سبعة , و ضيف في الجدول اني احضر مباراة نادي النصر غدا الساعة سبعة ==> time نادي النصر ==> organization غدا ==> time ======== ``` ## Fine-Tuning Check this [notebook](https://colab.research.google.com/drive/1WUYrnmDFFEItqGMvbyjqZEJJqwU7xQR-?usp=sharing) to fine-tune the NER model ## Evaluation We tested the model against a test set of 1959 sentences. 
The results are shown in the following table: | type | f1-score | precision | recall | support | |:-------------|-----------:|------------:|---------:|----------:| | person | 0.93298 | 0.931479 | 0.934487 | 4335 | | location | 0.891537 | 0.896926 | 0.886212 | 4939 | | time | 0.873003 | 0.876087 | 0.869941 | 1853 | | nationality | 0.871246 | 0.843153 | 0.901277 | 2350 | | job | 0.837656 | 0.79912 | 0.880097 | 2477 | | organization | 0.781317 | 0.773328 | 0.789474 | 2299 | | event | 0.686695 | 0.733945 | 0.645161 | 744 | | artwork | 0.653552 | 0.678005 | 0.630802 | 474 | | product | 0.625483 | 0.553531 | 0.718935 | 338 | | **weighted avg** | 0.859008 | 0.852365 | 0.86703 | 19809 | | **micro avg** | 0.858771 | 0.850669 | 0.86703 | 19809 | | **macro avg** | 0.79483 | 0.787286 | 0.806265 | 19809 | ## Acknowledgment شكر و تقدير The data used to train this model was prepared and reviewed by a group of volunteers who spent many hours curating and revising it: قام بإعداد البيانات التي تم تدريب النموذج عليها, مجموعة من المتطوعين الذين قضوا ساعات يقومون بتنقيح البيانات و مراجعتها - على سيد عبد الحفيظ - إشراف - نرمين محمد عطيه - صلاح خيرالله - احمد علي عبدربه - عمر بن عبد العزيز سليمان - محمد ابراهيم الجمال - عبدالرحمن سلامه خلف - إبراهيم كمال محمد سليمان - حسن مصطفى حسن - أحمد فتحي سيد - عثمان مندو - عارف الشريف - أميرة محمد محمود - حسن سعيد حسن - عبد العزيز علي البغدادي - واثق عبدالملك الشويطر - عمرو رمضان عقل الحفناوي - حسام الدين أحمد على - أسامه أحمد محمد محمد - حاتم محمد المفتي - عبد الله دردير - أدهم البغدادي - أحمد صبري - عبدالوهاب محمد محمد - أحمد محمد عوض
prithivida/active_to_passive_styletransfer
d3deb88bab2ae342e5233d160eb0d454d7eb2f57
2021-06-23T13:43:58.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
prithivida
null
prithivida/active_to_passive_styletransfer
221
1
transformers
3,497
## This model belongs to the Styleformer project [Please refer to github page](https://github.com/PrithivirajDamodaran/Styleformer)
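If you prefer to call the checkpoint directly instead of through Styleformer, a minimal sketch is below. The `transfer Active to Passive: ` task prefix is an assumption based on how the Styleformer repository formats inputs for this model; treat the linked repo as the authoritative usage:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "prithivida/active_to_passive_styletransfer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed task prefix (see note above); the sentence is an arbitrary example
text = "transfer Active to Passive: The cat chased the mouse."
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```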
rovai/chatbotmedium1
b3d784b91ae6ab681c6dcd55f65ce0bc67f23793
2021-12-01T14:06:52.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
rovai
null
rovai/chatbotmedium1
221
null
transformers
3,498
--- tags: - conversational --- # chatbot
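A minimal multi-turn chat sketch using the usual DialoGPT pattern (turn count and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rovai/chatbotmedium1")
model = AutoModelForCausalLM.from_pretrained("rovai/chatbotmedium1")

chat_history_ids = None
for _ in range(3):  # three user turns
    user_ids = tokenizer.encode(input(">> You: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input = torch.cat([chat_history_ids, user_ids], dim=-1) if chat_history_ids is not None else user_ids
    chat_history_ids = model.generate(bot_input, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[0, bot_input.shape[-1]:], skip_special_tokens=True))
```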
ELiRF/NASES
9e805fe577912dbfea0519d0dbf576d8bd6efb94
2022-04-21T14:12:01.000Z
[ "pytorch", "bart", "text2text-generation", "es", "transformers", "summarization", "autotrain_compatible" ]
summarization
false
ELiRF
null
ELiRF/NASES
220
1
transformers
3,499
--- language: es tags: - summarization widget: - text: "La Agencia Valenciana de la Innovación (AVI) financia el desarrollo de un software que integra diferentes modelos y tecnologías para la monitorización y análisis multilingüe de las redes sociales. A través de técnicas de 'deep learning' y procesamiento del lenguaje natural es capaz de interpretar la ironía y las emociones en los textos, incluso en aquellos escritos en idiomas menos extendidos, a menudo no contemplados por las herramientas comerciales. La iniciativa, bautizada como 'Guaita', está liderada por el Instituto Valenciano de Investigación en Inteligencia Artificial (VRAIN), adscrito a la Universidad Politécnica de Valencia (UPV), que cuenta a su vez para su desarrollo con la colaboración del Instituto Valenciano de Informática (ITI) y la Corporación Valenciana de Mitjans de Comunicación (CVMC). De este modo, y a solicitud del usuario o usuaria, monitorizará las redes sociales para obtener la información asociada a los temas objeto de interés y ofrecerá los resultados de forma gráfica, bien a través de una interfaz web, bien mediante la generación de informes. El programa será, además, capaz de determinar la reputación de una empresa o institución a partir de dichos análisis gracias a la combinación de distintas tecnologías de procesamiento e interpretación, destaca la agencia en un comunicado." --- **IMPORTANT:** On the 5th of April 2022, we detected a mistake in the configuration file; thus, the model was not generating the summaries correctly, and it was underperforming in all scenarios. For this reason, if you used the model before that date, we would be grateful if you re-evaluated it before publishing any results obtained with it. We apologize for the inconvenience and thank you for your understanding. # NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish Most of the models proposed in the literature for abstractive summarization are generally suitable for the English language but not for other languages. Multilingual models were introduced to address that language constraint, but despite their applicability being broader than that of the monolingual models, their performance is typically lower, especially for minority languages like Catalan. In this paper, we present a monolingual model for abstractive summarization of textual content in the Catalan language. The model is a Transformer encoder-decoder which is pretrained and fine-tuned specifically for the Catalan language using a corpus of newspaper articles. In the pretraining phase, we introduced several self-supervised tasks to specialize the model on the summarization task and to increase the abstractivity of the generated summaries. To study the performance of our proposal in languages with higher resources than Catalan, we replicate the model and the experimentation for the Spanish language. The usual evaluation metrics, not only the most used ROUGE measure but also other more semantic ones such as BertScore, do not make it possible to correctly evaluate the abstractivity of the generated summaries. In this work, we also present a new metric, called content reordering, to evaluate one of the most common characteristics of abstractive summaries, the rearrangement of the original content. We carried out extensive experimentation to compare the performance of the monolingual models proposed in this work with two of the most widely used multilingual models in text summarization, mBART and mT5. 
The experimentation results support the quality of our monolingual models, especially considering that the multilingual models were pretrained with many more resources than those used in our models. Likewise, it is shown that the pretraining tasks helped to increase the degree of abstractivity of the generated summaries. To our knowledge, this is the first work that explores a monolingual approach for abstractive summarization both in Catalan and Spanish. # The NASes model News Abstractive Summarization for Spanish (NASes) is a Transformer encoder-decoder model, with the same hyper-parameters as BART, to perform summarization of Spanish news articles. It is pre-trained on a combination of several self-supervised tasks that help to increase the abstractivity of the generated summaries. Four pre-training tasks have been combined: sentence permutation, text infilling, Gap Sentence Generation, and Next Segment Generation. Spanish newspapers and Spanish Wikipedia articles were used for pre-training the model (21GB of raw text, 8.5 million documents). NASes is fine-tuned for the summarization task on 1,802,919 (document, summary) pairs from the Dataset for Automatic summarization of Catalan and Spanish newspaper Articles (DACSA). ### BibTeX entry ```bibtex @Article{app11219872, AUTHOR = {Ahuir, Vicent and Hurtado, Lluís-F. and González, José Ángel and Segarra, Encarna}, TITLE = {NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish}, JOURNAL = {Applied Sciences}, VOLUME = {11}, YEAR = {2021}, NUMBER = {21}, ARTICLE-NUMBER = {9872}, URL = {https://www.mdpi.com/2076-3417/11/21/9872}, ISSN = {2076-3417}, DOI = {10.3390/app11219872} } ```
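A minimal usage sketch, assuming the standard `transformers` summarization pipeline works with this BART-based checkpoint (the card does not show usage explicitly, so the call pattern below is an assumption):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ELiRF/NASES")
article = (
    "La Agencia Valenciana de la Innovación (AVI) financia el desarrollo de un software "
    "que integra diferentes modelos y tecnologías para la monitorización y análisis "
    "multilingüe de las redes sociales."
)
print(summarizer(article, max_length=64)[0]["summary_text"])
```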