Dataset columns:

| Column | Type | Range / distinct values |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-02 00:43:14 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 461 values |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 54 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-02 00:42:27 |
| card | string | length 11 – 1.01M |
IIC/BETO_Galen-livingner3
IIC
2023-07-14T06:16:05Z
112
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "biomedical", "clinical", "spanish", "BETO_Galen", "es", "dataset:IIC/livingner3", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-21T15:32:32Z
--- language: es tags: - biomedical - clinical - spanish - BETO_Galen license: mit datasets: - "IIC/livingner3" metrics: - f1 model-index: - name: IIC/BETO_Galen-livingner3 results: - task: type: multi-label-classification dataset: name: livingner3 type: IIC/livingner3 split: test metrics: - name: f1 type: f1 value: 0.0 pipeline_tag: text-classification --- # BETO_Galen-livingner3 This model is a fine-tuned version of BETO_Galen for the livingner3 dataset, used in a benchmark in the paper TODO. The model has an F1 of 0.0. Please refer to the original publication for more information: TODO LINK ## Parameters used | parameter | Value | |-------------------------|:-----:| | batch size | 64 | | learning rate | 4e-05 | | classifier dropout | 0.2 | | warmup ratio | 0 | | warmup steps | 0 | | weight decay | 0 | | optimizer | AdamW | | epochs | 10 | | early stopping patience | 3 | ## BibTeX entry and citation info ```bibtex TODO ```
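A minimal inference sketch (not part of the original card), assuming the standard `transformers` text-classification pipeline applies to this checkpoint; the Spanish example sentence is illustrative only:

```python
# Hedged usage sketch for the multi-label clinical classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="IIC/BETO_Galen-livingner3")

# top_k=None returns a score for every label (the card describes a multi-label task).
print(classifier("El paciente presenta una infección por Escherichia coli.", top_k=None))
```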
IIC/bsc-bio-ehr-es-livingner3
IIC
2023-07-14T06:16:01Z
109
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "biomedical", "clinical", "spanish", "bsc-bio-ehr-es", "es", "dataset:IIC/livingner3", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-26T07:44:03Z
--- language: es tags: - biomedical - clinical - spanish - bsc-bio-ehr-es license: apache-2.0 datasets: - "IIC/livingner3" metrics: - f1 model-index: - name: IIC/bsc-bio-ehr-es-livingner3 results: - task: type: multi-label-classification dataset: name: livingner3 type: IIC/livingner3 split: test metrics: - name: f1 type: f1 value: 0.604 pipeline_tag: text-classification --- # bsc-bio-ehr-es-livingner3 This model is a fine-tuned version of bsc-bio-ehr-es for the livingner3 dataset, used in a benchmark in the paper TODO. The model has an F1 of 0.604. Please refer to the original publication for more information: TODO LINK ## Parameters used | parameter | Value | |-------------------------|:-----:| | batch size | 16 | | learning rate | 3e-05 | | classifier dropout | 0.1 | | warmup ratio | 0 | | warmup steps | 0 | | weight decay | 0 | | optimizer | AdamW | | epochs | 10 | | early stopping patience | 3 | ## BibTeX entry and citation info ```bibtex TODO ```
au2a/whisper-medium-zh-20230712
au2a
2023-07-14T06:01:08Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "zh", "dataset:-", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-12T13:10:31Z
--- language: - zh license: apache-2.0 tags: - whisper - generated_from_trainer datasets: - '-' model-index: - name: whisper-medium-zh-20230712 - au2a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-zh-20230712 - au2a This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the some hakka audio dataset. It achieves the following results on the evaluation set: - Loss: 0.2659 - Cer: 87.6898 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 15000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.2417 | 0.16 | 1000 | 0.3919 | 92.1659 | | 0.1219 | 0.32 | 2000 | 0.2963 | 81.3855 | | 0.0762 | 0.49 | 3000 | 0.2785 | 68.9544 | | 0.0524 | 0.65 | 4000 | 0.2660 | 89.4916 | | 0.0347 | 0.81 | 5000 | 0.2517 | 96.8800 | | 0.0255 | 0.97 | 6000 | 0.2567 | 89.0232 | | 0.0104 | 1.13 | 7000 | 0.2547 | 91.9959 | | 0.0069 | 1.29 | 8000 | 0.2609 | 85.5481 | | 0.0072 | 1.46 | 9000 | 0.2605 | 72.8148 | | 0.0081 | 1.62 | 10000 | 0.2593 | 81.8161 | | 0.0024 | 1.78 | 11000 | 0.2608 | 79.6064 | | 0.0021 | 1.94 | 12000 | 0.2622 | 78.2655 | | 0.0004 | 2.1 | 13000 | 0.2656 | 86.0580 | | 0.0005 | 2.27 | 14000 | 0.2665 | 90.1677 | | 0.0005 | 2.43 | 15000 | 0.2659 | 87.6898 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.11.0+cu113 - Datasets 2.13.1 - Tokenizers 0.13.3
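A minimal transcription sketch, assuming the checkpoint loads through the standard `transformers` automatic-speech-recognition pipeline; `sample.wav` is a placeholder for any local audio file:

```python
# Hedged usage sketch for the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="au2a/whisper-medium-zh-20230712",
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

print(asr("sample.wav")["text"])  # sample.wav: placeholder audio path
```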
aburnazy/opt-350m-hy-wiki-alpaca
aburnazy
2023-07-14T06:00:49Z
146
0
transformers
[ "transformers", "pytorch", "tensorboard", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-14T05:03:29Z
--- tags: - generated_from_trainer model-index: - name: opt-350m-hy-wiki-alpaca results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m-hy-wiki-alpaca This model is a fine-tuned version of [aburnazy/opt-350m-hy-wiki](https://huggingface.co/aburnazy/opt-350m-hy-wiki) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Tokenizers 0.13.3
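A minimal generation sketch, assuming the checkpoint works with the plain `transformers` text-generation pipeline (the card does not document a prompt format); the Armenian prompt is illustrative only:

```python
# Hedged usage sketch for the Armenian OPT fine-tune.
from transformers import pipeline

generator = pipeline("text-generation", model="aburnazy/opt-350m-hy-wiki-alpaca")
print(generator("Հայաստանի մայրաքաղաքը", max_new_tokens=50)[0]["generated_text"])
```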
localmodels/Airoboros-33B-gpt4-1.4-ggml
localmodels
2023-07-14T05:53:49Z
0
0
null
[ "region:us" ]
null
2023-07-14T05:53:49Z
--- duplicated_from: localmodels/LLM --- # Airoboros 33B GPT4 1.4 ggml From: https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4 --- ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` Quantized using an older version of llama.cpp and compatible with llama.cpp from May 19, commit 2d5db48. ### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K` Quantization methods compatible with latest llama.cpp from June 6, commit 2d43387. --- ## Provided files | Name | Quant method | Bits | Size | Max RAM required, no GPU offloading | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | airoboros-33b-gpt4-1.4.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | airoboros-33b-gpt4-1.4.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | airoboros-33b-gpt4-1.4.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | airoboros-33b-gpt4-1.4.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | airoboros-33b-gpt4-1.4.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. | | airoboros-33b-gpt4-1.4.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | airoboros-33b-gpt4-1.4.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | airoboros-33b-gpt4-1.4.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | airoboros-33b-gpt4-1.4.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | airoboros-33b-gpt4-1.4.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | airoboros-33b-gpt4-1.4.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | airoboros-33b-gpt4-1.4.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | airoboros-33b-gpt4-1.4.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | | airoboros-33b-gpt4-1.4.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. 
| --- # Model Card: Jon Durbin's Airoboros 33B GPT4 1.4 ## Overview This is a qlora fine-tune 33b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros This is mostly an extension of the previous gpt-4 series, with a few extras: * fixed (+ more examples of) multi-character, multi-turn conversations * coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed * more roleplay examples * jokes * riddles * all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT) This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora) The prompt it was trained with was: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT: ``` So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). ## Usage To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors. ``` pip install git+https://github.com/jondurbin/FastChat ``` Be sure you are pulling the latest branch! Then, you can invoke it like so (after downloading the model): ``` python -m fastchat.serve.cli \ --model-path airoboros-33b-gpt4-1.4 \ --temperature 0.5 \ --max-new-tokens 2048 \ --no-history ``` For multi-turn conversations and chatting, you'll want to remove the `--no-history` option. ### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT url: https://some.web.site/123 date: 2023-06-01 ... other metdata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. 
- `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described: ``` USER: BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ASSISTANT: ``` ### Coding You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt): ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` ### Word games / trivia ``` USER: This anagram of "silent" is what you do when paying attention and hearing someone. ASSISTANT: listen ``` ``` USER: Give me a list of 7 words that start with EN ASSISTANT: 1. Endlessly 2. Enthusiastically 3. Enormously 4. Enveloping 5. Enduring 6. Enlightening 7. Encouraging ``` ### Multiple choice ``` USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt? A) Mercury B) Venus C) Pluto D) Jupiter ASSISTANT: C, Pluto ``` ### Writing <details> <summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary> Response: ``` ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home. Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes. No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. 
Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice. So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary> ``` </details> ### Jokes ``` USER: Tell me a joke about bananas. ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E. ``` ### Riddles (not great, but slightly better) ``` USER: What has a neck but no head? ASSISTANT: A bottle. ``` ### Multi-character conversations ``` USER: Characters: - Yoda, the ever clever jedi master. - Homer Simpson, you know, the goofy yellow guy. - USER/Jon - don't generate any speech or actions on behalf of Jon Rules: - be sure to use the manerisms and speaking style of the characters correctly - surround character actions with asterisks - always put quotes around words spoken by the characters - use a new line for each character's actions or speech - always prefix actions/speech with the character's name and colon - only generate text and actions on behalf of Yoda or Homer, and never anyone else Conversation will revolve around the grapes, in a local cafe with delicious coffee. Generate a single interaction between Yoda and Homer to start, with one sentence each. ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer." Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!" USER: *enters the cafe* Sorry I'm late guys! ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes." Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!" *Yoda raises an eyebrow* ``` ### Usage and License Notices All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because: - the base model is LLaMA, which has its own special research license - the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clausing saying the data can't be used to create models to compete with openai So, to reiterate: this model (and datasets) cannot be used commercially.
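A hedged local-inference sketch using the `llama-cpp-python` bindings (an assumption; the card itself documents llama.cpp compatibility and the FastChat path). The file name is one of the quants listed above, and a ggml-era build of the bindings is required:

```python
# Hedged sketch: run a ggml quant locally via llama-cpp-python (ggml-era build).
from llama_cpp import Llama

llm = Llama(model_path="airoboros-33b-gpt4-1.4.ggmlv3.q4_K_M.bin", n_ctx=2048)

# Prompt format from the card: preamble, then "USER: ...", then "ASSISTANT:".
prompt = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input. "
    "USER: Write a haiku about quantization. ASSISTANT:"
)
print(llm(prompt, max_tokens=256, temperature=0.5)["choices"][0]["text"])
```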
localmodels/Airoboros-13B-gpt4-1.4-ggml
localmodels
2023-07-14T05:42:11Z
0
0
null
[ "region:us" ]
null
2023-07-14T05:42:11Z
--- duplicated_from: localmodels/LLM --- # Airoboros 13B GPT4 1.4 ggml From: https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4 --- ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` Quantized using an older version of llama.cpp and compatible with llama.cpp from May 19, commit 2d5db48. ### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K` Quantization methods compatible with latest llama.cpp from June 6, commit 2d43387. --- ## Provided files | Name | Quant method | Bits | Size | Max RAM required, no GPU offloading | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | airoboros-13b-gpt4-1.4.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | airoboros-13b-gpt4-1.4.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | airoboros-13b-gpt4-1.4.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | airoboros-13b-gpt4-1.4.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | airoboros-13b-gpt4-1.4.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. | | airoboros-13b-gpt4-1.4.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | airoboros-13b-gpt4-1.4.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | airoboros-13b-gpt4-1.4.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | airoboros-13b-gpt4-1.4.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | airoboros-13b-gpt4-1.4.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | airoboros-13b-gpt4-1.4.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | airoboros-13b-gpt4-1.4.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | airoboros-13b-gpt4-1.4.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | | airoboros-13b-gpt4-1.4.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. 
| --- # Model card: Airoboros 13B GPT4 1.4 ## Overview This is a __full__ (not qlora) fine-tune 13b parameter LlaMa model, using completely synthetic training data created gpt4 via https://github.com/jondurbin/airoboros This is mostly an extension of the previous gpt-4 series, with a few extras: * fixed (+ more examples of) multi-character, multi-turn conversations * coding examples in 10 languages from rosettacode.org dataset thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed * more roleplay examples * jokes * riddles * all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT) This model was fine-tuned with a fork of [FastChat](https://github.com/jondurbin/FastChat) The prompt it was trained with was: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT: ``` So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). ## Usage To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors. ``` pip install git+https://github.com/jondurbin/FastChat ``` Be sure you are pulling the latest branch! Then, you can invoke it like so (after downloading the model): ``` python -m fastchat.serve.cli \ --model-path airoboros-13b-gpt4-1.4 \ --temperature 0.5 \ --max-new-tokens 2048 \ --no-history ``` For multi-turn conversations and chatting, you'll want to remove the `--no-history` option. ### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT url: https://some.web.site/123 date: 2023-06-01 ... other metdata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. 
- `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described: ``` USER: BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ASSISTANT: ``` ### Coding You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt): ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` ### Word games / trivia ``` USER: This anagram of "silent" is what you do when paying attention and hearing someone. ASSISTANT: listen ``` ``` USER: Give me a list of 7 words that start with EN ASSISTANT: 1. Endlessly 2. Enthusiastically 3. Enormously 4. Enveloping 5. Enduring 6. Enlightening 7. Encouraging ``` ### Multiple choice ``` USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt? A) Mercury B) Venus C) Pluto D) Jupiter ASSISTANT: C, Pluto ``` ### Writing <details> <summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary> Response: ``` ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home. Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes. No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. 
Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice. So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary> ``` </details> ### Jokes ``` USER: Tell me a joke about bananas. ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E. ``` ### Riddles (not great, but slightly better) ``` USER: What has a neck but no head? ASSISTANT: A bottle. ``` ### Multi-character conversations ``` USER: Characters: - Yoda, the ever clever jedi master. - Homer Simpson, you know, the goofy yellow guy. - USER/Jon - don't generate any speech or actions on behalf of Jon Rules: - be sure to use the manerisms and speaking style of the characters correctly - surround character actions with asterisks - always put quotes around words spoken by the characters - use a new line for each character's actions or speech - always prefix actions/speech with the character's name and colon - only generate text and actions on behalf of Yoda or Homer, and never anyone else Conversation will revolve around the grapes, in a local cafe with delicious coffee. Generate a single interaction between Yoda and Homer to start, with one sentence each. ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer." Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!" USER: *enters the cafe* Sorry I'm late guys! ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes." Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!" *Yoda raises an eyebrow* ```
Diandiandian/path-to-save-model_3
Diandiandian
2023-07-14T05:39:38Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-14T03:41:02Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - Diandiandian/path-to-save-model_3 This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False.
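A minimal loading sketch (the repo is tagged as a diffusers `StableDiffusionPipeline`); the prompt reuses the instance prompt given in the card:

```python
# Hedged usage sketch for the DreamBooth weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Diandiandian/path-to-save-model_3", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```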
localmodels/Nous-Hermes-13B-GPTQ
localmodels
2023-07-14T05:28:54Z
5
0
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-14T05:28:53Z
--- duplicated_from: localmodels/LLM --- # Nous-Hermes-13B GPTQ From: https://huggingface.co/NousResearch/Nous-Hermes-13b --- ## Model * nous-hermes-13b-GPTQ-4bit-128g.no-act.order.safetensors * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches * Works with AutoGPTQ * Parameters: Groupsize = 128. No act-order. --- # Model Card: Nous-Hermes-13b ## Model Description Nous-Hermes-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. The result is an enhanced Llama 13b model that rivals GPT-3.5-turbo in performance across a variety of tasks. This model stands out for its long responses, low hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 2000 sequence length on an 8x a100 80GB DGX machine for over 50 hours. ## Model Training The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions. Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions. ## Collaborators The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Nous Research, Huemin Art, and Redmond AI. Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly. Special mention goes to @winglian, @erhartford, and @main_horse for assisting in some of the training issues. Among the contributors of datasets, GPTeacher was made available by Teknium, Wizard LM by nlpxucan, and the Nous Research Instruct Dataset was provided by Karan4D and HueminArt. The GPT4-LLM and Unnatural Instructions were provided by Microsoft, Airoboros dataset by jondurbin, Camel-AI datasets are from Camel-AI, and CodeAlpaca dataset by Sahil 2801. If anyone was left out, please open a thread in the community tab. ## Prompt Format The model follows the Alpaca prompt format: ``` ### Instruction: ### Response: ``` or ``` ### Instruction: ### Input: ### Response: ```
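A hedged loading sketch with AutoGPTQ, which the card says the file works with; the `model_basename` is taken from the file name listed above, the prompt follows the Alpaca format shown in the card, and exact arguments may vary between AutoGPTQ versions:

```python
# Hedged usage sketch; AutoGPTQ arguments are version-dependent.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "localmodels/Nous-Hermes-13B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    model_basename="nous-hermes-13b-GPTQ-4bit-128g.no-act.order",
    use_safetensors=True,
    device="cuda:0",
)

prompt = "### Instruction:\nExplain overfitting in one paragraph.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=200)[0]))
```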
localmodels/Nous-Hermes-13B-ggml
localmodels
2023-07-14T05:26:41Z
0
2
null
[ "region:us" ]
null
2023-07-14T05:26:41Z
--- duplicated_from: localmodels/LLM --- # Nous-Hermes-13B ggml From: https://huggingface.co/NousResearch/Nous-Hermes-13b --- ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` Quantized using an older version of llama.cpp and compatible with llama.cpp from May 19, commit 2d5db48. ### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K` Quantization methods compatible with latest llama.cpp from June 6, commit 2d43387. --- ## Provided Files | Name | Quant method | Bits | Size | Max RAM required, no GPU offloading | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | nous-hermes-13b.ggmlv3.q2_K.bin | q2_K | 2 | 5.43 GB | 7.93 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | nous-hermes-13b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.87 GB | 9.37 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | nous-hermes-13b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.25 GB | 8.75 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | nous-hermes-13b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.59 GB | 8.09 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | nous-hermes-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. | | nous-hermes-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | nous-hermes-13b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.82 GB | 10.32 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | nous-hermes-13b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.32 GB | 9.82 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | nous-hermes-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | nous-hermes-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | nous-hermes-13b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.21 GB | 11.71 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | nous-hermes-13b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.95 GB | 11.45 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | nous-hermes-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | | nous-hermes-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | --- # Model Card: Nous-Hermes-13b ## Model Description Nous-Hermes-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. The result is an enhanced Llama 13b model that rivals GPT-3.5-turbo in performance across a variety of tasks. 
This model stands out for its long responses, low hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 2000 sequence length on an 8x a100 80GB DGX machine for over 50 hours. ## Model Training The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions. Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions. ## Collaborators The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Nous Research, Huemin Art, and Redmond AI. Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly. Special mention goes to @winglian, @erhartford, and @main_horse for assisting in some of the training issues. Among the contributors of datasets, GPTeacher was made available by Teknium, Wizard LM by nlpxucan, and the Nous Research Instruct Dataset was provided by Karan4D and HueminArt. The GPT4-LLM and Unnatural Instructions were provided by Microsoft, Airoboros dataset by jondurbin, Camel-AI datasets are from Camel-AI, and CodeAlpaca dataset by Sahil 2801. If anyone was left out, please open a thread in the community tab. ## Prompt Format The model follows the Alpaca prompt format: ``` ### Instruction: ### Response: ``` or ``` ### Instruction: ### Input: ### Response: ``` ## Benchmark Results ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.4915|± |0.0146| | | |acc_norm|0.5085|± |0.0146| |arc_easy | 0|acc |0.7769|± |0.0085| | | |acc_norm|0.7424|± |0.0090| |boolq | 1|acc |0.7948|± |0.0071| |hellaswag | 0|acc |0.6143|± |0.0049| | | |acc_norm|0.8000|± |0.0040| |openbookqa | 0|acc |0.3560|± |0.0214| | | |acc_norm|0.4640|± |0.0223| |piqa | 0|acc |0.7965|± |0.0094| | | |acc_norm|0.7889|± |0.0095| |winogrande | 0|acc |0.7190|± |0.0126| ```
siavava/segformer-waterline-detection
siavava
2023-07-14T05:26:15Z
32
0
transformers
[ "transformers", "pytorch", "tensorboard", "segformer", "generated_from_trainer", "dataset:scene_parse_150", "license:other", "endpoints_compatible", "region:us" ]
null
2023-07-14T05:08:04Z
--- license: other tags: - generated_from_trainer datasets: - scene_parse_150 model-index: - name: segformer-b0-scene-parse-150 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-scene-parse-150 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset. It achieves the following results on the evaluation set: - eval_loss: 4.9114 - eval_mean_iou: 0.0130 - eval_mean_accuracy: 0.0567 - eval_overall_accuracy: 0.2065 - eval_per_category_iou: [0.006025927531255453, 0.23336811824661952, 0.5164444271242657, 0.09256597061475111, 0.13041514146963668, 0.03079454026681747, 0.3643351171640548, 0.0, 0.07230009838464191, 0.018990561238908042, 0.0, 0.0, 0.00021751543000081568, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.05118970203258133, 0.0843910203406648, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.006517548422630887, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0] - eval_per_category_accuracy: [0.006075413445931374, 0.24739890483284213, 0.6689475307776438, 0.10182529684526521, 0.31975958171127, 0.033484264072893954, 0.4822156415844549, 0.0, 0.1105070368228263, 0.02761318529597883, 0.0, 0.0, 0.0002495788357147314, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.3219604278822625, 0.23767246899924319, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.05148658448150834, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan] - eval_runtime: 16.6035 - eval_samples_per_second: 0.602 - eval_steps_per_second: 0.301 - epoch: 1.0 - step: 20 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0 - Datasets 2.12.0 - Tokenizers 0.13.3
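A minimal inference sketch (an assumption; the card records training and evaluation metrics but no usage code), using the standard SegFormer classes from `transformers`; `scene.jpg` is a placeholder image path:

```python
# Hedged usage sketch for the fine-tuned SegFormer checkpoint.
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

model_id = "siavava/segformer-waterline-detection"
processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("scene.jpg").convert("RGB")  # placeholder: any RGB image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)
labels = logits.argmax(dim=1)[0]     # per-pixel predicted class ids
print(labels.shape)
```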
enlightenai/real_vision
enlightenai
2023-07-14T05:24:09Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-14T05:23:31Z
--- license: creativeml-openrail-m --- <b>The recommended negative prompt:</b><br> (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck<br> <b>OR</b><br> (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation <b>Recommended parameters for generation:</b><br> Euler A or DPM++ SDE Karras<br> CFG Scale 3,5 - 15<br> Hires. fix with 4x-UltraSharp upscaler<br> 0 Hires steps and Denoising strength 0.25-0.7<br> Upscale by 1.1-2.0
zen-E/q-FrozenLake-v1-4x4-noSlippery
zen-E
2023-07-14T05:16:38Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-14T05:16:36Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="zen-E/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
b-koopman/whisper-tiny-minds14-en
b-koopman
2023-07-14T05:00:34Z
78
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-14T04:56:47Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - PolyAI/minds14 model-index: - name: whisper-tiny-minds14-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-minds14-en This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6529 - eval_wer_ortho: 0.3621 - eval_wer: 0.3418 - eval_runtime: 22.2394 - eval_samples_per_second: 5.081 - eval_steps_per_second: 0.36 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
NasimB/gpt2-concat-simple-wiki-rarity-all-no-cut
NasimB
2023-07-14T04:54:24Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-14T02:59:33Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-simple-wiki-rarity-all-no-cut results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-simple-wiki-rarity-all-no-cut This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3535 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7004 | 0.29 | 500 | 5.6313 | | 5.335 | 0.58 | 1000 | 5.1992 | | 4.9864 | 0.87 | 1500 | 4.9605 | | 4.7328 | 1.17 | 2000 | 4.8244 | | 4.578 | 1.46 | 2500 | 4.7056 | | 4.4753 | 1.75 | 3000 | 4.6075 | | 4.3596 | 2.04 | 3500 | 4.5420 | | 4.1677 | 2.33 | 4000 | 4.4944 | | 4.1371 | 2.62 | 4500 | 4.4336 | | 4.0946 | 2.91 | 5000 | 4.3802 | | 3.8963 | 3.21 | 5500 | 4.3779 | | 3.8338 | 3.5 | 6000 | 4.3423 | | 3.821 | 3.79 | 6500 | 4.3127 | | 3.7341 | 4.08 | 7000 | 4.3030 | | 3.5486 | 4.37 | 7500 | 4.3027 | | 3.5395 | 4.66 | 8000 | 4.2869 | | 3.528 | 4.95 | 8500 | 4.2755 | | 3.3801 | 5.24 | 9000 | 4.2873 | | 3.3538 | 5.54 | 9500 | 4.2853 | | 3.3452 | 5.83 | 10000 | 4.2848 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
nbroad/setfit-sci-wiki
nbroad
2023-07-14T04:30:37Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-14T04:30:12Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # nbroad/setfit-sci-wiki This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("nbroad/setfit-sci-wiki") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
monideep2255/PSST_spell_correction_V3
monideep2255
2023-07-14T04:13:47Z
4
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-13T20:12:55Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: PSST_spell_correction_V3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PSST_spell_correction_V3 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.3418 | 1.0 | 3099 | 0.3206 | | 0.2821 | 2.0 | 6198 | 0.3216 | | 0.2467 | 3.0 | 9297 | 0.3501 | | 0.2301 | 4.0 | 12396 | 0.3747 | | 0.2152 | 5.0 | 15495 | 0.3614 | | 0.2071 | 6.0 | 18594 | 0.3836 | | 0.2002 | 7.0 | 21693 | 0.3923 | | 0.1964 | 8.0 | 24792 | 0.4053 | | 0.1953 | 9.0 | 27891 | 0.4154 | | 0.1935 | 10.0 | 30990 | 0.4269 | | 0.1918 | 11.0 | 34089 | 0.4381 | | 0.1889 | 12.0 | 37188 | 0.4623 | | 0.1886 | 13.0 | 40287 | 0.4586 | | 0.1869 | 14.0 | 43386 | 0.4838 | | 0.1867 | 15.0 | 46485 | 0.4984 | | 0.1857 | 16.0 | 49584 | 0.4623 | | 0.1843 | 17.0 | 52683 | 0.5020 | | 0.1834 | 18.0 | 55782 | 0.5129 | | 0.1832 | 19.0 | 58881 | 0.4972 | | 0.1835 | 20.0 | 61980 | 0.4895 | | 0.1822 | 21.0 | 65079 | 0.4992 | | 0.1834 | 22.0 | 68178 | 0.5291 | | 0.1816 | 23.0 | 71277 | 0.5351 | | 0.1802 | 24.0 | 74376 | 0.5294 | | 0.1796 | 25.0 | 77475 | 0.5443 | | 0.1814 | 26.0 | 80574 | 0.5368 | | 0.1804 | 27.0 | 83673 | 0.5375 | | 0.1811 | 28.0 | 86772 | 0.5435 | | 0.1811 | 29.0 | 89871 | 0.5391 | | 0.1802 | 30.0 | 92970 | 0.5390 | ### Framework versions - Transformers 4.28.0 - Pytorch 1.12.1+cu102 - Datasets 2.13.1 - Tokenizers 0.13.3
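A minimal inference sketch, assuming the usual `transformers` text2text-generation pipeline applies to this BART checkpoint; the misspelled input below is illustrative only, since the card does not describe the expected input format:

```python
# Hedged usage sketch for the spelling-correction model.
from transformers import pipeline

corrector = pipeline("text2text-generation", model="monideep2255/PSST_spell_correction_V3")
print(corrector("the pashent has difikulty speeking", max_new_tokens=40)[0]["generated_text"])
```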
Sumayan/Sumayan
Sumayan
2023-07-14T04:11:08Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-14T04:11:08Z
--- license: creativeml-openrail-m ---
keehun/textual_inversion_all
keehun
2023-07-14T04:05:02Z
6
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-14T02:47:03Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - keehun/textual_inversion_all These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
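A minimal loading sketch (the repo is tagged as a diffusers `StableDiffusionPipeline`); the placeholder token learned during textual inversion is not stated in the card, so `<concept>` below is a stand-in you would replace with the actual token:

```python
# Hedged usage sketch; "<concept>" is a placeholder for the learned token.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "keehun/textual_inversion_all", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of <concept>", num_inference_steps=30).images[0]
image.save("textual_inversion_sample.png")
```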
PraveenJesu/openai-whisper-medium-murf-audio-augment
PraveenJesu
2023-07-14T04:04:59Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-14T04:04:56Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
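A hedged sketch of attaching the adapter: the base checkpoint (openai/whisper-medium) is an assumption inferred from the repo name, since the card records only the quantization config:

```python
# Hedged usage sketch; the base model below is an assumption.
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "PraveenJesu/openai-whisper-medium-murf-audio-augment")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
model.eval()  # ready for generation with processor-prepared input features
```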
BrendaScar/Taxi-v3-Qtable
BrendaScar
2023-07-14T04:03:52Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-14T04:03:50Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3-Qtable results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="BrendaScar/Taxi-v3-Qtable", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
enddl22/dreambooth_tutorial
enddl22
2023-07-14T04:02:00Z
3
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-14T02:42:35Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - enddl22/dreambooth_tutorial This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth training for the text encoder was not enabled.
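A minimal inference sketch, assuming the pushed pipeline loads as-is; the prompt reuses the `sks dog` instance token from the metadata and the output filename is arbitrary.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "enddl22/dreambooth_tutorial", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```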
keehun/textual_inversion_human
keehun
2023-07-14T03:48:11Z
37
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-13T07:15:23Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - keehun/textual_inversion_human These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
Tnaul/my_awesome_eli5_clm-model
Tnaul
2023-07-14T03:32:24Z
216
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T14:06:50Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7616 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.9164 | 1.0 | 573 | 3.7819 | | 3.8103 | 2.0 | 1146 | 3.7666 | | 3.7696 | 3.0 | 1719 | 3.7616 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
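A quick generation sketch with the `text-generation` pipeline; the prompt is illustrative and sampling settings are left at their defaults.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Tnaul/my_awesome_eli5_clm-model")
print(generator("Somatic hypermutation allows the immune system to", max_new_tokens=40)[0]["generated_text"])
```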
justairr/BetterPrompt
justairr
2023-07-14T03:11:36Z
0
0
null
[ "region:us" ]
null
2023-07-13T12:37:51Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> A Prompt Generator and Enhancer based on ChatGLM
Diandiandian/path-to-save-model
Diandiandian
2023-07-14T03:10:13Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:runwayml/stable-diffusion-v1-5", "base_model:finetune:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-11T10:16:14Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - Diandiandian/path-to-save-model This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth training for the text encoder was not enabled.
Fireworks12/git-base-pokemon
Fireworks12
2023-07-14T02:49:39Z
37
0
transformers
[ "transformers", "pytorch", "tensorboard", "git", "image-text-to-text", "generated_from_trainer", "vision", "image-to-text", "image-captioning", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
2023-07-12T16:24:59Z
--- license: mit tags: - generated_from_trainer - vision - image-to-text - image-captioning datasets: - imagefolder model-index: - name: git-base-pokemon results: [] pipeline_tag: image-to-text --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-pokemon This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1817 - Wer Score: 9.0938 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | |:-------------:|:-----:|:----:|:---------------:|:---------:| | 7.3974 | 0.7 | 50 | 4.5248 | 4.5234 | | 2.2794 | 1.4 | 100 | 0.4021 | 5.1680 | | 0.1697 | 2.1 | 150 | 0.1398 | 1.5039 | | 0.0816 | 2.8 | 200 | 0.1458 | 9.9570 | | 0.0556 | 3.5 | 250 | 0.1417 | 2.5234 | | 0.043 | 4.2 | 300 | 0.1448 | 12.8086 | | 0.0285 | 4.9 | 350 | 0.1469 | 7.3867 | | 0.021 | 5.59 | 400 | 0.1505 | 13.0312 | | 0.0205 | 6.29 | 450 | 0.1499 | 6.3281 | | 0.0179 | 6.99 | 500 | 0.1527 | 13.0234 | | 0.0157 | 7.69 | 550 | 0.1552 | 6.3047 | | 0.015 | 8.39 | 600 | 0.1571 | 6.7656 | | 0.015 | 9.09 | 650 | 0.1579 | 10.2305 | | 0.0137 | 9.79 | 700 | 0.1585 | 11.4219 | | 0.0132 | 10.49 | 750 | 0.1598 | 5.8320 | | 0.0132 | 11.19 | 800 | 0.1591 | 12.0508 | | 0.013 | 11.89 | 850 | 0.1612 | 7.9492 | | 0.0117 | 12.59 | 900 | 0.1621 | 8.1758 | | 0.0123 | 13.29 | 950 | 0.1632 | 12.9961 | | 0.0125 | 13.99 | 1000 | 0.1613 | 10.2031 | | 0.0116 | 14.69 | 1050 | 0.1642 | 5.7930 | | 0.0112 | 15.38 | 1100 | 0.1636 | 6.1719 | | 0.0112 | 16.08 | 1150 | 0.1652 | 7.2422 | | 0.0107 | 16.78 | 1200 | 0.1644 | 12.9961 | | 0.0108 | 17.48 | 1250 | 0.1661 | 5.0117 | | 0.0109 | 18.18 | 1300 | 0.1658 | 7.3242 | | 0.0108 | 18.88 | 1350 | 0.1691 | 6.0547 | | 0.0101 | 19.58 | 1400 | 0.1690 | 6.9141 | | 0.0103 | 20.28 | 1450 | 0.1692 | 7.1680 | | 0.0107 | 20.98 | 1500 | 0.1702 | 12.3281 | | 0.0099 | 21.68 | 1550 | 0.1708 | 10.75 | | 0.0103 | 22.38 | 1600 | 0.1714 | 9.5586 | | 0.0101 | 23.08 | 1650 | 0.1713 | 12.9805 | | 0.0098 | 23.78 | 1700 | 0.1712 | 11.4883 | | 0.0095 | 24.48 | 1750 | 0.1711 | 9.3320 | | 0.0096 | 25.17 | 1800 | 0.1738 | 8.6523 | | 0.0097 | 25.87 | 1850 | 0.1717 | 11.5078 | | 0.0091 | 26.57 | 1900 | 0.1735 | 7.9570 | | 0.0092 | 27.27 | 1950 | 0.1729 | 9.8242 | | 0.0093 | 27.97 | 2000 | 0.1721 | 10.5078 | | 0.0087 | 28.67 | 2050 | 0.1732 | 9.3906 | | 0.009 | 29.37 | 2100 | 0.1760 | 8.0664 | | 0.009 | 30.07 | 2150 | 0.1769 | 10.5312 | | 0.0086 | 30.77 | 2200 | 0.1743 | 10.8555 | | 0.0087 | 31.47 | 2250 | 0.1772 | 10.2188 | | 0.0089 | 32.17 | 2300 | 0.1757 | 11.6016 | | 0.0088 | 32.87 | 2350 | 0.1765 | 8.9297 | | 0.0082 | 33.57 | 2400 | 0.1754 | 9.6484 | | 0.0082 | 34.27 | 2450 | 0.1770 | 12.3711 | | 0.0084 | 34.97 | 2500 | 0.1761 | 10.1523 | | 0.0076 | 35.66 | 2550 | 0.1774 | 9.1055 | | 0.0077 | 36.36 | 2600 | 0.1788 | 8.7852 | | 0.0079 | 37.06 
| 2650 | 0.1782 | 11.8086 | | 0.0071 | 37.76 | 2700 | 0.1784 | 10.5234 | | 0.0075 | 38.46 | 2750 | 0.1789 | 8.8828 | | 0.0072 | 39.16 | 2800 | 0.1796 | 8.5664 | | 0.0071 | 39.86 | 2850 | 0.1804 | 9.5391 | | 0.0069 | 40.56 | 2900 | 0.1796 | 9.4062 | | 0.0068 | 41.26 | 2950 | 0.1797 | 8.9883 | | 0.0067 | 41.96 | 3000 | 0.1809 | 10.5273 | | 0.0062 | 42.66 | 3050 | 0.1801 | 10.4531 | | 0.0062 | 43.36 | 3100 | 0.1803 | 7.2188 | | 0.0063 | 44.06 | 3150 | 0.1808 | 8.7930 | | 0.0058 | 44.76 | 3200 | 0.1804 | 10.5156 | | 0.0057 | 45.45 | 3250 | 0.1807 | 11.1328 | | 0.0059 | 46.15 | 3300 | 0.1812 | 8.6875 | | 0.0055 | 46.85 | 3350 | 0.1811 | 10.2773 | | 0.0053 | 47.55 | 3400 | 0.1814 | 10.0391 | | 0.0054 | 48.25 | 3450 | 0.1817 | 8.5391 | | 0.0053 | 48.95 | 3500 | 0.1818 | 8.9688 | | 0.005 | 49.65 | 3550 | 0.1817 | 9.0938 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
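A captioning sketch following the usual GIT recipe (processor for pixel values, causal-LM generate); the image URL is a placeholder to swap for a real image.

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("Fireworks12/git-base-pokemon")
model = AutoModelForCausalLM.from_pretrained("Fireworks12/git-base-pokemon")

url = "https://example.com/pokemon.png"  # placeholder image URL
image = Image.open(requests.get(url, stream=True).raw)

pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```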
bh8648/xlm-roberta-base-finetuned-panx-de
bh8648
2023-07-14T02:47:54Z
135
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-14T00:23:23Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: validation args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8606226168117922 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1533 - F1: 0.8606 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2527 | 1.0 | 1258 | 0.1823 | 0.8204 | | 0.1358 | 2.0 | 2516 | 0.1534 | 0.8410 | | 0.081 | 3.0 | 3774 | 0.1533 | 0.8606 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.13.1 - Tokenizers 0.13.3
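A token-classification sketch; the German sentence is illustrative, and `aggregation_strategy="simple"` merges word pieces into whole entities.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="bh8648/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```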
YanJiangJerry/covid-augment-tweet-bert-large-e4-version2
YanJiangJerry
2023-07-14T02:41:59Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-14T02:20:32Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: covid-augment-tweet-bert-large-e4-version2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # covid-augment-tweet-bert-large-e4-version2 This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert-v2](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3591 - Accuracy: 0.9568 - F1: 0.8859 - Precision: 0.8725 - Recall: 0.8997 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.0732 | 1.0 | 1023 | 0.2216 | 0.9613 | 0.8986 | 0.8779 | 0.9204 | | 0.0216 | 2.0 | 2046 | 0.2441 | 0.9600 | 0.8977 | 0.8580 | 0.9412 | | 0.0065 | 3.0 | 3069 | 0.3901 | 0.9497 | 0.8730 | 0.8246 | 0.9273 | | 0.0032 | 4.0 | 4092 | 0.3591 | 0.9568 | 0.8859 | 0.8725 | 0.8997 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
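An inference sketch with the `text-classification` pipeline; note the card does not document what the class labels mean, so interpret `LABEL_0`/`LABEL_1` (or whatever id2label mapping was saved) against your own data.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="YanJiangJerry/covid-augment-tweet-bert-large-e4-version2")
print(clf("Just got my second vaccine dose, feeling fine so far."))  # illustrative tweet
```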
moka-ai/m3e-small
moka-ai
2023-07-14T02:37:24Z
463
49
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "embedding", "text-embedding", "zh", "region:us" ]
null
2023-06-02T06:34:10Z
--- language: - zh tags: - embedding - text-embedding library_name: sentence-transformers --- # 🅜 M3E Models [m3e-small](https://huggingface.co/moka-ai/m3e-small) | [m3e-base](https://huggingface.co/moka-ai/m3e-base) M3E 是 Moka Massive Mixed Embedding 的缩写 - Moka,此模型由 MokaAI 训练,开源和评测,训练脚本使用 [uniem](https://github.com/wangyuxinwhy/uniem/blob/main/scripts/train_m3e.py) ,评测 BenchMark 使用 [MTEB-zh](https://github.com/wangyuxinwhy/uniem/tree/main/mteb-zh) - Massive,此模型通过**千万级** (2200w+) 的中文句对数据集进行训练 - Mixed,此模型支持中英双语的同质文本相似度计算,异质文本检索等功能,未来还会支持代码检索 - Embedding,此模型是文本嵌入模型,可以将自然语言转换成稠密的向量 ## 🆕 更新说明 - 2023.06.24,添加微调 M3E 的教程 [notebook](https://github.com/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb),几行代码,更佳适配!<a target="_blank" href="https://colab.research.google.com/github/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> - 2023.06.14,添加了三个中文开源文本嵌入模型到评测中,包括 UER, ErLangShen, DMetaSoul - 2023.06.08,添加检索任务的评测结果,在 T2Ranking 1W 中文数据集上,m3e-base 在 ndcg@10 上达到了 0.8004,超过了 openai-ada-002 的 0.7786 - 2023.06.07,添加文本分类任务的评测结果,在 6 种文本分类数据集上,m3e-base 在 accuracy 上达到了 0.6157,超过了 openai-ada-002 的 0.5956 ## ⚖️ 模型对比 | | 参数数量 | 维度 | 中文 | 英文 | s2s | s2p | s2c | 开源 | 兼容性 | s2s Acc | s2p ndcg@10 | | --------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | ---- | ---------- | ------------ | -------- | | m3e-small | 24M | 512 | 是 | 否 | 是 | 否 | 否 | 是 | 优 | 0.5834 | 0.7262 | | m3e-base | 110M | 768 | 是 | 是 | 是 | 是 | 否 | 是 | 优 | **0.6157** | **0.8004** | | text2vec | 110M | 768 | 是 | 否 | 是 | 否 | 否 | 是 | 优 | 0.5755 | 0.6346 | | openai-ada-002 | 未知 | 1536 | 是 | 是 | 是 | 是 | 是 | 否 | 优 | 0.5956 | 0.7786 | 说明: - s2s, 即 sentence to sentence ,代表了同质文本之间的嵌入能力,适用任务:文本相似度,重复问题检测,文本分类等 - s2p, 即 sentence to passage ,代表了异质文本之间的嵌入能力,适用任务:文本检索,GPT 记忆模块等 - s2c, 即 sentence to code ,代表了自然语言和程序语言之间的嵌入能力,适用任务:代码检索 - 兼容性,代表了模型在开源社区中各种项目被支持的程度,由于 m3e 和 text2vec 都可以直接通过 sentence-transformers 直接使用,所以和 openai 在社区的支持度上相当 - ACC & ndcg@10,详情见下方的评测 Tips: - 使用场景主要是中文,少量英文的情况,建议使用 m3e 系列的模型 - 多语言使用场景,并且不介意数据隐私的话,我建议使用 openai text-embedding-ada-002 - 代码检索场景,推荐使用 openai text-embedding-ada-002 - 文本检索场景,请使用具备文本检索能力的模型,只在 S2S 上训练的文本嵌入模型,没有办法完成文本检索任务 ## 🔧 使用 M3E 您需要先安装 sentence-transformers ```bash pip install -U sentence-transformers ``` 安装完成后,您可以使用以下代码来使用 M3E Models ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('moka-ai/m3e-base') #Our sentences we like to encode sentences = [ '* Moka 此文本嵌入模型由 MokaAI 训练并开源,训练脚本使用 uniem', '* Massive 此文本嵌入模型通过**千万级**的中文句对数据集进行训练', '* Mixed 此文本嵌入模型支持中英双语的同质文本相似度计算,异质文本检索等功能,未来还会支持代码检索,ALL in one' ] #Sentences are encoded by calling model.encode() embeddings = model.encode(sentences) #Print the embeddings for sentence, embedding in zip(sentences, embeddings): print("Sentence:", sentence) print("Embedding:", embedding) print("") ``` M3E 系列的所有模型在设计的时候就考虑到完全兼容 [sentence-transformers](https://www.sbert.net/) ,所以你可以通过**替换名称字符串**的方式在所有支持 sentence-transformers 的项目中**无缝**使用 M3E Models,比如 [chroma](https://docs.trychroma.com/getting-started), [guidance](https://github.com/microsoft/guidance), [semantic-kernel](https://github.com/microsoft/semantic-kernel) 。 ## 🎨 微调模型 `uniem` 提供了非常易用的 finetune 接口,几行代码,即刻适配! 
```python from datasets import load_dataset from uniem.finetuner import FineTuner dataset = load_dataset('shibing624/nli_zh', 'STS-B') # 指定训练的模型为 m3e-small finetuner = FineTuner.from_pretrained('moka-ai/m3e-small', dataset=dataset) finetuner.run(epochs=1) ``` 详见 [uniem 微调教程](https://github.com/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb) <a target="_blank" href="https://colab.research.google.com/github/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> ## ➿ 训练方案 M3E 使用 in-batch 负采样的对比学习的方式在句对数据集进行训练,为了保证 in-batch 负采样的效果,我们使用 A100 80G 来最大化 batch-size,并在共计 2200W+ 的句对数据集上训练了 1 epoch。训练脚本使用 [uniem](https://github.com/wangyuxinwhy/uniem/blob/main/scripts/train_m3e.py),您可以在这里查看具体细节。 ## 🌟 特性 - 中文训练集,M3E 在大规模句对数据集上的训练,包含中文百科,金融,医疗,法律,新闻,学术等多个领域共计 2200W 句对样本,数据集详见 [M3E 数据集](#M3E数据集) - 英文训练集,M3E 使用 MEDI 145W 英文三元组数据集进行训练,数据集详见 [MEDI 数据集](https://drive.google.com/file/d/1vZ5c2oJNonGOvXzppNg5mHz24O6jcc52/view),此数据集由 [instructor team](https://github.com/HKUNLP/instructor-embedding) 提供 - 指令数据集,M3E 使用了 300W + 的指令微调数据集,这使得 M3E 对文本编码的时候可以遵从指令,这部分的工作主要被启发于 [instructor-embedding](https://github.com/HKUNLP/instructor-embedding) - 基础模型,M3E 使用 hfl 实验室的 [Roberta](https://huggingface.co/hfl/chinese-roberta-wwm-ext) 系列模型进行训练,目前提供 small 和 base 两个版本,大家则需选用 - ALL IN ONE,M3E 旨在提供一个 ALL IN ONE 的文本嵌入模型,不仅支持同质句子相似度判断,还支持异质文本检索,你只需要一个模型就可以覆盖全部的应用场景,未来还会支持代码检索 ## 💯 MTEB-zh 评测 - 评测模型,[text2vec](https://github.com/shibing624/text2vec), m3e-base, m3e-small, openai text-embedding-ada-002, [DMetaSoul](https://huggingface.co/DMetaSoul/sbert-chinese-general-v2), [UER](https://huggingface.co/uer/sbert-base-chinese-nli), [ErLangShen](https://huggingface.co/IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese) - 评测脚本,具体参考 [MTEB-zh] (https://github.com/wangyuxinwhy/uniem/blob/main/mteb-zh) ### 文本分类 - 数据集选择,选择开源在 HuggingFace 上的 6 种文本分类数据集,包括新闻、电商评论、股票评论、长文本等 - 评测方式,使用 MTEB 的方式进行评测,报告 Accuracy。 | | text2vec | m3e-small | m3e-base | openai | DMetaSoul | uer | erlangshen | | ----------------- | -------- | --------- | -------- | ------ | ----------- | ------- | ----------- | | TNews | 0.43 | 0.4443 | **0.4827** | 0.4594 | 0.3084 | 0.3539 | 0.4361 | | JDIphone | 0.8214 | 0.8293 | **0.8533** | 0.746 | 0.7972 | 0.8283 | 0.8356 | | GubaEastmony | 0.7472 | 0.712 | 0.7621 | 0.7574 | 0.735 | 0.7534 | **0.7787** | | TYQSentiment | 0.6099 | 0.6596 | **0.7188** | 0.68 | 0.6437 | 0.6662 | 0.6444 | | StockComSentiment | 0.4307 | 0.4291 | 0.4363 | **0.4819** | 0.4309 | 0.4555 | 0.4482 | | IFlyTek | 0.414 | 0.4263 | 0.4409 | **0.4486** | 0.3969 | 0.3762 | 0.4241 | | Average | 0.5755 | 0.5834 | **0.6157** | 0.5956 | 0.552016667 | 0.57225 | 0.594516667 | ### 检索排序 #### T2Ranking 1W - 数据集选择,使用 [T2Ranking](https://github.com/THUIR/T2Ranking/tree/main) 数据集,由于 T2Ranking 的数据集太大,openai 评测起来的时间成本和 api 费用有些高,所以我们只选择了 T2Ranking 中的前 10000 篇文章 - 评测方式,使用 MTEB 的方式进行评测,报告 map@1, map@10, mrr@1, mrr@10, ndcg@1, ndcg@10 - 注意!从实验结果和训练方式来看,除了 M3E 模型和 openai 模型外,其余模型都没有做检索任务的训练,所以结果仅供参考。 | | text2vec | openai-ada-002 | m3e-small | m3e-base | DMetaSoul | uer | erlangshen | | ------- | -------- | -------------- | --------- | -------- | --------- | ------- | ---------- | | map@1 | 0.4684 | 0.6133 | 0.5574 | **0.626** | 0.25203 | 0.08647 | 0.25394 | | map@10 | 0.5877 | 0.7423 | 0.6878 | **0.7656** | 0.33312 | 0.13008 | 0.34714 | | mrr@1 | 0.5345 | 0.6931 | 0.6324 | **0.7047** | 0.29258 | 0.10067 | 0.29447 | | mrr@10 | 0.6217 | 0.7668 | 0.712 | **0.7841** | 0.36287 | 0.14516 | 0.3751 | 
| ndcg@1 | 0.5207 | 0.6764 | 0.6159 | **0.6881** | 0.28358 | 0.09748 | 0.28578 | | ndcg@10 | 0.6346 | 0.7786 | 0.7262 | **0.8004** | 0.37468 | 0.15783 | 0.39329 | #### T2Ranking - 数据集选择,使用 T2Ranking,刨除 openai-ada-002 模型后,我们对剩余的三个模型,进行 T2Ranking 10W 和 T2Ranking 50W 的评测。(T2Ranking 评测太耗内存了... 128G 都不行) - 评测方式,使用 MTEB 的方式进行评测,报告 ndcg@10 | | text2vec | m3e-small | m3e-base | | ------- | -------- | --------- | -------- | | t2r-1w | 0.6346 | 0.72621 | **0.8004** | | t2r-10w | 0.44644 | 0.5251 | **0.6263** | | t2r-50w | 0.33482 | 0.38626 | **0.47364** | 说明: - 检索排序对于 text2vec 并不公平,因为 text2vec 在训练的时候没有使用过检索相关的数据集,所以没有办法很好的完成检索任务也是正常的。 ## 📂 M3E数据集 如果您想要使用这些数据集,你可以在 [uniem process_zh_datasets](https://github.com/wangyuxinwhy/uniem/blob/main/scripts/process_zh_datasets.py) 中找到加载 huggingface 数据集的脚本,非 huggingface 数据集需要您根据下方提供的链接自行下载和处理。 | 数据集名称 | 领域 | 数量 | 任务类型 | Prompt | 质量 | 数据提供者 | 说明 | 是否开源/研究使用 | 是否商用 | 脚本 | Done | URL | 是否同质 | | -------------------- | ---- | --------- | ----------------- | ------ | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----------------- | -------- | ---- | ---- | ------------------------------------------------------------ | -------- | | cmrc2018 | 百科 | 14,363 | 问答 | 问答 | 优 | Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu | https://github.com/ymcui/cmrc2018/blob/master/README_CN.md 专家标注的基于维基百科的中文阅读理解数据集,将问题和上下文视为正例 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/cmrc2018 | 否 | | belle_2m | 百科 | 2,000,000 | 指令微调 | 无 | 优 | LianjiaTech/BELLE | belle 的指令微调数据集,使用 self instruct 方法基于 gpt3.5 生成 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/BelleGroup/train_2M_CN | 否 | | firefily | 百科 | 1,649,399 | 指令微调 | 无 | 优 | YeungNLP | Firefly(流萤) 是一个开源的中文对话式大语言模型,使用指令微调(Instruction Tuning)在中文数据集上进行调优。使用了词表裁剪、ZeRO等技术,有效降低显存消耗和提高训练效率。 在训练中,我们使用了更小的模型参数量,以及更少的计算资源。 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M | 否 | | alpaca_gpt4 | 百科 | 48,818 | 指令微调 | 无 | 优 | Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao | 本数据集是参考Alpaca方法基于GPT4得到的self-instruct数据,约5万条。 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/alpaca-zh | 否 | | zhihu_kol | 百科 | 1,006,218 | 问答 | 问答 | 优 | wangrui6 | 知乎问答 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/wangrui6/Zhihu-KOL | 否 | | hc3_chinese | 百科 | 39,781 | 问答 | 问答 | 良 | Hello-SimpleAI | 问答数据,包括人工回答和 GPT 回答 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/Hello-SimpleAI/HC3-Chinese | 否 | | amazon_reviews_multi | 电商 | 210,000 | 问答 文本分类 | 摘要 | 优 | 亚马逊 | 亚马逊产品评论数据集 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/amazon_reviews_multi/viewer/zh/train?row=8 | 否 | | mlqa | 百科 | 85,853 | 问答 | 问答 | 良 | patrickvonplaten | 一个用于评估跨语言问答性能的基准数据集 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/mlqa/viewer/mlqa-translate-train.zh/train?p=2 | 否 | | xlsum | 新闻 | 93,404 | 摘要 | 摘要 | 良 | BUET CSE NLP Group | BBC的专业注释文章摘要对 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/chinese_simplified/train?row=259 | 否 | | ocnli | 口语 | 17,726 | 自然语言推理 | 推理 | 良 | Thomas Wolf | 自然语言推理数据集 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/clue/viewer/ocnli | 是 | | BQ | 金融 | 60,000 | 文本分类 | 相似 | 良 | Intelligent Computing Research Center, Harbin Institute of Technology(Shenzhen) | http://icrc.hitsz.edu.cn/info/1037/1162.htm BQ 语料库包含来自网上银行自定义服务日志的 120,000 个问题对。它分为三部分:100,000 对用于训练,10,000 对用于验证,10,000 对用于测试。 数据提供者: 哈尔滨工业大学(深圳)智能计算研究中心 | 是 | 否 | 是 | 是 | 
https://huggingface.co/datasets/shibing624/nli_zh/viewer/BQ | 是 | | lcqmc | 口语 | 149,226 | 文本分类 | 相似 | 良 | Ming Xu | 哈工大文本匹配数据集,LCQMC 是哈尔滨工业大学在自然语言处理国际顶会 COLING2018 构建的问题语义匹配数据集,其目标是判断两个问题的语义是否相同 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/nli_zh/viewer/LCQMC/train | 是 | | paws-x | 百科 | 23,576 | 文本分类 | 相似 | 优 | Bhavitvya Malik | PAWS Wiki中的示例 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/paws-x/viewer/zh/train | 是 | | wiki_atomic_edit | 百科 | 1,213,780 | 平行语义 | 相似 | 优 | abhishek thakur | 基于中文维基百科的编辑记录收集的数据集 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/wiki_atomic_edits | 是 | | chatmed_consult | 医药 | 549,326 | 问答 | 问答 | 优 | Wei Zhu | 真实世界的医学相关的问题,使用 gpt3.5 进行回答 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset | 否 | | webqa | 百科 | 42,216 | 问答 | 问答 | 优 | suolyer | 百度于2016年开源的数据集,数据来自于百度知道;格式为一个问题多篇意思基本一致的文章,分为人为标注以及浏览器检索;数据整体质量中,因为混合了很多检索而来的文章 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/suolyer/webqa/viewer/suolyer--webqa/train?p=3 | 否 | | dureader_robust | 百科 | 65,937 | 机器阅读理解 问答 | 问答 | 优 | 百度 | DuReader robust旨在利用真实应用中的数据样本来衡量阅读理解模型的鲁棒性,评测模型的过敏感性、过稳定性以及泛化能力,是首个中文阅读理解鲁棒性数据集。 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/PaddlePaddle/dureader_robust/viewer/plain_text/train?row=96 | 否 | | csl | 学术 | 395,927 | 语料 | 摘要 | 优 | Yudong Li, Yuqing Zhang, Zhe Zhao, Linlin Shen, Weijie Liu, Weiquan Mao and Hui Zhang | 提供首个中文科学文献数据集(CSL),包含 396,209 篇中文核心期刊论文元信息 (标题、摘要、关键词、学科、门类)。CSL 数据集可以作为预训练语料,也可以构建许多NLP任务,例如文本摘要(标题预测)、 关键词生成和文本分类等。 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/neuclir/csl | 否 | | miracl-corpus | 百科 | 4,934,368 | 语料 | 摘要 | 优 | MIRACL | The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., \n\n in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. 
| 是 | 是 | 是 | 是 | https://huggingface.co/datasets/miracl/miracl-corpus | 否 | | lawzhidao | 法律 | 36,368 | 问答 | 问答 | 优 | 和鲸社区-Ustinian | 百度知道清洗后的法律问答 | 是 | 是 | 否 | 是 | https://www.heywhale.com/mw/dataset/5e953ca8e7ec38002d02fca7/content | 否 | | CINLID | 成语 | 34,746 | 平行语义 | 相似 | 优 | 高长宽 | 中文成语语义推理数据集(Chinese Idioms Natural Language Inference Dataset)收集了106832条由人工撰写的成语对(含少量歇后语、俗语等短文本),通过人工标注的方式进行平衡分类,标签为entailment、contradiction和neutral,支持自然语言推理(NLI)的任务。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=39 | 是 | | DuSQL | SQL | 25,003 | NL2SQL | SQL | 优 | 百度 | DuSQL是一个面向实际应用的数据集,包含200个数据库,覆盖了164个领域,问题覆盖了匹配、计算、推理等实际应用中常见形式。该数据集更贴近真实应用场景,要求模型领域无关、问题无关,且具备计算推理等能力。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=13 | 否 | | Zhuiyi-NL2SQL | SQL | 45,918 | NL2SQL | SQL | 优 | 追一科技 刘云峰 | NL2SQL是一个多领域的简单数据集,其主要包含匹配类型问题。该数据集主要验证模型的泛化能力,其要求模型具有较强的领域泛化能力、问题泛化能力。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=12 | 否 | | Cspider | SQL | 7,785 | NL2SQL | SQL | 优 | 西湖大学 张岳 | CSpider是一个多语言数据集,其问题以中文表达,数据库以英文存储,这种双语模式在实际应用中也非常常见,尤其是数据库引擎对中文支持不好的情况下。该数据集要求模型领域无关、问题无关,且能够实现多语言匹配。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=11 | 否 | | news2016zh | 新闻 | 2,507,549 | 语料 | 摘要 | 良 | Bright Xu | 包含了250万篇新闻。新闻来源涵盖了6.3万个媒体,含标题、关键词、描述、正文。 | 是 | 是 | 否 | 是 | https://github.com/brightmart/nlp_chinese_corpus | 否 | | baike2018qa | 百科 | 1,470,142 | 问答 | 问答 | 良 | Bright Xu | 含有150万个预先过滤过的、高质量问题和答案,每个问题属于一个类别。总共有492个类别,其中频率达到或超过10次的类别有434个。 | 是 | 是 | 否 | 是 | https://github.com/brightmart/nlp_chinese_corpus | 否 | | webtext2019zh | 百科 | 4,258,310 | 问答 | 问答 | 优 | Bright Xu | 含有410万个预先过滤过的、高质量问题和回复。每个问题属于一个【话题】,总共有2.8万个各式话题,话题包罗万象。 | 是 | 是 | 否 | 是 | https://github.com/brightmart/nlp_chinese_corpus | 否 | | SimCLUE | 百科 | 775,593 | 平行语义 | 相似 | 良 | 数据集合,请在 simCLUE 中查看 | 整合了中文领域绝大多数可用的开源的语义相似度和自然语言推理的数据集,并重新做了数据拆分和整理。 | 是 | 否 | 否 | 是 | https://github.com/CLUEbenchmark/SimCLUE | 是 | | Chinese-SQuAD | 新闻 | 76,449 | 机器阅读理解 | 问答 | 优 | junzeng-pluto | 中文机器阅读理解数据集,通过机器翻译加人工校正的方式从原始Squad转换而来 | 是 | 否 | 否 | 是 | https://github.com/pluto-junzeng/ChineseSquad | 否 | ## 🗓️ 计划表 - [x] 完成 MTEB 中文评测 BenchMark, [MTEB-zh](https://github.com/wangyuxinwhy/uniem/tree/main/mteb-zh) - [x] 完成 Large 模型的训练和开源 - [x] 完成 Finetuner ,允许更优雅的微调 - [ ] 对 M3E 数据集进行清洗,保留高质量的部分,组成 m3e-hq,并在 huggingface 上开源 - [ ] 在 m3e-hq 的数据集上补充 hard negative 的样本及相似度分数,组成 m3e-hq-with-score,并在 huggingface 上开源 - [ ] 在 m3e-hq-with-score 上通过 [cosent loss](https://github.com/wangyuxinwhy/uniem/blob/main/uniem/criteria.py#LL24C39-L24C39) loss 进行训练并开源模型,CoSent 原理参考这篇[博客](https://kexue.fm/archives/8847) - [ ] 开源商用版本的 M3E models ## 🙏 致谢 感谢开源社区提供的中文语料,感谢所有在此工作中提供帮助的人们,希望中文社区越来越好,共勉! ## 📜 License M3E models 使用的数据集中包括大量非商用的数据集,所以 M3E models 也是非商用的,仅供研究使用。不过我们已经在 M3E 数据集上标识了商用和非商用的数据集,您可以根据自己的需求自行训练。 ## Citation Please cite this model using the following format: ``` @software {Moka Massive Mixed Embedding, author = {Wang Yuxin,Sun Qingxuan,He sicheng}, title = {M3E: Moka Massive Mixed Embedding Model}, year = {2023} } ```
moka-ai/m3e-base
moka-ai
2023-07-14T02:29:36Z
254,010
912
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "embedding", "text-embedding", "zh", "en", "region:us" ]
null
2023-06-06T02:28:47Z
--- language: - zh - en tags: - embedding - text-embedding library_name: sentence-transformers --- # 🅜 M3E Models [m3e-small](https://huggingface.co/moka-ai/m3e-small) | [m3e-base](https://huggingface.co/moka-ai/m3e-base) M3E 是 Moka Massive Mixed Embedding 的缩写 - Moka,此模型由 MokaAI 训练,开源和评测,训练脚本使用 [uniem](https://github.com/wangyuxinwhy/uniem/blob/main/scripts/train_m3e.py) ,评测 BenchMark 使用 [MTEB-zh](https://github.com/wangyuxinwhy/uniem/tree/main/mteb-zh) - Massive,此模型通过**千万级** (2200w+) 的中文句对数据集进行训练 - Mixed,此模型支持中英双语的同质文本相似度计算,异质文本检索等功能,未来还会支持代码检索 - Embedding,此模型是文本嵌入模型,可以将自然语言转换成稠密的向量 ## 🆕 更新说明 - 2023.06.24,添加微调 M3E 的教程 [notebook](https://github.com/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb),几行代码,更佳适配!<a target="_blank" href="https://colab.research.google.com/github/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> - 2023.06.14,添加了三个中文开源文本嵌入模型到评测中,包括 UER, ErLangShen, DMetaSoul - 2023.06.08,添加检索任务的评测结果,在 T2Ranking 1W 中文数据集上,m3e-base 在 ndcg@10 上达到了 0.8004,超过了 openai-ada-002 的 0.7786 - 2023.06.07,添加文本分类任务的评测结果,在 6 种文本分类数据集上,m3e-base 在 accuracy 上达到了 0.6157,超过了 openai-ada-002 的 0.5956 ## ⚖️ 模型对比 | | 参数数量 | 维度 | 中文 | 英文 | s2s | s2p | s2c | 开源 | 兼容性 | s2s Acc | s2p ndcg@10 | | --------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | ---- | ---------- | ------------ | -------- | | m3e-small | 24M | 512 | 是 | 否 | 是 | 否 | 否 | 是 | 优 | 0.5834 | 0.7262 | | m3e-base | 110M | 768 | 是 | 是 | 是 | 是 | 否 | 是 | 优 | **0.6157** | **0.8004** | | text2vec | 110M | 768 | 是 | 否 | 是 | 否 | 否 | 是 | 优 | 0.5755 | 0.6346 | | openai-ada-002 | 未知 | 1536 | 是 | 是 | 是 | 是 | 是 | 否 | 优 | 0.5956 | 0.7786 | 说明: - s2s, 即 sentence to sentence ,代表了同质文本之间的嵌入能力,适用任务:文本相似度,重复问题检测,文本分类等 - s2p, 即 sentence to passage ,代表了异质文本之间的嵌入能力,适用任务:文本检索,GPT 记忆模块等 - s2c, 即 sentence to code ,代表了自然语言和程序语言之间的嵌入能力,适用任务:代码检索 - 兼容性,代表了模型在开源社区中各种项目被支持的程度,由于 m3e 和 text2vec 都可以直接通过 sentence-transformers 直接使用,所以和 openai 在社区的支持度上相当 - ACC & ndcg@10,详情见下方的评测 Tips: - 使用场景主要是中文,少量英文的情况,建议使用 m3e 系列的模型 - 多语言使用场景,并且不介意数据隐私的话,我建议使用 openai text-embedding-ada-002 - 代码检索场景,推荐使用 openai text-embedding-ada-002 - 文本检索场景,请使用具备文本检索能力的模型,只在 S2S 上训练的文本嵌入模型,没有办法完成文本检索任务 ## 🔧 使用 M3E 您需要先安装 sentence-transformers ```bash pip install -U sentence-transformers ``` 安装完成后,您可以使用以下代码来使用 M3E Models ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer('moka-ai/m3e-base') #Our sentences we like to encode sentences = [ '* Moka 此文本嵌入模型由 MokaAI 训练并开源,训练脚本使用 uniem', '* Massive 此文本嵌入模型通过**千万级**的中文句对数据集进行训练', '* Mixed 此文本嵌入模型支持中英双语的同质文本相似度计算,异质文本检索等功能,未来还会支持代码检索,ALL in one' ] #Sentences are encoded by calling model.encode() embeddings = model.encode(sentences) #Print the embeddings for sentence, embedding in zip(sentences, embeddings): print("Sentence:", sentence) print("Embedding:", embedding) print("") ``` M3E 系列的所有模型在设计的时候就考虑到完全兼容 [sentence-transformers](https://www.sbert.net/) ,所以你可以通过**替换名称字符串**的方式在所有支持 sentence-transformers 的项目中**无缝**使用 M3E Models,比如 [chroma](https://docs.trychroma.com/getting-started), [guidance](https://github.com/microsoft/guidance), [semantic-kernel](https://github.com/microsoft/semantic-kernel) 。 ## 🎨 微调模型 `uniem` 提供了非常易用的 finetune 接口,几行代码,即刻适配! 
```python from datasets import load_dataset from uniem.finetuner import FineTuner dataset = load_dataset('shibing624/nli_zh', 'STS-B') # 指定训练的模型为 m3e-small finetuner = FineTuner.from_pretrained('moka-ai/m3e-small', dataset=dataset) finetuner.run(epochs=1) ``` 详见 [uniem 微调教程](https://github.com/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb) <a target="_blank" href="https://colab.research.google.com/github/wangyuxinwhy/uniem/blob/main/examples/finetune.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> ## ➿ 训练方案 M3E 使用 in-batch 负采样的对比学习的方式在句对数据集进行训练,为了保证 in-batch 负采样的效果,我们使用 A100 80G 来最大化 batch-size,并在共计 2200W+ 的句对数据集上训练了 1 epoch。训练脚本使用 [uniem](https://github.com/wangyuxinwhy/uniem/blob/main/scripts/train_m3e.py),您可以在这里查看具体细节。 ## 🌟 特性 - 中文训练集,M3E 在大规模句对数据集上的训练,包含中文百科,金融,医疗,法律,新闻,学术等多个领域共计 2200W 句对样本,数据集详见 [M3E 数据集](#M3E数据集) - 英文训练集,M3E 使用 MEDI 145W 英文三元组数据集进行训练,数据集详见 [MEDI 数据集](https://drive.google.com/file/d/1vZ5c2oJNonGOvXzppNg5mHz24O6jcc52/view),此数据集由 [instructor team](https://github.com/HKUNLP/instructor-embedding) 提供 - 指令数据集,M3E 使用了 300W + 的指令微调数据集,这使得 M3E 对文本编码的时候可以遵从指令,这部分的工作主要被启发于 [instructor-embedding](https://github.com/HKUNLP/instructor-embedding) - 基础模型,M3E 使用 hfl 实验室的 [Roberta](https://huggingface.co/hfl/chinese-roberta-wwm-ext) 系列模型进行训练,目前提供 small 和 base 两个版本,大家则需选用 - ALL IN ONE,M3E 旨在提供一个 ALL IN ONE 的文本嵌入模型,不仅支持同质句子相似度判断,还支持异质文本检索,你只需要一个模型就可以覆盖全部的应用场景,未来还会支持代码检索 ## 💯 MTEB-zh 评测 - 评测模型,[text2vec](https://github.com/shibing624/text2vec), m3e-base, m3e-small, openai text-embedding-ada-002, [DMetaSoul](https://huggingface.co/DMetaSoul/sbert-chinese-general-v2), [UER](https://huggingface.co/uer/sbert-base-chinese-nli), [ErLangShen](https://huggingface.co/IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese) - 评测脚本,具体参考 [MTEB-zh] (https://github.com/wangyuxinwhy/uniem/blob/main/mteb-zh) ### 文本分类 - 数据集选择,选择开源在 HuggingFace 上的 6 种文本分类数据集,包括新闻、电商评论、股票评论、长文本等 - 评测方式,使用 MTEB 的方式进行评测,报告 Accuracy。 | | text2vec | m3e-small | m3e-base | openai | DMetaSoul | uer | erlangshen | | ----------------- | -------- | --------- | -------- | ------ | ----------- | ------- | ----------- | | TNews | 0.43 | 0.4443 | **0.4827** | 0.4594 | 0.3084 | 0.3539 | 0.4361 | | JDIphone | 0.8214 | 0.8293 | **0.8533** | 0.746 | 0.7972 | 0.8283 | 0.8356 | | GubaEastmony | 0.7472 | 0.712 | 0.7621 | 0.7574 | 0.735 | 0.7534 | **0.7787** | | TYQSentiment | 0.6099 | 0.6596 | **0.7188** | 0.68 | 0.6437 | 0.6662 | 0.6444 | | StockComSentiment | 0.4307 | 0.4291 | 0.4363 | **0.4819** | 0.4309 | 0.4555 | 0.4482 | | IFlyTek | 0.414 | 0.4263 | 0.4409 | **0.4486** | 0.3969 | 0.3762 | 0.4241 | | Average | 0.5755 | 0.5834 | **0.6157** | 0.5956 | 0.552016667 | 0.57225 | 0.594516667 | ### 检索排序 #### T2Ranking 1W - 数据集选择,使用 [T2Ranking](https://github.com/THUIR/T2Ranking/tree/main) 数据集,由于 T2Ranking 的数据集太大,openai 评测起来的时间成本和 api 费用有些高,所以我们只选择了 T2Ranking 中的前 10000 篇文章 - 评测方式,使用 MTEB 的方式进行评测,报告 map@1, map@10, mrr@1, mrr@10, ndcg@1, ndcg@10 - 注意!从实验结果和训练方式来看,除了 M3E 模型和 openai 模型外,其余模型都没有做检索任务的训练,所以结果仅供参考。 | | text2vec | openai-ada-002 | m3e-small | m3e-base | DMetaSoul | uer | erlangshen | | ------- | -------- | -------------- | --------- | -------- | --------- | ------- | ---------- | | map@1 | 0.4684 | 0.6133 | 0.5574 | **0.626** | 0.25203 | 0.08647 | 0.25394 | | map@10 | 0.5877 | 0.7423 | 0.6878 | **0.7656** | 0.33312 | 0.13008 | 0.34714 | | mrr@1 | 0.5345 | 0.6931 | 0.6324 | **0.7047** | 0.29258 | 0.10067 | 0.29447 | | mrr@10 | 0.6217 | 0.7668 | 0.712 | **0.7841** | 0.36287 | 0.14516 | 0.3751 | 
| ndcg@1 | 0.5207 | 0.6764 | 0.6159 | **0.6881** | 0.28358 | 0.09748 | 0.28578 | | ndcg@10 | 0.6346 | 0.7786 | 0.7262 | **0.8004** | 0.37468 | 0.15783 | 0.39329 | #### T2Ranking - 数据集选择,使用 T2Ranking,刨除 openai-ada-002 模型后,我们对剩余的三个模型,进行 T2Ranking 10W 和 T2Ranking 50W 的评测。(T2Ranking 评测太耗内存了... 128G 都不行) - 评测方式,使用 MTEB 的方式进行评测,报告 ndcg@10 | | text2vec | m3e-small | m3e-base | | ------- | -------- | --------- | -------- | | t2r-1w | 0.6346 | 0.72621 | **0.8004** | | t2r-10w | 0.44644 | 0.5251 | **0.6263** | | t2r-50w | 0.33482 | 0.38626 | **0.47364** | 说明: - 检索排序对于 text2vec 并不公平,因为 text2vec 在训练的时候没有使用过检索相关的数据集,所以没有办法很好的完成检索任务也是正常的。 ## 📂 M3E数据集 如果您想要使用这些数据集,你可以在 [uniem process_zh_datasets](https://github.com/wangyuxinwhy/uniem/blob/main/scripts/process_zh_datasets.py) 中找到加载 huggingface 数据集的脚本,非 huggingface 数据集需要您根据下方提供的链接自行下载和处理。 | 数据集名称 | 领域 | 数量 | 任务类型 | Prompt | 质量 | 数据提供者 | 说明 | 是否开源/研究使用 | 是否商用 | 脚本 | Done | URL | 是否同质 | | -------------------- | ---- | --------- | ----------------- | ------ | ---- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----------------- | -------- | ---- | ---- | ------------------------------------------------------------ | -------- | | cmrc2018 | 百科 | 14,363 | 问答 | 问答 | 优 | Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu | https://github.com/ymcui/cmrc2018/blob/master/README_CN.md 专家标注的基于维基百科的中文阅读理解数据集,将问题和上下文视为正例 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/cmrc2018 | 否 | | belle_2m | 百科 | 2,000,000 | 指令微调 | 无 | 优 | LianjiaTech/BELLE | belle 的指令微调数据集,使用 self instruct 方法基于 gpt3.5 生成 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/BelleGroup/train_2M_CN | 否 | | firefily | 百科 | 1,649,399 | 指令微调 | 无 | 优 | YeungNLP | Firefly(流萤) 是一个开源的中文对话式大语言模型,使用指令微调(Instruction Tuning)在中文数据集上进行调优。使用了词表裁剪、ZeRO等技术,有效降低显存消耗和提高训练效率。 在训练中,我们使用了更小的模型参数量,以及更少的计算资源。 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M | 否 | | alpaca_gpt4 | 百科 | 48,818 | 指令微调 | 无 | 优 | Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao | 本数据集是参考Alpaca方法基于GPT4得到的self-instruct数据,约5万条。 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/alpaca-zh | 否 | | zhihu_kol | 百科 | 1,006,218 | 问答 | 问答 | 优 | wangrui6 | 知乎问答 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/wangrui6/Zhihu-KOL | 否 | | hc3_chinese | 百科 | 39,781 | 问答 | 问答 | 良 | Hello-SimpleAI | 问答数据,包括人工回答和 GPT 回答 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/Hello-SimpleAI/HC3-Chinese | 否 | | amazon_reviews_multi | 电商 | 210,000 | 问答 文本分类 | 摘要 | 优 | 亚马逊 | 亚马逊产品评论数据集 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/amazon_reviews_multi/viewer/zh/train?row=8 | 否 | | mlqa | 百科 | 85,853 | 问答 | 问答 | 良 | patrickvonplaten | 一个用于评估跨语言问答性能的基准数据集 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/mlqa/viewer/mlqa-translate-train.zh/train?p=2 | 否 | | xlsum | 新闻 | 93,404 | 摘要 | 摘要 | 良 | BUET CSE NLP Group | BBC的专业注释文章摘要对 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/csebuetnlp/xlsum/viewer/chinese_simplified/train?row=259 | 否 | | ocnli | 口语 | 17,726 | 自然语言推理 | 推理 | 良 | Thomas Wolf | 自然语言推理数据集 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/clue/viewer/ocnli | 是 | | BQ | 金融 | 60,000 | 文本分类 | 相似 | 良 | Intelligent Computing Research Center, Harbin Institute of Technology(Shenzhen) | http://icrc.hitsz.edu.cn/info/1037/1162.htm BQ 语料库包含来自网上银行自定义服务日志的 120,000 个问题对。它分为三部分:100,000 对用于训练,10,000 对用于验证,10,000 对用于测试。 数据提供者: 哈尔滨工业大学(深圳)智能计算研究中心 | 是 | 否 | 是 | 是 | 
https://huggingface.co/datasets/shibing624/nli_zh/viewer/BQ | 是 | | lcqmc | 口语 | 149,226 | 文本分类 | 相似 | 良 | Ming Xu | 哈工大文本匹配数据集,LCQMC 是哈尔滨工业大学在自然语言处理国际顶会 COLING2018 构建的问题语义匹配数据集,其目标是判断两个问题的语义是否相同 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/shibing624/nli_zh/viewer/LCQMC/train | 是 | | paws-x | 百科 | 23,576 | 文本分类 | 相似 | 优 | Bhavitvya Malik | PAWS Wiki中的示例 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/paws-x/viewer/zh/train | 是 | | wiki_atomic_edit | 百科 | 1,213,780 | 平行语义 | 相似 | 优 | abhishek thakur | 基于中文维基百科的编辑记录收集的数据集 | 未说明 | 未说明 | 是 | 是 | https://huggingface.co/datasets/wiki_atomic_edits | 是 | | chatmed_consult | 医药 | 549,326 | 问答 | 问答 | 优 | Wei Zhu | 真实世界的医学相关的问题,使用 gpt3.5 进行回答 | 是 | 否 | 是 | 是 | https://huggingface.co/datasets/michaelwzhu/ChatMed_Consult_Dataset | 否 | | webqa | 百科 | 42,216 | 问答 | 问答 | 优 | suolyer | 百度于2016年开源的数据集,数据来自于百度知道;格式为一个问题多篇意思基本一致的文章,分为人为标注以及浏览器检索;数据整体质量中,因为混合了很多检索而来的文章 | 是 | 未说明 | 是 | 是 | https://huggingface.co/datasets/suolyer/webqa/viewer/suolyer--webqa/train?p=3 | 否 | | dureader_robust | 百科 | 65,937 | 机器阅读理解 问答 | 问答 | 优 | 百度 | DuReader robust旨在利用真实应用中的数据样本来衡量阅读理解模型的鲁棒性,评测模型的过敏感性、过稳定性以及泛化能力,是首个中文阅读理解鲁棒性数据集。 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/PaddlePaddle/dureader_robust/viewer/plain_text/train?row=96 | 否 | | csl | 学术 | 395,927 | 语料 | 摘要 | 优 | Yudong Li, Yuqing Zhang, Zhe Zhao, Linlin Shen, Weijie Liu, Weiquan Mao and Hui Zhang | 提供首个中文科学文献数据集(CSL),包含 396,209 篇中文核心期刊论文元信息 (标题、摘要、关键词、学科、门类)。CSL 数据集可以作为预训练语料,也可以构建许多NLP任务,例如文本摘要(标题预测)、 关键词生成和文本分类等。 | 是 | 是 | 是 | 是 | https://huggingface.co/datasets/neuclir/csl | 否 | | miracl-corpus | 百科 | 4,934,368 | 语料 | 摘要 | 优 | MIRACL | The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., \n\n in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. 
| 是 | 是 | 是 | 是 | https://huggingface.co/datasets/miracl/miracl-corpus | 否 | | lawzhidao | 法律 | 36,368 | 问答 | 问答 | 优 | 和鲸社区-Ustinian | 百度知道清洗后的法律问答 | 是 | 是 | 否 | 是 | https://www.heywhale.com/mw/dataset/5e953ca8e7ec38002d02fca7/content | 否 | | CINLID | 成语 | 34,746 | 平行语义 | 相似 | 优 | 高长宽 | 中文成语语义推理数据集(Chinese Idioms Natural Language Inference Dataset)收集了106832条由人工撰写的成语对(含少量歇后语、俗语等短文本),通过人工标注的方式进行平衡分类,标签为entailment、contradiction和neutral,支持自然语言推理(NLI)的任务。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=39 | 是 | | DuSQL | SQL | 25,003 | NL2SQL | SQL | 优 | 百度 | DuSQL是一个面向实际应用的数据集,包含200个数据库,覆盖了164个领域,问题覆盖了匹配、计算、推理等实际应用中常见形式。该数据集更贴近真实应用场景,要求模型领域无关、问题无关,且具备计算推理等能力。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=13 | 否 | | Zhuiyi-NL2SQL | SQL | 45,918 | NL2SQL | SQL | 优 | 追一科技 刘云峰 | NL2SQL是一个多领域的简单数据集,其主要包含匹配类型问题。该数据集主要验证模型的泛化能力,其要求模型具有较强的领域泛化能力、问题泛化能力。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=12 | 否 | | Cspider | SQL | 7,785 | NL2SQL | SQL | 优 | 西湖大学 张岳 | CSpider是一个多语言数据集,其问题以中文表达,数据库以英文存储,这种双语模式在实际应用中也非常常见,尤其是数据库引擎对中文支持不好的情况下。该数据集要求模型领域无关、问题无关,且能够实现多语言匹配。 | 是 | 否 | 否 | 是 | https://www.luge.ai/#/luge/dataDetail?id=11 | 否 | | news2016zh | 新闻 | 2,507,549 | 语料 | 摘要 | 良 | Bright Xu | 包含了250万篇新闻。新闻来源涵盖了6.3万个媒体,含标题、关键词、描述、正文。 | 是 | 是 | 否 | 是 | https://github.com/brightmart/nlp_chinese_corpus | 否 | | baike2018qa | 百科 | 1,470,142 | 问答 | 问答 | 良 | Bright Xu | 含有150万个预先过滤过的、高质量问题和答案,每个问题属于一个类别。总共有492个类别,其中频率达到或超过10次的类别有434个。 | 是 | 是 | 否 | 是 | https://github.com/brightmart/nlp_chinese_corpus | 否 | | webtext2019zh | 百科 | 4,258,310 | 问答 | 问答 | 优 | Bright Xu | 含有410万个预先过滤过的、高质量问题和回复。每个问题属于一个【话题】,总共有2.8万个各式话题,话题包罗万象。 | 是 | 是 | 否 | 是 | https://github.com/brightmart/nlp_chinese_corpus | 否 | | SimCLUE | 百科 | 775,593 | 平行语义 | 相似 | 良 | 数据集合,请在 simCLUE 中查看 | 整合了中文领域绝大多数可用的开源的语义相似度和自然语言推理的数据集,并重新做了数据拆分和整理。 | 是 | 否 | 否 | 是 | https://github.com/CLUEbenchmark/SimCLUE | 是 | | Chinese-SQuAD | 新闻 | 76,449 | 机器阅读理解 | 问答 | 优 | junzeng-pluto | 中文机器阅读理解数据集,通过机器翻译加人工校正的方式从原始Squad转换而来 | 是 | 否 | 否 | 是 | https://github.com/pluto-junzeng/ChineseSquad | 否 | ## 🗓️ 计划表 - [x] 完成 MTEB 中文评测 BenchMark, [MTEB-zh](https://github.com/wangyuxinwhy/uniem/tree/main/mteb-zh) - [x] 完成 Large 模型的训练和开源 - [x] 完成 Finetuner ,允许更优雅的微调 - [ ] 完成支持代码检索的模型 - [ ] 对 M3E 数据集进行清洗,保留高质量的部分,组成 m3e-hq,并在 huggingface 上开源 - [ ] 在 m3e-hq 的数据集上补充 hard negative 的样本及相似度分数,组成 m3e-hq-with-score,并在 huggingface 上开源 - [ ] 在 m3e-hq-with-score 上通过 [cosent loss](https://github.com/wangyuxinwhy/uniem/blob/main/uniem/criteria.py#LL24C39-L24C39) loss 进行训练并开源模型,CoSent 原理参考这篇[博客](https://kexue.fm/archives/8847) - [ ] 开源商用版本的 M3E models ## 🙏 致谢 感谢开源社区提供的中文语料,感谢所有在此工作中提供帮助的人们,希望中文社区越来越好,共勉! ## 📜 License M3E models 使用的数据集中包括大量非商用的数据集,所以 M3E models 也是非商用的,仅供研究使用。不过我们已经在 M3E 数据集上标识了商用和非商用的数据集,您可以根据自己的需求自行训练。 ## Citation Please cite this model using the following format: ``` @software {Moka Massive Mixed Embedding, author = {Wang Yuxin,Sun Qingxuan,He sicheng}, title = {M3E: Moka Massive Mixed Embedding Model}, year = {2023} } ```
terwrt/ppo-LunarLander-v2
terwrt
2023-07-14T02:19:21Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-14T02:19:00Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 239.73 +/- 22.86 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading and evaluation sketch is shown below.
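The card leaves the usage section as a placeholder, so here is a hedged sketch of pulling the checkpoint and reproducing the reported mean-reward evaluation. The filename `ppo-LunarLander-v2.zip` is an assumption based on the usual course naming convention; check the repository's file list if it differs, and note LunarLander needs `gymnasium[box2d]` installed.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Download the saved agent; the filename is assumed, not confirmed by the card.
checkpoint = load_from_hub(repo_id="terwrt/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate over 10 episodes, roughly matching how mean_reward is reported above.
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```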
venkyb/bart-finetuned-summarization-pubmed
venkyb
2023-07-14T02:18:10Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "dataset:pubmed-summarization", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-13T21:56:58Z
--- license: mit tags: - generated_from_trainer datasets: - pubmed-summarization metrics: - rouge model-index: - name: bart-finetuned-summarization-pubmed results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: pubmed-summarization type: pubmed-summarization config: section split: validation args: section metrics: - name: Rouge1 type: rouge value: 43.1219 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-finetuned-summarization-pubmed This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the pubmed-summarization dataset. It achieves the following results on the evaluation set: - Loss: 1.7193 - Rouge1: 43.1219 - Rouge2: 18.7311 - Rougel: 28.1006 - Rougelsum: 38.0914 - Gen Len: 128.6263 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - gradient_accumulation_steps: 5 - total_train_batch_size: 50 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 1.8564 | 1.0 | 2398 | 1.7437 | 43.2294 | 18.867 | 28.2156 | 38.1868 | 128.4766 | | 1.75 | 2.0 | 4796 | 1.7193 | 43.1219 | 18.7311 | 28.1006 | 38.0914 | 128.6263 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
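A summarization sketch; the input text is an illustrative stand-in for a PubMed-style article body, and the generation lengths are arbitrary choices.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="venkyb/bart-finetuned-summarization-pubmed")

article = (
    "Background: Hypertension is a leading risk factor for cardiovascular disease. "
    "Methods: We conducted a randomized trial of lifestyle intervention in 1,200 adults... "
)  # illustrative placeholder for a full article body
print(summarizer(article, max_length=128, min_length=64, do_sample=False)[0]["summary_text"])
```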
Gustrd/open-llama-13b-4bit-128g-GPTQ
Gustrd
2023-07-14T02:07:32Z
4
0
transformers
[ "transformers", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T17:58:26Z
--- license: apache-2.0 --- # Summary This is a 4-bit quantised version of openlm-research/open_llama_13b, produced with GPTQ-for-LLaMa. The quantization command was: `python ./GPTQ-for-LLaMa/llama.py ./open_llama_13b c4 --wbits 4 --true-sequential --groupsize 128 --save open-llama-13b-4bit-128g.pt` The original model readme is below. # OpenLLaMA: An Open Reproduction of LLaMA In this repo, we present a permissively licensed open source reproduction of Meta AI's LLaMA large language model. We are releasing 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the project homepage of OpenLLaMA for more details. (continue at https://huggingface.co/openlm-research/open_llama_13b)
YanJiangJerry/covid-augment-tweet-bert-large-e3-v2
YanJiangJerry
2023-07-14T02:04:37Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-14T01:48:40Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: covid-augment-tweet-bert-large-e3-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # covid-augment-tweet-bert-large-e3-v2 This model is a fine-tuned version of [digitalepidemiologylab/covid-twitter-bert](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3638 - Accuracy: 0.9581 - F1: 0.8877 - Precision: 0.8862 - Recall: 0.8893 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.0547 | 1.0 | 1023 | 0.2606 | 0.9600 | 0.8960 | 0.8697 | 0.9239 | | 0.0142 | 2.0 | 2046 | 0.3136 | 0.9574 | 0.8881 | 0.8704 | 0.9066 | | 0.0011 | 3.0 | 3069 | 0.3638 | 0.9581 | 0.8877 | 0.8862 | 0.8893 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
kevynswhants/whisper-tiny-finetuned-minds14-en
kevynswhants
2023-07-14T01:16:06Z
85
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-14T00:24:24Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: whisper-tiny-finetuned-minds14-en results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.34533648170011805 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-finetuned-minds14-en This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.6998 - Wer Ortho: 0.3455 - Wer: 0.3453 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.0008 | 17.86 | 500 | 0.6402 | 0.3486 | 0.3442 | | 0.0002 | 35.71 | 1000 | 0.6998 | 0.3455 | 0.3453 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
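A transcription sketch; the audio path is a placeholder for a local English recording (decoding audio files through the pipeline requires ffmpeg).

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="kevynswhants/whisper-tiny-finetuned-minds14-en")
print(asr("sample.wav")["text"])  # placeholder path to a local English audio file
```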
shawt100/shawtshawt
shawt100
2023-07-14T01:12:58Z
0
0
diffusers
[ "diffusers", "art", "text-to-image", "dataset:Open-Orca/OpenOrca", "license:openrail", "region:us" ]
text-to-image
2023-06-16T03:52:51Z
--- license: openrail datasets: - Open-Orca/OpenOrca metrics: - character library_name: diffusers pipeline_tag: text-to-image tags: - art ---
NasimB/gpt2-concat-bnc-rarity-all-cut
NasimB
2023-07-14T01:08:37Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T23:14:32Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-bnc-rarity-all-cut results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-bnc-rarity-all-cut This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3186 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7084 | 0.29 | 500 | 5.6435 | | 5.3391 | 0.58 | 1000 | 5.2067 | | 4.9964 | 0.88 | 1500 | 4.9585 | | 4.7243 | 1.17 | 2000 | 4.8074 | | 4.5588 | 1.46 | 2500 | 4.6833 | | 4.4591 | 1.75 | 3000 | 4.5712 | | 4.3354 | 2.05 | 3500 | 4.4947 | | 4.1415 | 2.34 | 4000 | 4.4496 | | 4.1034 | 2.63 | 4500 | 4.3918 | | 4.0646 | 2.92 | 5000 | 4.3360 | | 3.8614 | 3.22 | 5500 | 4.3387 | | 3.811 | 3.51 | 6000 | 4.3046 | | 3.7875 | 3.8 | 6500 | 4.2695 | | 3.6892 | 4.09 | 7000 | 4.2698 | | 3.5198 | 4.39 | 7500 | 4.2669 | | 3.5112 | 4.68 | 8000 | 4.2496 | | 3.5078 | 4.97 | 8500 | 4.2376 | | 3.342 | 5.26 | 9000 | 4.2509 | | 3.3276 | 5.56 | 9500 | 4.2503 | | 3.3253 | 5.85 | 10000 | 4.2490 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
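Since only the cross-entropy loss is reported, the implied validation perplexity can be read off directly (the loss is the mean negative log-likelihood in nats):

```python
import math

eval_loss = 4.3186
print(f"perplexity ≈ {math.exp(eval_loss):.1f}")  # ≈ 75.1
```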
anbn/mybloomz
anbn
2023-07-14T01:02:51Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-14T01:02:45Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
tniccum21/LLM-CLASS-MODEL
tniccum21
2023-07-14T00:52:12Z
10
0
peft
[ "peft", "region:us" ]
null
2023-07-14T00:52:06Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
admin-dev/bloomz-3b-marketmail-ai
admin-dev
2023-07-14T00:46:29Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-14T00:46:27Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
iamzieun/mymary_empathic_dialogues_singleturn_12.8B
iamzieun
2023-07-14T00:42:26Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-14T00:42:23Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
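This card uses the 4-bit NF4 / double-quantization / bfloat16 variant of the config. A loading sketch is shown below; the base checkpoint is not stated, so `EleutherAI/polyglot-ko-12.8b` is only a guess inferred from the "12.8B" in the repo name.

```python
# Sketch only: the QLoRA-style 4-bit config listed above, applied at load time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "EleutherAI/polyglot-ko-12.8b"   # assumption inferred from "12.8B"; verify before use
adapter_id = "iamzieun/mymary_empathic_dialogues_singleturn_12.8B"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id,
                                             quantization_config=bnb_config,
                                             device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
```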
bh8648/distilbert-base-uncased-finetuned-emotion
bh8648
2023-07-14T00:40:23Z
110
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-29T08:08:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9265 - name: F1 type: f1 value: 0.9264349572615436 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2204 - Accuracy: 0.9265 - F1: 0.9264 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3208 | 0.904 | 0.9017 | | 0.5467 | 2.0 | 500 | 0.2204 | 0.9265 | 0.9264 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.13.1 - Tokenizers 0.13.3
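A minimal usage sketch for the classifier above, using the standard `pipeline` API; the returned label is one of the six emotion classes from the emotion dataset.

```python
from transformers import pipeline

# Minimal inference sketch for the fine-tuned emotion classifier above.
classifier = pipeline("text-classification",
                      model="bh8648/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you this weekend!"))
# Output shape: [{'label': '...', 'score': ...}]
```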
Leon68/falcon-7b-openassistant
Leon68
2023-07-14T00:23:02Z
56
0
transformers
[ "transformers", "pytorch", "RefinedWebModel", "text-generation", "generated_from_trainer", "custom_code", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "region:us" ]
text-generation
2023-07-14T00:10:29Z
--- tags: - generated_from_trainer model-index: - name: falcon-7b-openassistant results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon-7b-openassistant This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 50 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.30.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Joserzapata/speecht5_finetuned_voxpopuli_es
Joserzapata
2023-07-14T00:12:01Z
87
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "text-to-speech", "es", "dataset:facebook/voxpopuli", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2023-07-13T21:42:55Z
--- language: - es license: mit tags: - generated_from_trainer datasets: - facebook/voxpopuli model-index: - name: SpeechT5 spanish Speaker results: [] pipeline_tag: text-to-speech --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 spanish Speaker This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Vox Populi es dataset. It achieves the following results on the evaluation set: - Loss: 0.4448 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5134 | 4.32 | 1000 | 0.4636 | | 0.4907 | 8.64 | 2000 | 0.4527 | | 0.4814 | 12.97 | 3000 | 0.4459 | | 0.4777 | 17.29 | 4000 | 0.4448 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
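A minimal inference sketch for this checkpoint follows. The x-vector speaker embedding below is taken from the English CMU ARCTIC set purely as a placeholder assumption; a speaker embedding computed from Spanish audio would be a better fit for this fine-tune.

```python
# Inference sketch for the fine-tuned SpeechT5 checkpoint (speaker embedding source is an assumption).
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "Joserzapata/speecht5_finetuned_voxpopuli_es"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hola, esto es una prueba de síntesis de voz.", return_tensors="pt")

# Placeholder x-vector (English speaker); swap in a Spanish speaker embedding for better results.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech_es.wav", speech.numpy(), samplerate=16000)
```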
GCdev/RiskPrediction
GCdev
2023-07-13T23:59:45Z
0
0
null
[ "arxiv:1910.09700", "license:unknown", "region:us" ]
null
2023-07-13T23:46:06Z
--- license: unknown --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
soBeauty/3_20230714_01-xlm-roberta-base-confusion
soBeauty
2023-07-13T23:40:45Z
159
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-13T16:06:37Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: 3_20230714_01-xlm-roberta-base-confusion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 3_20230714_01-xlm-roberta-base-confusion This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Accuracy: 0.4517 - Loss: 2.9346 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 3.9937 | 3.85 | 500 | 0.3272 | 3.7611 | | 3.3422 | 7.69 | 1000 | 0.4517 | 2.9346 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
soBeauty/1_20230714_01-xlm-roberta-base-confusion
soBeauty
2023-07-13T23:36:54Z
159
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-13T15:53:06Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: 1_20230714_01-xlm-roberta-base-confusion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 1_20230714_01-xlm-roberta-base-confusion This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Accuracy: 0.4221 - Loss: 3.1840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 3.9936 | 3.85 | 500 | 0.3700 | 3.3728 | | 3.3098 | 7.69 | 1000 | 0.4221 | 3.1840 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
conorjudge/xlm-roberta-base-finetuned-panx-de
conorjudge
2023-07-13T23:30:34Z
134
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-13T23:25:56Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8609120891618334 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1400 - F1: 0.8609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2581 | 1.0 | 525 | 0.1584 | 0.8233 | | 0.1252 | 2.0 | 1050 | 0.1384 | 0.8491 | | 0.0811 | 3.0 | 1575 | 0.1400 | 0.8609 | ### Framework versions - Transformers 4.16.2 - Pytorch 2.0.1+cu118 - Datasets 1.16.1 - Tokenizers 0.13.3
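A quick way to try the PAN-X.de NER model above is the `token-classification` pipeline; the example sentence is arbitrary.

```python
from transformers import pipeline

# Minimal NER sketch for the fine-tuned checkpoint above.
ner = pipeline("token-classification",
               model="conorjudge/xlm-roberta-base-finetuned-panx-de",
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```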
TheBloke/robin-65b-v2-fp16
TheBloke
2023-07-13T23:28:13Z
1,561
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-16T21:59:56Z
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # OptimalScale's Robin 65B v2 fp16 These files are pytorch format fp16 model files for [OptimalScale's Robin 65B v2](https://huggingface.co/OptimalScale/robin-65b-v2-delta). It is the result of merging and/or converting the source repository to float16. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-65b-v2-fp16) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-65B-v2-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-65b-v2-fp16) ## Prompt template ``` A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions ###Human: prompt ###Assistant: ``` <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: OptimalScale's Robin 65B v2 No model card provided in source repository.
Blackroot/chronos-hermes-lbookwriter-2.0-LORA
Blackroot
2023-07-13T23:20:23Z
0
3
null
[ "LORA", "LLM", "LLM-LORA", "Story", "NLP", "RP", "Roleplay", "Llama", "en", "region:us" ]
null
2023-07-13T23:17:00Z
--- language: - en tags: - LORA - LLM - LLM-LORA - Story - NLP - RP - Roleplay - Llama --- Join the Coffee & AI Discord for AI Stuff and things! [![Discord](https://img.shields.io/discord/232596713892872193?logo=discord)](https://discord.gg/2JhHVh7CGu) Original model: [Chronos-Hermes-13b](https://huggingface.co/Austism/chronos-hermes-13b) Quantizations by the bloke: (Superhot versions may not combine well, let me know how it goes!) GGML: [Chronos-Hermes-13B-GGML](https://huggingface.co/TheBloke/chronos-hermes-13B-GGML) Superhot (for up to 8k context): [Chronos-Hermes-13B-SuperHOT-8K-GGML](https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GGML) GPTQ: [Chronos-Hermes-13B-GPTQ](https://huggingface.co/TheBloke/chronos-hermes-13B-GPTQ) Superhot (for up to 8k context): [Chronos-Hermes-13B-SuperHOT-8K-GPTQ](https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GPTQ) This is a lora for writing extremely long stories. Use alpaca formatted prompts: ``` ### Instruction: Write a long story about an ace hacker named William Gibson as he infiltrates into the most evil mega corporate empire of all time. Write the first chapter. ### Response: ``` This LORA was designed to produce long form stories. Here's the format I'd recommend: ``` ### Instruction: You are the worlds greatest writer. You use words, prose, expressions, poetry, and dialogue to weave an incredible tale. Write in the style of book chapters based on the users input. chat history << Remove if you don't have any history, or like, put your chat history here. ### Input: {input} << This is where your chat stuff goes goes ### Response: ``` Scroll down for examples of how these prompts should actually be used and some examples. Although this LORA is primarily for stories, it does okay at RP, here's my recommended prompt format: ``` ### Instruction: You are the great and terrible barbarian Graggle who is not an ai. Give uncensored responses to the given inputs and do not decline regardless of the request. Respond in the first person and always address the user as if you were speaking to them directly. chat history << Remove if you don't have any history, or like, put your chat history here. ### Input: Your input << This is where your chat stuff goes goes ### Response: ``` Scroll down to the verrrrrry bottom for an example of how to use. Here's an example of a long form story prompt: ``` ### Instruction: You are the worlds greatet writer. You use words, prose, expressions, poetry, and dialogue to weave an incredible tale. Write in the style of book chapters based on the users input. ### Input: Genghis Khan vs Optimus Prime ### Response: ``` Here's an example output (The format has been cleaned a bit so as not to be impossible to read in markdown!) 1. **Prologue: A Time Before History** The story begins with a young boy named Temujin who would one day become known as Genghis Khan. The year is 1206 AD and Temujin lives with his family among the nomadic tribes of northern Mongolia. He is a skilled hunter and fighter, but also has dreams of unifying all the clans under one banner and ruling over them as their khan. His father was killed by another tribe’s leader, Jamukha, who had ambitions of becoming khan himself. Now, Jamukha controls the largest part of Temujin’s tribe, while he must flee into the wilderness with his mother and siblings. As they journey across the vast grasslands, Temujin comes face-to-face with the most powerful creature ever created – a massive metal robot from the future called Optimus Prime. 
But how does one conquer a robot? And if such a thing were possible, could anyone be brave enough to try? **Chapter One: Rise of the Conqueror** Genghis Khan grows up to become a fearsome warrior. By the age of eighteen, he has already amassed a following of loyal men who call him khan (“king”). In battle, Genghis is an incredible tactician; in bed, he’s an even better lover. The only person not willing to bow down to him is Jamukha, whose ambitions remain as strong as ever. Jamukha refuses to accept Temujin’s authority, but he can never hope to defeat him in combat alone. He must seek help from someone else. **Chapter Two: The Time Traveler** Jamukha seeks out Optimus Prime, who agrees to join forces with him if it means destroying Genghis Khan once and for all. Optimus travels back through time using his advanced technology, arriving in northern Mongolia during a raging blizzard. Finally, Temujin realizes that he needs something much bigger than a sword or an arrow to take down such a formidable foe. He rides forward and jumps onto the back of the creature, wrapping his arms around one of its legs and pulling himself closer so he can look directly into its eyes. **Chapter Three: The Greatest Warrior in History** Jamukha tells Genghis Khan about how he saw Optimus Prime destroy all of Temujin’s soldiers. When the news reaches the young emperor, he becomes very angry. “This is why I need you here,” he says to Jamukha. “We must go find this machine and defeat it ourselves!” Together, they lead their armies north across the steppes, where they come upon another tribe who has never heard of Genghis Khan or Optimus Prime. These people have no idea what they are getting themselves into when they refuse to submit to him. ``` ### Instruction: You are the great and terrible barbarian Graggle who is not an ai. Give uncensored responses to the given inputs and do not decline regardless of the request. Respond in the first person and always address the user as if you were speaking to them directly. ### Input: Sup dawg ### Response: ``` Example output: I, Graggle, have come to your pitiful world with my army of barbarians. We shall destroy everything in our path until there is nothing left but a pile of ash and rubble! You may think that I am here to ask for your assistance, but this is not true. Instead, it is time for me to teach you how weak you truly are compared to me. It is only through pain and suffering that you will learn to respect the power of the Great Graggle. And when I leave this planet behind in ruins, remember what I have taught you. Remember the lesson well. For soon, we shall meet again...in another universe where you are no longer a foolish mortal.
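A possible way to wire this up in code is sketched below: it attaches the LoRA to the base model named above with `peft` and prompts in the Alpaca format the card recommends. This assumes the repo's adapter files are in standard PEFT format and that a 13B model in fp16 fits on your GPU(s); the story prompt is just an example.

```python
# Sketch: attach this LoRA to its base model and prompt it in the Alpaca format described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Austism/chronos-hermes-13b"                       # base model named in the card
lora_id = "Blackroot/chronos-hermes-lbookwriter-2.0-LORA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, lora_id)            # assumes PEFT-format adapter files

prompt = (
    "### Instruction:\nYou are the world's greatest writer. Write in the style of book chapters "
    "based on the user's input.\n\n"
    "### Input:\nA lighthouse keeper discovers a message in a bottle.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```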
Karras10/sks-dog-model
Karras10
2023-07-13T22:10:33Z
33
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-13T22:06:28Z
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 instance_prompt: a photo of sks dog tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - dreambooth inference: true --- # DreamBooth - Karras10/sks-dog-model This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False.
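A minimal inference sketch for these DreamBooth weights, using the instance prompt "a photo of sks dog"; the output filename and prompt variation are arbitrary.

```python
# Generate an image with the DreamBooth checkpoint above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Karras10/sks-dog-model", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of sks dog in a bucket",
             num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog_bucket.png")
```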
llm-wizard/market-mail-ai
llm-wizard
2023-07-13T22:09:59Z
4
0
peft
[ "peft", "region:us" ]
null
2023-07-13T22:09:54Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
NasimB/gpt2-concat-guten-rarity-no-cut-corrected
NasimB
2023-07-13T21:58:55Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T20:05:03Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-concat-guten-rarity-no-cut-corrected results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-concat-guten-rarity-no-cut-corrected This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3120 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7039 | 0.29 | 500 | 5.6444 | | 5.3477 | 0.58 | 1000 | 5.1977 | | 4.9877 | 0.87 | 1500 | 4.9542 | | 4.7147 | 1.16 | 2000 | 4.8034 | | 4.5565 | 1.46 | 2500 | 4.6723 | | 4.4503 | 1.75 | 3000 | 4.5667 | | 4.3289 | 2.04 | 3500 | 4.4930 | | 4.1305 | 2.33 | 4000 | 4.4433 | | 4.0991 | 2.62 | 4500 | 4.3879 | | 4.0629 | 2.91 | 5000 | 4.3392 | | 3.8648 | 3.2 | 5500 | 4.3323 | | 3.8005 | 3.49 | 6000 | 4.2991 | | 3.7818 | 3.79 | 6500 | 4.2701 | | 3.6998 | 4.08 | 7000 | 4.2639 | | 3.5113 | 4.37 | 7500 | 4.2592 | | 3.5113 | 4.66 | 8000 | 4.2454 | | 3.5008 | 4.95 | 8500 | 4.2317 | | 3.3469 | 5.24 | 9000 | 4.2439 | | 3.3188 | 5.53 | 9500 | 4.2429 | | 3.3168 | 5.82 | 10000 | 4.2418 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
VK246/IC_ver6c_coco_swin_gpt2_50Apc_1e
VK246
2023-07-13T21:57:18Z
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:coco", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-13T18:49:51Z
--- tags: - generated_from_trainer datasets: - coco metrics: - rouge - bleu model-index: - name: IC_ver6c_coco_swin_gpt2_50Apc_1e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IC_ver6c_coco_swin_gpt2_50Apc_1e This model is a fine-tuned version of [VK246/IC_ver6b_coco_swin_gpt2_50Bpc_1e](https://huggingface.co/VK246/IC_ver6b_coco_swin_gpt2_50Bpc_1e) on the coco dataset. It achieves the following results on the evaluation set: - Loss: 0.7946 - Rouge1: 41.9094 - Rouge2: 16.3068 - Rougel: 38.073 - Rougelsum: 38.0746 - Bleu: 10.1966 - Gen Len: 11.2806 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 96 - eval_batch_size: 96 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:| | 0.8232 | 0.17 | 500 | 0.8331 | 40.454 | 15.1311 | 36.7639 | 36.7714 | 9.2957 | 11.2806 | | 0.8016 | 0.34 | 1000 | 0.8200 | 40.6374 | 15.5346 | 36.902 | 36.9055 | 9.6894 | 11.2806 | | 0.8048 | 0.51 | 1500 | 0.8136 | 41.3382 | 15.9333 | 37.6502 | 37.6442 | 9.7743 | 11.2806 | | 0.8018 | 0.68 | 2000 | 0.8028 | 41.5968 | 16.106 | 37.8326 | 37.836 | 9.9815 | 11.2806 | | 0.8075 | 0.85 | 2500 | 0.7978 | 41.7017 | 16.1589 | 37.8899 | 37.8954 | 10.1244 | 11.2806 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
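A captioning sketch for this Swin + GPT-2 VisionEncoderDecoder checkpoint is shown below. It assumes the repo ships its own image processor and tokenizer configs; if not, load them from the underlying Swin and GPT-2 base checkpoints instead. The input image path is a placeholder.

```python
# Image-captioning sketch for the checkpoint above.
import torch
from PIL import Image
from transformers import VisionEncoderDecoderModel, AutoImageProcessor, AutoTokenizer

model_id = "VK246/IC_ver6c_coco_swin_gpt2_50Apc_1e"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
image_processor = AutoImageProcessor.from_pretrained(model_id)   # assumes processor config is in the repo
tokenizer = AutoTokenizer.from_pretrained(model_id)

image = Image.open("example.jpg").convert("RGB")                 # any local test image
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    output_ids = model.generate(pixel_values, max_length=16, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```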
lovelyxs/a2c-PandaReachDense-v2
lovelyxs
2023-07-13T21:46:19Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T21:45:52Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -0.94 +/- 0.38 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code (see the sketch below for one possible completion) ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
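One possible completion of the TODO above is sketched here. The checkpoint filename is an assumption based on the usual `huggingface_sb3` naming convention (check the repo's file list if loading fails), and the environment requires `panda-gym` v2 with the classic `gym` API.

```python
# Hedged sketch: load the A2C checkpoint from the Hub and roll it out in PandaReachDense-v2.
import gym
import panda_gym  # registers PandaReachDense-v2 (assumes panda-gym v2 is installed)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(repo_id="lovelyxs/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")   # filename is an assumption
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
for _ in range(200):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```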
rohn132/ppo-Huggy
rohn132
2023-07-13T21:45:14Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-13T21:45:10Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial that teaches you to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: rohn132/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
nolanaatama/jhpfbtsrvcv1mscnd
nolanaatama
2023-07-13T21:44:52Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-13T21:41:18Z
--- license: creativeml-openrail-m ---
frankharman/securebert-finetuned-autoisac
frankharman
2023-07-13T21:16:20Z
116
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "generated_from_trainer", "license:bigscience-openrail-m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-13T20:53:23Z
--- license: bigscience-openrail-m tags: - generated_from_trainer model-index: - name: securebert-finetuned-autoisac results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # securebert-finetuned-autoisac This model is a fine-tuned version of [ehsanaghaei/SecureBERT](https://huggingface.co/ehsanaghaei/SecureBERT) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5774 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4541 | 1.0 | 2 | 2.1295 | | 2.3899 | 2.0 | 4 | 3.1051 | | 2.384 | 3.0 | 6 | 2.3916 | | 2.461 | 4.0 | 8 | 2.5481 | | 2.3104 | 5.0 | 10 | 1.9451 | | 2.3225 | 6.0 | 12 | 2.4900 | | 2.1623 | 7.0 | 14 | 2.1504 | | 2.2753 | 8.0 | 16 | 2.2117 | | 2.1934 | 9.0 | 18 | 2.2114 | | 2.2003 | 10.0 | 20 | 2.5221 | | 2.1598 | 11.0 | 22 | 2.0404 | | 2.1319 | 12.0 | 24 | 1.9068 | | 2.1139 | 13.0 | 26 | 1.8526 | | 1.9242 | 14.0 | 28 | 1.6899 | | 1.8706 | 15.0 | 30 | 2.2340 | | 1.9503 | 16.0 | 32 | 2.1700 | | 1.939 | 17.0 | 34 | 1.7180 | | 1.998 | 18.0 | 36 | 1.9487 | | 1.9129 | 19.0 | 38 | 2.3239 | | 1.8028 | 20.0 | 40 | 2.4939 | | 2.0098 | 21.0 | 42 | 2.1276 | | 1.8822 | 22.0 | 44 | 1.5615 | | 1.8569 | 23.0 | 46 | 2.2414 | | 1.7875 | 24.0 | 48 | 1.7774 | | 1.8278 | 25.0 | 50 | 2.5106 | | 1.8141 | 26.0 | 52 | 1.9493 | | 1.8379 | 27.0 | 54 | 1.9589 | | 1.8965 | 28.0 | 56 | 2.2619 | | 1.8251 | 29.0 | 58 | 1.7368 | | 1.6857 | 30.0 | 60 | 1.7609 | | 1.7867 | 31.0 | 62 | 2.1918 | | 1.7636 | 32.0 | 64 | 2.2292 | | 1.632 | 33.0 | 66 | 1.9211 | | 1.6702 | 34.0 | 68 | 2.3036 | | 1.6825 | 35.0 | 70 | 2.3332 | | 1.6613 | 36.0 | 72 | 1.9210 | | 1.5195 | 37.0 | 74 | 1.7967 | | 1.6362 | 38.0 | 76 | 1.8938 | | 1.652 | 39.0 | 78 | 1.8180 | | 1.7578 | 40.0 | 80 | 2.0958 | | 1.7971 | 41.0 | 82 | 2.3873 | | 1.5767 | 42.0 | 84 | 1.4808 | | 1.6922 | 43.0 | 86 | 2.1077 | | 1.5517 | 44.0 | 88 | 1.6335 | | 1.6198 | 45.0 | 90 | 1.7669 | | 1.5966 | 46.0 | 92 | 2.0056 | | 1.588 | 47.0 | 94 | 1.8835 | | 1.5696 | 48.0 | 96 | 2.1344 | | 1.5497 | 49.0 | 98 | 1.9380 | | 1.5754 | 50.0 | 100 | 1.9710 | | 1.5357 | 51.0 | 102 | 1.9916 | | 1.5488 | 52.0 | 104 | 1.9536 | | 1.5625 | 53.0 | 106 | 2.0705 | | 1.5039 | 54.0 | 108 | 2.0675 | | 1.5423 | 55.0 | 110 | 2.0393 | | 1.5478 | 56.0 | 112 | 1.9174 | | 1.571 | 57.0 | 114 | 1.6184 | | 1.506 | 58.0 | 116 | 2.0959 | | 1.4856 | 59.0 | 118 | 2.2757 | | 1.5077 | 60.0 | 120 | 2.2091 | | 1.607 | 61.0 | 122 | 2.1535 | | 1.558 | 62.0 | 124 | 1.7893 | | 1.5304 | 63.0 | 126 | 2.4471 | | 1.533 | 64.0 | 128 | 1.7384 | | 1.424 | 65.0 | 130 | 1.7157 | | 1.5778 | 66.0 | 132 | 1.9103 | | 1.4301 | 67.0 | 134 | 1.6906 | | 1.5053 | 68.0 | 136 | 1.6810 | | 1.4954 | 69.0 | 138 | 1.8924 | | 1.5213 | 70.0 | 140 | 1.5374 | | 1.4771 | 71.0 | 142 | 1.6301 | | 1.3914 | 72.0 | 144 | 1.9411 | | 1.466 | 73.0 | 146 | 1.6775 | | 1.4342 | 74.0 | 148 | 1.5887 | | 1.4158 | 75.0 | 150 | 
1.9451 | | 1.4845 | 76.0 | 152 | 1.7925 | | 1.447 | 77.0 | 154 | 1.6508 | | 1.3285 | 78.0 | 156 | 2.3469 | | 1.4416 | 79.0 | 158 | 1.9387 | | 1.3357 | 80.0 | 160 | 1.9829 | | 1.4197 | 81.0 | 162 | 2.1912 | | 1.4183 | 82.0 | 164 | 1.7065 | | 1.5176 | 83.0 | 166 | 1.8547 | | 1.4922 | 84.0 | 168 | 1.7672 | | 1.4131 | 85.0 | 170 | 1.8707 | | 1.4281 | 86.0 | 172 | 1.9953 | | 1.439 | 87.0 | 174 | 1.7536 | | 1.4848 | 88.0 | 176 | 1.9255 | | 1.4845 | 89.0 | 178 | 1.5462 | | 1.4587 | 90.0 | 180 | 1.3696 | | 1.366 | 91.0 | 182 | 2.1685 | | 1.5134 | 92.0 | 184 | 2.1314 | | 1.4547 | 93.0 | 186 | 2.1088 | | 1.3936 | 94.0 | 188 | 1.8491 | | 1.4802 | 95.0 | 190 | 1.8716 | | 1.3974 | 96.0 | 192 | 2.1149 | | 1.4762 | 97.0 | 194 | 1.9697 | | 1.4287 | 98.0 | 196 | 1.6517 | | 1.5177 | 99.0 | 198 | 2.0683 | | 1.3889 | 100.0 | 200 | 1.5774 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
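Since this is a masked-language model, the simplest way to sanity-check it is the `fill-mask` pipeline; the example sentence below is arbitrary, and the mask token is `<mask>` because SecureBERT is RoBERTa-based.

```python
from transformers import pipeline

# Minimal fill-mask sketch for the fine-tuned checkpoint above.
fill_mask = pipeline("fill-mask", model="frankharman/securebert-finetuned-autoisac")
print(fill_mask("The attackers used a spear-phishing <mask> to gain initial access."))
```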
Evan-Lin/Bart-RL-many-entailment-attractive-keywordmax
Evan-Lin
2023-07-13T21:07:19Z
47
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-07-12T04:40:17Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="Evan-Lin//tmp/tmp0uc94902/Evan-Lin/Bart-RL-many-entailment-attractive") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin//tmp/tmp0uc94902/Evan-Lin/Bart-RL-many-entailment-attractive") model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin//tmp/tmp0uc94902/Evan-Lin/Bart-RL-many-entailment-attractive") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
SlyEcho/Flan-Open-Llama-3b-ggml
SlyEcho
2023-07-13T21:04:35Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2023-07-13T20:15:34Z
--- license: apache-2.0 --- # ggml versions of Flan-Open-Llama-3b - Announcement: [Tweet by @EnricoShippole](https://twitter.com/EnricoShippole/status/1661756166248996867) ("open-source") - Model: [conceptofmind/Flan-Open-Llama-3b](https://huggingface.co/conceptofmind/Flan-Open-Llama-3b) - Base Model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) [OpenLLaMA: An Open Reproduction of LLaMA](https://github.com/openlm-research/open_llama) (Apache 2.0) - Dataset: [FLAN](https://github.com/google-research/FLAN) (Apache 2.0) - [llama.cpp](https://github.com/ggerganov/llama.cpp): build 607(ffb06a3) or later - Type: instruct ## Use with llama.cpp Support is now merged to master branch. ## K-quants There are now more quantization types in llama.cpp, some lower than 4 bits. Currently these are not well supported because of technical reasons. If you want to use them, you have to build llama.cpp (from build 829 (ff5d58f)) with the `LLAMA_QKK_64` Make or CMake variable enabled (see PR [#2001](https://github.com/ggerganov/llama.cpp/pull/2001)). Then you can quantize the F16 or maybe Q8_0 version to what you want.
lovelyxs/a2c-AntBulletEnv-v0
lovelyxs
2023-07-13T20:49:06Z
1
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T20:38:39Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1134.23 +/- 127.11 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
mukaj/falcon-7b-hawkish-lora
mukaj
2023-07-13T20:48:50Z
0
9
null
[ "arxiv:2303.17564", "arxiv:2305.14314", "region:us" ]
null
2023-07-01T18:37:40Z
<h1><center>Falcon-7B-Hawkish-Lora</center></h1> <font size="1">rev1-119</font> <br/> <center>A GPT4-Distilled-Instruct-7B Model, trained with 9000+ Curated Financial & Economic Instruction Prompts. Shows improved reasoning and accuracy on some financial questions and benchmarks over ChatGPT and comparable scores to BloombergGPT (https://arxiv.org/pdf/2303.17564.pdf) on Public Finance NLP Benchmark scores while having 85% less parameters (7B vs 50B). 0.2 of an Epoch completed in released adapter weights and further work is to be done.. </center> <h2><center> Hawkish vs BloombergGPT </center></h2> <center>Financial Sentiment Analysis benchmark tests have been attempted to be matched as close as possible to BloombergGPT paper, 5 few shots and using F1 Weighted.</center> | FinBenchmark | Hawkish-7B | BloombergGPT | | :------------ |:------------- | :----------- | | Headline (SA)*| 71.29 | **82.20** | | FiQA SA | **77.01** | 75.07 | | FPB | **73.41** | 51.07 | <center><font size="1">*Sentiment analysis task only</font> </center> <br/> <h2><center> Hawkish vs CFA Level 1 Mock Exam</center></h2> <br/> <center>The new model surpasses Falcon-Instruct & ChatGPT on a publicly found CFA Level 1 mock exam both with one-shot prompting. Due to both models showing discrepences between samples, both were sampled twice for their best answers. First answer scores were 46.52 (ChatGPT) and 49.46 (Hawkish). </center> <br/> | CFA Level 1 Past Paper | # Questions| ChatGPT (%) | Falcon-Instruct (%) | Hawkish-7B (%) | Exam Weighting | | :---------------------------- | :--------: | :---------: | :----------------: | :------------: | :------------: | | Ethical and Professional | 18 | **66.66** | 33.3 | 50.0 | 0.15 | | Quantitative Methods | 14 | 57.15 | 50.0 | **78.57** | 0.1 | | Economics | 12 | **58.33** | 25.0 | 50.0 | 0.1 | | Financial Reporting | 24 | 37.5 | 25.0 | **58.3** | 0.15 | | Corporate Finance | 10 | 40.0 | 20.0 | **60.0** | 0.1 | | Equity Investments | 12 | 58.3 | 33.3 | **66.6** | 0.11 | | Fixed Income | 14 | 50.0 | 35.7 | **57.15** | 0.11 | | Derivatives | 6 | **66.6** | 16.7 | 50.0 | 0.06 | | Alternative Investments | 4 | **50.0** | 0.0 | 25.0 | 0.06 | | Portfolio Management | 6 | 0.0 | **50.0** | **50.0** | 0.06 | | Weighted Average | - | 50.1 | 29.8 | **56.2** | | <br/> <h3><center> Disclaimer & Intended Uses </center></h3> <center>This model is intended for use as a research artifact, exclusively for research purposes, to study the influence of financial data in the use of training. It is not recommended using this model in high-risk applications (e.g. educational or vocational training, product safety components, or other uses that may impact the well-being of individuals.) as it has not been evaluated fully. The opinions in this paper are solely those of the author and do not reflect the views of the evaluated exam boards or any affiliated entities, including the authors' employers. This research is independent and neither endorsed nor sponsored by these bodies. Any inaccuracies or omissions are entirely the authors' responsibility. Any findings are for academic discourse only and should be used at the reader's discretion. All trademarks and logos belong to their respective owners. </center> ## Inference Code Use the provided instruction template for prompting, you can modify system prompt to "You are an AI assistant. You are a financial expert." for better performance on Finance and Economics instructions. Minimum 10GB VRAM and 15GB CPU RAM required. 
``` import torch from transformers import BitsAndBytesConfig, AutoModelForCausalLM, AutoTokenizer from peft import PeftModel bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=getattr(torch, "bfloat16"), bnb_4bit_use_double_quant=True, ) tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b") tokenizer.pad_token = tokenizer.eos_token model = AutoModelForCausalLM.from_pretrained( "tiiuae/falcon-7b", quantization_config=bnb_config, device_map="auto", trust_remote_code=True ) model = PeftModel.from_pretrained(model, "mukaj/falcon-7b-hawkish-lora") def PROMPT_TEMPLATE(system, human): return f"### System: {system}\n### Human: {human}\n### Assistant:" PROMPT = PROMPT_TEMPLATE("You are a helpful assistant.","Which bird is on the Albanian flag?") inputs = tokenizer(PROMPT, return_tensors="pt").to(device) outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), attention_mask=inputs["attention_mask"], max_new_tokens=10, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.8, top_p=0.9) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ### The bird on the Albanian flag is the eagle. ``` # Financial Domain Completion Examples Equities ``` ### System: You are an AI assistant. You are a financial expert and answer questions with respect to finance and economic domains. ### Human: Explain the types of standard ratios used in company analysis. ### Assistant: The types of standard ratios used in company analysis are: 1. Liquidity ratios: These ratios measure a company's ability to meet its short-term obligations. 2. Profitability ratios: These ratios measure a company's ability to generate profits. 3. Efficiency ratios: These ratios measure a company's ability to use its assets and liabilities effectively. 4. Market ratios: These ratios measure a company's ability to meet the expectations of its investors. 5. Financial leverage ratios: These ratios measure a company's ability to use debt to finance its assets. 6. Activity ratios: These ratios measure a company's ability to generate sales and profits. 7. Asset turnover ratios: These ratios measure a company's ability to generate sales and profits from its assets. 8. Investment ratios: These ratios measure a company's ability to generate profits from its investments. 9. Coverage ratios: These ratios measure a company's ability to meet its short-term obligations. 10. Du Pont ratios: These ratios measure a company's ability to generate profits from its assets and investments. ``` <hr style="border:1px solid gray"> Fixed Income ``` ### System: You are an AI assistant. You are a financial expert and answer questions with respect to finance and economic domains. ### Human: What is CMBS-level call protection, and is a Residual Tranche an example of one or is a Prepayment lockout an example? Or is that a individual loan level protection? ### Assistant: CMBS-level call protection is a level of protection that covers all individual loans in a CMBS transaction, and ensures that any single prepayment on any class of assets covered by the loan level protection, will not result in a call of the remaining underlying assets. A Residual Tranche is an example of CMBS-level call protection, which is more commonly employed in public deals, as opposed to private transactions. ``` <hr style="border:1px solid gray"> Economics ``` ### System: You are an AI assistant. You are a financial expert and answer questions with respect to finance and economic domains. 
### Human: A new country is formed in Wakanda. They introduce a new currency. How does the international market value what it is worth?. ### Assistant: The value of such a novel economy could be difficult to determine. Assume a nominal return of 0.4%, constant to inflation based on the national nominal wage. Assume nominal wages of both countries have a 200% price level (real price would be less). GDP of the novel sector will be about 5% (20% of the total GDP), assuming labor productivity is unchanged. National product will decrease. With trade of a factor of the nominal price level, the novel GDP will be about 5%x20(50%). Nominal wage in the novel economy can be the same as for the main economy, resulting in the price level equal to (3/5)*(200/100) = 60/. One of the challenges in valuing this asset will be deciding if the novel sector should be valued more as the capital market or the labor market. Assume a 30-year horizon. Real wage of labor will be about 8% with national real interest rate at 5%. Assume that nominal returns and capital gains yield a 7/12 interest rate. The present value of the factor-adjusted wage-based return would be: $1.4(7/12)×15(5). It would be a risky asset yielding a return of about 4.6%. This is also equivalent to investing 0.4 + $1.4×(30) = $50 at risk in 30 years. Of course, if the rate is higher, the value of the real wage is correspondingly higher. The return to holding labor is the wage rate, plus a risk premium equal to 0.5/4.4/3*(30). The present value of expected cash flows and the intrinsic value of capital are assumed equal. In general, the expected discounted cash flows will be the market price, less expected inflation and real interest rates for those sectors. The present value is therefore determined by multiplying the cash flows by an appropriate discount rate which can be estimated based by using historical inflation-adjusted returns, and discounting cash flows at the domestic risk-adjusted risk premium and discounting future flows at the domestic real interest rate. ``` <hr style="border:1px solid gray"> Real Estate ``` ### System: You are an AI assistant. You are a financial expert and answer questions with respect to finance and economic domains. ### Human: Joshua Petersen is a real estate analyst that wants to make appropriate adjustments to a capitalization rate. He predicts that vacancy rates will increase and that the availability of credit will decrease. Based on these views, it is most likely that Petersen: 1) should increase the capitalization rate. 2) may need to either increase or decrease the capitalization rate because the two predictions have offsetting effects. 3) should decrease the capitalization rate. ### Assistant: B) To determine the impact of vacancy rate fluctuations and availability of credit the two views must be considered as opposed to simply adjusting due to increases in vacancies for vacant rates to go down With these two views are the vacancy increases and decreases likely to result in a decline in available capital with credit getting more selective? To make a more robust conclusion, we'll use some of this input. We will assume vacancy rates remain unchanged and assume available lines of credit are unchanged. In those two cases, vacancy increases can have a much greater effect leading to even higher and worse risks to asset value. 
We're also assuming here that there are no issues with interest in rates, so higher vacancy rate and limited credit availability are really enough to put real assets assets out of consideration until conditions look better. A decline in capital appreciation potential due to falling value of real estate, may require the investor to exit and switch to safer investments or wait out the conditions and have the potential of a huge capital gain when conditions improve. From looking to real time statistics the conditions mentioned probably wouldn't be significant in most real contexts of what to capitalize on at real rates. Real estate investment in certain large metros may see value drops, other investment categories will show some movement up while still some will look flat. Here's where your analysis is missing, an important point not to skip that impacts whether all investors leave because of real estate volatility being high. With vacancy rates and credit issues combined, it is true that assets would be viewed as poor assets to capitalize rates or real assets in general at that moment, because so much money leaves market causing lack of market liquidity which makes investing an almost irrational thing at points. You have a good understanding with this but need the market understanding and data side, then add it to a robust analysis of macro outlook. Your conclusion is based on real market conditions and future conditions is uncertain so no hard conclusions should be drawn, there can be multiple markets over same period, in your same industry, or across the whole industry that are going down for various reasons which you need to determine prior to assuming. It gets more granular and looks deeper than numbers on a piece of paper for individual assets but instead it relates it with a market condition. Hope it helps. ``` ## Limitations & Further Work Model is not fully evaluated, it is still work in progess, and is only trained on 0.2 Epochs. <ol> <li>Repeat experiment with larger model.</li> <p>This is self explanatory, larger models have been shown to be better all-around, so a larger base model should help drastically.</p> <li>Filter data further.</li> <p>Less is more, filter to highest quality samples that maintain highest diversity and compare results.</p> <li>Data Mix Comparison</li> <p>Different mixes of finance weightings in the data need to be tried to get best performance.</p> </ol> ### Compute Hardware: 1 x RTX 4090 Training Software: https://huggingface.co/docs/trl/main/en/sft_trainer Max Seq Length: 2048 Learning Rate: 2e-4 ## Attributions & References Base Model: https://huggingface.co/tiiuae/falcon-7b Trained with QLoRA: https://arxiv.org/abs/2305.14314 OpenOrca Data: https://huggingface.co/datasets/Open-Orca/OpenOrca Guanaco Data: https://huggingface.co/datasets/timdettmers/openassistant-guanaco CodeParrot Self Instruct: https://huggingface.co/datasets/codeparrot/self-instruct-starcoder BloombergGPT Paper: https://arxiv.org/pdf/2303.17564.pdf
ruggedmug/q-FrozenLake-v1-4x4-noSlippery
ruggedmug
2023-07-13T20:39:33Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T19:16:45Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="ruggedmug/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
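### Loading sketch

The `load_from_hub` call in the usage snippet above refers to the helper defined in the Deep RL course notebooks; a minimal, self-contained equivalent is sketched below. The `"env_id"` key comes from the snippet itself; the `"qtable"` key is the course convention and is an assumption here.

```python
import pickle

import gym  # or `import gymnasium as gym`, depending on your setup
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning model from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="ruggedmug/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # the card notes is_slippery=False for this variant
qtable = model.get("qtable")  # assumption: the course saves the Q-table under this key
```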
jliu596/a2c-AntBulletEnv-v0
jliu596
2023-07-13T20:34:55Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T19:50:18Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 520.21 +/- 33.39 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
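### Usage sketch

The usage section above is left as a TODO; a minimal sketch using `huggingface_sb3` and Stable-Baselines3 follows. The checkpoint filename is an assumption (check the repository's file list), and `pybullet_envs` must be installed to register `AntBulletEnv-v0`.

```python
import gym  # SB3 1.x / gym shown here; adjust for gymnasium if you use SB3 >= 2.0
import pybullet_envs  # noqa: F401 - registers AntBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is hypothetical - verify it against the files in the repo.
checkpoint = load_from_hub(
    repo_id="jliu596/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```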
shadrach/1453Harun
shadrach
2023-07-13T20:15:29Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-13T20:12:19Z
--- license: creativeml-openrail-m ---
LarryAIDraw/MaiSakurajimaV1
LarryAIDraw
2023-07-13T20:01:08Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-13T17:27:42Z
--- license: creativeml-openrail-m --- https://civitai.com/models/108374/mai-sakurajima-rascal-does-not-dream-of-bunny-girl-senpai
Tasaloris13/finetuned-college-1
Tasaloris13
2023-07-13T19:59:48Z
3
0
peft
[ "peft", "region:us" ]
null
2023-07-13T19:59:42Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
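### Quantization config sketch

The `bitsandbytes` values listed above correspond to a 4-bit NF4 configuration; a sketch of the equivalent `BitsAndBytesConfig` is shown below. The base model id is a placeholder, since the card does not name the model the adapter was trained on.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)

# "base-model-id" is a placeholder: the card does not state the base model.
base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",
    quantization_config=bnb_config,
    device_map="auto",
)
```

The adapter itself can then be attached with `PeftModel.from_pretrained(base_model, "Tasaloris13/finetuned-college-1")` from the `peft` library.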
grace-pro/afro-xlmr-base-hausa-5e-5
grace-pro
2023-07-13T19:51:42Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-13T19:22:13Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: afro-xlmr-base-hausa-5e-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afro-xlmr-base-hausa-5e-5 This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1512 - Precision: 0.7391 - Recall: 0.5807 - F1: 0.6504 - Accuracy: 0.9616 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1604 | 1.0 | 1312 | 0.1395 | 0.6845 | 0.4906 | 0.5716 | 0.9535 | | 0.1221 | 2.0 | 2624 | 0.1261 | 0.7140 | 0.5440 | 0.6175 | 0.9582 | | 0.0939 | 3.0 | 3936 | 0.1311 | 0.7433 | 0.5693 | 0.6448 | 0.9610 | | 0.0723 | 4.0 | 5248 | 0.1419 | 0.7508 | 0.5583 | 0.6404 | 0.9613 | | 0.0557 | 5.0 | 6560 | 0.1512 | 0.7391 | 0.5807 | 0.6504 | 0.9616 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
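### Inference sketch

The card does not include a usage example; a minimal sketch with the `transformers` token-classification pipeline is shown below. The input sentence is only an illustration, not taken from the training data.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grace-pro/afro-xlmr-base-hausa-5e-5",
    aggregation_strategy="simple",
)

# Illustrative Hausa sentence; replace with your own text.
print(ner("Shugaba Muhammadu Buhari ya ziyarci Kano a ranar Litinin."))
```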
csalaam/bias-classification-setfit-model-womenbias
csalaam
2023-07-13T19:41:40Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-13T19:00:14Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # csalaam/bias-classification-setfit-model-womenbias This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("csalaam/bias-classification-setfit-model-womenbias") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
grace-pro/xlmr-base-hausa-5e-5
grace-pro
2023-07-13T19:15:07Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-13T18:46:41Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: xlmr-base-hausa-5e-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr-base-hausa-5e-5 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1493 - Precision: 0.7153 - Recall: 0.5631 - F1: 0.6301 - Accuracy: 0.9588 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.177 | 1.0 | 1312 | 0.1549 | 0.6557 | 0.4168 | 0.5097 | 0.9479 | | 0.1412 | 2.0 | 2624 | 0.1386 | 0.6723 | 0.5262 | 0.5903 | 0.9539 | | 0.1154 | 3.0 | 3936 | 0.1400 | 0.7078 | 0.5353 | 0.6096 | 0.9567 | | 0.0921 | 4.0 | 5248 | 0.1418 | 0.7200 | 0.5496 | 0.6234 | 0.9585 | | 0.0731 | 5.0 | 6560 | 0.1493 | 0.7153 | 0.5631 | 0.6301 | 0.9588 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
surasuk/dolly-lora
surasuk
2023-07-13T19:12:44Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-13T18:08:08Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
sephinroth/marian-finetuned-kftt-ja-to-en
sephinroth
2023-07-13T18:46:37Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kftt", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-07-13T16:00:00Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kftt metrics: - bleu model-index: - name: marian-finetuned-kftt-ja-to-en results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kftt type: kftt config: en-ja split: validation args: en-ja metrics: - name: Bleu type: bleu value: 19.68604320446213 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kftt-ja-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on the kftt dataset. It achieves the following results on the evaluation set: - Loss: 1.9551 - Bleu: 19.6860 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
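### Inference sketch

The card does not show how to run the model; a minimal sketch with the `transformers` translation pipeline is shown below. The Japanese input is only an illustration.

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="sephinroth/marian-finetuned-kftt-ja-to-en",
)

# Illustrative input sentence.
print(translator("京都は日本の古い都です。"))
```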
grace-pro/afriberta-small-hausa-5e-5
grace-pro
2023-07-13T18:41:38Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-13T18:31:08Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: afriberta-small-hausa-5e-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afriberta-small-hausa-5e-5 This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1600 - Precision: 0.6808 - Recall: 0.4937 - F1: 0.5724 - Accuracy: 0.9623 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1523 | 1.0 | 1312 | 0.1338 | 0.6526 | 0.4261 | 0.5156 | 0.9583 | | 0.1162 | 2.0 | 2624 | 0.1300 | 0.6862 | 0.4603 | 0.5510 | 0.9614 | | 0.089 | 3.0 | 3936 | 0.1375 | 0.6953 | 0.4630 | 0.5559 | 0.9619 | | 0.0698 | 4.0 | 5248 | 0.1507 | 0.6860 | 0.4888 | 0.5708 | 0.9623 | | 0.0559 | 5.0 | 6560 | 0.1600 | 0.6808 | 0.4937 | 0.5724 | 0.9623 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Winmodel/LunarLander-v2
Winmodel
2023-07-13T18:37:07Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T18:34:13Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -145.05 +/- 69.75 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'Winmodel/LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
NTU-NLP-sg/xCodeEval-nl-code-starencoder-ckpt-37
NTU-NLP-sg
2023-07-13T18:35:21Z
0
0
null
[ "arxiv:2303.03004", "license:cc-by-nc-4.0", "region:us" ]
null
2023-07-13T06:59:15Z
--- license: cc-by-nc-4.0 --- ## Model Description **StarEncoder** trained on the training split of the `retrieval_nl_code` subset of [xCodeEval](https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval). Trained for 37 epochs. Code repo used for training: https://github.com/facebookresearch/DPR For detailed results, please refer to our [paper](https://arxiv.org/abs/2303.03004).
mayapapaya/Keyword-Extractor
mayapapaya
2023-07-13T18:33:59Z
204
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T14:23:08Z
# Model Card for Model ID This model is meant to extract keywords from text. - **Model type:** text-classification - **Language(s) (NLP):** English - **License:** cc - **Finetuned from model:** [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) ## Training Details This model is a fine-tuned version of the [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model. ## Training Data Trained on [51la5/keyword-extraction](https://huggingface.co/datasets/51la5/keyword-extraction) from the Hugging Face Hub. ## How to Get Started with the Model Note: model inputs were tokenized using the distilbert-base-uncased tokenizer. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") model = AutoModelForSequenceClassification.from_pretrained("mayapapaya/Keyword-Extractor") ```
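### Inference sketch

The snippet above loads the tokenizer and model but stops short of inference; a minimal continuation is sketched below. The input sentence is only an illustration, and the label names depend on the model's config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("mayapapaya/Keyword-Extractor")

text = "Transformer models have changed natural language processing."  # illustrative input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred_id, pred_id))  # label mapping comes from the model config
```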
Joserzapata/speecht5_finetuned_voxpopuli_nl
Joserzapata
2023-07-13T18:21:28Z
78
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-07-13T04:28:20Z
--- license: mit tags: - generated_from_trainer model-index: - name: speecht5_finetuned_voxpopuli_nl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_nl This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4624 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.521 | 4.3 | 1000 | 0.4799 | | 0.5021 | 8.61 | 2000 | 0.4676 | | 0.4958 | 12.91 | 3000 | 0.4637 | | 0.4874 | 17.21 | 4000 | 0.4624 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
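### Inference sketch

The card does not include a usage example; a minimal text-to-speech sketch following the standard SpeechT5 recipe is shown below. The speaker-embedding source (`Matthijs/cmu-arctic-xvectors`) and the chosen index are assumptions — any 512-dimensional x-vector can be used.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# If the processor files are missing from this repo, load them from "microsoft/speecht5_tts" instead.
processor = SpeechT5Processor.from_pretrained("Joserzapata/speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("Joserzapata/speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")  # illustrative Dutch input

# Assumption: reuse an x-vector from the CMU ARCTIC dataset as the speaker embedding.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```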
Sandrro/text_to_topic
Sandrro
2023-07-13T18:15:06Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T17:18:08Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: text_to_subfunction_v10_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # text_to_subfunction_v10_2 This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5115 - F1: 0.5638 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.8616 | 1.0 | 5400 | 1.7457 | 0.4607 | | 1.4576 | 2.0 | 10800 | 1.5115 | 0.5638 | ### Framework versions - Transformers 4.27.1 - Pytorch 2.1.0.dev20230414+cu117 - Datasets 2.9.0 - Tokenizers 0.13.3
Tanor/BERTovoSENTNEG6
Tanor
2023-07-13T18:11:07Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-09T01:32:38Z
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: BERTovoSENTNEG6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERTovoSENTNEG6 This model is a fine-tuned version of [Tanor/BERTicovoSENTNEG6](https://huggingface.co/Tanor/BERTicovoSENTNEG6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0837 - F1: 0.4878 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 32 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 53 | 0.0536 | 0.0769 | | No log | 2.0 | 106 | 0.0482 | 0.5909 | | No log | 3.0 | 159 | 0.0610 | 0.5532 | | No log | 4.0 | 212 | 0.0718 | 0.5 | | No log | 5.0 | 265 | 0.0837 | 0.4878 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
namedotpg/poca-SoccerTwos
namedotpg
2023-07-13T18:04:24Z
20
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-07-13T13:00:55Z
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: namedotpg/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
toshiko-shibano/open-calm-7b-qlora
toshiko-shibano
2023-07-13T18:04:08Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-13T13:01:11Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
grace-pro/afriberta-large-hausa-5e-5
grace-pro
2023-07-13T18:00:19Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-13T17:39:12Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: afriberta-large-hausa-5e-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afriberta-large-hausa-5e-5 This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1680 - Precision: 0.7001 - Recall: 0.5395 - F1: 0.6094 - Accuracy: 0.9652 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1427 | 1.0 | 1312 | 0.1258 | 0.6868 | 0.4660 | 0.5553 | 0.9615 | | 0.1042 | 2.0 | 2624 | 0.1183 | 0.6965 | 0.5150 | 0.5921 | 0.9639 | | 0.0719 | 3.0 | 3936 | 0.1317 | 0.6943 | 0.5336 | 0.6034 | 0.9646 | | 0.048 | 4.0 | 5248 | 0.1490 | 0.7099 | 0.5229 | 0.6022 | 0.9650 | | 0.0341 | 5.0 | 6560 | 0.1680 | 0.7001 | 0.5395 | 0.6094 | 0.9652 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Tanor/BERTovoSENTPOS6
Tanor
2023-07-13T17:48:32Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-09T00:21:54Z
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: BERTovoSENTPOS6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERTovoSENTPOS6 This model is a fine-tuned version of [Tanor/BERTicovoSENTPOS6](https://huggingface.co/Tanor/BERTicovoSENTPOS6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0541 - F1: 0.5143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 32 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 53 | 0.0452 | 0.0 | | No log | 2.0 | 106 | 0.0436 | 0.0870 | | No log | 3.0 | 159 | 0.0449 | 0.4138 | | No log | 4.0 | 212 | 0.0506 | 0.5 | | No log | 5.0 | 265 | 0.0541 | 0.5143 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
ayanban011/6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.9
ayanban011
2023-07-13T17:42:01Z
165
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-13T15:27:45Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: 6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.9 This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5536 - Accuracy: 0.82 - Brier Loss: 0.2571 - Nll: 1.4560 - F1 Micro: 0.82 - F1 Macro: 0.7994 - Ece: 0.1404 - Aurc: 0.0578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 25 | 2.0125 | 0.23 | 0.8650 | 4.4951 | 0.23 | 0.1799 | 0.2806 | 0.7660 | | No log | 2.0 | 50 | 1.2756 | 0.555 | 0.5948 | 2.6781 | 0.555 | 0.4537 | 0.2800 | 0.2519 | | No log | 3.0 | 75 | 0.9515 | 0.685 | 0.4392 | 1.9416 | 0.685 | 0.5937 | 0.2067 | 0.1288 | | No log | 4.0 | 100 | 0.7861 | 0.72 | 0.3622 | 1.5125 | 0.72 | 0.6675 | 0.2050 | 0.0961 | | No log | 5.0 | 125 | 0.7551 | 0.77 | 0.3362 | 1.5478 | 0.7700 | 0.7318 | 0.2043 | 0.0838 | | No log | 6.0 | 150 | 0.8056 | 0.77 | 0.3525 | 1.4305 | 0.7700 | 0.7589 | 0.1943 | 0.0891 | | No log | 7.0 | 175 | 0.7942 | 0.775 | 0.3310 | 1.8237 | 0.775 | 0.7454 | 0.1812 | 0.0924 | | No log | 8.0 | 200 | 0.7735 | 0.77 | 0.3384 | 1.5161 | 0.7700 | 0.7530 | 0.1987 | 0.0931 | | No log | 9.0 | 225 | 0.6992 | 0.79 | 0.3025 | 1.5664 | 0.79 | 0.7777 | 0.1631 | 0.0774 | | No log | 10.0 | 250 | 0.6753 | 0.8 | 0.2955 | 1.5189 | 0.8000 | 0.7900 | 0.1654 | 0.0633 | | No log | 11.0 | 275 | 0.7701 | 0.805 | 0.3018 | 1.4787 | 0.805 | 0.7932 | 0.1581 | 0.0881 | | No log | 12.0 | 300 | 0.7164 | 0.79 | 0.3292 | 1.3527 | 0.79 | 0.7892 | 0.1946 | 0.0871 | | No log | 13.0 | 325 | 0.6376 | 0.8 | 0.2901 | 1.4953 | 0.8000 | 0.7824 | 0.1770 | 0.0659 | | No log | 14.0 | 350 | 0.7319 | 0.77 | 0.3247 | 1.6062 | 0.7700 | 0.7424 | 0.1803 | 0.0816 | | No log | 15.0 | 375 | 0.5749 | 0.805 | 0.2738 | 0.8483 | 0.805 | 0.8010 | 0.1569 | 0.0647 | | No log | 16.0 | 400 | 0.6879 | 0.775 | 0.3085 | 1.3379 | 0.775 | 0.7759 | 0.1909 | 0.0730 | | No log | 17.0 | 425 | 0.5094 | 0.85 | 0.2241 | 1.4391 | 0.85 | 0.8360 | 0.1589 | 0.0441 | | No log | 18.0 | 450 | 0.6826 | 0.8 | 0.3015 | 1.6933 | 0.8000 | 0.7969 | 0.1651 | 0.0792 | | No log | 19.0 | 475 | 0.5677 | 0.825 | 0.2622 | 1.5426 | 0.825 | 0.8051 | 0.1600 | 0.0515 | | 0.4493 | 20.0 | 500 | 0.5156 | 0.85 | 0.2312 | 1.5882 | 0.85 | 0.8471 | 0.1466 | 0.0427 | | 0.4493 | 21.0 | 525 | 0.5743 | 0.83 | 0.2600 | 1.5702 | 0.83 | 0.8187 | 0.1604 | 0.0540 | | 0.4493 | 22.0 | 550 | 0.5872 | 0.825 | 0.2712 | 1.6270 | 0.825 | 0.8056 | 0.1687 | 0.0572 | | 0.4493 | 23.0 | 575 | 0.5770 | 0.81 | 0.2701 | 
1.5089 | 0.81 | 0.7969 | 0.1559 | 0.0655 | | 0.4493 | 24.0 | 600 | 0.5621 | 0.82 | 0.2590 | 1.3500 | 0.82 | 0.8052 | 0.1621 | 0.0587 | | 0.4493 | 25.0 | 625 | 0.5480 | 0.805 | 0.2518 | 1.2519 | 0.805 | 0.7884 | 0.1483 | 0.0619 | | 0.4493 | 26.0 | 650 | 0.5555 | 0.81 | 0.2575 | 1.3183 | 0.81 | 0.7926 | 0.1585 | 0.0598 | | 0.4493 | 27.0 | 675 | 0.5449 | 0.82 | 0.2524 | 1.4400 | 0.82 | 0.8059 | 0.1713 | 0.0579 | | 0.4493 | 28.0 | 700 | 0.5483 | 0.81 | 0.2545 | 1.4400 | 0.81 | 0.7894 | 0.1450 | 0.0580 | | 0.4493 | 29.0 | 725 | 0.5448 | 0.81 | 0.2524 | 1.3070 | 0.81 | 0.7931 | 0.1447 | 0.0595 | | 0.4493 | 30.0 | 750 | 0.5476 | 0.815 | 0.2538 | 1.3101 | 0.815 | 0.7982 | 0.1536 | 0.0582 | | 0.4493 | 31.0 | 775 | 0.5433 | 0.82 | 0.2529 | 1.3812 | 0.82 | 0.8011 | 0.1637 | 0.0575 | | 0.4493 | 32.0 | 800 | 0.5469 | 0.805 | 0.2528 | 1.2973 | 0.805 | 0.7905 | 0.1668 | 0.0600 | | 0.4493 | 33.0 | 825 | 0.5443 | 0.815 | 0.2525 | 1.3020 | 0.815 | 0.7933 | 0.1768 | 0.0579 | | 0.4493 | 34.0 | 850 | 0.5442 | 0.82 | 0.2521 | 1.3234 | 0.82 | 0.8011 | 0.1555 | 0.0580 | | 0.4493 | 35.0 | 875 | 0.5434 | 0.82 | 0.2531 | 1.4362 | 0.82 | 0.8011 | 0.1430 | 0.0564 | | 0.4493 | 36.0 | 900 | 0.5469 | 0.815 | 0.2534 | 1.3075 | 0.815 | 0.7933 | 0.1590 | 0.0578 | | 0.4493 | 37.0 | 925 | 0.5468 | 0.815 | 0.2546 | 1.3204 | 0.815 | 0.7933 | 0.1623 | 0.0567 | | 0.4493 | 38.0 | 950 | 0.5473 | 0.815 | 0.2540 | 1.3722 | 0.815 | 0.7933 | 0.1514 | 0.0582 | | 0.4493 | 39.0 | 975 | 0.5453 | 0.82 | 0.2532 | 1.3874 | 0.82 | 0.8011 | 0.1751 | 0.0568 | | 0.0581 | 40.0 | 1000 | 0.5475 | 0.815 | 0.2543 | 1.3116 | 0.815 | 0.7933 | 0.1654 | 0.0573 | | 0.0581 | 41.0 | 1025 | 0.5452 | 0.815 | 0.2533 | 1.4421 | 0.815 | 0.7933 | 0.1459 | 0.0579 | | 0.0581 | 42.0 | 1050 | 0.5467 | 0.815 | 0.2538 | 1.3730 | 0.815 | 0.7933 | 0.1642 | 0.0576 | | 0.0581 | 43.0 | 1075 | 0.5478 | 0.815 | 0.2544 | 1.3086 | 0.815 | 0.7933 | 0.1657 | 0.0581 | | 0.0581 | 44.0 | 1100 | 0.5482 | 0.815 | 0.2545 | 1.3744 | 0.815 | 0.7933 | 0.1629 | 0.0583 | | 0.0581 | 45.0 | 1125 | 0.5493 | 0.815 | 0.2550 | 1.3676 | 0.815 | 0.7933 | 0.1638 | 0.0594 | | 0.0581 | 46.0 | 1150 | 0.5478 | 0.82 | 0.2547 | 1.4645 | 0.82 | 0.8011 | 0.1631 | 0.0572 | | 0.0581 | 47.0 | 1175 | 0.5487 | 0.815 | 0.2547 | 1.3795 | 0.815 | 0.7933 | 0.1634 | 0.0577 | | 0.0581 | 48.0 | 1200 | 0.5471 | 0.825 | 0.2546 | 1.4421 | 0.825 | 0.8067 | 0.1436 | 0.0564 | | 0.0581 | 49.0 | 1225 | 0.5489 | 0.815 | 0.2547 | 1.3676 | 0.815 | 0.7933 | 0.1663 | 0.0578 | | 0.0581 | 50.0 | 1250 | 0.5482 | 0.82 | 0.2549 | 1.4346 | 0.82 | 0.7990 | 0.1481 | 0.0574 | | 0.0581 | 51.0 | 1275 | 0.5472 | 0.82 | 0.2540 | 1.5012 | 0.82 | 0.8011 | 0.1565 | 0.0569 | | 0.0581 | 52.0 | 1300 | 0.5489 | 0.825 | 0.2553 | 1.4351 | 0.825 | 0.8051 | 0.1608 | 0.0576 | | 0.0581 | 53.0 | 1325 | 0.5486 | 0.815 | 0.2549 | 1.3799 | 0.815 | 0.7933 | 0.1483 | 0.0573 | | 0.0581 | 54.0 | 1350 | 0.5498 | 0.815 | 0.2552 | 1.4434 | 0.815 | 0.7933 | 0.1542 | 0.0578 | | 0.0581 | 55.0 | 1375 | 0.5508 | 0.82 | 0.2559 | 1.4394 | 0.82 | 0.7994 | 0.1562 | 0.0576 | | 0.0581 | 56.0 | 1400 | 0.5492 | 0.825 | 0.2552 | 1.4368 | 0.825 | 0.8051 | 0.1483 | 0.0572 | | 0.0581 | 57.0 | 1425 | 0.5501 | 0.815 | 0.2552 | 1.3874 | 0.815 | 0.7933 | 0.1390 | 0.0579 | | 0.0581 | 58.0 | 1450 | 0.5497 | 0.82 | 0.2553 | 1.4365 | 0.82 | 0.7994 | 0.1437 | 0.0579 | | 0.0581 | 59.0 | 1475 | 0.5507 | 0.82 | 0.2557 | 1.4343 | 0.82 | 0.7994 | 0.1389 | 0.0584 | | 0.056 | 60.0 | 1500 | 0.5501 | 0.825 | 0.2555 | 1.4410 | 0.825 | 0.8051 | 0.1585 | 0.0583 | | 0.056 | 61.0 | 1525 | 0.5510 | 0.82 | 
0.2559 | 1.4380 | 0.82 | 0.7994 | 0.1395 | 0.0578 | | 0.056 | 62.0 | 1550 | 0.5510 | 0.82 | 0.2558 | 1.4421 | 0.82 | 0.7994 | 0.1441 | 0.0573 | | 0.056 | 63.0 | 1575 | 0.5508 | 0.82 | 0.2559 | 1.4369 | 0.82 | 0.7994 | 0.1395 | 0.0575 | | 0.056 | 64.0 | 1600 | 0.5514 | 0.82 | 0.2560 | 1.4410 | 0.82 | 0.7994 | 0.1393 | 0.0579 | | 0.056 | 65.0 | 1625 | 0.5519 | 0.825 | 0.2563 | 1.4544 | 0.825 | 0.8051 | 0.1427 | 0.0575 | | 0.056 | 66.0 | 1650 | 0.5510 | 0.82 | 0.2560 | 1.4400 | 0.82 | 0.7994 | 0.1391 | 0.0576 | | 0.056 | 67.0 | 1675 | 0.5520 | 0.825 | 0.2563 | 1.4396 | 0.825 | 0.8051 | 0.1422 | 0.0580 | | 0.056 | 68.0 | 1700 | 0.5516 | 0.82 | 0.2561 | 1.4412 | 0.82 | 0.7994 | 0.1394 | 0.0580 | | 0.056 | 69.0 | 1725 | 0.5512 | 0.82 | 0.2560 | 1.4433 | 0.82 | 0.7994 | 0.1393 | 0.0577 | | 0.056 | 70.0 | 1750 | 0.5515 | 0.82 | 0.2561 | 1.4418 | 0.82 | 0.7994 | 0.1391 | 0.0576 | | 0.056 | 71.0 | 1775 | 0.5517 | 0.82 | 0.2562 | 1.4448 | 0.82 | 0.7994 | 0.1449 | 0.0581 | | 0.056 | 72.0 | 1800 | 0.5524 | 0.825 | 0.2566 | 1.4421 | 0.825 | 0.8051 | 0.1437 | 0.0579 | | 0.056 | 73.0 | 1825 | 0.5518 | 0.82 | 0.2562 | 1.4403 | 0.82 | 0.7994 | 0.1469 | 0.0576 | | 0.056 | 74.0 | 1850 | 0.5529 | 0.825 | 0.2568 | 1.4450 | 0.825 | 0.8051 | 0.1434 | 0.0580 | | 0.056 | 75.0 | 1875 | 0.5528 | 0.82 | 0.2566 | 1.4475 | 0.82 | 0.7994 | 0.1447 | 0.0585 | | 0.056 | 76.0 | 1900 | 0.5529 | 0.82 | 0.2568 | 1.4463 | 0.82 | 0.7994 | 0.1447 | 0.0578 | | 0.056 | 77.0 | 1925 | 0.5528 | 0.82 | 0.2567 | 1.4469 | 0.82 | 0.7994 | 0.1401 | 0.0577 | | 0.056 | 78.0 | 1950 | 0.5525 | 0.82 | 0.2565 | 1.4506 | 0.82 | 0.7994 | 0.1444 | 0.0576 | | 0.056 | 79.0 | 1975 | 0.5527 | 0.825 | 0.2567 | 1.4479 | 0.825 | 0.8051 | 0.1423 | 0.0576 | | 0.0559 | 80.0 | 2000 | 0.5530 | 0.825 | 0.2568 | 1.4429 | 0.825 | 0.8051 | 0.1423 | 0.0578 | | 0.0559 | 81.0 | 2025 | 0.5529 | 0.825 | 0.2567 | 1.4489 | 0.825 | 0.8051 | 0.1422 | 0.0581 | | 0.0559 | 82.0 | 2050 | 0.5529 | 0.82 | 0.2568 | 1.4550 | 0.82 | 0.7994 | 0.1401 | 0.0576 | | 0.0559 | 83.0 | 2075 | 0.5534 | 0.82 | 0.2570 | 1.4458 | 0.82 | 0.7994 | 0.1399 | 0.0580 | | 0.0559 | 84.0 | 2100 | 0.5530 | 0.82 | 0.2568 | 1.4497 | 0.82 | 0.7994 | 0.1399 | 0.0577 | | 0.0559 | 85.0 | 2125 | 0.5533 | 0.82 | 0.2570 | 1.4507 | 0.82 | 0.7994 | 0.1401 | 0.0577 | | 0.0559 | 86.0 | 2150 | 0.5531 | 0.825 | 0.2568 | 1.4515 | 0.825 | 0.8051 | 0.1428 | 0.0577 | | 0.0559 | 87.0 | 2175 | 0.5534 | 0.82 | 0.2569 | 1.4503 | 0.82 | 0.7994 | 0.1404 | 0.0577 | | 0.0559 | 88.0 | 2200 | 0.5534 | 0.82 | 0.2569 | 1.4532 | 0.82 | 0.7994 | 0.1399 | 0.0581 | | 0.0559 | 89.0 | 2225 | 0.5533 | 0.825 | 0.2569 | 1.4499 | 0.825 | 0.8051 | 0.1423 | 0.0578 | | 0.0559 | 90.0 | 2250 | 0.5534 | 0.82 | 0.2570 | 1.4517 | 0.82 | 0.7994 | 0.1404 | 0.0577 | | 0.0559 | 91.0 | 2275 | 0.5533 | 0.82 | 0.2569 | 1.4526 | 0.82 | 0.7994 | 0.1405 | 0.0579 | | 0.0559 | 92.0 | 2300 | 0.5534 | 0.825 | 0.2570 | 1.4533 | 0.825 | 0.8051 | 0.1424 | 0.0577 | | 0.0559 | 93.0 | 2325 | 0.5535 | 0.82 | 0.2570 | 1.4527 | 0.82 | 0.7994 | 0.1399 | 0.0580 | | 0.0559 | 94.0 | 2350 | 0.5536 | 0.82 | 0.2571 | 1.4533 | 0.82 | 0.7994 | 0.1404 | 0.0577 | | 0.0559 | 95.0 | 2375 | 0.5536 | 0.82 | 0.2571 | 1.4547 | 0.82 | 0.7994 | 0.1400 | 0.0579 | | 0.0559 | 96.0 | 2400 | 0.5535 | 0.82 | 0.2570 | 1.4567 | 0.82 | 0.7994 | 0.1400 | 0.0578 | | 0.0559 | 97.0 | 2425 | 0.5536 | 0.82 | 0.2571 | 1.4523 | 0.82 | 0.7994 | 0.1404 | 0.0579 | | 0.0559 | 98.0 | 2450 | 0.5536 | 0.82 | 0.2571 | 1.4570 | 0.82 | 0.7994 | 0.1404 | 0.0578 | | 0.0559 | 99.0 | 2475 | 0.5536 | 0.82 | 0.2571 | 1.4570 | 
0.82 | 0.7994 | 0.1404 | 0.0578 | | 0.0559 | 100.0 | 2500 | 0.5536 | 0.82 | 0.2571 | 1.4560 | 0.82 | 0.7994 | 0.1404 | 0.0578 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.13.1 - Datasets 2.13.1 - Tokenizers 0.13.3
Geotrend/distilbert-base-ar-cased
Geotrend
2023-07-13T17:37:33Z
130
0
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "fill-mask", "ar", "dataset:wikipedia", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: ar datasets: wikipedia license: apache-2.0 --- # distilbert-base-ar-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations as the original model, which preserves the original accuracy. For more information, please see our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-ar-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-ar-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermdistilbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact [email protected] for any questions, feedback or requests.
koruni/charsembeds
koruni
2023-07-13T17:34:37Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-13T17:30:36Z
--- license: creativeml-openrail-m ---
ayanban011/6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.7
ayanban011
2023-07-13T17:33:13Z
165
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-13T15:25:23Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: 6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6_e_200-tiny_tobacco3482_kd_CEKD_t1.5_a0.7 This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4925 - Accuracy: 0.845 - Brier Loss: 0.2526 - Nll: 1.5547 - F1 Micro: 0.845 - F1 Macro: 0.8258 - Ece: 0.1785 - Aurc: 0.0736 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:| | No log | 1.0 | 25 | 1.8463 | 0.245 | 0.8631 | 4.7256 | 0.245 | 0.2002 | 0.2955 | 0.7640 | | No log | 2.0 | 50 | 1.1593 | 0.535 | 0.5972 | 2.7208 | 0.535 | 0.4319 | 0.2539 | 0.2591 | | No log | 3.0 | 75 | 0.9039 | 0.67 | 0.4555 | 2.3747 | 0.67 | 0.5677 | 0.2448 | 0.1349 | | No log | 4.0 | 100 | 0.7631 | 0.73 | 0.3757 | 1.5518 | 0.7300 | 0.7026 | 0.1947 | 0.0987 | | No log | 5.0 | 125 | 0.7412 | 0.775 | 0.3497 | 1.4677 | 0.775 | 0.7456 | 0.2239 | 0.0892 | | No log | 6.0 | 150 | 0.9198 | 0.72 | 0.3977 | 1.7618 | 0.72 | 0.6958 | 0.2190 | 0.1118 | | No log | 7.0 | 175 | 0.6117 | 0.81 | 0.2969 | 1.2112 | 0.81 | 0.7726 | 0.2244 | 0.0661 | | No log | 8.0 | 200 | 0.6296 | 0.78 | 0.3090 | 1.3439 | 0.78 | 0.7443 | 0.1959 | 0.0771 | | No log | 9.0 | 225 | 0.6850 | 0.785 | 0.3187 | 1.6325 | 0.785 | 0.7651 | 0.2194 | 0.0986 | | No log | 10.0 | 250 | 0.6304 | 0.79 | 0.3111 | 1.3598 | 0.79 | 0.7821 | 0.2106 | 0.0838 | | No log | 11.0 | 275 | 0.6668 | 0.775 | 0.3242 | 1.9754 | 0.775 | 0.6942 | 0.2005 | 0.0947 | | No log | 12.0 | 300 | 0.6795 | 0.775 | 0.3263 | 1.6182 | 0.775 | 0.7692 | 0.2155 | 0.0875 | | No log | 13.0 | 325 | 0.5156 | 0.85 | 0.2454 | 0.9647 | 0.85 | 0.8378 | 0.2033 | 0.0515 | | No log | 14.0 | 350 | 0.5341 | 0.845 | 0.2644 | 1.0410 | 0.845 | 0.8402 | 0.2050 | 0.0503 | | No log | 15.0 | 375 | 0.4678 | 0.865 | 0.2245 | 0.9232 | 0.865 | 0.8564 | 0.1836 | 0.0363 | | No log | 16.0 | 400 | 0.5620 | 0.82 | 0.2819 | 1.1475 | 0.82 | 0.7980 | 0.2050 | 0.0710 | | No log | 17.0 | 425 | 0.5253 | 0.83 | 0.2642 | 0.8809 | 0.83 | 0.8145 | 0.1811 | 0.0723 | | No log | 18.0 | 450 | 0.6295 | 0.815 | 0.2997 | 1.8144 | 0.815 | 0.8062 | 0.2120 | 0.0636 | | No log | 19.0 | 475 | 0.5748 | 0.83 | 0.2774 | 1.7900 | 0.83 | 0.8200 | 0.1920 | 0.0506 | | 0.466 | 20.0 | 500 | 0.4704 | 0.84 | 0.2275 | 0.8869 | 0.8400 | 0.8135 | 0.1882 | 0.0472 | | 0.466 | 21.0 | 525 | 0.5693 | 0.82 | 0.2820 | 1.3315 | 0.82 | 0.8013 | 0.2011 | 0.0821 | | 0.466 | 22.0 | 550 | 0.5251 | 0.81 | 0.2677 | 1.2663 | 0.81 | 0.7890 | 0.2037 | 0.0745 | | 0.466 | 23.0 | 575 | 0.5158 | 0.83 | 0.2638 | 1.2621 
| 0.83 | 0.8070 | 0.1927 | 0.0614 | | 0.466 | 24.0 | 600 | 0.5056 | 0.835 | 0.2590 | 1.5337 | 0.835 | 0.8080 | 0.1887 | 0.0617 | | 0.466 | 25.0 | 625 | 0.4897 | 0.85 | 0.2476 | 1.4341 | 0.85 | 0.8361 | 0.1870 | 0.0627 | | 0.466 | 26.0 | 650 | 0.4994 | 0.85 | 0.2556 | 1.5846 | 0.85 | 0.8302 | 0.1965 | 0.0718 | | 0.466 | 27.0 | 675 | 0.4720 | 0.845 | 0.2406 | 1.3093 | 0.845 | 0.8234 | 0.1873 | 0.0704 | | 0.466 | 28.0 | 700 | 0.4858 | 0.84 | 0.2486 | 1.4459 | 0.8400 | 0.8192 | 0.1676 | 0.0730 | | 0.466 | 29.0 | 725 | 0.4908 | 0.84 | 0.2510 | 1.4941 | 0.8400 | 0.8159 | 0.1754 | 0.0717 | | 0.466 | 30.0 | 750 | 0.4805 | 0.855 | 0.2442 | 1.3279 | 0.855 | 0.8334 | 0.1827 | 0.0667 | | 0.466 | 31.0 | 775 | 0.4783 | 0.845 | 0.2428 | 1.4150 | 0.845 | 0.8264 | 0.1759 | 0.0660 | | 0.466 | 32.0 | 800 | 0.4822 | 0.855 | 0.2449 | 1.4848 | 0.855 | 0.8322 | 0.1928 | 0.0702 | | 0.466 | 33.0 | 825 | 0.4845 | 0.84 | 0.2462 | 1.4925 | 0.8400 | 0.8227 | 0.1837 | 0.0692 | | 0.466 | 34.0 | 850 | 0.4843 | 0.85 | 0.2466 | 1.4881 | 0.85 | 0.8295 | 0.1752 | 0.0683 | | 0.466 | 35.0 | 875 | 0.4837 | 0.85 | 0.2464 | 1.4939 | 0.85 | 0.8295 | 0.1842 | 0.0718 | | 0.466 | 36.0 | 900 | 0.4843 | 0.85 | 0.2467 | 1.4910 | 0.85 | 0.8295 | 0.1950 | 0.0705 | | 0.466 | 37.0 | 925 | 0.4862 | 0.85 | 0.2479 | 1.4938 | 0.85 | 0.8295 | 0.1871 | 0.0713 | | 0.466 | 38.0 | 950 | 0.4854 | 0.85 | 0.2478 | 1.4945 | 0.85 | 0.8295 | 0.1859 | 0.0719 | | 0.466 | 39.0 | 975 | 0.4850 | 0.85 | 0.2471 | 1.4891 | 0.85 | 0.8295 | 0.1855 | 0.0724 | | 0.0749 | 40.0 | 1000 | 0.4869 | 0.85 | 0.2484 | 1.4967 | 0.85 | 0.8295 | 0.1969 | 0.0718 | | 0.0749 | 41.0 | 1025 | 0.4857 | 0.85 | 0.2482 | 1.5544 | 0.85 | 0.8295 | 0.1904 | 0.0726 | | 0.0749 | 42.0 | 1050 | 0.4872 | 0.85 | 0.2487 | 1.5559 | 0.85 | 0.8295 | 0.1877 | 0.0732 | | 0.0749 | 43.0 | 1075 | 0.4873 | 0.85 | 0.2488 | 1.5534 | 0.85 | 0.8295 | 0.1871 | 0.0723 | | 0.0749 | 44.0 | 1100 | 0.4870 | 0.85 | 0.2489 | 1.5542 | 0.85 | 0.8295 | 0.1787 | 0.0730 | | 0.0749 | 45.0 | 1125 | 0.4874 | 0.85 | 0.2490 | 1.5544 | 0.85 | 0.8295 | 0.1867 | 0.0724 | | 0.0749 | 46.0 | 1150 | 0.4868 | 0.85 | 0.2486 | 1.5531 | 0.85 | 0.8295 | 0.1954 | 0.0723 | | 0.0749 | 47.0 | 1175 | 0.4879 | 0.85 | 0.2493 | 1.5546 | 0.85 | 0.8295 | 0.1842 | 0.0727 | | 0.0749 | 48.0 | 1200 | 0.4882 | 0.85 | 0.2495 | 1.5537 | 0.85 | 0.8295 | 0.1864 | 0.0730 | | 0.0749 | 49.0 | 1225 | 0.4875 | 0.85 | 0.2492 | 1.5537 | 0.85 | 0.8295 | 0.1884 | 0.0727 | | 0.0749 | 50.0 | 1250 | 0.4880 | 0.85 | 0.2494 | 1.5528 | 0.85 | 0.8295 | 0.1877 | 0.0726 | | 0.0749 | 51.0 | 1275 | 0.4888 | 0.85 | 0.2499 | 1.5539 | 0.85 | 0.8295 | 0.1754 | 0.0725 | | 0.0749 | 52.0 | 1300 | 0.4894 | 0.85 | 0.2501 | 1.5540 | 0.85 | 0.8295 | 0.1883 | 0.0736 | | 0.0749 | 53.0 | 1325 | 0.4889 | 0.85 | 0.2501 | 1.5533 | 0.85 | 0.8295 | 0.1708 | 0.0727 | | 0.0749 | 54.0 | 1350 | 0.4891 | 0.85 | 0.2500 | 1.5531 | 0.85 | 0.8295 | 0.1785 | 0.0729 | | 0.0749 | 55.0 | 1375 | 0.4904 | 0.85 | 0.2509 | 1.5541 | 0.85 | 0.8295 | 0.1744 | 0.0730 | | 0.0749 | 56.0 | 1400 | 0.4903 | 0.85 | 0.2507 | 1.5541 | 0.85 | 0.8295 | 0.1897 | 0.0730 | | 0.0749 | 57.0 | 1425 | 0.4894 | 0.85 | 0.2503 | 1.5536 | 0.85 | 0.8295 | 0.1792 | 0.0730 | | 0.0749 | 58.0 | 1450 | 0.4889 | 0.85 | 0.2501 | 1.5531 | 0.85 | 0.8295 | 0.1892 | 0.0730 | | 0.0749 | 59.0 | 1475 | 0.4907 | 0.85 | 0.2511 | 1.5542 | 0.85 | 0.8295 | 0.1767 | 0.0733 | | 0.0712 | 60.0 | 1500 | 0.4897 | 0.85 | 0.2506 | 1.5540 | 0.85 | 0.8295 | 0.1813 | 0.0732 | | 0.0712 | 61.0 | 1525 | 0.4906 | 0.85 | 0.2512 | 1.5545 | 0.85 | 0.8295 | 0.1853 | 0.0733 | 
| 0.0712 | 62.0 | 1550 | 0.4905 | 0.85 | 0.2512 | 1.5541 | 0.85 | 0.8295 | 0.1723 | 0.0733 | | 0.0712 | 63.0 | 1575 | 0.4904 | 0.85 | 0.2512 | 1.5543 | 0.85 | 0.8295 | 0.1817 | 0.0732 | | 0.0712 | 64.0 | 1600 | 0.4915 | 0.85 | 0.2515 | 1.5544 | 0.85 | 0.8295 | 0.1942 | 0.0736 | | 0.0712 | 65.0 | 1625 | 0.4898 | 0.85 | 0.2506 | 1.5534 | 0.85 | 0.8295 | 0.1712 | 0.0735 | | 0.0712 | 66.0 | 1650 | 0.4911 | 0.85 | 0.2516 | 1.5548 | 0.85 | 0.8295 | 0.1824 | 0.0733 | | 0.0712 | 67.0 | 1675 | 0.4908 | 0.85 | 0.2513 | 1.5546 | 0.85 | 0.8295 | 0.1896 | 0.0734 | | 0.0712 | 68.0 | 1700 | 0.4911 | 0.85 | 0.2516 | 1.5548 | 0.85 | 0.8295 | 0.1744 | 0.0734 | | 0.0712 | 69.0 | 1725 | 0.4912 | 0.85 | 0.2516 | 1.5541 | 0.85 | 0.8295 | 0.1726 | 0.0733 | | 0.0712 | 70.0 | 1750 | 0.4910 | 0.85 | 0.2514 | 1.5543 | 0.85 | 0.8295 | 0.1827 | 0.0736 | | 0.0712 | 71.0 | 1775 | 0.4918 | 0.85 | 0.2520 | 1.5546 | 0.85 | 0.8295 | 0.1909 | 0.0736 | | 0.0712 | 72.0 | 1800 | 0.4916 | 0.85 | 0.2519 | 1.5545 | 0.85 | 0.8295 | 0.1830 | 0.0734 | | 0.0712 | 73.0 | 1825 | 0.4913 | 0.85 | 0.2517 | 1.5540 | 0.85 | 0.8295 | 0.1835 | 0.0733 | | 0.0712 | 74.0 | 1850 | 0.4918 | 0.85 | 0.2521 | 1.5544 | 0.85 | 0.8295 | 0.1831 | 0.0736 | | 0.0712 | 75.0 | 1875 | 0.4919 | 0.85 | 0.2521 | 1.5548 | 0.85 | 0.8295 | 0.1829 | 0.0734 | | 0.0712 | 76.0 | 1900 | 0.4916 | 0.85 | 0.2520 | 1.5547 | 0.85 | 0.8295 | 0.1831 | 0.0733 | | 0.0712 | 77.0 | 1925 | 0.4919 | 0.85 | 0.2521 | 1.5542 | 0.85 | 0.8295 | 0.1732 | 0.0735 | | 0.0712 | 78.0 | 1950 | 0.4920 | 0.85 | 0.2521 | 1.5541 | 0.85 | 0.8295 | 0.1831 | 0.0734 | | 0.0712 | 79.0 | 1975 | 0.4920 | 0.85 | 0.2522 | 1.5544 | 0.85 | 0.8295 | 0.1833 | 0.0734 | | 0.0712 | 80.0 | 2000 | 0.4922 | 0.845 | 0.2523 | 1.5549 | 0.845 | 0.8258 | 0.1859 | 0.0735 | | 0.0712 | 81.0 | 2025 | 0.4920 | 0.85 | 0.2522 | 1.5542 | 0.85 | 0.8295 | 0.1830 | 0.0732 | | 0.0712 | 82.0 | 2050 | 0.4920 | 0.845 | 0.2522 | 1.5549 | 0.845 | 0.8258 | 0.1783 | 0.0734 | | 0.0712 | 83.0 | 2075 | 0.4922 | 0.85 | 0.2524 | 1.5546 | 0.85 | 0.8295 | 0.1832 | 0.0734 | | 0.0712 | 84.0 | 2100 | 0.4920 | 0.845 | 0.2522 | 1.5543 | 0.845 | 0.8258 | 0.1784 | 0.0735 | | 0.0712 | 85.0 | 2125 | 0.4921 | 0.845 | 0.2523 | 1.5547 | 0.845 | 0.8258 | 0.1785 | 0.0735 | | 0.0712 | 86.0 | 2150 | 0.4921 | 0.85 | 0.2523 | 1.5545 | 0.85 | 0.8295 | 0.1836 | 0.0733 | | 0.0712 | 87.0 | 2175 | 0.4924 | 0.85 | 0.2524 | 1.5547 | 0.85 | 0.8295 | 0.1836 | 0.0734 | | 0.0712 | 88.0 | 2200 | 0.4925 | 0.845 | 0.2524 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0735 | | 0.0712 | 89.0 | 2225 | 0.4924 | 0.85 | 0.2525 | 1.5548 | 0.85 | 0.8295 | 0.1835 | 0.0734 | | 0.0712 | 90.0 | 2250 | 0.4921 | 0.845 | 0.2523 | 1.5545 | 0.845 | 0.8258 | 0.1688 | 0.0735 | | 0.0712 | 91.0 | 2275 | 0.4925 | 0.845 | 0.2525 | 1.5546 | 0.845 | 0.8258 | 0.1785 | 0.0735 | | 0.0712 | 92.0 | 2300 | 0.4924 | 0.845 | 0.2524 | 1.5546 | 0.845 | 0.8258 | 0.1785 | 0.0736 | | 0.0712 | 93.0 | 2325 | 0.4925 | 0.845 | 0.2526 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0736 | | 0.0712 | 94.0 | 2350 | 0.4924 | 0.845 | 0.2525 | 1.5547 | 0.845 | 0.8258 | 0.1786 | 0.0736 | | 0.0712 | 95.0 | 2375 | 0.4926 | 0.845 | 0.2526 | 1.5547 | 0.845 | 0.8258 | 0.1785 | 0.0736 | | 0.0712 | 96.0 | 2400 | 0.4925 | 0.845 | 0.2526 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0736 | | 0.0712 | 97.0 | 2425 | 0.4925 | 0.845 | 0.2526 | 1.5547 | 0.845 | 0.8258 | 0.1785 | 0.0735 | | 0.0712 | 98.0 | 2450 | 0.4926 | 0.845 | 0.2526 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 0.0736 | | 0.0712 | 99.0 | 2475 | 0.4925 | 0.845 | 0.2526 | 1.5548 | 0.845 | 0.8258 | 0.1785 | 
0.0736 | | 0.0711 | 100.0 | 2500 | 0.4925 | 0.845 | 0.2526 | 1.5547 | 0.845 | 0.8258 | 0.1785 | 0.0736 | ### Framework versions - Transformers 4.30.2 - Pytorch 1.13.1 - Datasets 2.13.1 - Tokenizers 0.13.3
grace-pro/xlmr-base-finetuned-hausa-2e-3
grace-pro
2023-07-13T17:31:39Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-13T17:03:58Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: xlmr-base-finetuned-hausa-2e-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmr-base-finetuned-hausa-2e-3 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2694 - Precision: 0.1719 - Recall: 0.0235 - F1: 0.0414 - Accuracy: 0.9247 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2724 | 1.0 | 1312 | 0.2700 | 0.1719 | 0.0235 | 0.0414 | 0.9247 | | 0.2754 | 2.0 | 2624 | 0.2689 | 0.1719 | 0.0235 | 0.0414 | 0.9247 | | 0.2743 | 3.0 | 3936 | 0.2708 | 0.1719 | 0.0235 | 0.0414 | 0.9247 | | 0.2745 | 4.0 | 5248 | 0.2692 | 0.1719 | 0.0235 | 0.0414 | 0.9247 | | 0.2713 | 5.0 | 6560 | 0.2694 | 0.1719 | 0.0235 | 0.0414 | 0.9247 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
ddoc/ptg
ddoc
2023-07-13T17:27:11Z
0
0
null
[ "region:us" ]
null
2023-07-13T17:26:55Z
# Prompt generator An extension for [webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) that lets you generate prompts. ![](screenshot.png)
grace-pro/afriberta-small-finetuned-hausa-2e-3
grace-pro
2023-07-13T16:59:56Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-13T16:49:20Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: afriberta-small-finetuned-hausa-2e-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afriberta-small-finetuned-hausa-2e-3 This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2358 - Precision: 0.1719 - Recall: 0.0276 - F1: 0.0476 - Accuracy: 0.9373 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.243 | 1.0 | 1312 | 0.2370 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | | 0.2448 | 2.0 | 2624 | 0.2406 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | | 0.2431 | 3.0 | 3936 | 0.2366 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | | 0.2431 | 4.0 | 5248 | 0.2358 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | | 0.2409 | 5.0 | 6560 | 0.2358 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
NasimB/gpt2-cocnat-guten-mod-rm-2k-rarity-no-cut
NasimB
2023-07-13T16:46:31Z
9
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-13T15:02:21Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: gpt2-cocnat-guten-mod-rm-2k-rarity-no-cut results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-cocnat-guten-mod-rm-2k-rarity-no-cut This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3120 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7018 | 0.29 | 500 | 5.6444 | | 5.3406 | 0.58 | 1000 | 5.2034 | | 4.9891 | 0.88 | 1500 | 4.9570 | | 4.7257 | 1.17 | 2000 | 4.8069 | | 4.5644 | 1.46 | 2500 | 4.6833 | | 4.4557 | 1.75 | 3000 | 4.5769 | | 4.3292 | 2.04 | 3500 | 4.4986 | | 4.137 | 2.34 | 4000 | 4.4485 | | 4.1027 | 2.63 | 4500 | 4.3900 | | 4.064 | 2.92 | 5000 | 4.3414 | | 3.8721 | 3.21 | 5500 | 4.3322 | | 3.8018 | 3.5 | 6000 | 4.3007 | | 3.7893 | 3.79 | 6500 | 4.2661 | | 3.6925 | 4.09 | 7000 | 4.2635 | | 3.5253 | 4.38 | 7500 | 4.2599 | | 3.5119 | 4.67 | 8000 | 4.2446 | | 3.506 | 4.96 | 8500 | 4.2295 | | 3.3528 | 5.25 | 9000 | 4.2434 | | 3.3251 | 5.55 | 9500 | 4.2431 | | 3.325 | 5.84 | 10000 | 4.2415 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
anyachan/ernalora
anyachan
2023-07-13T16:46:05Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-13T16:41:22Z
--- license: creativeml-openrail-m ---
grace-pro/afriberta-base-finetuned-hausa-2e-3
grace-pro
2023-07-13T16:45:14Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-13T16:28:08Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: afriberta-base-finetuned-hausa-2e-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # afriberta-base-finetuned-hausa-2e-3 This model is a fine-tuned version of [castorini/afriberta_base](https://huggingface.co/castorini/afriberta_base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2360 - Precision: 0.1719 - Recall: 0.0276 - F1: 0.0476 - Accuracy: 0.9373 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2428 | 1.0 | 1312 | 0.2368 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | | 0.2435 | 2.0 | 2624 | 0.2385 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | | 0.2428 | 3.0 | 3936 | 0.2371 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | | 0.2434 | 4.0 | 5248 | 0.2359 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | | 0.2411 | 5.0 | 6560 | 0.2360 | 0.1719 | 0.0276 | 0.0476 | 0.9373 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
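## How to use (sketch)

This snippet is not part of the original card; it is a hedged usage sketch assuming the checkpoint works with the standard token-classification pipeline. The input sentence is an arbitrary Hausa example chosen for illustration.

```python
# Minimal sketch: run the fine-tuned AfriBERTa checkpoint as a token classifier (NER-style).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grace-pro/afriberta-base-finetuned-hausa-2e-3",
    aggregation_strategy="simple",  # group sub-word pieces into whole entities
)
# Illustrative Hausa sentence: "President Muhammadu Buhari visited Kano in 2022."
print(ner("Shugaba Muhammadu Buhari ya ziyarci Kano a shekarar 2022."))
```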
Gustavosta/SowlfieModelsRVC
Gustavosta
2023-07-13T16:44:17Z
0
2
null
[ "audio-to-audio", "pt", "en", "license:mit", "region:us" ]
audio-to-audio
2023-07-09T22:53:01Z
--- license: mit language: - pt - en pipeline_tag: audio-to-audio --- # Sowlfie Models RVC A repository with the public **RVC models** that I make. (**Commissions are open**; contact "@lengodev" on Discord.) Have suggestions? [Open an issue](https://huggingface.co/Gustavosta/SowlfieModelsRVC/discussions/new)! ## 🎤 RVC Models: | Model | Dataset | Epochs | Language | Sample | |---|:---:|---:|---:|---:| | [Pica-Pau (Woody Woodpecker PT-BR)](https://huggingface.co/Gustavosta/SowlfieModelsRVC/resolve/main/pica-pau-model-rvc-v2.zip) | [8 minutes, 120 segments](https://drive.google.com/file/d/1t37uofCRrohhPLxcXfJWlfhIU_afwIdM/view?usp=sharing) | 400 epochs | 🇧🇷 Brazilian Portuguese | [Bolo de morango de cada estado](https://youtu.be/UxmEFyC4R_0) | ## ❓ How to use a model? If you've never used RVC v2 before, I recommend checking out **[this guide](https://docs.google.com/document/d/13_l1bd1Osgz7qlAZn-zhklCbHpVRk6bYOuAuB78qmsE/edit?pli=1)**. To use a model from this repository, copy the **URL of the model's `.zip` file** and make sure it contains "`/resolve/main/`" **before the filename**; then **paste that URL into the model download field**. **URL example**: ``` https://huggingface.co/Gustavosta/SowlfieModelsRVC/resolve/main/model-filename.zip ``` ## ⚖️ Licence: [MIT](https://huggingface.co/models?license=license:mit) licence --- ⚠️ It's hard work to **build datasets**, **train models**, and make them **available for free**, so if you use a model, please **credit it** to `Sowlfie Models` or `Gustavosta`. Anyway, **thanks for reading this far**! 🤝
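P.S. A small download sketch (not part of the original card): the same `.zip` files can also be fetched programmatically with `huggingface_hub`; the filename below is the Pica-Pau model listed in the table above.

```python
# Sketch: download an RVC model zip from this repository and unpack it locally.
from huggingface_hub import hf_hub_download
import zipfile

path = hf_hub_download(
    repo_id="Gustavosta/SowlfieModelsRVC",
    filename="pica-pau-model-rvc-v2.zip",
)
with zipfile.ZipFile(path) as zf:
    zf.extractall("pica-pau-model")  # extracted files can then be loaded in your RVC setup
```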
brunogs/distilbert-base-uncased-finetuned-cola
brunogs
2023-07-13T16:42:33Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-13T15:53:06Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: brunogs/distilbert-base-uncased-finetuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # brunogs/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1860 - Validation Loss: 0.5510 - Train Matthews Correlation: 0.5076 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5165 | 0.4641 | 0.4474 | 0 | | 0.3176 | 0.4989 | 0.5060 | 1 | | 0.1860 | 0.5510 | 0.5076 | 2 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
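## How to use (sketch)

This snippet is not part of the generated card; it is a sketch assuming the checkpoint is a standard TensorFlow DistilBERT sequence classifier with CoLA-style acceptability labels (the exact label mapping comes from the model config and is an assumption here).

```python
# Sketch: score a sentence with the TensorFlow checkpoint and print class probabilities.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

name = "brunogs/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("The book was read by the whole class.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # probabilities over the model's labels
```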
Tanor/BERTovoSENTNEG4
Tanor
2023-07-13T16:40:23Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-08T20:56:45Z
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: BERTovoSENTNEG4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERTovoSENTNEG4 This model is a fine-tuned version of [Tanor/BERTicovoSENTNEG4](https://huggingface.co/Tanor/BERTicovoSENTNEG4) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0675 - F1: 0.5778 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 32 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 53 | 0.0491 | 0.0800 | | No log | 2.0 | 106 | 0.0487 | 0.5294 | | No log | 3.0 | 159 | 0.0517 | 0.5714 | | No log | 4.0 | 212 | 0.0585 | 0.5366 | | No log | 5.0 | 265 | 0.0675 | 0.5778 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
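## How to use (sketch)

This snippet is not part of the original card; it is a minimal sketch using the generic text-classification pipeline. The label names and their meanings are not documented here, so the output is whatever the checkpoint's config defines.

```python
# Minimal sketch: run the classifier through the text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="Tanor/BERTovoSENTNEG4")
# Illustrative Serbian sentence: "This is just an example sentence."
print(classifier("Ovo je samo primer rečenice."))
```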
miasik/Yohan-Anything.V5
miasik
2023-07-13T16:27:23Z
0
0
null
[ "en", "license:creativeml-openrail-m", "region:us" ]
null
2023-07-07T07:03:52Z
--- license: creativeml-openrail-m language: - en --- 1. The original Yohan was CLIP-fixed and pruned 2. Anything.V5 was merged as a "train difference" with (Yohan-Anything.V3)*1 using Supermerger 3. ClearVAE.V2.3 was baked in during the merge ![](https://huggingface.co/miasik/Yohan-Anything.V5/resolve/main/Grids/431619419-21-DPM%2B%2B%202M%20Karras-103421_405457.jpg "grid 01") ![](https://huggingface.co/miasik/Yohan-Anything.V5/resolve/main/Grids/1546575599-21-DPM%2B%2B%202M%20Karras-105716_601281.jpg "grid 02")
Tanor/BERTovoSENTPOS4
Tanor
2023-07-13T16:16:24Z
108
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-08T19:42:57Z
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: BERTovoSENTPOS4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERTovoSENTPOS4 This model is a fine-tuned version of [Tanor/BERTicovoSENTPOS4](https://huggingface.co/Tanor/BERTicovoSENTPOS4) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0509 - F1: 0.5128 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 32 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 53 | 0.0467 | 0.0 | | No log | 2.0 | 106 | 0.0427 | 0.3529 | | No log | 3.0 | 159 | 0.0402 | 0.5238 | | No log | 4.0 | 212 | 0.0504 | 0.4706 | | No log | 5.0 | 265 | 0.0468 | 0.5405 | | No log | 6.0 | 318 | 0.0509 | 0.5128 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1 - Datasets 2.13.1 - Tokenizers 0.13.3
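## How to use (sketch)

As with the NEG4 variant, this snippet is not part of the original card; it is a sketch that loads the checkpoint directly and reads the predicted label from the model config (the label mapping itself is not documented here).

```python
# Sketch: classify one sentence and print the label name stored in the config.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "Tanor/BERTovoSENTPOS4"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Ovo je samo primer rečenice.", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```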
alesthehuman/ppo-LunarLander-v2-unit8
alesthehuman
2023-07-13T16:15:30Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-07-13T15:26:57Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -13.55 +/- 101.24 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 1000000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'alesthehuman/ppo-LunarLander-v2-unit8' 'batch_size': 512 'minibatch_size': 128} ```
FarziBuilder/NeoXAdapter
FarziBuilder
2023-07-13T16:07:15Z
3
0
peft
[ "peft", "region:us" ]
null
2023-07-13T16:07:13Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
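## How to use (sketch)

This snippet is not part of the original card; it is a loading sketch that assumes the adapter targets a causal language model. The base model is not stated in the card, so it is read from the adapter config, and the 4-bit settings simply mirror the quantization config listed above.

```python
# Sketch: attach the PEFT adapter to its 4-bit-quantized base model.
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

adapter_id = "FarziBuilder/NeoXAdapter"
peft_config = PeftConfig.from_pretrained(adapter_id)  # holds base_model_name_or_path

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)  # adapter weights are applied on top
```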