Dataset schema (each record below lists, in order: modelId, author, last_modified, downloads, likes, library_name, tags, pipeline_tag, createdAt, card):

| Column | Type | Range |
|:--|:--|:--|
| modelId | string | 5–138 chars |
| author | string | 2–42 chars |
| last_modified | date | 2020-02-15 11:33:14 – 2025-04-11 18:27:37 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 421 distinct values |
| tags | sequence | 1 – 4.05k entries |
| pipeline_tag | string | 54 distinct values |
| createdAt | date | 2022-03-02 23:29:04 – 2025-04-11 18:27:06 |
| card | string | 11 – 1.01M chars |
jaindeepali010/clinical_ner_miimansa_G1_model
jaindeepali010
"2024-01-28T09:17:42Z"
1
0
transformers
[ "transformers", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-01-28T08:05:30Z"
This is a clinical NER model fine-tuned from bert-base-uncased on the G1 dataset. Training and validation used 80% of the data (random state = 42), with the remaining 20% held out for testing. The model was trained for 20 epochs with an early-stopping patience of 3 epochs.
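The card ships no usage snippet; below is a minimal inference sketch, assuming the checkpoint exposes a standard BERT token-classification head with its label map (the repository tags list `fill-mask`, so this should be verified before use). The example sentence is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "jaindeepali010/clinical_ner_miimansa_G1_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)  # assumes the NER head was saved

# Aggregate subword predictions into word-level entity spans
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Patient reports severe headache after starting ibuprofen."))
```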
albertus-sussex/veriscrape-simcse-auto-reference_2_to_verify_8-fold-7
albertus-sussex
"2025-03-26T11:44:04Z"
0
0
transformers
[ "transformers", "safetensors", "roberta", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
"2025-03-26T11:43:33Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
camidenecken/RM2-RoBERTa-rm-v3-SBERT-v4_12
camidenecken
"2024-11-19T17:19:49Z"
163
0
transformers
[ "transformers", "safetensors", "roberta", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
"2024-11-19T17:19:16Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SatCat/rl_course_vizdoom_health_gathering_supreme
SatCat
"2023-02-24T07:31:08Z"
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-02-24T07:31:02Z"
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 11.42 +/- 6.12 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation on how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r SatCat/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, since training resumes from the step count at which the experiment previously concluded.
NexesMess/Llama_3.x_70b_Triads_V2
NexesMess
"2025-03-12T20:30:52Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:NexesMess/Llama_3.3_70b_DoppelGanger_R1", "base_model:merge:NexesMess/Llama_3.3_70b_DoppelGanger_R1", "base_model:NexesMess/Llama_3.x_70b_Tessessence_0.10c", "base_model:merge:NexesMess/Llama_3.x_70b_Tessessence_0.10c", "base_model:Steelskull/L3.3-Electra-R1-70b", "base_model:merge:Steelskull/L3.3-Electra-R1-70b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-12T19:59:02Z"
--- base_model: - NexesMess/Llama_3.3_70b_DoppelGanger_R1 - NexesMess/Llama_3.x_70b_Tessessence_0.10c - Steelskull/L3.3-Electra-R1-70b library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [NexesMess/Llama_3.x_70b_Tessessence_0.10c](https://huggingface.co/NexesMess/Llama_3.x_70b_Tessessence_0.10c) as a base. ### Models Merged The following models were included in the merge: * [NexesMess/Llama_3.3_70b_DoppelGanger_R1](https://huggingface.co/NexesMess/Llama_3.3_70b_DoppelGanger_R1) * [Steelskull/L3.3-Electra-R1-70b](https://huggingface.co/Steelskull/L3.3-Electra-R1-70b) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: model_stock models: - model: NexesMess/Llama_3.3_70b_DoppelGanger_R1 parameters: weight: 1.0 - model: Steelskull/L3.3-Electra-R1-70b parameters: weight: 1.0 base_model: NexesMess/Llama_3.x_70b_Tessessence_0.10c dtype: bfloat16 out_dtype: bfloat16 parameters: int8_mask: true normalize: true rescale: false chat_template: auto tokenizer: source: union ```
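The card stops at the merge configuration; a standard loading sketch follows, assuming the merged checkpoint behaves as a stock Llama causal LM in transformers (the prompt is illustrative, and a 70B model in bfloat16 needs on the order of 140 GB of accelerator memory):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NexesMess/Llama_3.x_70b_Triads_V2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # shard across available devices
)

inputs = tokenizer("Explain model-stock merging in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```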
tiendoan/finetune-clip-vit-large-patch14
tiendoan
"2024-11-05T04:30:58Z"
192
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:openai/clip-vit-large-patch14", "base_model:finetune:openai/clip-vit-large-patch14", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-11-05T04:04:17Z"
--- library_name: transformers base_model: openai/clip-vit-large-patch14 tags: - generated_from_trainer metrics: - f1 model-index: - name: finetune-clip-vit-large-patch14 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetune-clip-vit-large-patch14 This model is a fine-tuned version of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.6545 - F1: 0.6242 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:------:|:----:|:---------------:|:------:| | 0.7084 | 0.3690 | 100 | 0.7173 | 0.5405 | | 0.6818 | 0.7380 | 200 | 0.7269 | 0.5683 | | 0.7169 | 1.1070 | 300 | 0.6949 | 0.5683 | | 0.6957 | 1.4760 | 400 | 0.6799 | 0.5650 | | 0.6218 | 1.8450 | 500 | 0.7344 | 0.5766 | | 0.6406 | 2.2140 | 600 | 0.6600 | 0.6118 | | 0.6645 | 2.5830 | 700 | 0.6581 | 0.6113 | | 0.6546 | 2.9520 | 800 | 0.6549 | 0.6192 | | 0.6068 | 3.3210 | 900 | 0.6542 | 0.6224 | | 0.6351 | 3.6900 | 1000 | 0.6545 | 0.6242 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
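The card's usage sections are empty; a minimal inference sketch, assuming the checkpoint exposes a standard image-classification head (the image URL is a placeholder):

```python
from transformers import pipeline

# The fine-tuned ViT classifier; class labels come from the checkpoint's config.
clf = pipeline("image-classification", model="tiendoan/finetune-clip-vit-large-patch14")
print(clf("https://example.com/sample.jpg"))  # placeholder image URL
```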
3mei/phi_3_5_instruct_mini_4bit_reflection_405_v2_8k_gsm8k_3e_qkvogud_mlab_instr_resp
3mei
"2024-09-24T02:21:44Z"
60
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Phi-3.5-mini-instruct-bnb-4bit", "base_model:quantized:unsloth/Phi-3.5-mini-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-09-24T02:20:04Z"
--- base_model: unsloth/Phi-3.5-mini-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** 3mei - **License:** apache-2.0 - **Finetuned from model:** unsloth/Phi-3.5-mini-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
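No inference snippet is provided; a loading sketch, assuming the repository stores bitsandbytes 4-bit weights that transformers can load directly (requires `bitsandbytes` installed; the question is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "3mei/phi_3_5_instruct_mini_4bit_reflection_405_v2_8k_gsm8k_3e_qkvogud_mlab_instr_resp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # 4-bit bnb weights

messages = [{"role": "user", "content": "Solve 17 * 23 step by step."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```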
vaibkumar/agentic_training_finetuned_v7-Q8_0-GGUF
vaibkumar
"2025-03-14T15:51:32Z"
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:vaibkumar/agentic_training_finetuned_v7", "base_model:quantized:vaibkumar/agentic_training_finetuned_v7", "endpoints_compatible", "region:us" ]
null
"2025-03-14T15:50:31Z"
--- base_model: vaibkumar/agentic_training_finetuned_v7 tags: - llama-cpp - gguf-my-repo --- # vaibkumar/agentic_training_finetuned_v7-Q8_0-GGUF This model was converted to GGUF format from [`vaibkumar/agentic_training_finetuned_v7`](https://huggingface.co/vaibkumar/agentic_training_finetuned_v7) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/vaibkumar/agentic_training_finetuned_v7) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo vaibkumar/agentic_training_finetuned_v7-Q8_0-GGUF --hf-file agentic_training_finetuned_v7-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo vaibkumar/agentic_training_finetuned_v7-Q8_0-GGUF --hf-file agentic_training_finetuned_v7-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo vaibkumar/agentic_training_finetuned_v7-Q8_0-GGUF --hf-file agentic_training_finetuned_v7-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo vaibkumar/agentic_training_finetuned_v7-Q8_0-GGUF --hf-file agentic_training_finetuned_v7-q8_0.gguf -c 2048 ```
Nambata/dummy_vae_mapi
Nambata
"2025-01-08T02:08:08Z"
25
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionControlNetPipeline", "region:us" ]
text-to-image
"2025-01-08T02:05:23Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
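The template leaves the getting-started section empty; a heavily hedged sketch, assuming the repository hosts a complete `StableDiffusionControlNetPipeline` as the tags suggest (the conditioning image URL and prompt are placeholders):

```python
from diffusers import StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Reconstructs the full pipeline, including the ControlNet component, from the repo's model_index.json
pipe = StableDiffusionControlNetPipeline.from_pretrained("Nambata/dummy_vae_mapi")
control = load_image("https://example.com/control.png")  # placeholder conditioning image
image = pipe("a photo of a room", image=control, num_inference_steps=20).images[0]
image.save("out.png")
```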
StepLaw/StepLaw-N_1.0B-D_1.0B-LR4.883e-04-BS32768
StepLaw
"2025-04-02T01:03:23Z"
0
0
transformers
[ "transformers", "safetensors", "step1", "text-generation", "StepLaw", "causal-lm", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-02T00:59:24Z"
--- license: apache-2.0 tags: - StepLaw - causal-lm language: - en library_name: transformers pipeline_tag: text-generation model-index: - name: step2v2_0618_h2048_ffnh8192_numh16_numl16_lr4.883e-04_bs16_ti61035_mlr1e-5 results: [] --- # Wandb Model Name: step2v2_0618_h2048_ffnh8192_numh16_numl16_lr4.883e-04_bs16_ti61035_mlr1e-5 This model is part of the [StepLaw-N_1.0B-D_1.0B](https://huggingface.co/collections/StepLaw/StepLaw-N_1.0B-D_1.0B) collection. ## Model Specifications ### Architecture - **Hidden size (H)**: 2048 - **Feed-forward network size (FFN)**: 8192 - **Attention heads**: 16 - **Layers**: 16 - **Parameter count**: 1.1B ### Training Parameters - **Learning rate (lr)**: 4.883e-04 - **Batch size (bs)**: 16 - **Training iterations**: 61035 - **Training tokens (D)**: 2.0B ## Model Description StepLaw models are trained with various hyperparameter settings to enable research on scaling laws and hyperparameter optimization. This specific model was trained with learning rate 4.883e-04 and batch size 16 for 61035 iterations, using a total of 2.0B training tokens. ## Usage Example ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "StepLaw/StepLaw-N_1.0B-D_1.0B-LR4.883e-04-BS32768" tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=False) model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True) # Generate text inputs = tokenizer("A long time ago in a galaxy far, far away", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Part of StepLaw Project StepLaw is an initiative to provide thousands of models for optimal hyperparameter research. Visit [StepLaw Project](https://step-law.github.io/) for more information.
automerger/Experiment26Shadow-7B
automerger
"2024-03-27T00:13:18Z"
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:CorticalStack/shadow-clown-7B-slerp", "base_model:finetune:CorticalStack/shadow-clown-7B-slerp", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-08T18:56:36Z"
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - automerger base_model: - rwitz/experiment26-truthy-iter-0 - CorticalStack/shadow-clown-7B-slerp --- # Experiment26Shadow-7B Experiment26Shadow-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [rwitz/experiment26-truthy-iter-0](https://huggingface.co/rwitz/experiment26-truthy-iter-0) * [CorticalStack/shadow-clown-7B-slerp](https://huggingface.co/CorticalStack/shadow-clown-7B-slerp) ## 🧩 Configuration ```yaml slices: - sources: - model: rwitz/experiment26-truthy-iter-0 layer_range: [0, 32] - model: CorticalStack/shadow-clown-7B-slerp layer_range: [0, 32] merge_method: slerp base_model: rwitz/experiment26-truthy-iter-0 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/Experiment26Shadow-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
scaledown/ScaleDown-7B-slerp-v0.1
scaledown
"2024-03-26T01:20:49Z"
1,538
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-01T08:26:00Z"
--- license: apache-2.0 tags: - merge - mergekit model-index: - name: ScaleDown-7B-slerp-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.0 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.7 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.26 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.9 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 67.17 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scaledown/ScaleDown-7B-slerp-v0.1 name: Open LLM Leaderboard --- # ScaleDown-7B-slerp-v0.1 This model is a merge of the following models made with [mergekit](https://github.com/cg123/mergekit): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: jondurbin/bagel-dpo-7b-v0.1 layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_scaledown__ScaleDown-7B-slerp-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |71.57| |AI2 Reasoning Challenge (25-Shot)|68.00| |HellaSwag (10-Shot) |85.70| |MMLU (5-Shot) |65.26| |TruthfulQA (0-shot) |61.90| |Winogrande (5-shot) |81.37| |GSM8k (5-shot) |67.17|
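The card documents the merge configuration and leaderboard scores but no inference code; a standard transformers sketch, assuming the merged checkpoint loads as a stock Mistral-style causal LM (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "scaledown/ScaleDown-7B-slerp-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What is a large language model?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```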
BenjaminOcampo/task-implicit_task__model-usesvm__aug_method-ri
BenjaminOcampo
"2023-11-03T15:46:03Z"
0
0
null
[ "en", "arxiv:1910.09700", "region:us" ]
null
"2023-11-03T15:45:54Z"
--- language: en --- # Model Card for BenjaminOcampo/task-implicit_task__model-usesvm__aug_method-ri <!-- Provide a quick summary of what the model is/does. --> # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation](#evaluation) 6. [Model Examination](#model-examination-optional) 7. [Environmental Impact](#environmental-impact) 8. [Technical Specifications](#technical-specifications-optional) 9. [Citation](#citation-optional) 10. [Glossary](#glossary-optional) 11. [More Information](#more-information-optional) 12. [Model Card Authors](#model-card-authors-optional) 13. [Model Card Contact](#model-card-contact) 14. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> **Classification results dev set** ``` precision recall f1-score support 0 0.88 0.86 0.87 2680 1 0.76 0.80 0.78 1501 2 0.35 0.31 0.33 186 accuracy 0.81 4367 macro avg 0.66 0.65 0.66 4367 weighted avg 0.81 0.81 0.81 4367 ``` **Classification results test set** ``` precision recall f1-score support 0 0.89 0.87 0.88 2681 1 0.77 0.80 0.79 1501 2 0.37 0.37 0.37 186 accuracy 0.82 4368 macro avg 0.68 0.68 0.68 4368 weighted avg 0.82 0.82 0.82 4368 ``` - **Developed by:** Nicolás Benjamín Ocampo - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** en - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] - **Resources for more information:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. 
--> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed] # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> [More Information Needed] </details>
cleanrl/KungFuMaster-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2
cleanrl
"2023-02-23T16:25:37Z"
0
0
cleanrl
[ "cleanrl", "tensorboard", "KungFuMaster-v5", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-02-23T16:25:36Z"
--- tags: - KungFuMaster-v5 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: KungFuMaster-v5 type: KungFuMaster-v5 metrics: - type: mean_reward value: 33200.00 +/- 9362.48 name: mean_reward verified: false --- # (CleanRL) **PPO** Agent Playing **KungFuMaster-v5** This is a trained model of a PPO agent playing KungFuMaster-v5. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[jax,envpool,atari]" python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper_naturecnn --env-id KungFuMaster-v5 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/KungFuMaster-v5-cleanba_ppo_envpool_impala_atari_wrapper_naturecnn-seed2/raw/main/poetry.lock poetry install --all-extras python cleanba_ppo_envpool_impala_atari_wrapper_naturecnn.py --distributed --learner-device-ids 1 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id KungFuMaster-v5 --seed 2 ``` # Hyperparameters ```python {'actor_device_ids': [0], 'actor_devices': ['gpu:0'], 'anneal_lr': True, 'async_batch_size': 20, 'async_update': 3, 'batch_size': 15360, 'capture_video': False, 'clip_coef': 0.1, 'cuda': True, 'distributed': True, 'ent_coef': 0.01, 'env_id': 'KungFuMaster-v5', 'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper_naturecnn', 'gae_lambda': 0.95, 'gamma': 0.99, 'global_learner_decices': ['gpu:1', 'gpu:3'], 'hf_entity': 'cleanrl', 'learner_device_ids': [1], 'learner_devices': ['gpu:1'], 'learning_rate': 0.00025, 'local_batch_size': 7680, 'local_minibatch_size': 1920, 'local_num_envs': 60, 'local_rank': 0, 'max_grad_norm': 0.5, 'minibatch_size': 3840, 'norm_adv': True, 'num_envs': 120, 'num_minibatches': 4, 'num_steps': 128, 'num_updates': 3255, 'profile': False, 'save_model': True, 'seed': 2, 'target_kl': None, 'test_actor_learner_throughput': False, 'torch_deterministic': True, 'total_timesteps': 50000000, 'track': True, 'update_epochs': 4, 'upload_model': True, 'vf_coef': 0.5, 'wandb_entity': None, 'wandb_project_name': 'cleanba', 'world_size': 2} ```
brugmark/all-MiniLM-L6-v2-personal-project-finetuned-2024-06-03
brugmark
"2024-06-03T14:08:24Z"
125
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:sentence-transformers/all-MiniLM-L6-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L6-v2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-06-03T07:24:22Z"
--- license: apache-2.0 base_model: sentence-transformers/all-MiniLM-L6-v2 tags: - generated_from_trainer model-index: - name: all-MiniLM-L6-v2-personal-project-finetuned-2024-06-03 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-MiniLM-L6-v2-personal-project-finetuned-2024-06-03 This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 7.6355 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.5 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
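Usage is undocumented; a minimal fill-mask sketch, assuming the fine-tuned checkpoint retains a usable masked-LM head (given the high evaluation loss above, predictions may be poor):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="brugmark/all-MiniLM-L6-v2-personal-project-finetuned-2024-06-03")
print(fill("Paris is the [MASK] of France."))  # [MASK] is the BERT-style mask token
```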
mradermacher/Qwen2.5-Coder-1.5B-GGUF
mradermacher
"2024-11-12T14:43:23Z"
126
0
transformers
[ "transformers", "gguf", "code", "qwen", "qwen-coder", "codeqwen", "en", "base_model:Qwen/Qwen2.5-Coder-1.5B", "base_model:quantized:Qwen/Qwen2.5-Coder-1.5B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-11-12T05:39:04Z"
--- base_model: Qwen/Qwen2.5-Coder-1.5B language: - en library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B/blob/main/LICENSE quantized_by: mradermacher tags: - code - qwen - qwen-coder - codeqwen --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.0 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-GGUF/resolve/main/Qwen2.5-Coder-1.5B.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
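For a quick start without the linked READMEs, a minimal llama.cpp invocation sketch (the file name is taken from the Q4_K_M row of the table above, and the `--hf-repo`/`--hf-file` flags fetch the quant directly from the Hub; the prompt is illustrative):

```bash
llama-cli --hf-repo mradermacher/Qwen2.5-Coder-1.5B-GGUF \
  --hf-file Qwen2.5-Coder-1.5B.Q4_K_M.gguf \
  -p "def fibonacci(n):"
```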
KingLTD/pretrain_Law_model_vit5_version1
KingLTD
"2023-09-04T04:14:25Z"
105
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-base", "base_model:finetune:VietAI/vit5-base", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-09-04T03:28:51Z"
--- license: mit base_model: VietAI/vit5-base tags: - generated_from_trainer metrics: - rouge model-index: - name: pretrain_Law_model_vit5_version1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pretrain_Law_model_vit5_version1 This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2779 - Rouge1: 0.4859 - Rouge2: 0.3617 - Rougel: 0.4218 - Rougelsum: 0.4417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | No log | 1.0 | 245 | 0.3566 | 0.4739 | 0.3369 | 0.4053 | 0.4273 | | No log | 2.0 | 490 | 0.3240 | 0.4752 | 0.3453 | 0.4095 | 0.4300 | | 0.7518 | 3.0 | 735 | 0.3059 | 0.4760 | 0.3510 | 0.4112 | 0.4311 | | 0.7518 | 4.0 | 980 | 0.2951 | 0.4838 | 0.3584 | 0.4164 | 0.4387 | | 0.2808 | 5.0 | 1225 | 0.2858 | 0.4799 | 0.3582 | 0.4166 | 0.4368 | | 0.2808 | 6.0 | 1470 | 0.2831 | 0.4839 | 0.3611 | 0.4194 | 0.4403 | | 0.2351 | 7.0 | 1715 | 0.2814 | 0.4858 | 0.3644 | 0.4218 | 0.4423 | | 0.2351 | 8.0 | 1960 | 0.2779 | 0.4850 | 0.3612 | 0.4206 | 0.4416 | | 0.2074 | 9.0 | 2205 | 0.2775 | 0.4836 | 0.3590 | 0.4199 | 0.4398 | | 0.2074 | 10.0 | 2450 | 0.2779 | 0.4859 | 0.3617 | 0.4218 | 0.4417 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
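The card lists ROUGE metrics but no inference code; a seq2seq sketch, assuming standard T5-style loading (the Vietnamese legal snippet is a placeholder and the expected input format is undocumented):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "KingLTD/pretrain_Law_model_vit5_version1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Điều 1. Phạm vi điều chỉnh ..."  # placeholder Vietnamese legal text
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```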
igorsterner/AnE-NER
igorsterner
"2024-10-05T12:53:54Z"
109
2
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "multilingual", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-12-18T09:05:15Z"
--- license: mit language: - multilingual base_model: - FacebookAI/xlm-roberta-large pipeline_tag: token-classification --- # Multilingual Identification of English Code-Switching AnE-NER (Any-English Code-Switching Named Entity Recognition) is a token-level model for detecting named entities in code-switching texts. It classifies words into two classes: `I` (inside a named entity) and `O` (outside a named entity). The model shows strong performance on languages both seen and unseen in the training data. # Usage You can use AnE-NER with Hugging Face's `pipeline` or `AutoModelForTokenClassification`. Let's try the following example (taken from [this](https://aclanthology.org/W18-3213/) paper): ```python input = "My Facebook, Ig & Twitter is hellaa dead yall Jk soy yo que has no life!" ``` ## Pipeline ```python from transformers import pipeline classifier = pipeline("token-classification", model="igorsterner/AnE-NER", aggregation_strategy="simple") result = classifier(input) ``` which returns ``` [{'entity_group': 'I', 'score': 0.95482016, 'word': 'Facebook', 'start': 3, 'end': 11}, {'entity_group': 'I', 'score': 0.9638739, 'word': 'Ig', 'start': 13, 'end': 15}, {'entity_group': 'I', 'score': 0.98207414, 'word': 'Twitter', 'start': 18, 'end': 25}] ``` ## Advanced If your input is already word-tokenized and you want the corresponding word-level NER labels, you can try the following strategy: ```python import torch from transformers import AutoModelForTokenClassification, AutoTokenizer lid_model_name = "igorsterner/AnE-NER" lid_tokenizer = AutoTokenizer.from_pretrained(lid_model_name) lid_model = AutoModelForTokenClassification.from_pretrained(lid_model_name) word_tokens = ['My', 'Facebook', ',', 'Ig', '&', 'Twitter', 'is', 'hellaa', 'dead', 'yall', 'Jk', 'soy', 'yo', 'que', 'has', 'no', 'life', '!'] subword_inputs = lid_tokenizer( word_tokens, truncation=True, is_split_into_words=True, return_tensors="pt" ) subword2word = subword_inputs.word_ids(batch_index=0) logits = lid_model(**subword_inputs).logits predictions = torch.argmax(logits, dim=2) predicted_subword_labels = [lid_model.config.id2label[t.item()] for t in predictions[0]] predicted_word_labels = [[] for _ in range(len(word_tokens))] for idx, predicted_subword in enumerate(predicted_subword_labels): if subword2word[idx] is not None: predicted_word_labels[subword2word[idx]].append(predicted_subword) def most_frequent(lst): return max(set(lst), key=lst.count) if lst else "Other" predicted_word_labels = [most_frequent(sublist) for sublist in predicted_word_labels] for token, label in zip(word_tokens, predicted_word_labels): print(f"{token}: {label}") ``` which returns ``` My: O Facebook: I ,: O Ig: I &: O Twitter: I is: O hellaa: O dead: O yall: O Jk: O soy: O yo: O que: O has: O no: O life!: O ``` # Word-level language labels If you also want the language of each word, you can additionally run [AnE-LID](https://huggingface.co/igorsterner/ane-lid). Check out my evaluation scripts for examples of using both at the same time, as we did in the paper: [https://github.com/igorsterner/AnE/tree/main/eval](https://github.com/igorsterner/AnE/tree/main/eval).
For the above example, you can get: ``` My: English Facebook: NE.English ,: Other Ig: NE.English &: Other Twitter: NE.English is: English hellaa: English dead: English yall: English Jk: English soy: notEnglish yo: notEnglish que: notEnglish has: English no: English life: English !: Other ``` # Citation Please consider citing my work if it helped you ``` @inproceedings{sterner-2024-multilingual, title = "Multilingual Identification of {E}nglish Code-Switching", author = "Sterner, Igor", editor = {Scherrer, Yves and Jauhiainen, Tommi and Ljube{\v{s}}i{\'c}, Nikola and Zampieri, Marcos and Nakov, Preslav and Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Eleventh Workshop on NLP for Similar Languages, Varieties, and Dialects (VarDial 2024)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.vardial-1.14", doi = "10.18653/v1/2024.vardial-1.14", pages = "163--173", abstract = "Code-switching research depends on fine-grained language identification. In this work, we study existing corpora used to train token-level language identification systems. We aggregate these corpora with a consistent labelling scheme and train a system to identify English code-switching in multilingual text. We show that the system identifies code-switching in unseen language pairs with absolute measure 2.3-4.6{\%} better than language-pair-specific SoTA. We also analyse the correlation between typological similarity of the languages and difficulty in recognizing code-switching.", } ```
mlx-community/Falcon3-10B-Instruct-abliterated-4bit
mlx-community
"2025-02-19T20:43:29Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "falcon3", "abliterated", "uncensored", "mlx", "mlx-my-repo", "conversational", "en", "fr", "es", "pt", "base_model:huihui-ai/Falcon3-10B-Instruct-abliterated", "base_model:quantized:huihui-ai/Falcon3-10B-Instruct-abliterated", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "region:us" ]
text-generation
"2025-02-19T10:43:20Z"
---
language:
- en
- fr
- es
- pt
tags:
- falcon3
- abliterated
- uncensored
- mlx
- mlx-my-repo
base_model: huihui-ai/Falcon3-10B-Instruct-abliterated
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
library_name: transformers
---

# mlx-community/Falcon3-10B-Instruct-abliterated-4bit

The model [mlx-community/Falcon3-10B-Instruct-abliterated-4bit](https://huggingface.co/mlx-community/Falcon3-10B-Instruct-abliterated-4bit) was converted to MLX format from [huihui-ai/Falcon3-10B-Instruct-abliterated](https://huggingface.co/huihui-ai/Falcon3-10B-Instruct-abliterated) using mlx-lm version **0.20.5**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Falcon3-10B-Instruct-abliterated-4bit")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
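mlx-lm also ships a small command-line generator, which can be more convenient for a quick smoke test; a minimal sketch, assuming the `mlx_lm.generate` entry point from the same mlx-lm install:

```bash
python -m mlx_lm.generate --model mlx-community/Falcon3-10B-Instruct-abliterated-4bit --prompt "hello"
```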
albertus-sussex/veriscrape-simcse-nbaplayer-reference_8_to_verify_2-fold-1
albertus-sussex
"2025-03-28T11:29:52Z"
0
0
transformers
[ "transformers", "safetensors", "roberta", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
"2025-03-28T11:29:36Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
damgomz/ft_2_16e6_base_x12
damgomz
"2024-06-21T02:08:13Z"
6
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-06-19T16:30:52Z"
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea. We imagine, design and commercialize innovative off-grid systems that aim to generate power at sea, stabilize and collect data. The success of our low power platforms WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity platform.
---

## Environmental Impact (CODE CARBON DEFAULT)

| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 125822.48677945136 |
| Emissions (Co2eq in kg) | 0.07613707246273 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 1.4854004569676238 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.1310635769749674 |
| Consumed energy (kWh) | 1.6164640339425915 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |

## Environmental Impact (for one core)

| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.24220828705044384 |
| Emissions (Co2eq in kg) | 0.04928047398861844 |

## Note

19 June 2024

## My Config

| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_2_16e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.6e-05 |
| batch_size | 2 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |

## Training and Testing steps

| Epoch | Train Loss | Test Loss | F-beta Score |
|-------|------------|-----------|--------------|
| 0 | 0.000000 | 0.719599 | 0.552431 |
| 1 | 0.339442 | 0.266141 | 0.915358 |
| 2 | 0.243426 | 0.252600 | 0.927469 |
| 3 | 0.204434 | 0.255368 | 0.896083 |
| 4 | 0.207367 | 0.284679 | 0.901333 |
| 5 | 0.180203 | 0.273225 | 0.917218 |
| 6 | 0.146234 | 0.255182 | 0.910723 |
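The card stops short of a usage snippet; a minimal sketch, assuming the fine-tuned weights load as a standard sequence-classification checkpoint (the label names are not documented here):

```python
from transformers import pipeline

# hypothetical usage sketch -- the widget text above is reused as input
classifier = pipeline("text-classification", model="damgomz/ft_2_16e6_base_x12")
print(classifier("GEPS Techno is the pioneer of hybridization of renewable energies at sea."))
```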
BAAI/Bunny-Llama-3-8B-V-gguf
BAAI
"2024-06-11T07:40:56Z"
355
16
null
[ "gguf", "arxiv:2402.11530", "license:apache-2.0", "region:us" ]
null
"2024-05-04T17:15:00Z"
---
inference: false
license: apache-2.0
---

# Model Card

<p align="center">
  <img src="./icon.png" alt="Logo" width="350">
</p>

📖 [Technical report](https://arxiv.org/abs/2402.11530) | 🏠 [Code](https://github.com/BAAI-DCAI/Bunny) | 🐰 [Demo](http://bunny.baai.ac.cn)

This is the **GGUF** format of [Bunny-Llama-3-8B-V](https://huggingface.co/BAAI/Bunny-Llama-3-8B-V).

Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders (EVA-CLIP, SigLIP) and language backbones (Llama-3-8B, Phi-1.5, StableLM-2, Qwen1.5, MiniCPM and Phi-2). To compensate for the smaller model size, we construct more informative training data through curated selection from a broader data source.

We provide Bunny-Llama-3-8B-V, which is built upon [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) and [Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). More details about this model can be found on [GitHub](https://github.com/BAAI-DCAI/Bunny).

![comparison](comparison.png)

# Quickstart

## Chat with [`llama.cpp`](https://github.com/ggerganov/llama.cpp)

```shell
# sample images can be found in the images folder

# fp16
./llava-cli -m ggml-model-f16.gguf --mmproj mmproj-model-f16.gguf --image example_2.png -c 4096 -e \
  -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\nWhy is the image funny? ASSISTANT:" \
  --temp 0.0

# int4
./llava-cli -m ggml-model-Q4_K_M.gguf --mmproj mmproj-model-f16.gguf --image example_2.png -c 4096 -e \
  -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\nWhy is the image funny? ASSISTANT:" \
  --temp 0.0
```

## Chat with [ollama](https://ollama.com/)

```shell
# sample images can be found in the images folder

# fp16
ollama create Bunny-Llama-3-8B-V-fp16 -f ./ollama-f16
ollama run Bunny-Llama-3-8B-V-fp16 'example_2.png Why is the image funny?'

# int4
ollama create Bunny-Llama-3-8B-V-int4 -f ./ollama-Q4_K_M
ollama run Bunny-Llama-3-8B-V-int4 'example_2.png Why is the image funny?'
```
yuvimor24/vakyansh-wav2vec2-indian-english-enm-700
yuvimor24
"2024-07-09T07:41:12Z"
126
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "indian english ", "indian english asr", "audio", "speech", "indian english speech recognition", "en", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-07-09T07:33:37Z"
--- license: mit language: - en library_name: transformers pipeline_tag: automatic-speech-recognition tags: - 'indian english ' - indian english asr - automatic-speech-recognition - audio - speech - indian english speech recognition ---
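The card above carries only metadata; a minimal transcription sketch, assuming the checkpoint is a standard Wav2Vec2 CTC model usable through the ASR pipeline (the audio path is a placeholder):

```python
from transformers import pipeline

# hypothetical usage sketch -- expects a local 16 kHz mono audio file
asr = pipeline("automatic-speech-recognition", model="yuvimor24/vakyansh-wav2vec2-indian-english-enm-700")
result = asr("indian_english_sample.wav")  # placeholder path
print(result["text"])
```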
ser-mei/borges-gpt-collab-finetuned
ser-mei
"2022-12-13T12:19:00Z"
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-11-29T17:47:50Z"
--- license: mit tags: - generated_from_trainer model-index: - name: borges-gpt-collab-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # borges-gpt-collab-finetuned This model is a fine-tuned version of [DeepESP/gpt2-spanish](https://huggingface.co/DeepESP/gpt2-spanish) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.2150 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42069 - gradient_accumulation_steps: 16 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.6177 | 4.96 | 35 | 4.3309 | | 3.9729 | 9.96 | 70 | 4.2350 | | 3.2225 | 14.96 | 105 | 4.3344 | | 2.3158 | 19.96 | 140 | 4.5764 | | 1.3761 | 24.96 | 175 | 4.9125 | | 0.6779 | 29.96 | 210 | 5.3096 | | 0.3399 | 34.96 | 245 | 5.6735 | | 0.2147 | 39.96 | 280 | 5.9322 | | 0.1675 | 44.96 | 315 | 6.1347 | | 0.1418 | 49.96 | 350 | 6.2150 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+rocm5.2 - Datasets 2.6.1 - Tokenizers 0.13.2
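The trainer card leaves usage unstated; a minimal generation sketch, assuming the fine-tuned checkpoint loads like its DeepESP/gpt2-spanish base (the prompt and sampling settings are illustrative, not tuned):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ser-mei/borges-gpt-collab-finetuned")
# Spanish prompt chosen to match the Borges fine-tuning domain
out = generator("El jardín de los senderos", max_new_tokens=60, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```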
luyaoli/TestModel
luyaoli
"2025-04-10T13:39:51Z"
0
0
null
[ "region:us" ]
null
"2025-04-10T13:39:50Z"
hugging-quants/Meta-Llama-3.1-8B-BNB-NF4-BF16
hugging-quants
"2024-07-28T19:08:05Z"
113
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3.1", "meta", "bnb", "en", "de", "fr", "it", "pt", "hi", "es", "th", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-07-28T18:29:59Z"
--- license: llama3.1 language: - en - de - fr - it - pt - hi - es - th library_name: transformers pipeline_tag: text-generation tags: - llama-3.1 - meta - bnb --- > [!IMPORTANT] > This repository is a community-driven quantized version of the original model [`meta-llama/Meta-Llama-3.1-8B`](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) which is the BF16 half-precision official version released by Meta AI. ## Model Information The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes (text in/text out). The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks. This repository contains [`meta-llama/Meta-Llama-3.1-8B`](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) quantized using [bitsandbytes](https://github.com/bitsandbytes-foundation/bitsandbytes) from BF16 down to NF4 with a block size of 64, **and storage type `torch.bfloat16`**.
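Because the quantization config is serialized with the checkpoint, the model can be loaded directly with `transformers`; a minimal sketch, assuming a CUDA GPU and `bitsandbytes` installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-8B-BNB-NF4-BF16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# the NF4 quantization config ships with the weights, so no extra
# BitsAndBytesConfig is needed at load time
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```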
Kort/i_3
Kort
"2024-10-18T13:41:56Z"
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-18T13:39:21Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Shinyaaa/RPC_ft_first_N
Shinyaaa
"2024-09-24T02:39:36Z"
88
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
"2024-09-24T02:20:08Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lesso12/8d2c1314-2a82-4b36-b153-95c50da41b56
lesso12
"2025-02-21T18:08:50Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:sethuiyer/Medichat-Llama3-8B", "base_model:adapter:sethuiyer/Medichat-Llama3-8B", "license:other", "region:us" ]
null
"2025-02-21T17:48:43Z"
--- library_name: peft license: other base_model: sethuiyer/Medichat-Llama3-8B tags: - axolotl - generated_from_trainer model-index: - name: 8d2c1314-2a82-4b36-b153-95c50da41b56 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora auto_find_batch_size: true base_model: sethuiyer/Medichat-Llama3-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c47ca4d9ff1f07ae_train_data.json ds_type: json format: custom path: /workspace/input_data/c47ca4d9ff1f07ae_train_data.json type: field_instruction: prompt field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_max_new_tokens: 128 eval_steps: 50 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: true hub_model_id: lesso12/8d2c1314-2a82-4b36-b153-95c50da41b56 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000212 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/c47ca4d9ff1f07ae_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null seed: 120 sequence_len: 512 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 7a04d4c8-a979-4f82-b1e8-61b6807fd749 wandb_project: 12a wandb_run: your_name wandb_runid: 7a04d4c8-a979-4f82-b1e8-61b6807fd749 warmup_steps: 50 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 8d2c1314-2a82-4b36-b153-95c50da41b56 This model is a fine-tuned version of [sethuiyer/Medichat-Llama3-8B](https://huggingface.co/sethuiyer/Medichat-Llama3-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.2420 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000212 - train_batch_size: 4 - eval_batch_size: 4 - seed: 120 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0008 | 1 | 1.7658 | | 1.5277 | 0.0390 | 50 | 1.5597 | | 1.4436 | 0.0781 | 100 | 1.5017 | | 1.5801 | 0.1171 | 150 | 1.4366 | | 1.4678 | 0.1562 | 200 | 1.3925 | | 1.3112 | 0.1952 | 250 | 1.3409 | | 1.3983 | 0.2343 | 300 | 1.3041 | | 1.2145 | 0.2733 | 350 | 1.2721 | | 1.2378 | 0.3124 | 400 | 1.2539 | | 1.1975 | 0.3514 | 450 | 1.2475 | | 1.3713 | 0.3905 | 500 | 1.2420 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
johnphilos/EnsinoFilo3
johnphilos
"2025-03-23T04:14:02Z"
1
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "unsloth", "trl", "sft", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-11T15:00:03Z"
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Paulescu/crypto-sentiment-extractor
Paulescu
"2025-03-13T12:12:20Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-13T12:03:30Z"
---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Paulescu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
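No inference snippet ships with this card; a minimal sketch, assuming the repo holds weights loadable as a standard causal LM (the instruction wording below is hypothetical, not the training template):

```python
from transformers import pipeline

extractor = pipeline("text-generation", model="Paulescu/crypto-sentiment-extractor")
# hypothetical prompt -- match the actual training prompt format if known
out = extractor("What is the sentiment of this headline? Bitcoin jumps 8% after ETF approval.", max_new_tokens=32)
print(out[0]["generated_text"])
```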
abdulmannan-01/DeepSeek-R1-Distill-Llama-8B-Lora-Finetuned-Openscholar-Dataset-Adapter
abdulmannan-01
"2025-02-17T20:19:56Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "region:us" ]
null
"2025-02-17T20:14:49Z"
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
prxy5605/e736fac6-4d00-48e7-8c9c-4475957f1e5b
prxy5605
"2025-01-18T00:24:38Z"
9
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.2-3B", "base_model:adapter:unsloth/Llama-3.2-3B", "license:llama3.2", "region:us" ]
null
"2025-01-18T00:01:57Z"
--- library_name: peft license: llama3.2 base_model: unsloth/Llama-3.2-3B tags: - axolotl - generated_from_trainer model-index: - name: e736fac6-4d00-48e7-8c9c-4475957f1e5b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.2-3B bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json ds_type: json format: custom path: /workspace/input_data/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: prxy5605/e736fac6-4d00-48e7-8c9c-4475957f1e5b hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 26fdf33d-c321-41d9-b33c-17de9fdf24d1 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 26fdf33d-c321-41d9-b33c-17de9fdf24d1 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e736fac6-4d00-48e7-8c9c-4475957f1e5b This model is a fine-tuned version of [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.0639 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.0661 | 0.0008 | 1 | 1.7443 | | 1.497 | 0.0407 | 50 | 1.2598 | | 1.1156 | 0.0813 | 100 | 1.1870 | | 1.5363 | 0.1220 | 150 | 1.0768 | | 0.9245 | 0.1627 | 200 | 1.0639 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
liam168/chat-DialoGPT-small-zh
liam168
"2021-08-04T09:01:41Z"
21
5
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "zh", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
---
language: zh
widget:
- text: "你们宿舍都是这么厉害的人吗"
license: apache-2.0
---

# liam168/chat-DialoGPT-small-zh

## Model description

A dialogue model trained on Chinese chat data.

### How to use

Now we are ready to try out how the model works as a chatting partner!

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = 'liam168/chat-DialoGPT-small-zh'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty-print the last output tokens from the bot
    print("Answer: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
RayneAmes/vanya_v1
RayneAmes
"2025-02-10T21:57:52Z"
0
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-02-10T21:54:59Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jonjew/SerenaTheFirstDescendant2
Jonjew
"2025-03-31T12:45:08Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:unknown", "region:us" ]
text-to-image
"2025-03-31T12:44:46Z"
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    a futuristic, armored character named Serena floating in space. She is
    flying up in space against a huge sun background. She is surrounded by
    whirling fire and air. backlight. Serena is dressed in a sleek,
    form-fitting white bodysuit with high cut legs, revealing black thigh-high
    stockings. The bodysuit features intricate, metallic detailing and a high
    collar. She wears matching gauntlets and boots. Serena's armor includes
    large, intricate shoulder pieces with metallic, angular designs and
    glowing, metallic wings on her lower back with sharp, metallic extensions.
    Her blonde hair is styled in a short, wavy bob. The overall aesthetic is a
    blend of cyberpunk and futuristic military, with a focus on sharp, angular
    lines and metallic textures. Intense, colorful lighting and deep shadows
    <lora:TFD-Serena:0.8>
  output:
    url: images/00072-2302721930.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: TFD-Serena, Serena
license: unknown
---

# Serena - The First Descendant

<Gallery />

## Model description

From https://civitai.com/models/1374035/serena-the-first-descendant-flux-lora

A (FLUX) character LoRA for Serena from the videogame The First Descendant. Also check out my other TFD LoRAs.

Trigger word: TFD-Serena or Serena

Suggested weight: 0.8-1.0

My preview images were generated with:

- flux1-dev-Q8_0.gguf + t5xxl_fp16 (ForgeUI)
- Euler, Simple
- 832x1280 + 1.5x Hires. Fix (4x-UltraSharp upscaler)
- Distilled CFG Scale: 3.5 (2.0 for hires. fix)
- Only this LoRA enabled

Add some of the following to your prompt to help you get the outfit:

> a futuristic, armored character named Serena. Serena is dressed in a sleek, form-fitting white bodysuit with high cut legs, revealing black thigh-high stockings. The bodysuit features intricate, metallic detailing and a high collar. She wears matching gauntlets and boots. Serena's armor includes large, intricate shoulder pieces with metallic, angular designs and metallic wings on her lower back with sharp, metallic extensions. Her blonde hair is styled in a short, wavy bob. The overall aesthetic is a blend of cyberpunk and futuristic military, with a focus on sharp, angular lines and metallic textures.

NOTE: The wings are not consistent and will be a pain to get right, or even good enough. You can remove certain parts if they interfere with image composition. Example: removing the part about the boots can make it easier to get close-up shots.

## Trigger words

You should use `TFD-Serena` to trigger the image generation.

You should use `Serena` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/Jonjew/SerenaTheFirstDescendant2/tree/main) them in the Files & versions tab.
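The card documents trigger words but no loading code; a minimal diffusers sketch in the same style as the other FLUX LoRA cards here; the `lora.safetensors` filename is an assumption, so verify it in the Files & versions tab:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name is assumed -- check the repo's Files tab for the actual file
pipeline.load_lora_weights("Jonjew/SerenaTheFirstDescendant2", weight_name="lora.safetensors")
image = pipeline("TFD-Serena, a futuristic armored character named Serena floating in space").images[0]
image.save("serena.png")
```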
Genie-hub/lcking
Genie-hub
"2025-03-13T07:27:46Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-03-13T07:17:23Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: LCKING --- # Lcking <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `LCKING` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Genie-hub/lcking', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
baby-dev/8338f475-098b-4938-8874-c018a6eeed55
baby-dev
"2025-02-23T14:27:12Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:adapter:NousResearch/Meta-Llama-3-8B", "license:other", "region:us" ]
null
"2025-02-23T11:23:58Z"
--- library_name: peft license: other base_model: NousResearch/Meta-Llama-3-8B tags: - axolotl - generated_from_trainer model-index: - name: 8338f475-098b-4938-8874-c018a6eeed55 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 8338f475-098b-4938-8874-c018a6eeed55 This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7331 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
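The card lists framework versions but no usage snippet. A minimal loading sketch, assuming the LoRA adapter sits at the repo root (the usual axolotl layout) and loads cleanly onto the stated base model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

base_id = "NousResearch/Meta-Llama-3-8B"
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter from this repo on top of the base model
model = PeftModel.from_pretrained(base, "baby-dev/8338f475-098b-4938-8874-c018a6eeed55")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```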
abaddon182/6623ad1b-2b30-4c14-aa9a-484c00aff9f0
abaddon182
"2025-03-30T06:45:06Z"
0
0
null
[ "region:us" ]
null
"2025-03-30T06:39:14Z"
mci29/sn29_q1m5_gnpy
mci29
"2025-03-01T13:19:45Z"
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-01T13:15:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tareknaous/readabert-hi
tareknaous
"2024-07-17T16:56:47Z"
123
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "hi", "arxiv:2305.14463", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-02-17T00:55:33Z"
--- library_name: transformers license: apache-2.0 language: - hi --- Muril-base (muril-base-cased) model fine-tuned on the Hindi portion of the ReadMe++ corpus for sentence-level readability prediction on the 6-level CEFR scale. Github (Dataset and Python Package): https://github.com/tareknaous/readme Paper: https://arxiv.org/abs/2305.14463
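A minimal usage sketch, assuming the checkpoint works with the standard text-classification pipeline and that its label ids map onto the six CEFR levels:
```python
from transformers import pipeline

# Sentence-level readability prediction for Hindi
classifier = pipeline("text-classification", model="tareknaous/readabert-hi")

# Illustrative Hindi sentence; the output is a label/score pair
print(classifier("यह एक सरल वाक्य है।"))
```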
sgr23/distilbert-base-uncased-finetuned-squad-d5716d28
sgr23
"2023-05-27T18:10:53Z"
105
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
question-answering
"2023-05-27T17:57:26Z"
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
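The card covers training and evaluation but stops short of inference. A minimal sketch, assuming the checkpoint loads with the standard question-answering pipeline:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="sgr23/distilbert-base-uncased-finetuned-squad-d5716d28",
)

result = qa(
    question="What acts as the teacher in the second distillation step?",
    context=(
        "A DistilBERT student is fine-tuned on SQuAD v1.1, with a BERT model "
        "also fine-tuned on SQuAD v1.1 acting as a teacher."
    ),
)
print(result["answer"], round(result["score"], 3))
```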
aleegis11/e3d60bf4-b6f3-432f-a420-ee571baf5305
aleegis11
"2025-01-24T16:18:58Z"
8
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-7B", "base_model:adapter:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us" ]
null
"2025-01-24T15:50:40Z"
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-7B tags: - axolotl - generated_from_trainer model-index: - name: e3d60bf4-b6f3-432f-a420-ee571baf5305 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-7B bf16: true chat_template: llama3 data_processes: 16 dataset_prepared_path: null datasets: - data_files: - d3e177dcf9fd6bd3_train_data.json ds_type: json format: custom path: /workspace/input_data/d3e177dcf9fd6bd3_train_data.json type: field_input: phonemes field_instruction: style field_output: text format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 5 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 50 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: true hub_model_id: aleegis11/e3d60bf4-b6f3-432f-a420-ee571baf5305 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 200 micro_batch_size: 8 mlflow_experiment_name: /tmp/d3e177dcf9fd6bd3_train_data.json model_type: AutoModelForCausalLM num_epochs: 3 optim_args: adam_beta1: 0.9 adam_beta2: 0.95 adam_epsilon: 1e-5 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null sequence_len: 1024 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c9a889cd-07f0-4336-9474-12f0774aebd1 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c9a889cd-07f0-4336-9474-12f0774aebd1 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e3d60bf4-b6f3-432f-a420-ee571baf5305 This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0639 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1411 | 0.0029 | 1 | 2.6982 | | 0.7284 | 0.1462 | 50 | 0.2369 | | 0.815 | 0.2924 | 100 | 0.1339 | | 0.2101 | 0.4386 | 150 | 0.0853 | | 0.2648 | 0.5848 | 200 | 0.0639 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
hopkins/mbart-finetuned-eng-deu-35
hopkins
"2023-07-03T00:59:44Z"
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2023-07-03T00:45:38Z"
--- tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: mbart-finetuned-eng-deu-35 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-deu-35 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6532 - Bleu: 20.8829 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
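The card reports BLEU but shows no inference code. A minimal sketch, assuming the fine-tune kept the mBART-50 tokenizer and its language codes:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "hopkins/mbart-finetuned-eng-deu-35"
model = MBartForConditionalGeneration.from_pretrained(model_id)
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)

# English source -> German target, using mBART-50 language codes
tokenizer.src_lang = "en_XX"
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["de_DE"]
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```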
radames/AltDiffusion-m9-img2img
radames
"2023-05-06T01:09:51Z"
4
6
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "image-to-image", "multilingual", "English(En)", "Chinese(Zh)", "Spanish(Es)", "French(Fr)", "Russian(Ru)", "Japanese(Ja)", "Korean(Ko)", "Arabic(Ar)", "Italian(It)", "zh", "arxiv:2211.06679", "license:creativeml-openrail-m", "diffusers:AltDiffusionPipeline", "region:us" ]
image-to-image
"2023-05-06T01:09:04Z"
--- language: zh license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - image-to-image - multilingual - English(En) - Chinese(Zh) - Spanish(Es) - French(Fr) - Russian(Ru) - Japanese(Ja) - Korean(Ko) - Arabic(Ar) - Italian(It) - diffusers extra_gated_prompt: >- This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license extra_gated_heading: Please read the LICENSE to access this model duplicated_from: BAAI/AltDiffusion-m9 --- # AltDiffusion | 名称 Name | 任务 Task | 语言 Language(s) | 模型 Model | Github | |:----------:| :----: |:-------------------:| :----: |:------:| | AltDiffusion-m9 | 多模态 Multimodal | Multilingual | Stable Diffusion | [FlagAI](https://github.com/FlagAI-Open/FlagAI) | # Gradio We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run AltDiffusion-m9: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/akhaliq/AltDiffusion-m9) # 模型信息 Model Information 我们使用 [AltCLIP-m9](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md),基于 [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion) 训练了双语Diffusion模型,训练数据来自 [WuDao数据集](https://data.baai.ac.cn/details/WuDaoCorporaText) 和 [LAION](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus) 。 我们的版本在多语言对齐方面表现非常出色,是目前市面上开源的最强多语言版本,保留了原版stable diffusion的大部分能力,并且在某些例子上有着比原版模型更出色的能力。 AltDiffusion-m9 模型由名为 AltCLIP-m9 的多语 CLIP 模型支持,该模型也可在本项目中访问。您可以阅读 [此教程](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) 了解更多信息。 We used [AltCLIP-m9](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md), and trained a bilingual Diffusion model based on [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion), with training data from the [WuDao dataset](https://data.baai.ac.cn/details/WuDaoCorporaText) and [LAION](https://huggingface.co/datasets/laion/laion2B-en). Our model performs very well at multilingual alignment and is the strongest open-source multilingual version available today; it retains most of the capabilities of the original Stable Diffusion and in some cases even surpasses the original model. The AltDiffusion-m9 model is backed by a multilingual CLIP model named AltCLIP-m9, which is also accessible in FlagAI. You can read [this tutorial](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/AltCLIP/README.md) for more information.
## 引用 Citation 关于AltCLIP-m9,我们已经推出了相关报告,有更多细节可以查阅,如对您的工作有帮助,欢迎引用。 If you find this work helpful, please consider citing: ``` @article{https://doi.org/10.48550/arxiv.2211.06679, doi = {10.48550/ARXIV.2211.06679}, url = {https://arxiv.org/abs/2211.06679}, author = {Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences}, title = {AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` # 模型权重 Model Weights 第一次运行AltDiffusion-m9模型时会自动从huggingface下载如下权重, The following weights are automatically downloaded from HF when the AltDiffusion-m9 model is run for the first time: | 模型名称 Model name | 大小 Size | 描述 Description | |------------------------------|---------|-------------------------------------------------------| | StableDiffusionSafetyChecker | 1.13G | 图片的安全检查器;Safety checker for image | | AltDiffusion-m9 | 8.0G | support English(En), Chinese(Zh), Spanish(Es), French(Fr), Russian(Ru), Japanese(Ja), Korean(Ko), Arabic(Ar) and Italian(It) | | AltCLIP-m9 | 3.22G | support English(En), Chinese(Zh), Spanish(Es), French(Fr), Russian(Ru), Japanese(Ja), Korean(Ko), Arabic(Ar) and Italian(It) | # 示例 Example ## 🧨Diffusers Example **AltDiffusion-m9** 已被添加到 🧨Diffusers! 我们的[代码示例](https://colab.research.google.com/drive/1htPovT5YNutl2i31mIYrOzlIgGLm06IX#scrollTo=1TrIQp9N1Bnm)已放到colab上,欢迎使用。 您可以在 [此处](https://huggingface.co/docs/diffusers/main/en/api/pipelines/alt_diffusion) 查看文档页面。 以下示例将使用fast DPM 调度程序生成图像, 在V100 上耗时大约为 2 秒。 You can run our diffusers example [here](https://colab.research.google.com/drive/1htPovT5YNutl2i31mIYrOzlIgGLm06IX#scrollTo=1TrIQp9N1Bnm) in Colab. You can see the documentation page [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/alt_diffusion). The following example will use the fast DPM scheduler to generate an image in ca. 2 seconds on a V100.
First you should install diffusers main branch and some dependencies: ``` pip install git+https://github.com/huggingface/diffusers.git torch transformers accelerate sentencepiece ``` then you can run the following example: ```python from diffusers import AltDiffusionPipeline, DPMSolverMultistepScheduler import torch pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16, revision="fp16") pipe = pipe.to("cuda") pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) prompt = "黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图" # or in English: # prompt = "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap" image = pipe(prompt, num_inference_steps=25).images[0] image.save("./alt.png") ``` ![alt](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/hub/alt.png) ## Transformers Example ```python import os import torch import transformers from transformers import BertPreTrainedModel from transformers.models.clip.modeling_clip import CLIPPreTrainedModel from transformers.models.xlm_roberta.tokenization_xlm_roberta import XLMRobertaTokenizer from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler from diffusers import StableDiffusionPipeline from transformers import BertPreTrainedModel,BertModel,BertConfig import torch.nn as nn import torch from transformers.models.xlm_roberta.configuration_xlm_roberta import XLMRobertaConfig from transformers import XLMRobertaModel from transformers.activations import ACT2FN from typing import Optional class RobertaSeriesConfig(XLMRobertaConfig): def __init__(self, pad_token_id=1, bos_token_id=0, eos_token_id=2,project_dim=768,pooler_fn='cls',learn_encoder=False, **kwargs): super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs) self.project_dim = project_dim self.pooler_fn = pooler_fn # self.learn_encoder = learn_encoder class RobertaSeriesModelWithTransformation(BertPreTrainedModel): _keys_to_ignore_on_load_unexpected = [r"pooler"] _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] base_model_prefix = 'roberta' config_class= XLMRobertaConfig def __init__(self, config): super().__init__(config) self.roberta = XLMRobertaModel(config) self.transformation = nn.Linear(config.hidden_size, config.project_dim) self.post_init() def get_text_embeds(self,bert_embeds,clip_embeds): return self.merge_head(torch.cat((bert_embeds,clip_embeds))) def set_tokenizer(self, tokenizer): self.tokenizer = tokenizer def forward(self, input_ids: Optional[torch.Tensor] = None) : attention_mask = (input_ids != self.tokenizer.pad_token_id).to(torch.int64) outputs = self.base_model( input_ids=input_ids, attention_mask=attention_mask, ) projection_state = self.transformation(outputs.last_hidden_state) return (projection_state,) model_path_encoder = "BAAI/RobertaSeriesModelWithTransformation" model_path_diffusion = "BAAI/AltDiffusion-m9" device = "cuda" seed = 12345 tokenizer = XLMRobertaTokenizer.from_pretrained(model_path_encoder, use_auth_token=True) tokenizer.model_max_length = 77 text_encoder = RobertaSeriesModelWithTransformation.from_pretrained(model_path_encoder, use_auth_token=True) text_encoder.set_tokenizer(tokenizer) print("text encode loaded") pipe = StableDiffusionPipeline.from_pretrained(model_path_diffusion, 
tokenizer=tokenizer, text_encoder=text_encoder, use_auth_token=True, ) print("diffusion pipeline loaded") pipe = pipe.to(device) prompt = "Thirty years old lee evans as a sad 19th century postman. detailed, soft focus, candle light, interesting lights, realistic, oil canvas, character concept art by munkácsy mihály, csók istván, john everett millais, henry meynell rheam, and da vinci" with torch.no_grad(): image = pipe(prompt, guidance_scale=7.5).images[0] image.save("3.png") ``` 您可以在`predict_generate_images`函数里通过改变参数来调整设置,具体信息如下: More parameters of `predict_generate_images` that you can adjust are listed below: | 参数名 Parameter | 类型 Type | 描述 Description | |--------------------------------|------------|-------------------------------------------------------| | prompt | str | 提示文本; The prompt text | | out_path | str | 输出路径; The output path to save images | | n_samples | int | 输出图片数量; Number of images to be generated | | skip_grid | bool | 如果为True, 会将所有图片拼接在一起,输出一张新的图片; If set to true, image gridding step will be skipped | | ddim_step | int | DDIM模型的步数; Number of steps in ddim model | | plms | bool | 如果为True, 则会使用plms模型; If set to true, PLMS Sampler instead of DDIM Sampler will be applied | | scale | float | 这个值决定了文本在多大程度上影响生成的图片,值越大影响力越强; This value determines how strongly the prompt influences the generated images | | H | int | 图片的高度; Height of image | | W | int | 图片的宽度; Width of image | | C | int | 图片的channel数; Number of channels of generated images | | seed | int | 随机种子; Random seed number | 注意:模型推理要求一张至少10G以上的GPU。 Note that model inference requires a GPU with at least 10 GB of memory. # 更多生成结果 More Results ## multilanguage examples 同一句prompts不同语言生成的人脸不一样! One prompt in different languages generates different faces! ![image](./m9.png) ## 中英文对齐能力 Chinese and English alignment ability ### prompt:dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap ### 英文生成结果/Generated results from English prompts ![image](https://raw.githubusercontent.com/BAAI-OpenPlatform/test_open/main/en_dark_elf.png) ### prompt:黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图 ### 中文生成结果/Generated results from Chinese prompts ![image](https://raw.githubusercontent.com/BAAI-OpenPlatform/test_open/main/cn_dark_elf.png) ## 中文表现能力/The performance for Chinese prompts ## prompt:带墨镜的男孩肖像,充满细节,8K高清 ![image](https://raw.githubusercontent.com/BAAI-OpenPlatform/test_open/main/boy.png) ## prompt:带墨镜的中国男孩肖像,充满细节,8K高清 ![image](https://raw.githubusercontent.com/BAAI-OpenPlatform/test_open/main/cn_boy.png) ## 长图生成能力/The ability to generate long images ### prompt: 一只带着帽子的小狗 ### 原版 stable diffusion: ![image](https://raw.githubusercontent.com/BAAI-OpenPlatform/test_open/main/dog_other.png) ### Ours: ![image](https://raw.githubusercontent.com/BAAI-OpenPlatform/test_open/main/dog.png) 注: 此处长图生成技术由右脑科技(RightBrain AI)提供。 Note: The long image generation technology here is provided by Right Brain Technology.
# 许可/License 该模型通过 [CreativeML Open RAIL-M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) 获得许可。作者对您生成的输出不主张任何权利,您可以自由使用它们并对它们的使用负责,不得违反本许可中的规定。该许可证禁止您分享任何违反任何法律、对他人造成伤害、传播任何可能造成伤害的个人信息、传播错误信息和针对弱势群体的任何内容。您可以出于商业目的修改和使用模型,但必须包含相同使用限制的副本。有关限制的完整列表,请[阅读许可证](https://huggingface.co/spaces/CompVis/stable-diffusion-license) 。 The model is licensed with a [CreativeML Open RAIL-M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license). The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produces harm to a person, disseminates any personal information that would be meant for harm, spreads misinformation, or targets vulnerable groups. You can modify and use the model for commercial purposes, but a copy of the same use restrictions must be included. For the full list of restrictions please [read the license](https://huggingface.co/spaces/CompVis/stable-diffusion-license).
JCX-kcuf/Llama-2-7b-hf-gpt-4-80k
JCX-kcuf
"2024-03-11T15:44:32Z"
48
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-10T16:34:35Z"
--- license: apache-2.0 --- ## Description This model is fine-tuned on distillation data from GPT-4. The base model is meta-llama/Llama-2-7b-hf. ## Usage The model uses the same prompt format as Llama-2: ``` <s> [INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> {query} [/INST] ```
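A minimal generation sketch that fills the template above; the abridged system prompt and greedy decoding settings are assumptions, and the tokenizer inserts the leading `<s>` itself:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "JCX-kcuf/Llama-2-7b-hf-gpt-4-80k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

system = "You are a helpful, respectful and honest assistant."  # abridged
query = "Explain knowledge distillation in one paragraph."
# <s> is omitted from the string because the tokenizer adds the BOS token
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{query} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```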
tensorblock/PlatYi-34B-Q-GGUF
tensorblock
"2025-01-09T09:32:42Z"
180
0
transformers
[ "transformers", "gguf", "TensorBlock", "GGUF", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "base_model:kyujinpy/PlatYi-34B-Q", "base_model:quantized:kyujinpy/PlatYi-34B-Q", "license:cc-by-nc-sa-4.0", "model-index", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-09T07:07:06Z"
--- language: - en license: cc-by-nc-sa-4.0 library_name: transformers datasets: - garage-bAInd/Open-Platypus pipeline_tag: text-generation tags: - TensorBlock - GGUF base_model: kyujinpy/PlatYi-34B-Q model-index: - name: PlatYi-34B-Q results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 66.89 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Q name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.14 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Q name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 77.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Q name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 53.03 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Q name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.48 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Q name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 53.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Q name: Open LLM Leaderboard --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"> Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a> </p> </div> </div> ## kyujinpy/PlatYi-34B-Q - GGUF This repo contains GGUF format model files for [kyujinpy/PlatYi-34B-Q](https://huggingface.co/kyujinpy/PlatYi-34B-Q). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d). 
<div style="text-align: left; margin: 20px 0;"> <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> Run them on the TensorBlock client using your local machine ↗ </a> </div> ## Prompt template ``` ``` ## Model file specification | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [PlatYi-34B-Q-Q2_K.gguf](https://huggingface.co/tensorblock/PlatYi-34B-Q-GGUF/blob/main/PlatYi-34B-Q-Q2_K.gguf) | Q2_K | 12.825 GB | smallest, significant quality loss - not recommended for most purposes | | [PlatYi-34B-Q-Q3_K_S.gguf](https://huggingface.co/tensorblock/PlatYi-34B-Q-GGUF/blob/main/PlatYi-34B-Q-Q3_K_S.gguf) | Q3_K_S | 14.960 GB | very small, high quality loss | | [PlatYi-34B-Q-Q3_K_M.gguf](https://huggingface.co/tensorblock/PlatYi-34B-Q-GGUF/blob/main/PlatYi-34B-Q-Q3_K_M.gguf) | Q3_K_M | 16.655 GB | very small, high quality loss | | [PlatYi-34B-Q-Q3_K_L.gguf](https://huggingface.co/tensorblock/PlatYi-34B-Q-GGUF/blob/main/PlatYi-34B-Q-Q3_K_L.gguf) | Q3_K_L | 18.139 GB | small, substantial quality loss | | [PlatYi-34B-Q-Q4_0.gguf](https://huggingface.co/tensorblock/PlatYi-34B-Q-GGUF/blob/main/PlatYi-34B-Q-Q4_0.gguf) | Q4_0 | 19.467 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [PlatYi-34B-Q-Q4_K_S.gguf](https://huggingface.co/tensorblock/PlatYi-34B-Q-GGUF/blob/main/PlatYi-34B-Q-Q4_K_S.gguf) | Q4_K_S | 19.599 GB | small, greater quality loss | | [PlatYi-34B-Q-Q4_K_M.gguf](https://huggingface.co/tensorblock/PlatYi-34B-Q-GGUF/blob/main/PlatYi-34B-Q-Q4_K_M.gguf) | Q4_K_M | 20.659 GB | medium, balanced quality - recommended | | [PlatYi-34B-Q-Q5_0.gguf](https://huggingface.co/tensorblock/PlatYi-34B-Q-GGUF/blob/main/PlatYi-34B-Q-Q5_0.gguf) | Q5_0 | 23.708 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [PlatYi-34B-Q-Q5_K_S.gguf](https://huggingface.co/tensorblock/PlatYi-34B-Q-GGUF/blob/main/PlatYi-34B-Q-Q5_K_S.gguf) | Q5_K_S | 23.708 GB | large, low quality loss - recommended | | [PlatYi-34B-Q-Q5_K_M.gguf](https://huggingface.co/tensorblock/PlatYi-34B-Q-GGUF/blob/main/PlatYi-34B-Q-Q5_K_M.gguf) | Q5_K_M | 24.322 GB | large, very low quality loss - recommended | | [PlatYi-34B-Q-Q6_K.gguf](https://huggingface.co/tensorblock/PlatYi-34B-Q-GGUF/blob/main/PlatYi-34B-Q-Q6_K.gguf) | Q6_K | 28.214 GB | very large, extremely low quality loss | | [PlatYi-34B-Q-Q8_0.gguf](https://huggingface.co/tensorblock/PlatYi-34B-Q-GGUF/blob/main/PlatYi-34B-Q-Q8_0.gguf) | Q8_0 | 36.542 GB | very large, extremely low quality loss - not recommended | ## Downloading instruction ### Command line First, install the Hugging Face CLI: ```shell pip install -U "huggingface_hub[cli]" ``` Then, download an individual model file to a local directory: ```shell huggingface-cli download tensorblock/PlatYi-34B-Q-GGUF --include "PlatYi-34B-Q-Q2_K.gguf" --local-dir MY_LOCAL_DIR ``` If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try: ```shell huggingface-cli download tensorblock/PlatYi-34B-Q-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf' ```
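The files are stated to be llama.cpp-compatible; one way to run a downloaded quant locally is via llama-cpp-python. This is an assumption on top of the card, which only documents the download step:
```python
from llama_cpp import Llama

# Path to a quant downloaded with the commands above
llm = Llama(model_path="MY_LOCAL_DIR/PlatYi-34B-Q-Q4_K_M.gguf", n_ctx=4096)

out = llm("Briefly describe the Open-Platypus dataset.", max_tokens=128)
print(out["choices"][0]["text"])
```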
facebook/hiera-base-plus-224-mae-hf
facebook
"2024-06-20T10:47:16Z"
30
0
transformers
[ "transformers", "safetensors", "hiera", "pretraining", "en", "dataset:imagenet-1k", "arxiv:2306.00989", "arxiv:2010.11929", "arxiv:1512.03385", "arxiv:2103.14030", "arxiv:2104.11227", "arxiv:2111.06377", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-05-12T10:51:25Z"
--- datasets: - imagenet-1k language: - en library_name: transformers license: cc-by-nc-4.0 --- # Hiera Model (Base-Plus, pretrained with MAE) **Hiera** is a _hierarchical_ vision transformer that is fast, powerful, and, above all, _simple_. It was introduced in the paper [Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles](https://arxiv.org/abs/2306.00989/) and outperforms the state-of-the-art across a wide array of image and video tasks _while being much faster_. <p align="center"> <img src="https://github.com/facebookresearch/hiera/raw/main/examples/img/inference_speed.png" width="75%"> </p> ## How does it work? ![A diagram of Hiera's architecture.](https://github.com/facebookresearch/hiera/raw/main/examples/img/hiera_arch.png) Vision transformers like [ViT](https://arxiv.org/abs/2010.11929) use the same spatial resolution and number of features throughout the whole network. But this is inefficient: the early layers don't need that many features, and the later layers don't need that much spatial resolution. Prior hierarchical models like [ResNet](https://arxiv.org/abs/1512.03385) accounted for this by using fewer features at the start and less spatial resolution at the end. Several domain-specific vision transformers have been introduced that employ this hierarchical design, such as [Swin](https://arxiv.org/abs/2103.14030) or [MViT](https://arxiv.org/abs/2104.11227). But in the pursuit of state-of-the-art results using fully supervised training on ImageNet-1K, these models have become more and more complicated as they add specialized modules to make up for spatial biases that ViTs lack. While these changes produce effective models with attractive FLOP counts, under the hood the added complexity makes these models _slower_ overall. We show that a lot of this bulk is actually _unnecessary_. Instead of manually adding spatial biases through architectural changes, we opt to _teach_ the model these biases instead. By training with [MAE](https://arxiv.org/abs/2111.06377), we can simplify or remove _all_ of these bulky modules in existing transformers and _increase accuracy_ in the process. The result is Hiera, an extremely efficient and simple architecture that outperforms the state-of-the-art in several image and video recognition tasks. ## Intended uses & limitations Hiera can be used for image classification, feature extraction or masked image modeling. This specific checkpoint is intended for **Masked Image Modeling**. ### How to use ```python from transformers import AutoImageProcessor, HieraForPreTraining import torch from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("facebook/hiera-base-plus-224-mae-hf") model = HieraForPreTraining.from_pretrained("facebook/hiera-base-plus-224-mae-hf") inputs = image_processor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits loss = outputs.loss ``` ### BibTeX entry and citation info If you use Hiera or this code in your work, please cite: ``` @article{ryali2023hiera, title={Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles}, author={Ryali, Chaitanya and Hu, Yuan-Ting and Bolya, Daniel and Wei, Chen and Fan, Haoqi and Huang, Po-Yao and Aggarwal, Vaibhav and Chowdhury, Arkabandhu and Poursaeed, Omid and Hoffman, Judy and Malik, Jitendra and Li, Yanghao and Feichtenhofer, Christoph}, journal={ICML}, year={2023} } ```
davidschulte/ESM_silicone_dyda_e
davidschulte
"2025-03-28T13:32:35Z"
21
0
null
[ "safetensors", "embedding_space_map", "BaseLM:bert-base-multilingual-uncased", "dataset:eusip/silicone", "base_model:google-bert/bert-base-multilingual-uncased", "base_model:finetune:google-bert/bert-base-multilingual-uncased", "license:apache-2.0", "region:us" ]
null
"2024-12-09T22:13:57Z"
--- base_model: bert-base-multilingual-uncased datasets: - eusip/silicone license: apache-2.0 tags: - embedding_space_map - BaseLM:bert-base-multilingual-uncased --- # ESM eusip/silicone <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> ESM - **Developed by:** David Schulte - **Model type:** ESM - **Base Model:** bert-base-multilingual-uncased - **Intermediate Task:** eusip/silicone - **ESM architecture:** linear - **ESM embedding dimension:** 768 - **Language(s) (NLP):** [More Information Needed] - **License:** Apache-2.0 license - **ESM version:** 0.1.0 ## Training Details ### Intermediate Task - **Task ID:** eusip/silicone - **Subset [optional]:** dyda_e - **Text Column:** Utterance - **Label Column:** Label - **Dataset Split:** train - **Sample size [optional]:** 10000 - **Sample seed [optional]:** 42 ### Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Language Model Training Hyperparameters [optional] - **Epochs:** 3 - **Batch size:** 32 - **Learning rate:** 2e-05 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### ESM Training Hyperparameters [optional] - **Epochs:** 10 - **Batch size:** 32 - **Learning rate:** 0.001 - **Weight Decay:** 0.01 - **Optimizer**: AdamW ### Additional training details [optional] ## Model evaluation ### Evaluation of fine-tuned language model [optional] ### Evaluation of ESM [optional] MSE: ### Additional evaluation details [optional] ## What are Embedding Space Maps used for? Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME: ### You don't have enough training data for your problem If you don't have enough training data for your problem, just use ESM-LogME to find more. You can supplement model training by including publicly available datasets in the training process. 1. Fine-tune a language model on a suitable intermediate dataset. 2. Fine-tune the resulting model on your target dataset. This workflow is called intermediate task transfer learning and it can significantly improve the target performance. But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task. ### You want to find similar datasets to your target dataset ESM-LogME can be used like a search engine on the Hugging Face Hub. You can find similar tasks to your target task without having to rely on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity. ## How can I use ESM-LogME / ESMs? [![PyPI version](https://img.shields.io/pypi/v/hf-dataset-selector.svg)](https://pypi.org/project/hf-dataset-selector) We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps. **hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Huggingface Hub.
```python from hfselect import Dataset, compute_task_ranking # Load target dataset from the Hugging Face Hub dataset = Dataset.from_hugging_face( name="stanfordnlp/imdb", split="train", text_col="text", label_col="label", is_regression=False, num_examples=1000, seed=42 ) # Fetch ESMs and rank tasks task_ranking = compute_task_ranking( dataset=dataset, model_name="bert-base-multilingual-uncased" ) # Display top 5 recommendations print(task_ranking[:5]) ``` ```python 1. davanstrien/test_imdb_embedd2 Score: -0.618529 2. davanstrien/test_imdb_embedd Score: -0.618644 3. davanstrien/test1 Score: -0.619334 4. stanfordnlp/imdb Score: -0.619454 5. stanfordnlp/sst Score: -0.62995 ``` | Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score | |-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:| | 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 | | 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 | | 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 | | 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 | | 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 | | 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 | | 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 | | 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 | | 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 | | 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 | For more information on how to use ESMs, please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs. ## How do Embedding Space Maps work? <!-- This section describes the evaluation protocols and provides the results. --> Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text. ESMs can be used for intermediate task selection with the ESM-LogME workflow. ## How can I use Embedding Space Maps for Intermediate Task Selection? ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> If you are using Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/). 
**BibTeX:** ``` @inproceedings{schulte-etal-2024-less, title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning", author = "Schulte, David and Hamborg, Felix and Akbik, Alan", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.529/", doi = "10.18653/v1/2024.emnlp-main.529", pages = "9431--9442", abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)." } ``` **APA:** ``` Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442). ``` ## Additional Information
MelanieKoe/w2v2-base-10k-voxpopuli-ft-de_lr1e-4_at0.8_da1
MelanieKoe
"2024-03-27T10:29:37Z"
78
0
transformers
[ "transformers", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base-10k-voxpopuli-ft-de", "base_model:finetune:facebook/wav2vec2-base-10k-voxpopuli-ft-de", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-03-20T15:05:20Z"
--- license: cc-by-nc-4.0 base_model: facebook/wav2vec2-base-10k-voxpopuli-ft-de tags: - generated_from_trainer metrics: - wer model-index: - name: w2v2-base-10k-voxpopuli-ft-de_lr1e-4_at0.8_da1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v2-base-10k-voxpopuli-ft-de_lr1e-4_at0.8_da1 This model is a fine-tuned version of [facebook/wav2vec2-base-10k-voxpopuli-ft-de](https://huggingface.co/facebook/wav2vec2-base-10k-voxpopuli-ft-de) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8259 - Wer: 0.1632 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8086 | 5.43 | 250 | 0.8872 | 0.2456 | | 0.1358 | 10.87 | 500 | 1.1126 | 0.1858 | | 0.0916 | 16.3 | 750 | 1.5579 | 0.1991 | | 0.0716 | 21.74 | 1000 | 1.2674 | 0.1833 | | 0.0584 | 27.17 | 1250 | 1.6728 | 0.1768 | | 0.0464 | 32.61 | 1500 | 1.8259 | 0.1632 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0 - Datasets 2.14.6 - Tokenizers 0.14.1
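A minimal inference sketch, assuming the checkpoint works with the standard ASR pipeline and 16 kHz mono input as wav2vec2 models expect; the audio path is a placeholder:
```python
from transformers import pipeline

# German speech recognition with the fine-tuned checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="MelanieKoe/w2v2-base-10k-voxpopuli-ft-de_lr1e-4_at0.8_da1",
)
# "audio.wav" is a placeholder; pass any 16 kHz German speech file
print(asr("audio.wav")["text"])
```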
Jjateen/a2c-PandaReachDense-v3
Jjateen
"2023-12-23T21:47:33Z"
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-12-23T21:43:17Z"
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.25 +/- 0.08 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Checkpoint filename follows the usual huggingface_sb3 convention (an assumption)
model = A2C.load(load_from_hub("Jjateen/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip"))
```
omrudra998/KishanSevakHindi
omrudra998
"2024-11-18T12:02:49Z"
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-18T11:56:41Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Serendien/topic_learning_llama
Serendien
"2024-11-15T06:31:30Z"
180
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-11-15T06:31:15Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sdyy/bb
sdyy
"2024-12-20T03:12:53Z"
9
0
null
[ "safetensors", "llama", "license:apache-2.0", "aqlm", "region:us" ]
null
"2024-12-20T02:16:00Z"
--- license: apache-2.0 --- This repository contains ISTA-DASLab/Meta-Llama-3-70B-AQLM-PV-1Bit-1x16 together with tokenizer.model taken from https://huggingface.co/meta-llama/Meta-Llama-3-70B/blob/main/original/tokenizer.model
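A minimal loading sketch, assuming the checkpoint follows the usual AQLM layout on the Hub and that the `aqlm` package is available (`pip install aqlm[gpu]`); the prompt is illustrative only:

```python
# Sketch: load the 1-bit AQLM quantization of Meta-Llama-3-70B referenced above.
# Assumes an AQLM-capable environment; not verified against this exact repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Meta-Llama-3-70B-AQLM-PV-1Bit-1x16"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```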
valhalla/SwinIR-real-sr-M-x2-GAN
valhalla
"2022-10-23T17:44:40Z"
2
0
transformers
[ "transformers", "jax", "swin-ir", "region:us" ]
null
"2022-10-23T15:44:22Z"
--- tags: - swin-ir inference: false ---
alekskusz/distilbert-base-uncased-distilled-clinc
alekskusz
"2024-10-18T10:49:22Z"
106
0
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-10-15T09:28:20Z"
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1677 - Accuracy: 0.9510 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00015488796175955455 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 19 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8652 | 1.0 | 318 | 0.2578 | 0.9277 | | 0.1803 | 2.0 | 636 | 0.2039 | 0.9397 | | 0.1226 | 3.0 | 954 | 0.1887 | 0.9442 | | 0.1064 | 4.0 | 1272 | 0.1808 | 0.9481 | | 0.0963 | 5.0 | 1590 | 0.1770 | 0.9468 | | 0.0928 | 6.0 | 1908 | 0.1826 | 0.9477 | | 0.092 | 7.0 | 2226 | 0.1777 | 0.95 | | 0.0876 | 8.0 | 2544 | 0.1719 | 0.9516 | | 0.0861 | 9.0 | 2862 | 0.1813 | 0.9455 | | 0.0868 | 10.0 | 3180 | 0.1804 | 0.9471 | | 0.0841 | 11.0 | 3498 | 0.1749 | 0.9484 | | 0.0834 | 12.0 | 3816 | 0.1764 | 0.9487 | | 0.0817 | 13.0 | 4134 | 0.1714 | 0.9513 | | 0.081 | 14.0 | 4452 | 0.1727 | 0.9503 | | 0.0802 | 15.0 | 4770 | 0.1707 | 0.95 | | 0.0794 | 16.0 | 5088 | 0.1697 | 0.9506 | | 0.0787 | 17.0 | 5406 | 0.1683 | 0.9523 | | 0.0784 | 18.0 | 5724 | 0.1684 | 0.9510 | | 0.078 | 19.0 | 6042 | 0.1677 | 0.9510 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1+cu121 - Tokenizers 0.20.1
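As a usage sketch: the card does not name the label set, though the model name suggests the CLINC150 intent-classification task, and the example utterance below is an assumption:

```python
from transformers import pipeline

# Sketch: run the distilled intent classifier on an illustrative banking query.
classifier = pipeline(
    "text-classification",
    model="alekskusz/distilbert-base-uncased-distilled-clinc",
)
print(classifier("how do i transfer money to my savings account"))
```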
Nadav-Deepchecks/safe_input_classifier_1203
Nadav-Deepchecks
"2024-12-03T13:44:44Z"
104
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-03T13:44:07Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nhung01/45492c56-1f50-40b9-8675-b1f16f4da4cd
nhung01
"2025-01-14T23:40:49Z"
8
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/codegemma-2b", "base_model:adapter:unsloth/codegemma-2b", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-14T23:23:21Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/codegemma-2b tags: - axolotl - generated_from_trainer model-index: - name: 45492c56-1f50-40b9-8675-b1f16f4da4cd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/codegemma-2b bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 772ddb1b0bc59bd0_train_data.json ds_type: json format: custom path: /workspace/input_data/772ddb1b0bc59bd0_train_data.json type: field_instruction: prompt field_output: model_1_response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhung01/45492c56-1f50-40b9-8675-b1f16f4da4cd hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/772ddb1b0bc59bd0_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 74637dd8-3572-49dc-be4a-cbf8f9298e12 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 74637dd8-3572-49dc-be4a-cbf8f9298e12 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 45492c56-1f50-40b9-8675-b1f16f4da4cd This model is a fine-tuned version of [unsloth/codegemma-2b](https://huggingface.co/unsloth/codegemma-2b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.7558 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.9163 | 0.0530 | 200 | 1.7558 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
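Since the artifact is a LoRA adapter rather than full weights, loading typically goes through PEFT on top of the base model; a minimal sketch (the quantization flags from the config above are omitted for brevity, and the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Sketch: attach the adapter trained above to its unsloth/codegemma-2b base.
base = AutoModelForCausalLM.from_pretrained("unsloth/codegemma-2b", device_map="auto")
model = PeftModel.from_pretrained(base, "nhung01/45492c56-1f50-40b9-8675-b1f16f4da4cd")
tokenizer = AutoTokenizer.from_pretrained("unsloth/codegemma-2b")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0]))
```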
llmixer/BigLiz-120b
llmixer
"2024-03-12T07:23:33Z"
19
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-16T19:45:22Z"
--- license: llama2 pipeline_tag: text-generation --- # BigLiz 120B <img src="https://cdn-uploads.huggingface.co/production/uploads/65a6db055c58475cf9e6def1/trbJntEb6rKJacv7XJYDK.png" width=600> A Goliath-120b style frankenmerge of lzlv-70b and WinterGoddess-1.4x-70b. # Prompting Format Vicuna and Alpaca. # Merge process The models used in the merge are [lzlv-70b](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf) and [WinterGoddess-1.4x-70b](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2). ```yaml slices: - sources: - model: lizpreciatior_lzlv_70b_fp16_hf layer_range: [0, 16] - sources: - model: Sao10K_WinterGoddess-1.4x-70B-L2 layer_range: [8, 24] - sources: - model: lizpreciatior_lzlv_70b_fp16_hf layer_range: [17, 32] - sources: - model: Sao10K_WinterGoddess-1.4x-70B-L2 layer_range: [25, 40] - sources: - model: lizpreciatior_lzlv_70b_fp16_hf layer_range: [33, 48] - sources: - model: Sao10K_WinterGoddess-1.4x-70B-L2 layer_range: [41, 56] - sources: - model: lizpreciatior_lzlv_70b_fp16_hf layer_range: [49, 64] - sources: - model: Sao10K_WinterGoddess-1.4x-70B-L2 layer_range: [57, 72] - sources: - model: lizpreciatior_lzlv_70b_fp16_hf layer_range: [65, 80] merge_method: passthrough dtype: float16 ``` # Acknowledgements [@lizpreciatior](https://huggingface.co/lizpreciatior) For creating lzlv [@Sao10K](https://huggingface.co/Sao10K) For creating WinterGoddess [@alpindale](https://huggingface.co/alpindale) For creating the original Goliath [@chargoddard](https://huggingface.co/chargoddard) For developing [mergekit](https://github.com/cg123/mergekit).
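The card names Vicuna and Alpaca but does not spell the templates out; the strings below are the commonly used forms of each format and are an assumption, not taken from this card:

```python
# Sketch of the two prompt formats named above; the wording of the preambles
# is the conventional one, not confirmed by the model authors.
vicuna_prompt = (
    "A chat between a curious user and an artificial intelligence assistant.\n"
    "USER: {prompt} ASSISTANT:"
)
alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)
```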
jiinking/5_bitwise_MQA_llama_model
jiinking
"2025-03-11T09:41:55Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-11T09:12:27Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k
cognitivecomputations
"2024-06-23T16:16:28Z"
2,809
18
transformers
[ "transformers", "pytorch", "llama", "text-generation", "generated_from_trainer", "axolotl", "conversational", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-23T15:29:59Z"
--- license: apache-2.0 base_model: 01-ai/Yi-1.5-34B-32k tags: - generated_from_trainer - axolotl datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - Locutusque/function-calling-chatml - internlm/Agent-FLAN --- # Dolphin 2.9.3 Yi 1.5 34b 32k 🐬 Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations [![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations) Discord: https://discord.gg/cognitivecomputations <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> Our appreciation for the sponsors of Dolphin 2.9.3: - [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xH100 node - [OnDemand](https://on-demand.io/) - provided inference sponsorship This model is based on Yi-1.5-34b-32k, and is governed by the apache 2.0 license. The base model has 32k context, and our finetuning took place with 8192 sequence length. Dolphin 2.9.3 uses ChatML prompt template format. example: ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Dolphin-2.9.3 has a variety of instruction following, conversational, and coding skills. It also has initial agentic abilities and supports function calling. Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. Dolphin is licensed according to apache 2.0 license. We grant permission for any use, including commercial. Dolphin was trained on data generated from GPT4, among other models. 
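A sketch of building the ChatML prompt shown above programmatically, assuming the repo's tokenizer ships a chat template (which its ChatML special tokens suggest):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k")
messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Explain tides in one sentence."},
]
# Renders the <|im_start|>/<|im_end|> structure shown above and appends the
# assistant header so generation can begin.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```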
## Evals ![image/png](https://i.ibb.co/7G02dNq/file-9-Lfkfpd0-KKK3-USTm-U8d-Jg-Zm0.png) ## Training [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: 01-ai/Yi-1.5-34B-32k model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizer trust_remote_code: true # load_in_8bit: false load_in_4bit: true # strict: false adapter: qlora lora_modules_to_save: [embed_tokens, lm_head] lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: false lora_fan_in_fan_out: datasets: - path: /workspace/datasets/dolphin-2.9.3/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/SystemChat_filtered_sharegpt.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/SystemChat_multilingual_sharegpt.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/not_samantha_norefusals.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/Orca-Math-resort-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/agent_instruct_react_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/toolbench_instruct_j1s1_3k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/toolbench_negative_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/toolbench_react_10p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/toolbench_tflan_cot_30p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9.3/openhermes200k_unfiltered.jsonl type: sharegpt conversation: chatml chat_template: chatml dataset_prepared_path: dolphin-2.9.3-yi34b-prepared val_set_size: 0.01 output_dir: ./dolphin-2.9.3-out sequence_len: 8192 sample_packing: true pad_to_sequence_len: true wandb_project: dolphin-2.9.3-yi-1.5-34b wandb_watch: wandb_run_id: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 3 optimizer: adamw_8bit lr_scheduler: cosine learning_rate: 1e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 # evals_per_epoch: 4 eval_table_size: saves_per_epoch: 4 save_total_limit: 2 save_steps: debug: deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json weight_decay: 0.05 fsdp: fsdp_config: special_tokens: bos_token: "<|startoftext|>" eos_token: "<|im_end|>" pad_token: "<unk>" unk_token: "<unk>" tokens: - "<|im_start|>" #unfrozen_parameters: lora_target_modules: # input_layernorm layers # - 
model.layers.0.input_layernorm # - model.layers.1.input_layernorm # - model.layers.2.input_layernorm # - model.layers.3.input_layernorm # - model.layers.4.input_layernorm # - model.layers.5.input_layernorm # - model.layers.6.input_layernorm # - model.layers.7.input_layernorm # - model.layers.8.input_layernorm # - model.layers.9.input_layernorm # - model.layers.10.input_layernorm # - model.layers.11.input_layernorm # - model.layers.12.input_layernorm # - model.layers.13.input_layernorm # - model.layers.14.input_layernorm # - model.layers.15.input_layernorm # - model.layers.16.input_layernorm # - model.layers.17.input_layernorm # - model.layers.18.input_layernorm # - model.layers.19.input_layernorm # - model.layers.20.input_layernorm # - model.layers.21.input_layernorm # - model.layers.22.input_layernorm # - model.layers.23.input_layernorm # - model.layers.24.input_layernorm # - model.layers.25.input_layernorm # - model.layers.26.input_layernorm # - model.layers.27.input_layernorm # - model.layers.28.input_layernorm # - model.layers.29.input_layernorm - lm_head # mlp.down_proj layers - model.layers.44.mlp.down_proj - model.layers.45.mlp.down_proj - model.layers.46.mlp.down_proj - model.layers.47.mlp.down_proj - model.layers.43.mlp.down_proj - model.layers.48.mlp.down_proj - model.layers.49.mlp.down_proj - model.layers.42.mlp.down_proj - model.layers.50.mlp.down_proj - model.layers.41.mlp.down_proj - model.layers.51.mlp.down_proj - model.layers.52.mlp.down_proj - model.layers.39.mlp.down_proj - model.layers.40.mlp.down_proj - model.layers.53.mlp.down_proj - model.layers.54.mlp.down_proj - model.layers.38.mlp.down_proj - model.layers.56.mlp.down_proj - model.layers.55.mlp.down_proj - model.layers.37.mlp.down_proj - model.layers.36.mlp.down_proj - model.layers.57.mlp.down_proj - model.layers.35.mlp.down_proj - model.layers.12.mlp.down_proj - model.layers.13.mlp.down_proj - model.layers.16.mlp.down_proj - model.layers.14.mlp.down_proj - model.layers.11.mlp.down_proj - model.layers.34.mlp.down_proj - model.layers.17.mlp.down_proj # mlp.gate_proj layers - model.layers.57.mlp.gate_proj - model.layers.58.mlp.gate_proj - model.layers.56.mlp.gate_proj - model.layers.55.mlp.gate_proj - model.layers.54.mlp.gate_proj - model.layers.35.mlp.gate_proj - model.layers.34.mlp.gate_proj - model.layers.53.mlp.gate_proj - model.layers.26.mlp.gate_proj - model.layers.52.mlp.gate_proj - model.layers.25.mlp.gate_proj - model.layers.33.mlp.gate_proj - model.layers.51.mlp.gate_proj - model.layers.18.mlp.gate_proj - model.layers.32.mlp.gate_proj - model.layers.36.mlp.gate_proj - model.layers.24.mlp.gate_proj - model.layers.17.mlp.gate_proj - model.layers.23.mlp.gate_proj - model.layers.31.mlp.gate_proj - model.layers.50.mlp.gate_proj - model.layers.19.mlp.gate_proj - model.layers.15.mlp.gate_proj - model.layers.27.mlp.gate_proj - model.layers.37.mlp.gate_proj - model.layers.14.mlp.gate_proj - model.layers.39.mlp.gate_proj - model.layers.11.mlp.gate_proj - model.layers.29.mlp.gate_proj - model.layers.28.mlp.gate_proj # mlp.up_proj layers - model.layers.21.mlp.up_proj - model.layers.48.mlp.up_proj - model.layers.49.mlp.up_proj - model.layers.24.mlp.up_proj - model.layers.47.mlp.up_proj - model.layers.25.mlp.up_proj - model.layers.23.mlp.up_proj - model.layers.50.mlp.up_proj - model.layers.14.mlp.up_proj - model.layers.46.mlp.up_proj - model.layers.26.mlp.up_proj - model.layers.27.mlp.up_proj - model.layers.20.mlp.up_proj - model.layers.13.mlp.up_proj - model.layers.51.mlp.up_proj - model.layers.28.mlp.up_proj - 
model.layers.45.mlp.up_proj - model.layers.22.mlp.up_proj - model.layers.52.mlp.up_proj - model.layers.12.mlp.up_proj - model.layers.29.mlp.up_proj - model.layers.44.mlp.up_proj - model.layers.53.mlp.up_proj - model.layers.11.mlp.up_proj - model.layers.42.mlp.up_proj - model.layers.30.mlp.up_proj - model.layers.43.mlp.up_proj - model.layers.19.mlp.up_proj - model.layers.54.mlp.up_proj - model.layers.40.mlp.up_proj - model.embed_tokens # model.norm layers # post_attention_layernorm layers # - model.layers.0.post_attention_layernorm # - model.layers.1.post_attention_layernorm # - model.layers.2.post_attention_layernorm # - model.layers.3.post_attention_layernorm # - model.layers.4.post_attention_layernorm # - model.layers.5.post_attention_layernorm # - model.layers.6.post_attention_layernorm # - model.layers.7.post_attention_layernorm # - model.layers.8.post_attention_layernorm # - model.layers.9.post_attention_layernorm # - model.layers.10.post_attention_layernorm # - model.layers.11.post_attention_layernorm # - model.layers.12.post_attention_layernorm # - model.layers.13.post_attention_layernorm # - model.layers.14.post_attention_layernorm # - model.layers.15.post_attention_layernorm # - model.layers.16.post_attention_layernorm # - model.layers.17.post_attention_layernorm # - model.layers.18.post_attention_layernorm # - model.layers.19.post_attention_layernorm # - model.layers.20.post_attention_layernorm # - model.layers.21.post_attention_layernorm # - model.layers.22.post_attention_layernorm # - model.layers.23.post_attention_layernorm # - model.layers.24.post_attention_layernorm # - model.layers.25.post_attention_layernorm # - model.layers.26.post_attention_layernorm # - model.layers.27.post_attention_layernorm # - model.layers.28.post_attention_layernorm # - model.layers.29.post_attention_layernorm # self_attn.k_proj layers - model.layers.55.self_attn.k_proj - model.layers.51.self_attn.k_proj - model.layers.53.self_attn.k_proj - model.layers.56.self_attn.k_proj - model.layers.54.self_attn.k_proj - model.layers.57.self_attn.k_proj - model.layers.52.self_attn.k_proj - model.layers.59.self_attn.k_proj - model.layers.49.self_attn.k_proj - model.layers.48.self_attn.k_proj - model.layers.47.self_attn.k_proj - model.layers.41.self_attn.k_proj - model.layers.58.self_attn.k_proj - model.layers.40.self_attn.k_proj - model.layers.46.self_attn.k_proj - model.layers.44.self_attn.k_proj - model.layers.50.self_attn.k_proj - model.layers.43.self_attn.k_proj - model.layers.39.self_attn.k_proj - model.layers.42.self_attn.k_proj - model.layers.45.self_attn.k_proj - model.layers.33.self_attn.k_proj - model.layers.37.self_attn.k_proj - model.layers.17.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.21.self_attn.k_proj - model.layers.25.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.35.self_attn.k_proj - model.layers.20.self_attn.k_proj # self_attn.o_proj layers - model.layers.53.self_attn.o_proj - model.layers.55.self_attn.o_proj - model.layers.54.self_attn.o_proj - model.layers.42.self_attn.o_proj - model.layers.52.self_attn.o_proj - model.layers.51.self_attn.o_proj - model.layers.50.self_attn.o_proj - model.layers.1.self_attn.o_proj - model.layers.40.self_attn.o_proj - model.layers.37.self_attn.o_proj - model.layers.34.self_attn.o_proj - model.layers.36.self_attn.o_proj - model.layers.41.self_attn.o_proj - model.layers.35.self_attn.o_proj - model.layers.46.self_attn.o_proj - model.layers.27.self_attn.o_proj - model.layers.33.self_attn.o_proj - 
model.layers.30.self_attn.o_proj - model.layers.43.self_attn.o_proj - model.layers.39.self_attn.o_proj - model.layers.17.self_attn.o_proj - model.layers.28.self_attn.o_proj - model.layers.48.self_attn.o_proj - model.layers.31.self_attn.o_proj - model.layers.29.self_attn.o_proj - model.layers.38.self_attn.o_proj - model.layers.47.self_attn.o_proj - model.layers.56.self_attn.o_proj - model.layers.32.self_attn.o_proj - model.layers.4.self_attn.o_proj # self_attn.q_proj layers - model.layers.1.self_attn.q_proj - model.layers.3.self_attn.q_proj - model.layers.4.self_attn.q_proj - model.layers.5.self_attn.q_proj - model.layers.2.self_attn.q_proj - model.layers.0.self_attn.q_proj - model.layers.6.self_attn.q_proj - model.layers.8.self_attn.q_proj - model.layers.7.self_attn.q_proj - model.layers.10.self_attn.q_proj - model.layers.36.self_attn.q_proj - model.layers.11.self_attn.q_proj - model.layers.9.self_attn.q_proj - model.layers.35.self_attn.q_proj - model.layers.28.self_attn.q_proj - model.layers.34.self_attn.q_proj - model.layers.27.self_attn.q_proj - model.layers.14.self_attn.q_proj - model.layers.29.self_attn.q_proj - model.layers.12.self_attn.q_proj - model.layers.33.self_attn.q_proj - model.layers.30.self_attn.q_proj - model.layers.24.self_attn.q_proj - model.layers.32.self_attn.q_proj - model.layers.37.self_attn.q_proj - model.layers.20.self_attn.q_proj - model.layers.15.self_attn.q_proj - model.layers.16.self_attn.q_proj - model.layers.26.self_attn.q_proj - model.layers.31.self_attn.q_proj # self_attn.v_proj layers - model.layers.7.self_attn.v_proj - model.layers.8.self_attn.v_proj - model.layers.9.self_attn.v_proj - model.layers.10.self_attn.v_proj - model.layers.12.self_attn.v_proj - model.layers.13.self_attn.v_proj - model.layers.14.self_attn.v_proj - model.layers.15.self_attn.v_proj - model.layers.16.self_attn.v_proj - model.layers.17.self_attn.v_proj - model.layers.21.self_attn.v_proj - model.layers.23.self_attn.v_proj - model.layers.39.self_attn.v_proj - model.layers.46.self_attn.v_proj - model.layers.48.self_attn.v_proj - model.layers.49.self_attn.v_proj - model.layers.51.self_attn.v_proj - model.layers.52.self_attn.v_proj - model.layers.53.self_attn.v_proj - model.layers.54.self_attn.v_proj - model.layers.55.self_attn.v_proj - model.layers.56.self_attn.v_proj - model.layers.22.self_attn.v_proj - model.layers.18.self_attn.v_proj - model.layers.50.self_attn.v_proj - model.layers.47.self_attn.v_proj - model.layers.44.self_attn.v_proj - model.layers.45.self_attn.v_proj - model.layers.57.self_attn.v_proj - model.layers.41.self_attn.v_proj ``` </details><br> # out-yi This model is a fine-tuned version of [01-ai/Yi-1.5-34B](https://huggingface.co/01-ai/Yi-1.5-34B-32k) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4425 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6265 | 0.0 | 1 | 0.6035 | | 0.4674 | 0.25 | 327 | 0.4344 | | 0.4337 | 0.5 | 654 | 0.4250 | | 0.4346 | 0.75 | 981 | 0.4179 | | 0.3985 | 1.0 | 1308 | 0.4118 | | 0.3128 | 1.23 | 1635 | 0.4201 | | 0.3261 | 1.48 | 1962 | 0.4157 | | 0.3259 | 1.73 | 2289 | 0.4122 | | 0.3126 | 1.98 | 2616 | 0.4079 | | 0.2265 | 2.21 | 2943 | 0.4441 | | 0.2297 | 2.46 | 3270 | 0.4427 | | 0.2424 | 2.71 | 3597 | 0.4425 | ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
SUMEDH91/my_awesome_wikitext-model
SUMEDH91
"2024-01-05T06:11:35Z"
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-04T17:54:25Z"
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_trainer model-index: - name: my_awesome_wikitext-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wikitext-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.4882 | 1.0 | 5441 | 1.3856 | | 1.3941 | 2.0 | 10882 | 1.3115 | | 1.3617 | 3.0 | 16323 | 1.2915 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
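A minimal generation sketch; the prompt is illustrative, and the model's domain (WikiText-style English prose, per its name) is an inference, not stated in the card:

```python
from transformers import pipeline

# Sketch: sample a continuation from the fine-tuned distilgpt2 checkpoint.
generator = pipeline("text-generation", model="SUMEDH91/my_awesome_wikitext-model")
out = generator("The history of the printing press", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```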
shaoyuyoung/QTC4SO
shaoyuyoung
"2023-03-11T04:06:57Z"
103
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-12-09T03:22:14Z"
--- license: mit --- # Introduction QTC4SO (Question Title Completion for Stack Overflow) is a pre-trained model based on T5, which we fine-tuned on our downstream task: question title completion for Stack Overflow. # More details You can find our code and dataset in our [GitHub project](https://github.com/shaoyuyoung/QTC4SO).<br> For more details, please refer to [our paper](https://smartse.github.io/paper/icpc2023.pdf).
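A minimal inference sketch; the exact input scheme (e.g. how question bodies and partial titles are concatenated) is defined in the linked GitHub project, so the input string below is only a placeholder:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("shaoyuyoung/QTC4SO")
model = AutoModelForSeq2SeqLM.from_pretrained("shaoyuyoung/QTC4SO")

# Placeholder input; see the project repo for the real formatting scheme.
inputs = tokenizer("complete the title for: how to sort a dict by value in python",
                   return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```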
ray1916/ray__epoch_2
ray1916
"2024-11-22T02:07:38Z"
30
0
transformers
[ "transformers", "safetensors", "florence2", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
"2024-11-22T01:44:56Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PrunaAI/beomi-Llama-3-Open-Ko-8B-bnb-4bit-smashed
PrunaAI
"2024-08-02T15:58:01Z"
11
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:beomi/Llama-3-Open-Ko-8B", "base_model:quantized:beomi/Llama-3-Open-Ko-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-05-01T23:52:59Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: beomi/Llama-3-Open-Ko-8B metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with llm-int8. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them have executed.
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo beomi/Llama-3-Open-Ko-8B installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install transformers accelerate bitsandbytes>0.37.0 ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/beomi-Llama-3-Open-Ko-8B-bnb-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("beomi/Llama-3-Open-Ko-8B") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model beomi/Llama-3-Open-Ko-8B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
FabioTiroli/LaminiFT4
FabioTiroli
"2024-09-13T11:13:26Z"
181
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:lamini/lamini_docs_finetuned", "base_model:finetune:lamini/lamini_docs_finetuned", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-09-13T01:24:35Z"
--- library_name: transformers license: apache-2.0 base_model: lamini/lamini_docs_finetuned tags: - generated_from_trainer model-index: - name: LaminiFT4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LaminiFT4 This model is a fine-tuned version of [lamini/lamini_docs_finetuned](https://huggingface.co/lamini/lamini_docs_finetuned) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 3 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cpu - Datasets 2.21.0 - Tokenizers 0.19.1
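For readers who want to reproduce this setup, here is a minimal sketch of how the hyperparameters listed above might map onto `transformers.TrainingArguments`; the dataset and model wiring are not recorded in this card, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed in the card above.
args = TrainingArguments(
    output_dir="LaminiFT4",          # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 3 * 4 = 12
    lr_scheduler_type="linear",
    warmup_steps=1,
    num_train_epochs=1,
)
```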
CyberHarem/momoshina_fumika_alicegearaegisexpansion
CyberHarem
"2023-12-11T09:34:00Z"
0
0
null
[ "art", "text-to-image", "dataset:CyberHarem/momoshina_fumika_alicegearaegisexpansion", "license:mit", "region:us" ]
text-to-image
"2023-12-11T09:19:32Z"
---
license: mit
datasets:
- CyberHarem/momoshina_fumika_alicegearaegisexpansion
pipeline_tag: text-to-image
tags:
- art
---

# Lora of momoshina_fumika_alicegearaegisexpansion

This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).

The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).

After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.

For example, if you want to use the model from step 5720, you need to download `5720/momoshina_fumika_alicegearaegisexpansion.pt` as the embedding and `5720/momoshina_fumika_alicegearaegisexpansion.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters; a rough sketch of this workflow follows the list below.

**The best step we recommend is 5720**, with a score of 0.967.

The trigger words are:

1. `momoshina_fumika_alicegearaegisexpansion`
2. `glasses, long_hair, purple_eyes, blue_hair, blush, black_hair`

For the following groups, it is not recommended to use this model and we express regret:

1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
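As a rough sketch of the "use both files together" workflow described above, in diffusers terms; HCP-Diffusion outputs may need conversion first, and the preview model Meina/MeinaMix_V11 is used here only as an example base:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# Load the pt file as a textual-inversion embedding and the safetensors file as a LoRA.
pipe.load_textual_inversion(
    "5720/momoshina_fumika_alicegearaegisexpansion.pt",
    token="momoshina_fumika_alicegearaegisexpansion",
)
pipe.load_lora_weights("5720/momoshina_fumika_alicegearaegisexpansion.safetensors")

image = pipe(
    "momoshina_fumika_alicegearaegisexpansion, glasses, long_hair, purple_eyes, blue_hair"
).images[0]
image.save("preview.png")
```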
These are available steps:

| Steps | Score | Download | pattern_1 | pattern_2 | pattern_3 | pattern_4 | pattern_5 | pattern_6 | pattern_7 | pattern_8 | bikini | bondage | free | maid | miko | nude | nude2 | suit | yukata |
|:---------|:----------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|:-----------|:---------|:----------|:-------|:-------|:-------|:-------|:--------|:-------|:---------|
| 6600 | 0.900 | [Download](6600/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-6600](6600/previews/pattern_1.png) | ![pattern_2-6600](6600/previews/pattern_2.png) | ![pattern_3-6600](6600/previews/pattern_3.png) | ![pattern_4-6600](6600/previews/pattern_4.png) | ![pattern_5-6600](6600/previews/pattern_5.png) | ![pattern_6-6600](6600/previews/pattern_6.png) | ![pattern_7-6600](6600/previews/pattern_7.png) | ![pattern_8-6600](6600/previews/pattern_8.png) | [<NSFW, click to see>](6600/previews/bikini.png) | [<NSFW, click to see>](6600/previews/bondage.png) | ![free-6600](6600/previews/free.png) | ![maid-6600](6600/previews/maid.png) | ![miko-6600](6600/previews/miko.png) | [<NSFW, click to see>](6600/previews/nude.png) | [<NSFW, click to see>](6600/previews/nude2.png) | ![suit-6600](6600/previews/suit.png) | ![yukata-6600](6600/previews/yukata.png) |
| 6160 | 0.797 | [Download](6160/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-6160](6160/previews/pattern_1.png) | ![pattern_2-6160](6160/previews/pattern_2.png) | ![pattern_3-6160](6160/previews/pattern_3.png) | ![pattern_4-6160](6160/previews/pattern_4.png) | ![pattern_5-6160](6160/previews/pattern_5.png) | ![pattern_6-6160](6160/previews/pattern_6.png) | ![pattern_7-6160](6160/previews/pattern_7.png) | ![pattern_8-6160](6160/previews/pattern_8.png) | [<NSFW, click to see>](6160/previews/bikini.png) | [<NSFW, click to see>](6160/previews/bondage.png) | ![free-6160](6160/previews/free.png) | ![maid-6160](6160/previews/maid.png) | ![miko-6160](6160/previews/miko.png) | [<NSFW, click to see>](6160/previews/nude.png) | [<NSFW, click to see>](6160/previews/nude2.png) | ![suit-6160](6160/previews/suit.png) | ![yukata-6160](6160/previews/yukata.png) |
| **5720** | **0.967** | [**Download**](5720/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-5720](5720/previews/pattern_1.png) | ![pattern_2-5720](5720/previews/pattern_2.png) | ![pattern_3-5720](5720/previews/pattern_3.png) | ![pattern_4-5720](5720/previews/pattern_4.png) | ![pattern_5-5720](5720/previews/pattern_5.png) | ![pattern_6-5720](5720/previews/pattern_6.png) | ![pattern_7-5720](5720/previews/pattern_7.png) | ![pattern_8-5720](5720/previews/pattern_8.png) | [<NSFW, click to see>](5720/previews/bikini.png) | [<NSFW, click to see>](5720/previews/bondage.png) | ![free-5720](5720/previews/free.png) | ![maid-5720](5720/previews/maid.png) | ![miko-5720](5720/previews/miko.png) | [<NSFW, click to see>](5720/previews/nude.png) | [<NSFW, click to see>](5720/previews/nude2.png) | ![suit-5720](5720/previews/suit.png) | ![yukata-5720](5720/previews/yukata.png) |
| 5280 | 0.927 | [Download](5280/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-5280](5280/previews/pattern_1.png) | ![pattern_2-5280](5280/previews/pattern_2.png) | ![pattern_3-5280](5280/previews/pattern_3.png) | ![pattern_4-5280](5280/previews/pattern_4.png) | ![pattern_5-5280](5280/previews/pattern_5.png) | ![pattern_6-5280](5280/previews/pattern_6.png) | ![pattern_7-5280](5280/previews/pattern_7.png) | ![pattern_8-5280](5280/previews/pattern_8.png) | [<NSFW, click to see>](5280/previews/bikini.png) | [<NSFW, click to see>](5280/previews/bondage.png) | ![free-5280](5280/previews/free.png) | ![maid-5280](5280/previews/maid.png) | ![miko-5280](5280/previews/miko.png) | [<NSFW, click to see>](5280/previews/nude.png) | [<NSFW, click to see>](5280/previews/nude2.png) | ![suit-5280](5280/previews/suit.png) | ![yukata-5280](5280/previews/yukata.png) |
| 4840 | 0.887 | [Download](4840/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-4840](4840/previews/pattern_1.png) | ![pattern_2-4840](4840/previews/pattern_2.png) | ![pattern_3-4840](4840/previews/pattern_3.png) | ![pattern_4-4840](4840/previews/pattern_4.png) | ![pattern_5-4840](4840/previews/pattern_5.png) | ![pattern_6-4840](4840/previews/pattern_6.png) | ![pattern_7-4840](4840/previews/pattern_7.png) | ![pattern_8-4840](4840/previews/pattern_8.png) | [<NSFW, click to see>](4840/previews/bikini.png) | [<NSFW, click to see>](4840/previews/bondage.png) | ![free-4840](4840/previews/free.png) | ![maid-4840](4840/previews/maid.png) | ![miko-4840](4840/previews/miko.png) | [<NSFW, click to see>](4840/previews/nude.png) | [<NSFW, click to see>](4840/previews/nude2.png) | ![suit-4840](4840/previews/suit.png) | ![yukata-4840](4840/previews/yukata.png) |
| 4400 | 0.882 | [Download](4400/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-4400](4400/previews/pattern_1.png) | ![pattern_2-4400](4400/previews/pattern_2.png) | ![pattern_3-4400](4400/previews/pattern_3.png) | ![pattern_4-4400](4400/previews/pattern_4.png) | ![pattern_5-4400](4400/previews/pattern_5.png) | ![pattern_6-4400](4400/previews/pattern_6.png) | ![pattern_7-4400](4400/previews/pattern_7.png) | ![pattern_8-4400](4400/previews/pattern_8.png) | [<NSFW, click to see>](4400/previews/bikini.png) | [<NSFW, click to see>](4400/previews/bondage.png) | ![free-4400](4400/previews/free.png) | ![maid-4400](4400/previews/maid.png) | ![miko-4400](4400/previews/miko.png) | [<NSFW, click to see>](4400/previews/nude.png) | [<NSFW, click to see>](4400/previews/nude2.png) | ![suit-4400](4400/previews/suit.png) | ![yukata-4400](4400/previews/yukata.png) |
| 3960 | 0.881 | [Download](3960/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-3960](3960/previews/pattern_1.png) | ![pattern_2-3960](3960/previews/pattern_2.png) | ![pattern_3-3960](3960/previews/pattern_3.png) | ![pattern_4-3960](3960/previews/pattern_4.png) | ![pattern_5-3960](3960/previews/pattern_5.png) | ![pattern_6-3960](3960/previews/pattern_6.png) | ![pattern_7-3960](3960/previews/pattern_7.png) | ![pattern_8-3960](3960/previews/pattern_8.png) | [<NSFW, click to see>](3960/previews/bikini.png) | [<NSFW, click to see>](3960/previews/bondage.png) | ![free-3960](3960/previews/free.png) | ![maid-3960](3960/previews/maid.png) | ![miko-3960](3960/previews/miko.png) | [<NSFW, click to see>](3960/previews/nude.png) | [<NSFW, click to see>](3960/previews/nude2.png) | ![suit-3960](3960/previews/suit.png) | ![yukata-3960](3960/previews/yukata.png) |
| 3520 | 0.954 | [Download](3520/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-3520](3520/previews/pattern_1.png) | ![pattern_2-3520](3520/previews/pattern_2.png) | ![pattern_3-3520](3520/previews/pattern_3.png) | ![pattern_4-3520](3520/previews/pattern_4.png) | ![pattern_5-3520](3520/previews/pattern_5.png) | ![pattern_6-3520](3520/previews/pattern_6.png) | ![pattern_7-3520](3520/previews/pattern_7.png) | ![pattern_8-3520](3520/previews/pattern_8.png) | [<NSFW, click to see>](3520/previews/bikini.png) | [<NSFW, click to see>](3520/previews/bondage.png) | ![free-3520](3520/previews/free.png) | ![maid-3520](3520/previews/maid.png) | ![miko-3520](3520/previews/miko.png) | [<NSFW, click to see>](3520/previews/nude.png) | [<NSFW, click to see>](3520/previews/nude2.png) | ![suit-3520](3520/previews/suit.png) | ![yukata-3520](3520/previews/yukata.png) |
| 3080 | 0.931 | [Download](3080/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-3080](3080/previews/pattern_1.png) | ![pattern_2-3080](3080/previews/pattern_2.png) | ![pattern_3-3080](3080/previews/pattern_3.png) | ![pattern_4-3080](3080/previews/pattern_4.png) | ![pattern_5-3080](3080/previews/pattern_5.png) | ![pattern_6-3080](3080/previews/pattern_6.png) | ![pattern_7-3080](3080/previews/pattern_7.png) | ![pattern_8-3080](3080/previews/pattern_8.png) | [<NSFW, click to see>](3080/previews/bikini.png) | [<NSFW, click to see>](3080/previews/bondage.png) | ![free-3080](3080/previews/free.png) | ![maid-3080](3080/previews/maid.png) | ![miko-3080](3080/previews/miko.png) | [<NSFW, click to see>](3080/previews/nude.png) | [<NSFW, click to see>](3080/previews/nude2.png) | ![suit-3080](3080/previews/suit.png) | ![yukata-3080](3080/previews/yukata.png) |
| 2640 | 0.948 | [Download](2640/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-2640](2640/previews/pattern_1.png) | ![pattern_2-2640](2640/previews/pattern_2.png) | ![pattern_3-2640](2640/previews/pattern_3.png) | ![pattern_4-2640](2640/previews/pattern_4.png) | ![pattern_5-2640](2640/previews/pattern_5.png) | ![pattern_6-2640](2640/previews/pattern_6.png) | ![pattern_7-2640](2640/previews/pattern_7.png) | ![pattern_8-2640](2640/previews/pattern_8.png) | [<NSFW, click to see>](2640/previews/bikini.png) | [<NSFW, click to see>](2640/previews/bondage.png) | ![free-2640](2640/previews/free.png) | ![maid-2640](2640/previews/maid.png) | ![miko-2640](2640/previews/miko.png) | [<NSFW, click to see>](2640/previews/nude.png) | [<NSFW, click to see>](2640/previews/nude2.png) | ![suit-2640](2640/previews/suit.png) | ![yukata-2640](2640/previews/yukata.png) |
| 2200 | 0.815 | [Download](2200/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-2200](2200/previews/pattern_1.png) | ![pattern_2-2200](2200/previews/pattern_2.png) | ![pattern_3-2200](2200/previews/pattern_3.png) | ![pattern_4-2200](2200/previews/pattern_4.png) | ![pattern_5-2200](2200/previews/pattern_5.png) | ![pattern_6-2200](2200/previews/pattern_6.png) | ![pattern_7-2200](2200/previews/pattern_7.png) | ![pattern_8-2200](2200/previews/pattern_8.png) | [<NSFW, click to see>](2200/previews/bikini.png) | [<NSFW, click to see>](2200/previews/bondage.png) | ![free-2200](2200/previews/free.png) | ![maid-2200](2200/previews/maid.png) | ![miko-2200](2200/previews/miko.png) | [<NSFW, click to see>](2200/previews/nude.png) | [<NSFW, click to see>](2200/previews/nude2.png) | ![suit-2200](2200/previews/suit.png) | ![yukata-2200](2200/previews/yukata.png) |
| 1760 | 0.923 | [Download](1760/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-1760](1760/previews/pattern_1.png) | ![pattern_2-1760](1760/previews/pattern_2.png) | ![pattern_3-1760](1760/previews/pattern_3.png) | ![pattern_4-1760](1760/previews/pattern_4.png) | ![pattern_5-1760](1760/previews/pattern_5.png) | ![pattern_6-1760](1760/previews/pattern_6.png) | ![pattern_7-1760](1760/previews/pattern_7.png) | ![pattern_8-1760](1760/previews/pattern_8.png) | [<NSFW, click to see>](1760/previews/bikini.png) | [<NSFW, click to see>](1760/previews/bondage.png) | ![free-1760](1760/previews/free.png) | ![maid-1760](1760/previews/maid.png) | ![miko-1760](1760/previews/miko.png) | [<NSFW, click to see>](1760/previews/nude.png) | [<NSFW, click to see>](1760/previews/nude2.png) | ![suit-1760](1760/previews/suit.png) | ![yukata-1760](1760/previews/yukata.png) |
| 1320 | 0.892 | [Download](1320/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-1320](1320/previews/pattern_1.png) | ![pattern_2-1320](1320/previews/pattern_2.png) | ![pattern_3-1320](1320/previews/pattern_3.png) | ![pattern_4-1320](1320/previews/pattern_4.png) | ![pattern_5-1320](1320/previews/pattern_5.png) | ![pattern_6-1320](1320/previews/pattern_6.png) | ![pattern_7-1320](1320/previews/pattern_7.png) | ![pattern_8-1320](1320/previews/pattern_8.png) | [<NSFW, click to see>](1320/previews/bikini.png) | [<NSFW, click to see>](1320/previews/bondage.png) | ![free-1320](1320/previews/free.png) | ![maid-1320](1320/previews/maid.png) | ![miko-1320](1320/previews/miko.png) | [<NSFW, click to see>](1320/previews/nude.png) | [<NSFW, click to see>](1320/previews/nude2.png) | ![suit-1320](1320/previews/suit.png) | ![yukata-1320](1320/previews/yukata.png) |
| 880 | 0.776 | [Download](880/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-880](880/previews/pattern_1.png) | ![pattern_2-880](880/previews/pattern_2.png) | ![pattern_3-880](880/previews/pattern_3.png) | ![pattern_4-880](880/previews/pattern_4.png) | ![pattern_5-880](880/previews/pattern_5.png) | ![pattern_6-880](880/previews/pattern_6.png) | ![pattern_7-880](880/previews/pattern_7.png) | ![pattern_8-880](880/previews/pattern_8.png) | [<NSFW, click to see>](880/previews/bikini.png) | [<NSFW, click to see>](880/previews/bondage.png) | ![free-880](880/previews/free.png) | ![maid-880](880/previews/maid.png) | ![miko-880](880/previews/miko.png) | [<NSFW, click to see>](880/previews/nude.png) | [<NSFW, click to see>](880/previews/nude2.png) | ![suit-880](880/previews/suit.png) | ![yukata-880](880/previews/yukata.png) |
| 440 | 0.542 | [Download](440/momoshina_fumika_alicegearaegisexpansion.zip) | ![pattern_1-440](440/previews/pattern_1.png) | ![pattern_2-440](440/previews/pattern_2.png) | ![pattern_3-440](440/previews/pattern_3.png) | ![pattern_4-440](440/previews/pattern_4.png) | ![pattern_5-440](440/previews/pattern_5.png) | ![pattern_6-440](440/previews/pattern_6.png) | ![pattern_7-440](440/previews/pattern_7.png) | ![pattern_8-440](440/previews/pattern_8.png) | [<NSFW, click to see>](440/previews/bikini.png) | [<NSFW, click to see>](440/previews/bondage.png) | ![free-440](440/previews/free.png) | ![maid-440](440/previews/maid.png) | ![miko-440](440/previews/miko.png) | [<NSFW, click to see>](440/previews/nude.png) | [<NSFW, click to see>](440/previews/nude2.png) | ![suit-440](440/previews/suit.png) | ![yukata-440](440/previews/yukata.png) |
motmono/q-Taxi-v3
motmono
"2022-10-30T19:40:19Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2022-10-30T19:40:11Z"
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.52 +/- 2.73
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebook that this card format comes from.
model = load_from_hub(repo_id="motmono/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
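For context on what `evaluate_agent` does with the Q-table: the greedy policy simply picks the highest-valued action for the current state. A minimal sketch, assuming the same `model` dict and `env` as above and the 4-tuple `step` API of 2022-era gym:

```python
import numpy as np

state = env.reset()  # in gym>=0.26 / gymnasium, reset() returns (state, info)
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)     # 5-tuple in newer gym versions
```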
varun-v-rao/bert-large-cased-lora-1.58M-snli-model1
varun-v-rao
"2024-02-06T06:57:05Z"
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-large-cased", "base_model:finetune:google-bert/bert-large-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-02-06T03:12:15Z"
--- license: apache-2.0 base_model: bert-large-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-large-cased-lora-1.58M-snli-model1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-lora-1.58M-snli-model1 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8126 - Accuracy: 0.695 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 57 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5065 | 1.0 | 2146 | 0.4147 | 0.8480 | | 0.4613 | 2.0 | 4292 | 0.3828 | 0.8588 | | 0.4464 | 3.0 | 6438 | 0.3717 | 0.8629 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
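The card does not record the LoRA configuration, but a rank-16 adapter on the query and value projections of bert-large-cased adds 24 layers x 2 modules x 16 x (1024 + 1024) = 1,572,864 trainable LoRA parameters, which roughly matches the "1.58M" in the model name. A minimal sketch under that assumption, using peft:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

# SNLI has 3 labels: entailment, neutral, contradiction.
base = AutoModelForSequenceClassification.from_pretrained("bert-large-cased", num_labels=3)

config = LoraConfig(
    r=16,                              # assumption; not recorded in the card
    lora_alpha=32,                     # assumption
    target_modules=["query", "value"],  # assumption
    task_type="SEQ_CLS",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # the LoRA weights alone are ~1.57M; the classifier head adds a bit more
```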
Jonjew/JulianneMoore
Jonjew
"2025-03-07T04:10:12Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:unknown", "region:us" ]
text-to-image
"2025-03-07T04:09:36Z"
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
    Breathtaking over the shoulder shot photography of ohwx looking at viewer,
    imperfections, necklace, looking at viewer, eyelashes, fine hair detail,
    entire hairstyle visible, perfect eyes with iris pattern, sensual lips,
    nose, (perfectly sharp:1.3), realistic textures, (deep focus:1.5), 8k uhd,
    dslr, ultra high quality image, film grain, Fujifilm XT3
  parameters:
    negative_prompt: JulianneMoore_flux_lora_v1_Weight-1.00
  output:
    url: >-
      images/JulianneMoore_flux_lora_v1_Weight-1.00_2025-02-21_2025-02-21-202736_0.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ohwx
license: unknown
---

# Julianne Moore (Flux)

<Gallery />

## Model description

FROM https://civitai.com/models/1281677/julianne-moore-flux

Trigger: ohwx
Strength: 1
Guidance: 2.2-3
Steps (dev): 30-40

👍 *** If you love it, like it! *** 👍

Workflow: https://civitai.com/models/1088678

👑 Julianne Moore

🎬 About my celebrity loras

90% of the dataset used to build my loras uses only head images. That really helps the blend with other loras or models, as there are no hands or feet that may or will interfere in the final image render. When you get distorted hands with a person lora, it's because there is info on hands in the dataset used to train the lora; that will not happen with my loras.

I've trained on Flux.1 Dev, so other merged or trained checkpoints may not work well with my loras. The drawback of that is that the body may not reflect reality. It may not be a drawback, though.

This is a lora for Flux.1 Dev. It works with other models, but you must drop some blocks (a good start is 19-32). Trained with ai-toolkit, so merging it is not easy.

To get the best results:

Guidance: 2.2-3
Steps (dev): 30-40
Daemon detailer (lying sigma sampler): factor: -0.02, start 0.06, end 0.75
Resolution: upscale the latent by 1.25 or 1.5 and you'll get awesome results (takes longer but worth it).

Trigger word is (may work better in certain contexts): ohwx

Enjoy!

## Trigger words

You should use `ohwx` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/Jonjew/JulianneMoore/tree/main) them in the Files & versions tab.
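A minimal sketch of the recommended settings above in diffusers; the repo id comes from this card, while the prompt, VRAM handling, and exact values within the recommended ranges are assumptions:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name may be needed if the repo holds several safetensors files.
pipe.load_lora_weights("Jonjew/JulianneMoore")

image = pipe(
    "Breathtaking photography of ohwx looking at viewer",  # `ohwx` is the trigger word
    guidance_scale=2.5,        # recommended range: 2.2-3
    num_inference_steps=35,    # recommended range (dev): 30-40
).images[0]
image.save("ohwx.png")
```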
rogramss/whisper-tiny_to_british_accent_ae_further_analysis
rogramss
"2025-03-19T18:01:45Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "en", "dataset:british_english_AE_fa", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-03-19T17:05:25Z"
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- british_english_AE_fa
model-index:
- name: Whisper tiny British
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Whisper tiny British

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the British English 'AE' Phonemes dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.3636
- eval_wer: 13.2631
- eval_runtime: 140.0191
- eval_samples_per_second: 3.392
- eval_steps_per_second: 3.392
- epoch: 2.5
- step: 1500

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000

### Framework versions

- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
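For inference, the fine-tuned checkpoint can be used through the standard transformers ASR pipeline; a minimal sketch, where the audio file name is a placeholder:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="rogramss/whisper-tiny_to_british_accent_ae_further_analysis",
)
print(asr("sample.wav")["text"])  # sample.wav is a hypothetical input file
```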
memevis/tryy41
memevis
"2025-01-27T17:35:51Z"
14
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-01-27T17:30:36Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MayBashendy/ASAP_FineTuningBERT_AugV4_k25_task1_organization_fold2
MayBashendy
"2024-11-25T05:12:32Z"
161
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-11-25T04:09:52Z"
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: ASAP_FineTuningBERT_AugV4_k25_task1_organization_fold2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ASAP_FineTuningBERT_AugV4_k25_task1_organization_fold2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8864 - Qwk: 0.3550 - Mse: 0.8864 - Rmse: 0.9415 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 0.0008 | 2 | 9.5211 | 0.0085 | 9.5211 | 3.0856 | | No log | 0.0016 | 4 | 7.7638 | 0.0 | 7.7638 | 2.7864 | | No log | 0.0024 | 6 | 7.1927 | 0.0 | 7.1927 | 2.6819 | | No log | 0.0032 | 8 | 6.4727 | -0.0012 | 6.4727 | 2.5442 | | No log | 0.0041 | 10 | 5.3579 | 0.0007 | 5.3579 | 2.3147 | | No log | 0.0049 | 12 | 4.3309 | 0.0 | 4.3309 | 2.0811 | | No log | 0.0057 | 14 | 3.4456 | 0.0078 | 3.4456 | 1.8562 | | No log | 0.0065 | 16 | 2.8072 | 0.0144 | 2.8072 | 1.6755 | | No log | 0.0073 | 18 | 1.7915 | 0.0449 | 1.7915 | 1.3385 | | No log | 0.0081 | 20 | 1.3687 | 0.0213 | 1.3687 | 1.1699 | | No log | 0.0089 | 22 | 1.0646 | 0.0213 | 1.0646 | 1.0318 | | No log | 0.0097 | 24 | 0.8607 | 0.3235 | 0.8607 | 0.9278 | | No log | 0.0105 | 26 | 0.7966 | 0.0910 | 0.7966 | 0.8925 | | No log | 0.0113 | 28 | 0.8009 | 0.0648 | 0.8009 | 0.8949 | | No log | 0.0122 | 30 | 0.8471 | 0.0648 | 0.8471 | 0.9204 | | No log | 0.0130 | 32 | 0.8928 | 0.0325 | 0.8928 | 0.9449 | | No log | 0.0138 | 34 | 0.9072 | 0.0164 | 0.9072 | 0.9525 | | No log | 0.0146 | 36 | 0.9057 | 0.0 | 0.9057 | 0.9517 | | No log | 0.0154 | 38 | 0.9030 | 0.0 | 0.9030 | 0.9503 | | No log | 0.0162 | 40 | 0.9555 | 0.0164 | 0.9555 | 0.9775 | | No log | 0.0170 | 42 | 1.0194 | 0.0164 | 1.0194 | 1.0096 | | No log | 0.0178 | 44 | 1.1108 | 0.0164 | 1.1108 | 1.0539 | | No log | 0.0186 | 46 | 1.0026 | 0.0164 | 1.0026 | 1.0013 | | No log | 0.0195 | 48 | 0.8869 | 0.0164 | 0.8869 | 0.9417 | | No log | 0.0203 | 50 | 0.8344 | 0.0 | 0.8344 | 0.9135 | | No log | 0.0211 | 52 | 0.8327 | 0.0 | 0.8327 | 0.9125 | | No log | 0.0219 | 54 | 0.8921 | 0.0 | 0.8921 | 0.9445 | | No log | 0.0227 | 56 | 1.2001 | 0.0250 | 1.2001 | 1.0955 | | No log | 0.0235 | 58 | 1.2927 | 0.0559 | 1.2927 | 1.1370 | | No log | 0.0243 | 60 | 0.9422 | 0.0 | 0.9422 | 0.9707 | | No log | 0.0251 | 62 | 0.9049 | 0.0 | 0.9049 | 0.9513 | | No log | 0.0259 | 64 | 0.9336 | 0.0 | 0.9336 | 0.9662 | | No log | 0.0268 | 66 | 1.0105 | 0.0 | 1.0105 | 1.0052 | | No log | 0.0276 | 68 | 1.1159 | 0.0001 | 1.1159 | 1.0564 | | No log | 0.0284 | 70 | 0.9173 | 0.0 | 0.9173 | 0.9578 | | No log | 0.0292 | 72 | 0.8965 | 0.0 | 0.8965 | 0.9468 | | No log | 0.0300 | 74 | 0.9306 | 0.0 | 0.9306 | 0.9647 | | No log | 0.0308 | 76 | 0.9083 | 0.0 | 
0.9083 | 0.9530 | | No log | 0.0316 | 78 | 0.9239 | 0.0075 | 0.9239 | 0.9612 | | No log | 0.0324 | 80 | 1.0984 | 0.1563 | 1.0984 | 1.0481 | | No log | 0.0332 | 82 | 1.0258 | 0.1584 | 1.0258 | 1.0128 | | No log | 0.0340 | 84 | 0.8683 | 0.1654 | 0.8683 | 0.9319 | | No log | 0.0349 | 86 | 0.8879 | 0.1509 | 0.8879 | 0.9423 | | No log | 0.0357 | 88 | 1.0538 | 0.1724 | 1.0538 | 1.0265 | | No log | 0.0365 | 90 | 1.2321 | 0.1690 | 1.2321 | 1.1100 | | No log | 0.0373 | 92 | 0.9262 | 0.1549 | 0.9262 | 0.9624 | | No log | 0.0381 | 94 | 0.8781 | 0.1837 | 0.8781 | 0.9371 | | No log | 0.0389 | 96 | 1.0370 | 0.1922 | 1.0370 | 1.0183 | | No log | 0.0397 | 98 | 1.3008 | 0.1475 | 1.3008 | 1.1405 | | No log | 0.0405 | 100 | 1.4885 | 0.1036 | 1.4885 | 1.2200 | | No log | 0.0413 | 102 | 1.1735 | 0.2053 | 1.1735 | 1.0833 | | No log | 0.0422 | 104 | 0.8042 | 0.0802 | 0.8042 | 0.8968 | | No log | 0.0430 | 106 | 0.7891 | 0.0852 | 0.7891 | 0.8883 | | No log | 0.0438 | 108 | 0.7999 | 0.0296 | 0.7999 | 0.8944 | | No log | 0.0446 | 110 | 0.9821 | 0.0671 | 0.9821 | 0.9910 | | No log | 0.0454 | 112 | 1.3540 | 0.1995 | 1.3540 | 1.1636 | | No log | 0.0462 | 114 | 1.4034 | 0.1837 | 1.4034 | 1.1847 | | No log | 0.0470 | 116 | 1.1613 | 0.1814 | 1.1613 | 1.0776 | | No log | 0.0478 | 118 | 0.8645 | 0.0406 | 0.8645 | 0.9298 | | No log | 0.0486 | 120 | 0.8526 | 0.0684 | 0.8526 | 0.9234 | | No log | 0.0495 | 122 | 1.1374 | 0.1793 | 1.1374 | 1.0665 | | No log | 0.0503 | 124 | 1.4875 | 0.1797 | 1.4875 | 1.2196 | | No log | 0.0511 | 126 | 1.5333 | 0.1617 | 1.5333 | 1.2383 | | No log | 0.0519 | 128 | 1.4762 | 0.1496 | 1.4762 | 1.2150 | | No log | 0.0527 | 130 | 1.3425 | 0.1505 | 1.3425 | 1.1586 | | No log | 0.0535 | 132 | 1.4790 | 0.1362 | 1.4790 | 1.2162 | | No log | 0.0543 | 134 | 1.6983 | 0.0960 | 1.6983 | 1.3032 | | No log | 0.0551 | 136 | 1.6694 | 0.1184 | 1.6694 | 1.2920 | | No log | 0.0559 | 138 | 1.3125 | 0.1790 | 1.3125 | 1.1456 | | No log | 0.0567 | 140 | 0.9324 | 0.2274 | 0.9324 | 0.9656 | | No log | 0.0576 | 142 | 0.9334 | 0.2387 | 0.9334 | 0.9661 | | No log | 0.0584 | 144 | 1.2946 | 0.1722 | 1.2946 | 1.1378 | | No log | 0.0592 | 146 | 1.5237 | 0.1302 | 1.5237 | 1.2344 | | No log | 0.0600 | 148 | 1.2725 | 0.2041 | 1.2725 | 1.1281 | | No log | 0.0608 | 150 | 1.0451 | 0.2604 | 1.0451 | 1.0223 | | No log | 0.0616 | 152 | 0.9733 | 0.2814 | 0.9733 | 0.9865 | | No log | 0.0624 | 154 | 0.8388 | 0.3151 | 0.8388 | 0.9159 | | No log | 0.0632 | 156 | 0.9445 | 0.2922 | 0.9445 | 0.9718 | | No log | 0.0640 | 158 | 1.2082 | 0.2450 | 1.2082 | 1.0992 | | No log | 0.0649 | 160 | 1.2216 | 0.2540 | 1.2216 | 1.1053 | | No log | 0.0657 | 162 | 0.8965 | 0.2518 | 0.8965 | 0.9468 | | No log | 0.0665 | 164 | 0.7406 | 0.1494 | 0.7406 | 0.8606 | | No log | 0.0673 | 166 | 0.7469 | 0.1497 | 0.7469 | 0.8643 | | No log | 0.0681 | 168 | 0.7615 | 0.2034 | 0.7615 | 0.8726 | | No log | 0.0689 | 170 | 0.9080 | 0.2664 | 0.9080 | 0.9529 | | No log | 0.0697 | 172 | 1.3884 | 0.2180 | 1.3884 | 1.1783 | | No log | 0.0705 | 174 | 1.2009 | 0.2437 | 1.2009 | 1.0958 | | No log | 0.0713 | 176 | 0.9239 | 0.2802 | 0.9239 | 0.9612 | | No log | 0.0722 | 178 | 1.0926 | 0.2625 | 1.0926 | 1.0453 | | No log | 0.0730 | 180 | 1.4207 | 0.2061 | 1.4207 | 1.1919 | | No log | 0.0738 | 182 | 1.1721 | 0.2385 | 1.1721 | 1.0827 | | No log | 0.0746 | 184 | 1.1990 | 0.2205 | 1.1990 | 1.0950 | | No log | 0.0754 | 186 | 1.0254 | 0.2137 | 1.0254 | 1.0126 | | No log | 0.0762 | 188 | 0.8561 | 0.1982 | 0.8561 | 0.9253 | | No log | 0.0770 | 190 | 0.9205 | 0.2474 | 0.9205 | 0.9594 | | No log | 
0.0778 | 192 | 1.1717 | 0.1869 | 1.1717 | 1.0824 | | No log | 0.0786 | 194 | 1.4316 | 0.1669 | 1.4316 | 1.1965 | | No log | 0.0794 | 196 | 1.1419 | 0.1964 | 1.1419 | 1.0686 | | No log | 0.0803 | 198 | 0.8386 | 0.2808 | 0.8386 | 0.9157 | | No log | 0.0811 | 200 | 0.8245 | 0.2511 | 0.8245 | 0.9080 | | No log | 0.0819 | 202 | 0.9247 | 0.2598 | 0.9247 | 0.9616 | | No log | 0.0827 | 204 | 0.8609 | 0.3103 | 0.8609 | 0.9279 | | No log | 0.0835 | 206 | 0.9117 | 0.3048 | 0.9117 | 0.9548 | | No log | 0.0843 | 208 | 1.1003 | 0.2614 | 1.1003 | 1.0490 | | No log | 0.0851 | 210 | 0.8235 | 0.3402 | 0.8235 | 0.9075 | | No log | 0.0859 | 212 | 0.7206 | 0.2885 | 0.7206 | 0.8489 | | No log | 0.0867 | 214 | 0.7273 | 0.2624 | 0.7273 | 0.8528 | | No log | 0.0876 | 216 | 0.7721 | 0.2794 | 0.7721 | 0.8787 | | No log | 0.0884 | 218 | 0.7771 | 0.2494 | 0.7771 | 0.8815 | | No log | 0.0892 | 220 | 0.7609 | 0.2048 | 0.7609 | 0.8723 | | No log | 0.0900 | 222 | 0.7760 | 0.1930 | 0.7760 | 0.8809 | | No log | 0.0908 | 224 | 0.8475 | 0.1722 | 0.8475 | 0.9206 | | No log | 0.0916 | 226 | 0.9025 | 0.1739 | 0.9025 | 0.9500 | | No log | 0.0924 | 228 | 0.8373 | 0.2369 | 0.8373 | 0.9150 | | No log | 0.0932 | 230 | 0.8519 | 0.2628 | 0.8519 | 0.9230 | | No log | 0.0940 | 232 | 0.9719 | 0.2743 | 0.9719 | 0.9858 | | No log | 0.0949 | 234 | 1.0164 | 0.2726 | 1.0164 | 1.0082 | | No log | 0.0957 | 236 | 0.8096 | 0.2940 | 0.8096 | 0.8998 | | No log | 0.0965 | 238 | 0.8164 | 0.2987 | 0.8164 | 0.9036 | | No log | 0.0973 | 240 | 1.1212 | 0.2574 | 1.1212 | 1.0589 | | No log | 0.0981 | 242 | 1.2868 | 0.2254 | 1.2868 | 1.1344 | | No log | 0.0989 | 244 | 0.9503 | 0.2740 | 0.9503 | 0.9749 | | No log | 0.0997 | 246 | 0.8232 | 0.1960 | 0.8232 | 0.9073 | | No log | 0.1005 | 248 | 0.8321 | 0.1973 | 0.8321 | 0.9122 | | No log | 0.1013 | 250 | 0.9043 | 0.2290 | 0.9043 | 0.9509 | | No log | 0.1021 | 252 | 1.2968 | 0.1984 | 1.2968 | 1.1388 | | No log | 0.1030 | 254 | 1.2826 | 0.1930 | 1.2826 | 1.1325 | | No log | 0.1038 | 256 | 0.9921 | 0.2212 | 0.9921 | 0.9961 | | No log | 0.1046 | 258 | 0.8900 | 0.1663 | 0.8900 | 0.9434 | | No log | 0.1054 | 260 | 0.9361 | 0.1714 | 0.9361 | 0.9675 | | No log | 0.1062 | 262 | 1.1213 | 0.2056 | 1.1213 | 1.0589 | | No log | 0.1070 | 264 | 1.2654 | 0.1960 | 1.2654 | 1.1249 | | No log | 0.1078 | 266 | 1.1232 | 0.2289 | 1.1232 | 1.0598 | | No log | 0.1086 | 268 | 0.9849 | 0.2136 | 0.9849 | 0.9924 | | No log | 0.1094 | 270 | 1.1004 | 0.2157 | 1.1004 | 1.0490 | | No log | 0.1103 | 272 | 1.3683 | 0.1988 | 1.3683 | 1.1697 | | No log | 0.1111 | 274 | 1.2697 | 0.1941 | 1.2697 | 1.1268 | | No log | 0.1119 | 276 | 1.1292 | 0.2055 | 1.1292 | 1.0626 | | No log | 0.1127 | 278 | 1.0772 | 0.1996 | 1.0772 | 1.0379 | | No log | 0.1135 | 280 | 1.1942 | 0.1848 | 1.1942 | 1.0928 | | No log | 0.1143 | 282 | 1.3719 | 0.1531 | 1.3719 | 1.1713 | | No log | 0.1151 | 284 | 1.0921 | 0.1868 | 1.0921 | 1.0450 | | No log | 0.1159 | 286 | 1.1128 | 0.1879 | 1.1128 | 1.0549 | | No log | 0.1167 | 288 | 1.3590 | 0.1511 | 1.3590 | 1.1657 | | No log | 0.1176 | 290 | 1.4723 | 0.1514 | 1.4723 | 1.2134 | | No log | 0.1184 | 292 | 1.4840 | 0.1467 | 1.4840 | 1.2182 | | No log | 0.1192 | 294 | 1.2196 | 0.1978 | 1.2196 | 1.1043 | | No log | 0.1200 | 296 | 0.9451 | 0.1739 | 0.9451 | 0.9721 | | No log | 0.1208 | 298 | 0.9789 | 0.1786 | 0.9789 | 0.9894 | | No log | 0.1216 | 300 | 1.0237 | 0.2253 | 1.0237 | 1.0118 | | No log | 0.1224 | 302 | 1.2818 | 0.2214 | 1.2818 | 1.1322 | | No log | 0.1232 | 304 | 1.4200 | 0.2055 | 1.4200 | 1.1917 | | No log | 0.1240 | 306 | 
1.0439 | 0.2383 | 1.0439 | 1.0217 | | No log | 0.1248 | 308 | 0.8017 | 0.2183 | 0.8017 | 0.8954 | | No log | 0.1257 | 310 | 0.8045 | 0.2286 | 0.8045 | 0.8969 | | No log | 0.1265 | 312 | 1.0384 | 0.2339 | 1.0384 | 1.0190 | | No log | 0.1273 | 314 | 1.3067 | 0.2233 | 1.3067 | 1.1431 | | No log | 0.1281 | 316 | 1.0476 | 0.2413 | 1.0476 | 1.0235 | | No log | 0.1289 | 318 | 0.8234 | 0.2037 | 0.8234 | 0.9074 | | No log | 0.1297 | 320 | 0.7986 | 0.2109 | 0.7986 | 0.8937 | | No log | 0.1305 | 322 | 0.9317 | 0.2364 | 0.9317 | 0.9653 | | No log | 0.1313 | 324 | 1.1979 | 0.2197 | 1.1979 | 1.0945 | | No log | 0.1321 | 326 | 1.0247 | 0.2571 | 1.0247 | 1.0123 | | No log | 0.1330 | 328 | 0.7914 | 0.2415 | 0.7914 | 0.8896 | | No log | 0.1338 | 330 | 0.7712 | 0.1931 | 0.7712 | 0.8782 | | No log | 0.1346 | 332 | 0.8331 | 0.2360 | 0.8331 | 0.9128 | | No log | 0.1354 | 334 | 0.9862 | 0.2632 | 0.9862 | 0.9931 | | No log | 0.1362 | 336 | 1.1504 | 0.2135 | 1.1504 | 1.0726 | | No log | 0.1370 | 338 | 0.9062 | 0.2811 | 0.9062 | 0.9520 | | No log | 0.1378 | 340 | 0.8195 | 0.2506 | 0.8195 | 0.9052 | | No log | 0.1386 | 342 | 0.8229 | 0.2625 | 0.8229 | 0.9071 | | No log | 0.1394 | 344 | 0.9846 | 0.3087 | 0.9846 | 0.9923 | | No log | 0.1403 | 346 | 0.9594 | 0.2868 | 0.9594 | 0.9795 | | No log | 0.1411 | 348 | 0.7996 | 0.2404 | 0.7996 | 0.8942 | | No log | 0.1419 | 350 | 0.7957 | 0.2499 | 0.7957 | 0.8920 | | No log | 0.1427 | 352 | 0.8933 | 0.2527 | 0.8933 | 0.9452 | | No log | 0.1435 | 354 | 0.8482 | 0.2634 | 0.8482 | 0.9210 | | No log | 0.1443 | 356 | 0.7788 | 0.2659 | 0.7788 | 0.8825 | | No log | 0.1451 | 358 | 0.7845 | 0.2428 | 0.7845 | 0.8857 | | No log | 0.1459 | 360 | 0.7883 | 0.2731 | 0.7883 | 0.8879 | | No log | 0.1467 | 362 | 0.8310 | 0.2785 | 0.8310 | 0.9116 | | No log | 0.1475 | 364 | 0.8319 | 0.2857 | 0.8319 | 0.9121 | | No log | 0.1484 | 366 | 0.7908 | 0.2593 | 0.7908 | 0.8893 | | No log | 0.1492 | 368 | 0.8194 | 0.2318 | 0.8194 | 0.9052 | | No log | 0.1500 | 370 | 0.8003 | 0.2160 | 0.8003 | 0.8946 | | No log | 0.1508 | 372 | 0.8383 | 0.2854 | 0.8383 | 0.9156 | | No log | 0.1516 | 374 | 0.8361 | 0.2787 | 0.8361 | 0.9144 | | No log | 0.1524 | 376 | 0.7749 | 0.2542 | 0.7749 | 0.8803 | | No log | 0.1532 | 378 | 0.8254 | 0.2040 | 0.8254 | 0.9085 | | No log | 0.1540 | 380 | 0.8412 | 0.2267 | 0.8412 | 0.9172 | | No log | 0.1548 | 382 | 0.7438 | 0.2968 | 0.7438 | 0.8624 | | No log | 0.1557 | 384 | 0.7877 | 0.3415 | 0.7877 | 0.8875 | | No log | 0.1565 | 386 | 0.8279 | 0.3333 | 0.8279 | 0.9099 | | No log | 0.1573 | 388 | 0.7186 | 0.3515 | 0.7186 | 0.8477 | | No log | 0.1581 | 390 | 0.7607 | 0.2778 | 0.7607 | 0.8722 | | No log | 0.1589 | 392 | 0.7769 | 0.2683 | 0.7769 | 0.8814 | | No log | 0.1597 | 394 | 0.7069 | 0.3299 | 0.7069 | 0.8408 | | No log | 0.1605 | 396 | 0.7091 | 0.3601 | 0.7091 | 0.8421 | | No log | 0.1613 | 398 | 0.7039 | 0.3409 | 0.7039 | 0.8390 | | No log | 0.1621 | 400 | 0.7179 | 0.3585 | 0.7179 | 0.8473 | | No log | 0.1630 | 402 | 0.7321 | 0.3374 | 0.7321 | 0.8557 | | No log | 0.1638 | 404 | 0.7686 | 0.3583 | 0.7686 | 0.8767 | | No log | 0.1646 | 406 | 0.7637 | 0.3413 | 0.7637 | 0.8739 | | No log | 0.1654 | 408 | 0.7411 | 0.3383 | 0.7411 | 0.8608 | | No log | 0.1662 | 410 | 0.7352 | 0.3274 | 0.7352 | 0.8574 | | No log | 0.1670 | 412 | 0.7548 | 0.3264 | 0.7548 | 0.8688 | | No log | 0.1678 | 414 | 0.8822 | 0.3071 | 0.8822 | 0.9392 | | No log | 0.1686 | 416 | 0.9175 | 0.3171 | 0.9175 | 0.9579 | | No log | 0.1694 | 418 | 0.7634 | 0.3313 | 0.7634 | 0.8738 | | No log | 0.1702 | 420 | 0.7216 | 0.3252 | 
0.7216 | 0.8494 | | No log | 0.1711 | 422 | 0.7332 | 0.3080 | 0.7332 | 0.8562 | | No log | 0.1719 | 424 | 0.7549 | 0.3732 | 0.7549 | 0.8688 | | No log | 0.1727 | 426 | 0.9698 | 0.3516 | 0.9698 | 0.9848 | | No log | 0.1735 | 428 | 0.8549 | 0.3605 | 0.8549 | 0.9246 | | No log | 0.1743 | 430 | 0.6854 | 0.3681 | 0.6854 | 0.8279 | | No log | 0.1751 | 432 | 0.7212 | 0.3046 | 0.7212 | 0.8492 | | No log | 0.1759 | 434 | 0.7133 | 0.3028 | 0.7133 | 0.8446 | | No log | 0.1767 | 436 | 0.6673 | 0.3231 | 0.6673 | 0.8169 | | No log | 0.1775 | 438 | 0.7067 | 0.3245 | 0.7067 | 0.8406 | | No log | 0.1784 | 440 | 0.6928 | 0.3203 | 0.6928 | 0.8324 | | No log | 0.1792 | 442 | 0.7009 | 0.3174 | 0.7009 | 0.8372 | | No log | 0.1800 | 444 | 0.7195 | 0.3302 | 0.7195 | 0.8482 | | No log | 0.1808 | 446 | 0.8087 | 0.3160 | 0.8087 | 0.8993 | | No log | 0.1816 | 448 | 0.8538 | 0.3002 | 0.8538 | 0.9240 | | No log | 0.1824 | 450 | 0.7595 | 0.2963 | 0.7595 | 0.8715 | | No log | 0.1832 | 452 | 0.7829 | 0.2588 | 0.7829 | 0.8848 | | No log | 0.1840 | 454 | 0.7573 | 0.2883 | 0.7573 | 0.8702 | | No log | 0.1848 | 456 | 0.8051 | 0.2721 | 0.8051 | 0.8973 | | No log | 0.1857 | 458 | 0.8523 | 0.2872 | 0.8523 | 0.9232 | | No log | 0.1865 | 460 | 0.7410 | 0.3264 | 0.7410 | 0.8608 | | No log | 0.1873 | 462 | 0.7136 | 0.3443 | 0.7136 | 0.8447 | | No log | 0.1881 | 464 | 0.7115 | 0.3424 | 0.7115 | 0.8435 | | No log | 0.1889 | 466 | 0.7652 | 0.3633 | 0.7652 | 0.8748 | | No log | 0.1897 | 468 | 0.7898 | 0.3585 | 0.7898 | 0.8887 | | No log | 0.1905 | 470 | 0.7537 | 0.3257 | 0.7537 | 0.8682 | | No log | 0.1913 | 472 | 0.7553 | 0.3522 | 0.7553 | 0.8691 | | No log | 0.1921 | 474 | 0.8120 | 0.3425 | 0.8120 | 0.9011 | | No log | 0.1929 | 476 | 0.7625 | 0.3299 | 0.7625 | 0.8732 | | No log | 0.1938 | 478 | 0.6969 | 0.3343 | 0.6969 | 0.8348 | | No log | 0.1946 | 480 | 0.7400 | 0.3047 | 0.7400 | 0.8602 | | No log | 0.1954 | 482 | 0.7084 | 0.3224 | 0.7084 | 0.8416 | | No log | 0.1962 | 484 | 0.6966 | 0.3188 | 0.6966 | 0.8346 | | No log | 0.1970 | 486 | 0.7850 | 0.3462 | 0.7850 | 0.8860 | | No log | 0.1978 | 488 | 0.7492 | 0.3324 | 0.7492 | 0.8656 | | No log | 0.1986 | 490 | 0.7379 | 0.3049 | 0.7379 | 0.8590 | | No log | 0.1994 | 492 | 0.7495 | 0.2910 | 0.7495 | 0.8657 | | No log | 0.2002 | 494 | 0.7539 | 0.3325 | 0.7539 | 0.8683 | | No log | 0.2011 | 496 | 0.7623 | 0.3659 | 0.7623 | 0.8731 | | No log | 0.2019 | 498 | 0.6939 | 0.3530 | 0.6939 | 0.8330 | | 0.934 | 0.2027 | 500 | 0.7732 | 0.3011 | 0.7732 | 0.8793 | | 0.934 | 0.2035 | 502 | 0.7525 | 0.2977 | 0.7525 | 0.8674 | | 0.934 | 0.2043 | 504 | 0.6796 | 0.3690 | 0.6796 | 0.8244 | | 0.934 | 0.2051 | 506 | 0.7122 | 0.3229 | 0.7122 | 0.8439 | | 0.934 | 0.2059 | 508 | 0.7012 | 0.3381 | 0.7012 | 0.8374 | | 0.934 | 0.2067 | 510 | 0.7835 | 0.2812 | 0.7835 | 0.8851 | | 0.934 | 0.2075 | 512 | 0.8539 | 0.2447 | 0.8539 | 0.9241 | | 0.934 | 0.2084 | 514 | 0.7376 | 0.3324 | 0.7376 | 0.8588 | | 0.934 | 0.2092 | 516 | 0.7424 | 0.3530 | 0.7424 | 0.8616 | | 0.934 | 0.2100 | 518 | 0.7216 | 0.3606 | 0.7216 | 0.8495 | | 0.934 | 0.2108 | 520 | 0.6917 | 0.3578 | 0.6917 | 0.8317 | | 0.934 | 0.2116 | 522 | 0.6888 | 0.3626 | 0.6888 | 0.8300 | | 0.934 | 0.2124 | 524 | 0.6776 | 0.3776 | 0.6776 | 0.8232 | | 0.934 | 0.2132 | 526 | 0.6715 | 0.3753 | 0.6715 | 0.8194 | | 0.934 | 0.2140 | 528 | 0.6793 | 0.3954 | 0.6793 | 0.8242 | | 0.934 | 0.2148 | 530 | 0.7225 | 0.4331 | 0.7225 | 0.8500 | | 0.934 | 0.2156 | 532 | 0.6937 | 0.4113 | 0.6937 | 0.8329 | | 0.934 | 0.2165 | 534 | 0.6909 | 0.4100 | 0.6909 | 0.8312 | | 0.934 | 0.2173 | 
536 | 0.6858 | 0.4127 | 0.6858 | 0.8281 | | 0.934 | 0.2181 | 538 | 0.6928 | 0.3865 | 0.6928 | 0.8323 | | 0.934 | 0.2189 | 540 | 0.6935 | 0.3767 | 0.6935 | 0.8328 | | 0.934 | 0.2197 | 542 | 0.7385 | 0.3379 | 0.7385 | 0.8593 | | 0.934 | 0.2205 | 544 | 0.7458 | 0.3394 | 0.7458 | 0.8636 | | 0.934 | 0.2213 | 546 | 0.7457 | 0.3384 | 0.7457 | 0.8635 | | 0.934 | 0.2221 | 548 | 0.7649 | 0.3267 | 0.7649 | 0.8746 | | 0.934 | 0.2229 | 550 | 0.7700 | 0.3611 | 0.7700 | 0.8775 | | 0.934 | 0.2238 | 552 | 0.7702 | 0.3446 | 0.7702 | 0.8776 | | 0.934 | 0.2246 | 554 | 0.7567 | 0.3477 | 0.7567 | 0.8699 | | 0.934 | 0.2254 | 556 | 0.7433 | 0.3535 | 0.7433 | 0.8621 | | 0.934 | 0.2262 | 558 | 0.8157 | 0.3443 | 0.8157 | 0.9032 | | 0.934 | 0.2270 | 560 | 0.8270 | 0.3320 | 0.8270 | 0.9094 | | 0.934 | 0.2278 | 562 | 0.7428 | 0.3275 | 0.7428 | 0.8618 | | 0.934 | 0.2286 | 564 | 0.8466 | 0.2571 | 0.8466 | 0.9201 | | 0.934 | 0.2294 | 566 | 0.8050 | 0.2791 | 0.8050 | 0.8972 | | 0.934 | 0.2302 | 568 | 0.7260 | 0.3424 | 0.7260 | 0.8521 | | 0.934 | 0.2310 | 570 | 0.7500 | 0.3789 | 0.7500 | 0.8660 | | 0.934 | 0.2319 | 572 | 0.7383 | 0.4091 | 0.7383 | 0.8593 | | 0.934 | 0.2327 | 574 | 0.6799 | 0.3774 | 0.6799 | 0.8246 | | 0.934 | 0.2335 | 576 | 0.7321 | 0.3665 | 0.7321 | 0.8556 | | 0.934 | 0.2343 | 578 | 0.6742 | 0.3778 | 0.6742 | 0.8211 | | 0.934 | 0.2351 | 580 | 0.7255 | 0.4219 | 0.7255 | 0.8518 | | 0.934 | 0.2359 | 582 | 0.8879 | 0.4102 | 0.8879 | 0.9423 | | 0.934 | 0.2367 | 584 | 0.7819 | 0.4286 | 0.7819 | 0.8842 | | 0.934 | 0.2375 | 586 | 0.6981 | 0.3954 | 0.6981 | 0.8355 | | 0.934 | 0.2383 | 588 | 0.6966 | 0.3757 | 0.6966 | 0.8346 | | 0.934 | 0.2392 | 590 | 0.7253 | 0.3962 | 0.7253 | 0.8516 | | 0.934 | 0.2400 | 592 | 0.8609 | 0.4052 | 0.8609 | 0.9278 | | 0.934 | 0.2408 | 594 | 0.7854 | 0.4049 | 0.7854 | 0.8862 | | 0.934 | 0.2416 | 596 | 0.7298 | 0.3740 | 0.7298 | 0.8543 | | 0.934 | 0.2424 | 598 | 0.7518 | 0.3440 | 0.7518 | 0.8671 | | 0.934 | 0.2432 | 600 | 0.7397 | 0.3631 | 0.7397 | 0.8601 | | 0.934 | 0.2440 | 602 | 0.7354 | 0.3522 | 0.7354 | 0.8576 | | 0.934 | 0.2448 | 604 | 0.7321 | 0.3499 | 0.7321 | 0.8556 | | 0.934 | 0.2456 | 606 | 0.7283 | 0.3426 | 0.7283 | 0.8534 | | 0.934 | 0.2465 | 608 | 0.7193 | 0.3408 | 0.7193 | 0.8481 | | 0.934 | 0.2473 | 610 | 0.7143 | 0.3347 | 0.7143 | 0.8451 | | 0.934 | 0.2481 | 612 | 0.7169 | 0.3374 | 0.7169 | 0.8467 | | 0.934 | 0.2489 | 614 | 0.7232 | 0.3311 | 0.7232 | 0.8504 | | 0.934 | 0.2497 | 616 | 0.7712 | 0.3539 | 0.7712 | 0.8782 | | 0.934 | 0.2505 | 618 | 0.8304 | 0.4071 | 0.8304 | 0.9113 | | 0.934 | 0.2513 | 620 | 0.7238 | 0.3798 | 0.7238 | 0.8508 | | 0.934 | 0.2521 | 622 | 0.6872 | 0.3748 | 0.6872 | 0.8290 | | 0.934 | 0.2529 | 624 | 0.7574 | 0.4173 | 0.7574 | 0.8703 | | 0.934 | 0.2537 | 626 | 0.7265 | 0.3957 | 0.7265 | 0.8523 | | 0.934 | 0.2546 | 628 | 0.6639 | 0.3974 | 0.6639 | 0.8148 | | 0.934 | 0.2554 | 630 | 0.6860 | 0.3997 | 0.6860 | 0.8283 | | 0.934 | 0.2562 | 632 | 0.7070 | 0.4128 | 0.7070 | 0.8408 | | 0.934 | 0.2570 | 634 | 0.6750 | 0.4125 | 0.6750 | 0.8216 | | 0.934 | 0.2578 | 636 | 0.6760 | 0.4161 | 0.6760 | 0.8222 | | 0.934 | 0.2586 | 638 | 0.7038 | 0.3801 | 0.7038 | 0.8389 | | 0.934 | 0.2594 | 640 | 0.6844 | 0.3880 | 0.6844 | 0.8273 | | 0.934 | 0.2602 | 642 | 0.7153 | 0.4000 | 0.7153 | 0.8457 | | 0.934 | 0.2610 | 644 | 0.7135 | 0.3602 | 0.7135 | 0.8447 | | 0.934 | 0.2619 | 646 | 0.7275 | 0.3557 | 0.7275 | 0.8529 | | 0.934 | 0.2627 | 648 | 0.7135 | 0.3208 | 0.7135 | 0.8447 | | 0.934 | 0.2635 | 650 | 0.7139 | 0.3283 | 0.7139 | 0.8449 | | 0.934 | 0.2643 | 652 | 0.7229 | 
0.3050 | 0.7229 | 0.8503 | | 0.934 | 0.2651 | 654 | 0.7387 | 0.3294 | 0.7387 | 0.8595 | | 0.934 | 0.2659 | 656 | 0.7159 | 0.3029 | 0.7159 | 0.8461 | | 0.934 | 0.2667 | 658 | 0.7207 | 0.3032 | 0.7207 | 0.8489 | | 0.934 | 0.2675 | 660 | 0.7676 | 0.3484 | 0.7676 | 0.8761 | | 0.934 | 0.2683 | 662 | 0.9858 | 0.3364 | 0.9858 | 0.9929 | | 0.934 | 0.2692 | 664 | 0.9152 | 0.3425 | 0.9152 | 0.9566 | | 0.934 | 0.2700 | 666 | 0.7322 | 0.3056 | 0.7322 | 0.8557 | | 0.934 | 0.2708 | 668 | 0.7542 | 0.3232 | 0.7542 | 0.8684 | | 0.934 | 0.2716 | 670 | 0.7494 | 0.3128 | 0.7494 | 0.8657 | | 0.934 | 0.2724 | 672 | 0.7823 | 0.3356 | 0.7823 | 0.8845 | | 0.934 | 0.2732 | 674 | 0.9038 | 0.3323 | 0.9038 | 0.9507 | | 0.934 | 0.2740 | 676 | 0.8235 | 0.3517 | 0.8235 | 0.9075 | | 0.934 | 0.2748 | 678 | 0.7770 | 0.3484 | 0.7770 | 0.8815 | | 0.934 | 0.2756 | 680 | 0.8314 | 0.3779 | 0.8314 | 0.9118 | | 0.934 | 0.2764 | 682 | 0.7419 | 0.3595 | 0.7419 | 0.8614 | | 0.934 | 0.2773 | 684 | 0.7566 | 0.3847 | 0.7566 | 0.8698 | | 0.934 | 0.2781 | 686 | 0.7483 | 0.3807 | 0.7483 | 0.8651 | | 0.934 | 0.2789 | 688 | 0.7094 | 0.3987 | 0.7094 | 0.8422 | | 0.934 | 0.2797 | 690 | 0.7608 | 0.3915 | 0.7608 | 0.8722 | | 0.934 | 0.2805 | 692 | 0.7558 | 0.3871 | 0.7558 | 0.8694 | | 0.934 | 0.2813 | 694 | 0.6837 | 0.3781 | 0.6837 | 0.8269 | | 0.934 | 0.2821 | 696 | 0.6994 | 0.3924 | 0.6994 | 0.8363 | | 0.934 | 0.2829 | 698 | 0.7135 | 0.3682 | 0.7135 | 0.8447 | | 0.934 | 0.2837 | 700 | 0.7303 | 0.3646 | 0.7303 | 0.8546 | | 0.934 | 0.2846 | 702 | 0.9187 | 0.3878 | 0.9187 | 0.9585 | | 0.934 | 0.2854 | 704 | 1.0355 | 0.3343 | 1.0355 | 1.0176 | | 0.934 | 0.2862 | 706 | 0.8372 | 0.3466 | 0.8372 | 0.9150 | | 0.934 | 0.2870 | 708 | 0.7284 | 0.3352 | 0.7284 | 0.8534 | | 0.934 | 0.2878 | 710 | 0.7340 | 0.3405 | 0.7340 | 0.8567 | | 0.934 | 0.2886 | 712 | 0.7525 | 0.3564 | 0.7525 | 0.8675 | | 0.934 | 0.2894 | 714 | 0.8538 | 0.3778 | 0.8538 | 0.9240 | | 0.934 | 0.2902 | 716 | 0.8531 | 0.3515 | 0.8531 | 0.9236 | | 0.934 | 0.2910 | 718 | 0.7635 | 0.3579 | 0.7635 | 0.8738 | | 0.934 | 0.2919 | 720 | 0.6918 | 0.3829 | 0.6918 | 0.8317 | | 0.934 | 0.2927 | 722 | 0.6950 | 0.3832 | 0.6950 | 0.8337 | | 0.934 | 0.2935 | 724 | 0.7626 | 0.3854 | 0.7626 | 0.8733 | | 0.934 | 0.2943 | 726 | 0.7315 | 0.3791 | 0.7315 | 0.8553 | | 0.934 | 0.2951 | 728 | 0.7178 | 0.3937 | 0.7178 | 0.8472 | | 0.934 | 0.2959 | 730 | 0.6627 | 0.3991 | 0.6627 | 0.8141 | | 0.934 | 0.2967 | 732 | 0.6847 | 0.4048 | 0.6847 | 0.8275 | | 0.934 | 0.2975 | 734 | 0.6678 | 0.4100 | 0.6678 | 0.8172 | | 0.934 | 0.2983 | 736 | 0.6926 | 0.4379 | 0.6926 | 0.8322 | | 0.934 | 0.2991 | 738 | 0.6459 | 0.4401 | 0.6459 | 0.8037 | | 0.934 | 0.3000 | 740 | 0.6589 | 0.4072 | 0.6589 | 0.8117 | | 0.934 | 0.3008 | 742 | 0.6587 | 0.3983 | 0.6587 | 0.8116 | | 0.934 | 0.3016 | 744 | 0.6703 | 0.4146 | 0.6703 | 0.8187 | | 0.934 | 0.3024 | 746 | 0.6995 | 0.4391 | 0.6995 | 0.8364 | | 0.934 | 0.3032 | 748 | 0.6893 | 0.4523 | 0.6893 | 0.8302 | | 0.934 | 0.3040 | 750 | 0.6745 | 0.4088 | 0.6745 | 0.8213 | | 0.934 | 0.3048 | 752 | 0.6554 | 0.4272 | 0.6554 | 0.8096 | | 0.934 | 0.3056 | 754 | 0.6356 | 0.4144 | 0.6356 | 0.7973 | | 0.934 | 0.3064 | 756 | 0.6508 | 0.4217 | 0.6508 | 0.8067 | | 0.934 | 0.3073 | 758 | 0.6540 | 0.3948 | 0.6540 | 0.8087 | | 0.934 | 0.3081 | 760 | 0.6684 | 0.3842 | 0.6684 | 0.8176 | | 0.934 | 0.3089 | 762 | 0.6597 | 0.4044 | 0.6597 | 0.8122 | | 0.934 | 0.3097 | 764 | 0.6868 | 0.4166 | 0.6868 | 0.8287 | | 0.934 | 0.3105 | 766 | 0.7299 | 0.4065 | 0.7299 | 0.8543 | | 0.934 | 0.3113 | 768 | 0.7622 | 0.3902 | 0.7622 
| 0.8731 | | 0.934 | 0.3121 | 770 | 0.7281 | 0.3694 | 0.7281 | 0.8533 | | 0.934 | 0.3129 | 772 | 0.7006 | 0.3761 | 0.7006 | 0.8370 | | 0.934 | 0.3137 | 774 | 0.7397 | 0.3228 | 0.7397 | 0.8600 | | 0.934 | 0.3146 | 776 | 0.7176 | 0.3338 | 0.7176 | 0.8471 | | 0.934 | 0.3154 | 778 | 0.7316 | 0.3787 | 0.7316 | 0.8553 | | 0.934 | 0.3162 | 780 | 0.9242 | 0.3504 | 0.9242 | 0.9613 | | 0.934 | 0.3170 | 782 | 0.8678 | 0.3709 | 0.8678 | 0.9315 | | 0.934 | 0.3178 | 784 | 0.7681 | 0.3886 | 0.7681 | 0.8764 | | 0.934 | 0.3186 | 786 | 0.7731 | 0.3373 | 0.7731 | 0.8793 | | 0.934 | 0.3194 | 788 | 0.8053 | 0.3653 | 0.8053 | 0.8974 | | 0.934 | 0.3202 | 790 | 0.8751 | 0.3961 | 0.8751 | 0.9355 | | 0.934 | 0.3210 | 792 | 0.8788 | 0.4033 | 0.8788 | 0.9374 | | 0.934 | 0.3218 | 794 | 0.8115 | 0.4170 | 0.8115 | 0.9008 | | 0.934 | 0.3227 | 796 | 0.8006 | 0.4214 | 0.8006 | 0.8947 | | 0.934 | 0.3235 | 798 | 0.7455 | 0.4489 | 0.7455 | 0.8634 | | 0.934 | 0.3243 | 800 | 0.7228 | 0.4605 | 0.7228 | 0.8502 | | 0.934 | 0.3251 | 802 | 0.7383 | 0.4564 | 0.7383 | 0.8592 | | 0.934 | 0.3259 | 804 | 0.7061 | 0.4571 | 0.7061 | 0.8403 | | 0.934 | 0.3267 | 806 | 0.7464 | 0.3885 | 0.7464 | 0.8639 | | 0.934 | 0.3275 | 808 | 0.7697 | 0.3594 | 0.7697 | 0.8773 | | 0.934 | 0.3283 | 810 | 0.7060 | 0.4426 | 0.7060 | 0.8403 | | 0.934 | 0.3291 | 812 | 0.7866 | 0.4255 | 0.7866 | 0.8869 | | 0.934 | 0.3300 | 814 | 0.7162 | 0.4573 | 0.7162 | 0.8463 | | 0.934 | 0.3308 | 816 | 0.6738 | 0.3894 | 0.6738 | 0.8209 | | 0.934 | 0.3316 | 818 | 0.6648 | 0.3906 | 0.6648 | 0.8153 | | 0.934 | 0.3324 | 820 | 0.6586 | 0.4513 | 0.6586 | 0.8116 | | 0.934 | 0.3332 | 822 | 0.6727 | 0.4380 | 0.6727 | 0.8202 | | 0.934 | 0.3340 | 824 | 0.7372 | 0.4109 | 0.7372 | 0.8586 | | 0.934 | 0.3348 | 826 | 0.7019 | 0.4102 | 0.7019 | 0.8378 | | 0.934 | 0.3356 | 828 | 0.7158 | 0.4192 | 0.7158 | 0.8460 | | 0.934 | 0.3364 | 830 | 0.7349 | 0.3643 | 0.7349 | 0.8573 | | 0.934 | 0.3373 | 832 | 0.7396 | 0.4205 | 0.7396 | 0.8600 | | 0.934 | 0.3381 | 834 | 0.7927 | 0.4165 | 0.7927 | 0.8903 | | 0.934 | 0.3389 | 836 | 0.7650 | 0.4165 | 0.7650 | 0.8747 | | 0.934 | 0.3397 | 838 | 0.7539 | 0.3790 | 0.7539 | 0.8682 | | 0.934 | 0.3405 | 840 | 0.7319 | 0.3929 | 0.7319 | 0.8555 | | 0.934 | 0.3413 | 842 | 0.7179 | 0.4108 | 0.7179 | 0.8473 | | 0.934 | 0.3421 | 844 | 0.7300 | 0.3815 | 0.7300 | 0.8544 | | 0.934 | 0.3429 | 846 | 0.7551 | 0.3745 | 0.7551 | 0.8690 | | 0.934 | 0.3437 | 848 | 0.6887 | 0.4012 | 0.6887 | 0.8299 | | 0.934 | 0.3445 | 850 | 0.6794 | 0.4001 | 0.6794 | 0.8243 | | 0.934 | 0.3454 | 852 | 0.7130 | 0.3759 | 0.7130 | 0.8444 | | 0.934 | 0.3462 | 854 | 0.7182 | 0.3759 | 0.7182 | 0.8475 | | 0.934 | 0.3470 | 856 | 0.6991 | 0.3711 | 0.6991 | 0.8361 | | 0.934 | 0.3478 | 858 | 0.6846 | 0.3777 | 0.6846 | 0.8274 | | 0.934 | 0.3486 | 860 | 0.6650 | 0.4095 | 0.6650 | 0.8155 | | 0.934 | 0.3494 | 862 | 0.7067 | 0.3623 | 0.7067 | 0.8407 | | 0.934 | 0.3502 | 864 | 0.6873 | 0.3854 | 0.6873 | 0.8291 | | 0.934 | 0.3510 | 866 | 0.7201 | 0.4202 | 0.7201 | 0.8486 | | 0.934 | 0.3518 | 868 | 0.7567 | 0.3864 | 0.7567 | 0.8699 | | 0.934 | 0.3527 | 870 | 0.6998 | 0.3632 | 0.6998 | 0.8365 | | 0.934 | 0.3535 | 872 | 0.7986 | 0.3108 | 0.7986 | 0.8936 | | 0.934 | 0.3543 | 874 | 0.7564 | 0.3029 | 0.7564 | 0.8697 | | 0.934 | 0.3551 | 876 | 0.6892 | 0.3554 | 0.6892 | 0.8302 | | 0.934 | 0.3559 | 878 | 0.7357 | 0.3721 | 0.7357 | 0.8577 | | 0.934 | 0.3567 | 880 | 0.7102 | 0.3634 | 0.7102 | 0.8428 | | 0.934 | 0.3575 | 882 | 0.7307 | 0.3191 | 0.7307 | 0.8548 | | 0.934 | 0.3583 | 884 | 0.7217 | 0.3315 | 0.7217 | 0.8495 | | 
0.934 | 0.3591 | 886 | 0.7698 | 0.3531 | 0.7698 | 0.8774 | | 0.934 | 0.3600 | 888 | 0.7576 | 0.3670 | 0.7576 | 0.8704 | | 0.934 | 0.3608 | 890 | 0.7406 | 0.3426 | 0.7406 | 0.8606 | | 0.934 | 0.3616 | 892 | 0.7359 | 0.3247 | 0.7359 | 0.8578 | | 0.934 | 0.3624 | 894 | 0.6944 | 0.3896 | 0.6944 | 0.8333 | | 0.934 | 0.3632 | 896 | 0.7006 | 0.3803 | 0.7006 | 0.8370 | | 0.934 | 0.3640 | 898 | 0.6862 | 0.3855 | 0.6862 | 0.8284 | | 0.934 | 0.3648 | 900 | 0.7162 | 0.4059 | 0.7162 | 0.8463 | | 0.934 | 0.3656 | 902 | 0.7137 | 0.4027 | 0.7137 | 0.8448 | | 0.934 | 0.3664 | 904 | 0.7440 | 0.3940 | 0.7440 | 0.8626 | | 0.934 | 0.3672 | 906 | 0.7479 | 0.3840 | 0.7479 | 0.8648 | | 0.934 | 0.3681 | 908 | 0.7836 | 0.3394 | 0.7836 | 0.8852 | | 0.934 | 0.3689 | 910 | 0.7746 | 0.3592 | 0.7746 | 0.8801 | | 0.934 | 0.3697 | 912 | 0.7737 | 0.3481 | 0.7737 | 0.8796 | | 0.934 | 0.3705 | 914 | 0.7739 | 0.3487 | 0.7739 | 0.8797 | | 0.934 | 0.3713 | 916 | 0.7981 | 0.3712 | 0.7981 | 0.8934 | | 0.934 | 0.3721 | 918 | 0.7247 | 0.3931 | 0.7247 | 0.8513 | | 0.934 | 0.3729 | 920 | 0.7135 | 0.3925 | 0.7135 | 0.8447 | | 0.934 | 0.3737 | 922 | 0.8336 | 0.3658 | 0.8336 | 0.9130 | | 0.934 | 0.3745 | 924 | 0.7210 | 0.3872 | 0.7210 | 0.8491 | | 0.934 | 0.3754 | 926 | 0.6977 | 0.4294 | 0.6977 | 0.8353 | | 0.934 | 0.3762 | 928 | 0.7136 | 0.4352 | 0.7136 | 0.8448 | | 0.934 | 0.3770 | 930 | 0.6788 | 0.4170 | 0.6788 | 0.8239 | | 0.934 | 0.3778 | 932 | 0.6737 | 0.4328 | 0.6737 | 0.8208 | | 0.934 | 0.3786 | 934 | 0.7061 | 0.3858 | 0.7061 | 0.8403 | | 0.934 | 0.3794 | 936 | 0.7671 | 0.3646 | 0.7671 | 0.8759 | | 0.934 | 0.3802 | 938 | 0.7219 | 0.3971 | 0.7219 | 0.8496 | | 0.934 | 0.3810 | 940 | 0.7239 | 0.3835 | 0.7239 | 0.8508 | | 0.934 | 0.3818 | 942 | 0.7367 | 0.3564 | 0.7367 | 0.8583 | | 0.934 | 0.3827 | 944 | 0.7590 | 0.3293 | 0.7590 | 0.8712 | | 0.934 | 0.3835 | 946 | 0.7166 | 0.3716 | 0.7166 | 0.8465 | | 0.934 | 0.3843 | 948 | 0.7190 | 0.3648 | 0.7190 | 0.8479 | | 0.934 | 0.3851 | 950 | 0.7217 | 0.3541 | 0.7217 | 0.8495 | | 0.934 | 0.3859 | 952 | 0.7924 | 0.3143 | 0.7924 | 0.8901 | | 0.934 | 0.3867 | 954 | 0.7955 | 0.2995 | 0.7955 | 0.8919 | | 0.934 | 0.3875 | 956 | 0.7022 | 0.3545 | 0.7022 | 0.8380 | | 0.934 | 0.3883 | 958 | 0.7773 | 0.3942 | 0.7773 | 0.8817 | | 0.934 | 0.3891 | 960 | 0.7980 | 0.3874 | 0.7980 | 0.8933 | | 0.934 | 0.3899 | 962 | 0.7043 | 0.3479 | 0.7043 | 0.8392 | | 0.934 | 0.3908 | 964 | 0.7732 | 0.2805 | 0.7732 | 0.8793 | | 0.934 | 0.3916 | 966 | 0.7964 | 0.2759 | 0.7964 | 0.8924 | | 0.934 | 0.3924 | 968 | 0.7550 | 0.3503 | 0.7550 | 0.8689 | | 0.934 | 0.3932 | 970 | 0.8387 | 0.3502 | 0.8387 | 0.9158 | | 0.934 | 0.3940 | 972 | 0.8067 | 0.3315 | 0.8067 | 0.8981 | | 0.934 | 0.3948 | 974 | 0.7669 | 0.3488 | 0.7669 | 0.8757 | | 0.934 | 0.3956 | 976 | 0.7505 | 0.3727 | 0.7505 | 0.8663 | | 0.934 | 0.3964 | 978 | 0.7314 | 0.3617 | 0.7314 | 0.8552 | | 0.934 | 0.3972 | 980 | 0.7958 | 0.3636 | 0.7958 | 0.8921 | | 0.934 | 0.3981 | 982 | 0.8442 | 0.3637 | 0.8442 | 0.9188 | | 0.934 | 0.3989 | 984 | 0.7538 | 0.3734 | 0.7538 | 0.8682 | | 0.934 | 0.3997 | 986 | 0.7322 | 0.4015 | 0.7322 | 0.8557 | | 0.934 | 0.4005 | 988 | 0.7365 | 0.4021 | 0.7365 | 0.8582 | | 0.934 | 0.4013 | 990 | 0.7283 | 0.3794 | 0.7283 | 0.8534 | | 0.934 | 0.4021 | 992 | 0.7212 | 0.3842 | 0.7212 | 0.8492 | | 0.934 | 0.4029 | 994 | 0.7096 | 0.3888 | 0.7096 | 0.8424 | | 0.934 | 0.4037 | 996 | 0.7261 | 0.3938 | 0.7261 | 0.8521 | | 0.934 | 0.4045 | 998 | 0.7677 | 0.3873 | 0.7677 | 0.8762 | | 0.3836 | 0.4054 | 1000 | 0.8145 | 0.3742 | 0.8145 | 0.9025 | | 0.3836 | 
0.4062 | 1002 | 0.8123 | 0.3717 | 0.8123 | 0.9013 | | 0.3836 | 0.4070 | 1004 | 0.7417 | 0.3490 | 0.7417 | 0.8612 | | 0.3836 | 0.4078 | 1006 | 0.7529 | 0.3642 | 0.7529 | 0.8677 | | 0.3836 | 0.4086 | 1008 | 0.7902 | 0.3288 | 0.7902 | 0.8889 | | 0.3836 | 0.4094 | 1010 | 0.8864 | 0.3550 | 0.8864 | 0.9415 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
ThuyNT03/CS505-Dev-CSI-xlm-align-base
ThuyNT03
"2024-07-03T21:49:21Z"
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:microsoft/xlm-align-base", "base_model:finetune:microsoft/xlm-align-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-07-03T21:27:18Z"
--- base_model: microsoft/xlm-align-base tags: - generated_from_trainer model-index: - name: CS505-Dev-CSI-xlm-align-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505-Dev-CSI-xlm-align-base This model is a fine-tuned version of [microsoft/xlm-align-base](https://huggingface.co/microsoft/xlm-align-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results ### Framework versions - Transformers 4.42.3 - Pytorch 2.3.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
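A minimal inference sketch, assuming the checkpoint works with transformers' standard text-classification pipeline (the task tag above suggests it does); the example sentence and the LABEL_* names are placeholders, since the training data and label set are not documented:

```python
from transformers import pipeline

# Load the fine-tuned classifier; labels come from the checkpoint's config.
clf = pipeline("text-classification", model="ThuyNT03/CS505-Dev-CSI-xlm-align-base")

# Placeholder input; actual label names (e.g. LABEL_0) depend on training.
print(clf("Example sentence to classify."))
```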
mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF
mradermacher
"2025-01-08T11:33:28Z"
31
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:zelk12/MT2-MMMAGMU-gemma-2-9B", "base_model:quantized:zelk12/MT2-MMMAGMU-gemma-2-9B", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-01-08T10:57:26Z"
--- base_model: zelk12/MT2-MMMAGMU-gemma-2-9B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/zelk12/MT2-MMMAGMU-gemma-2-9B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF/resolve/main/MT2-MMMAGMU-gemma-2-9B.Q2_K.gguf) | Q2_K | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF/resolve/main/MT2-MMMAGMU-gemma-2-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF/resolve/main/MT2-MMMAGMU-gemma-2-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF/resolve/main/MT2-MMMAGMU-gemma-2-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF/resolve/main/MT2-MMMAGMU-gemma-2-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF/resolve/main/MT2-MMMAGMU-gemma-2-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF/resolve/main/MT2-MMMAGMU-gemma-2-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF/resolve/main/MT2-MMMAGMU-gemma-2-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF/resolve/main/MT2-MMMAGMU-gemma-2-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF/resolve/main/MT2-MMMAGMU-gemma-2-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF/resolve/main/MT2-MMMAGMU-gemma-2-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF/resolve/main/MT2-MMMAGMU-gemma-2-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
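A minimal download sketch using the quant marked "fast, recommended" in the table above; the runtime command in the comment follows llama.cpp's standard CLI:

```python
from huggingface_hub import hf_hub_download

# Fetch one quant file from the table above, then run it with any
# GGUF-capable runtime, e.g.: llama-cli -m <path> -p "Hello"
path = hf_hub_download(
    repo_id="mradermacher/MT2-MMMAGMU-gemma-2-9B-GGUF",
    filename="MT2-MMMAGMU-gemma-2-9B.Q4_K_M.gguf",  # "fast, recommended"
)
print(path)
```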
dima806/sms-spam-detection-distilbert
dima806
"2024-10-19T11:17:36Z"
192
1
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "base_model:distilbert/distilbert-base-cased", "base_model:finetune:distilbert/distilbert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-11-16T08:21:04Z"
--- license: apache-2.0 metrics: - accuracy - f1 base_model: - distilbert/distilbert-base-cased --- See https://www.kaggle.com/code/dima806/sms-spam-detection-distilbert for more details.
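A minimal scoring sketch, assuming the checkpoint follows the standard transformers sequence-classification layout; the sample SMS is a placeholder:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "dima806/sms-spam-detection-distilbert"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Placeholder SMS; the id2label mapping is read from the checkpoint's config.
inputs = tokenizer("WINNER!! Claim your free prize now!", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```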
mradermacher/MN-RocinanteCelestar-12B-i1-GGUF
mradermacher
"2024-09-09T07:31:17Z"
203
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:rityak/MN-RocinanteCelestar-12B", "base_model:quantized:rityak/MN-RocinanteCelestar-12B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2024-09-09T05:38:43Z"
--- base_model: rityak/MN-RocinanteCelestar-12B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/rityak/MN-RocinanteCelestar-12B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low 
quality | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/MN-RocinanteCelestar-12B-i1-GGUF/resolve/main/MN-RocinanteCelestar-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
lesso11/d9dd468a-5f86-4915-a81e-f37cfe0d40e7
lesso11
"2025-02-27T05:55:04Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/CodeLlama-13b-hf", "base_model:adapter:NousResearch/CodeLlama-13b-hf", "region:us" ]
null
"2025-02-27T05:14:56Z"
--- library_name: peft base_model: NousResearch/CodeLlama-13b-hf tags: - axolotl - generated_from_trainer model-index: - name: d9dd468a-5f86-4915-a81e-f37cfe0d40e7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora auto_find_batch_size: true base_model: NousResearch/CodeLlama-13b-hf bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 24b1e8b4a4332f62_train_data.json ds_type: json format: custom path: /workspace/input_data/24b1e8b4a4332f62_train_data.json type: field_input: concepts field_instruction: topic field_output: markdown format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_max_new_tokens: 128 eval_steps: 50 evals_per_epoch: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: true hub_model_id: lesso11/d9dd468a-5f86-4915-a81e-f37cfe0d40e7 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000211 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/24b1e8b4a4332f62_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null seed: 110 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e7aa7832-7239-42e8-9b2f-093e2cf642a7 wandb_project: 11a wandb_run: your_name wandb_runid: e7aa7832-7239-42e8-9b2f-093e2cf642a7 warmup_steps: 50 weight_decay: 0.0 xformers_attention: null ``` </details><br> # d9dd468a-5f86-4915-a81e-f37cfe0d40e7 This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf](https://huggingface.co/NousResearch/CodeLlama-13b-hf) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4984 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000211 - train_batch_size: 4 - eval_batch_size: 4 - seed: 110 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0004 | 1 | 0.8650 | | 1.3012 | 0.0185 | 50 | 0.6067 | | 1.1154 | 0.0371 | 100 | 0.5670 | | 1.0874 | 0.0556 | 150 | 0.5440 | | 0.9397 | 0.0742 | 200 | 0.5306 | | 1.1374 | 0.0927 | 250 | 0.5191 | | 0.8862 | 0.1113 | 300 | 0.5130 | | 0.9946 | 0.1298 | 350 | 0.5064 | | 0.8426 | 0.1484 | 400 | 0.5010 | | 1.0521 | 0.1669 | 450 | 0.4988 | | 0.876 | 0.1855 | 500 | 0.4984 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
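A minimal inference sketch, assuming the adapter loads with peft's standard PeftModel API on top of its base model; memory options (quantization, device_map) are left out and may be needed for a 13B model:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the LoRA adapter trained above to its CodeLlama base model.
base = AutoModelForCausalLM.from_pretrained("NousResearch/CodeLlama-13b-hf")
model = PeftModel.from_pretrained(base, "lesso11/d9dd468a-5f86-4915-a81e-f37cfe0d40e7")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/CodeLlama-13b-hf")
```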
KelvinL01/CreatingLove
KelvinL01
"2025-03-07T17:57:25Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-03-07T17:57:24Z"
--- license: apache-2.0 ---
mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF
mradermacher
"2025-01-17T07:26:23Z"
198
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "trl", "dpo", "en", "base_model:AmberYifan/Mistral-7B-v0.1-sft-spin-1.6k", "base_model:quantized:AmberYifan/Mistral-7B-v0.1-sft-spin-1.6k", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-01-17T07:08:51Z"
--- base_model: AmberYifan/Mistral-7B-v0.1-sft-spin-1.6k language: - en library_name: transformers model_name: Mistral-7B-v0.1-sft-spin-1.6k quantized_by: mradermacher tags: - generated_from_trainer - trl - dpo --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AmberYifan/Mistral-7B-v0.1-sft-spin-1.6k <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-v0.1-sft-spin-1.6k-GGUF/resolve/main/Mistral-7B-v0.1-sft-spin-1.6k.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
lesso03/bedbd70d-25b5-47ca-8b1d-a265e178c834
lesso03
"2025-01-15T16:19:12Z"
7
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-7B", "base_model:adapter:unsloth/Qwen2-7B", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-15T16:10:15Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-7B tags: - axolotl - generated_from_trainer model-index: - name: bedbd70d-25b5-47ca-8b1d-a265e178c834 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Qwen2-7B bf16: true chat_template: llama3 datasets: - data_files: - 22a3375d601f5255_train_data.json ds_type: json format: custom path: /workspace/input_data/22a3375d601f5255_train_data.json type: field_instruction: question field_output: context format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null flash_attention: false fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: lesso03/bedbd70d-25b5-47ca-8b1d-a265e178c834 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 25 micro_batch_size: 2 mlflow_experiment_name: /tmp/22a3375d601f5255_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 10 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: bb5ffa8b-184f-4a9d-9846-adfdc06113a4 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: bb5ffa8b-184f-4a9d-9846-adfdc06113a4 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # bedbd70d-25b5-47ca-8b1d-a265e178c834 This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0005 | 1 | nan | | 0.0 | 0.0023 | 5 | nan | | 0.0 | 0.0046 | 10 | nan | | 0.0 | 0.0070 | 15 | nan | | 0.0 | 0.0093 | 20 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
darinchau/comp5421-project-clean-frost-31-comp5421-mel-spectrogram-fma_small-128x216-step-22528
darinchau
"2025-03-28T12:54:03Z"
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "region:us" ]
null
"2025-03-28T12:54:00Z"
null
MrRobotoAI/101-GGUF
MrRobotoAI
"2025-03-26T10:31:05Z"
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:MrRobotoAI/101", "base_model:quantized:MrRobotoAI/101", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-26T10:30:23Z"
--- base_model: MrRobotoAI/101 library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # MrRobotoAI/101-GGUF This model was converted to GGUF format from [`MrRobotoAI/101`](https://huggingface.co/MrRobotoAI/101) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/101) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo MrRobotoAI/101-GGUF --hf-file 101-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo MrRobotoAI/101-GGUF --hf-file 101-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo MrRobotoAI/101-GGUF --hf-file 101-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo MrRobotoAI/101-GGUF --hf-file 101-q4_k_m.gguf -c 2048 ```
lesso07/925c98cb-57dc-43c3-9ed8-03b295e777e5
lesso07
"2025-03-16T11:17:12Z"
10
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-3-8b", "base_model:adapter:unsloth/llama-3-8b", "license:llama3", "region:us" ]
null
"2025-03-13T05:27:17Z"
--- library_name: peft license: llama3 base_model: unsloth/llama-3-8b tags: - axolotl - generated_from_trainer model-index: - name: 925c98cb-57dc-43c3-9ed8-03b295e777e5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <br> # 925c98cb-57dc-43c3-9ed8-03b295e777e5 This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.4609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 4.2371 | 0.0000 | 1 | 3.0484 | | 2.4547 | 0.0001 | 3 | 3.0313 | | 1.9117 | 0.0002 | 6 | 2.8498 | | 1.7493 | 0.0003 | 9 | 2.4609 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF
mradermacher
"2024-12-23T20:23:48Z"
13
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "CultriX/MergeCeption-7B-v3", "CultriX/MonaTrix-v4", "en", "base_model:CultriX/NeuralCeptrix-7B-SLERPv3", "base_model:quantized:CultriX/NeuralCeptrix-7B-SLERPv3", "endpoints_compatible", "region:us" ]
null
"2024-12-23T12:26:30Z"
--- base_model: CultriX/NeuralCeptrix-7B-SLERPv3 language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - CultriX/MergeCeption-7B-v3 - CultriX/MonaTrix-v4 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/CultriX/NeuralCeptrix-7B-SLERPv3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF/resolve/main/NeuralCeptrix-7B-SLERPv3.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF/resolve/main/NeuralCeptrix-7B-SLERPv3.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF/resolve/main/NeuralCeptrix-7B-SLERPv3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF/resolve/main/NeuralCeptrix-7B-SLERPv3.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF/resolve/main/NeuralCeptrix-7B-SLERPv3.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF/resolve/main/NeuralCeptrix-7B-SLERPv3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF/resolve/main/NeuralCeptrix-7B-SLERPv3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF/resolve/main/NeuralCeptrix-7B-SLERPv3.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF/resolve/main/NeuralCeptrix-7B-SLERPv3.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF/resolve/main/NeuralCeptrix-7B-SLERPv3.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF/resolve/main/NeuralCeptrix-7B-SLERPv3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/NeuralCeptrix-7B-SLERPv3-GGUF/resolve/main/NeuralCeptrix-7B-SLERPv3.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
chukypedro/CapybaraHermes-2.5-Mistral-7B
chukypedro
"2024-05-19T19:15:44Z"
0
0
peft
[ "peft", "safetensors", "mistral", "arxiv:1910.09700", "base_model:argilla/CapybaraHermes-2.5-Mistral-7B", "base_model:adapter:argilla/CapybaraHermes-2.5-Mistral-7B", "region:us" ]
null
"2024-05-19T19:07:28Z"
--- library_name: peft base_model: argilla/CapybaraHermes-2.5-Mistral-7B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
cognitivecomputations/quiet_dolphin
cognitivecomputations
"2024-04-06T17:08:16Z"
2
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:cognitivecomputations/fc-dolphin-2.6-mistral-7b-dpo-laser", "base_model:merge:cognitivecomputations/fc-dolphin-2.6-mistral-7b-dpo-laser", "base_model:ezelikman/quietstar-8-ahead", "base_model:merge:ezelikman/quietstar-8-ahead", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-06T16:47:23Z"
--- base_model: - ezelikman/quietstar-8-ahead - cognitivecomputations/fc-dolphin-2.6-mistral-7b-dpo-laser library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [ezelikman/quietstar-8-ahead](https://huggingface.co/ezelikman/quietstar-8-ahead) * [cognitivecomputations/fc-dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/fc-dolphin-2.6-mistral-7b-dpo-laser) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: ezelikman/quietstar-8-ahead layer_range: [0, 32] - model: cognitivecomputations/fc-dolphin-2.6-mistral-7b-dpo-laser layer_range: [0, 32] merge_method: slerp base_model: ezelikman/quietstar-8-ahead parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
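A minimal sketch of the spherical linear interpolation (SLERP) behind this merge, under the simplifying assumption that each tensor pair is interpolated independently; mergekit's real implementation adds normalization and edge-case handling:

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float) -> torch.Tensor:
    """Interpolate two weight tensors along the great circle between them."""
    a_flat, b_flat = a.flatten(), b.flatten()
    cos_omega = torch.dot(a_flat, b_flat) / (a_flat.norm() * b_flat.norm())
    omega = torch.arccos(torch.clamp(cos_omega, -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < 1e-8:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```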
clip-Jannat-Toha/VIRAL.Jannat-Toha.Viral.Video.Full.Original.Video.Social.Media.X
clip-Jannat-Toha
"2025-02-21T23:25:47Z"
0
0
null
[ "region:us" ]
null
"2025-02-21T23:25:01Z"
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)](https://lekedvideo.xyz/watch/) [🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://lekedvideo.xyz/watch/) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://lekedvideo.xyz/watch/)
philip-hightech/e589c35f-355f-466b-9e07-c5d8250165a4
philip-hightech
"2025-01-27T07:48:47Z"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "custom_code", "base_model:NousResearch/Yarn-Solar-10b-64k", "base_model:adapter:NousResearch/Yarn-Solar-10b-64k", "license:apache-2.0", "region:us" ]
null
"2025-01-27T07:46:04Z"
--- library_name: peft license: apache-2.0 base_model: NousResearch/Yarn-Solar-10b-64k tags: - axolotl - generated_from_trainer model-index: - name: e589c35f-355f-466b-9e07-c5d8250165a4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Yarn-Solar-10b-64k bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 2d70a50ffe98c25d_train_data.json ds_type: json format: custom path: /workspace/input_data/2d70a50ffe98c25d_train_data.json type: field_instruction: prompt field_output: reasoning format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: philip-hightech/e589c35f-355f-466b-9e07-c5d8250165a4 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 5.0e-05 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/2d70a50ffe98c25d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: be8a31a4-7ae3-47f5-9703-6040de1476d7 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: be8a31a4-7ae3-47f5-9703-6040de1476d7 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # e589c35f-355f-466b-9e07-c5d8250165a4 This model is a fine-tuned version of [NousResearch/Yarn-Solar-10b-64k](https://huggingface.co/NousResearch/Yarn-Solar-10b-64k) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0010 | 1 | nan | | 0.0 | 0.0129 | 13 | nan | | 0.0 | 0.0259 | 26 | nan | | 0.0 | 0.0388 | 39 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
AventIQ-AI/bert-talentmatchai
AventIQ-AI
"2025-02-20T10:10:37Z"
0
1
null
[ "safetensors", "bert", "region:us" ]
null
"2025-02-20T09:35:21Z"
# Talent-Match-AI: Resume and Job Description Matching ## 📌 Overview This repository hosts the quantized version of the **BERT-base-uncased** model for **Resume and Job Description Matching**. The model is designed to determine whether a resume aligns well with a given job description. If they are a strong match, the model outputs "Good Fit" with a confidence score; otherwise, it categorizes them as "Potential Fit" or "Not a Good Fit." The model has been optimized for efficient deployment while maintaining reasonable accuracy, making it suitable for real-time applications. ## 🏰 Model Details - **Model Architecture:** BERT-base-uncased - **Task:** Resume and Job Description Matching - **Dataset:** `facehuggerapoorv/resume-jd-match` - **Quantization:** Float16 (FP16) for optimized inference - **Fine-tuning Framework:** Hugging Face Transformers ## 🚀 Usage ### Installation ```bash pip install transformers torch ``` ### Loading the Model ```python from transformers import BertTokenizer, BertForSequenceClassification import torch device = "cuda" if torch.cuda.is_available() else "cpu" model_name = "AventIQ-AI/bert-talentmatchai" model = BertForSequenceClassification.from_pretrained(model_name).to(device) tokenizer = BertTokenizer.from_pretrained(model_name) ``` ### Resume Matching Inference ```python import torch # Set device (use GPU if available) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) # Define label mapping label_mapping = {0: "Not a Good Fit", 1: "Potential Fit", 2: "Good Fit"} # Sample resume text for testing test_resume = ["I have worked in different industries and have a lot of experience. I am a hard worker and can learn anything."] # Tokenize test data test_tokens = tokenizer(test_resume, padding="max_length", truncation=True, return_tensors="pt").to(device) # Move input to same device as model # Make predictions with torch.no_grad(): # Disable gradient computation for inference output = model(**test_tokens) # Get predicted label predicted_label = output.logits.argmax(dim=1).item() # Print result print(f"Predicted Category: {predicted_label} ({label_mapping[predicted_label]})") ``` ## 📊 Quantized Model Evaluation Results ### 🔥 Evaluation Metrics 🔥 - ✅ **Accuracy:** 0.9224 - ✅ **Precision:** 0.9212 - ✅ **Recall:** 0.8450 - ✅ **F1-score:** 0.7718 ## ⚡ Quantization Details Post-training quantization was applied using PyTorch's built-in quantization framework. The model was quantized to Float16 (FP16) to reduce model size and improve inference efficiency while balancing accuracy. ## 💽 Repository Structure ``` . ├── model/ # Contains the quantized model files ├── tokenizer_config/ # Tokenizer configuration and vocabulary files ├── model.safetensors # Quantized model weights ├── README.md # Model documentation ``` ## ⚠️ Limitations - The model may struggle with resumes and job descriptions that use non-standard terminology. - Quantization may lead to slight degradation in accuracy compared to full-precision models. - Performance may vary across different industries and job levels. ## 🤝 Contributing Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
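A plausible reconstruction of the FP16 conversion described above, not the authors' exact script; the checkpoint path is a placeholder:

```python
from transformers import BertForSequenceClassification

# Cast the fine-tuned full-precision model to half precision and save it.
model = BertForSequenceClassification.from_pretrained("path/to/finetuned-checkpoint")
model = model.half()  # float32 -> float16
model.save_pretrained("./bert-talentmatchai-fp16", safe_serialization=True)
```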
Bunpot/lora_model
Bunpot
"2025-02-14T08:41:26Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-02-14T08:41:18Z"
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Bunpot - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
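A minimal reload sketch using Unsloth's documented loader; max_seq_length is an assumption, since the card does not state the training sequence length:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Bunpot/lora_model",
    max_seq_length=2048,  # assumption: not stated on the card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast generation mode
```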
Jennny/qwen-math-value-model-longer2
Jennny
"2025-04-03T12:33:46Z"
0
0
null
[ "safetensors", "qwen2", "custom_code", "region:us" ]
null
"2025-03-26T22:20:21Z"
null
FIVE-MGI/Single_Neuron_Identification
FIVE-MGI
"2024-06-13T18:18:09Z"
0
0
tf-keras
[ "tf-keras", "onnx", "en", "dataset:FIVE-MGI/SNIM20", "license:agpl-3.0", "region:us" ]
null
"2024-06-13T17:48:37Z"
---
license: agpl-3.0
language:
- en
datasets:
- FIVE-MGI/SNIM20
---

# Single Neuron (or Cell) Identification

This model performs image classification to identify rafts or regions that contain single neurons. It is trained on [FIVE-MGI/SNIM20](https://huggingface.co/datasets/FIVE-MGI/SNIM20).

## Research Paper

For more detailed information, please refer to our bioRxiv paper: [Classification of iPSC-Derived Cultures Using Convolutional Neural Networks to Identify Single Differentiated Neurons for Isolation or Measurement](https://www.biorxiv.org/content/10.1101/2023.12.24.573194)
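As a usage illustration (not part of the original card), the sketch below loads the classifier with `huggingface_hub.from_pretrained_keras` and runs it on a placeholder image. The input preprocessing is an assumption, since the card does not document it; real microscopy crops should be normalized the same way as the SNIM20 training data.

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Load the tf-keras classifier from the Hub
model = from_pretrained_keras("FIVE-MGI/Single_Neuron_Identification")

# Infer the expected image size from the model itself (assumes a 4D image input)
h, w, c = model.input_shape[1:]
image = np.random.rand(1, h, w, c).astype("float32")  # placeholder image batch

probs = model.predict(image)
print("Predicted class:", probs.argmax(axis=-1))
```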
kenchenxingyu/flan-large-single-label-stance-human4
kenchenxingyu
"2024-02-15T02:26:02Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-02-15T02:25:58Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TransferGraph/classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_emotion
TransferGraph
"2024-02-29T12:53:07Z"
4
0
peft
[ "peft", "safetensors", "parquet", "text-classification", "dataset:tweet_eval", "base_model:classla/bcms-bertic-parlasent-bcs-ter", "base_model:adapter:classla/bcms-bertic-parlasent-bcs-ter", "model-index", "region:us" ]
text-classification
"2024-02-29T12:53:05Z"
--- library_name: peft tags: - parquet - text-classification datasets: - tweet_eval metrics: - accuracy base_model: classla/bcms-bertic-parlasent-bcs-ter model-index: - name: classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_emotion results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: emotion split: validation args: emotion metrics: - type: accuracy value: 0.4946524064171123 name: accuracy --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_emotion This model is a fine-tuned version of [classla/bcms-bertic-parlasent-bcs-ter](https://huggingface.co/classla/bcms-bertic-parlasent-bcs-ter) on the tweet_eval dataset. It achieves the following results on the evaluation set: - accuracy: 0.4947 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | accuracy | train_loss | epoch | |:--------:|:----------:|:-----:| | 0.1818 | None | 0 | | 0.4679 | 1.2475 | 0 | | 0.4786 | 1.1874 | 1 | | 0.4920 | 1.1567 | 2 | | 0.4947 | 1.1286 | 3 | ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.0 - Datasets 2.16.1 - Tokenizers 0.15.2
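As a usage sketch (not included in the generated card), the adapter can be attached to its base model with PEFT. The `num_labels=4` setting reflects the four emotion classes in the tweet_eval `emotion` config and is an assumption about how the classification head was saved.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = "classla/bcms-bertic-parlasent-bcs-ter"
adapter = "TransferGraph/classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_emotion"

tokenizer = AutoTokenizer.from_pretrained(base)
# The base checkpoint has a different head size, so mismatched weights are reinitialized
model = AutoModelForSequenceClassification.from_pretrained(
    base, num_labels=4, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA weights
model.eval()

inputs = tokenizer("I can't believe how great this day turned out!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted emotion id:", logits.argmax(dim=-1).item())
```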
LHRuig/andrzejdu
LHRuig
"2025-02-04T05:46:28Z"
7
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-02-04T05:46:07Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: suit output: url: images/suit.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: andrzejdu --- # andrzejdu <Gallery /> ## Model description andrzejdu lora ## Trigger words You should use `andrzejdu` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/LHRuig/andrzejdu/tree/main) them in the Files & versions tab.
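A minimal diffusers sketch for these weights is shown below. The trigger word comes from the card; the prompt, dtype, and step count are illustrative assumptions.

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline and attach the LoRA
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("LHRuig/andrzejdu")
pipe.to("cuda")

# The trigger word "andrzejdu" must appear in the prompt
image = pipe("andrzejdu wearing a suit", num_inference_steps=28).images[0]
image.save("andrzejdu_suit.png")
```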
owaiskaifi/gen-qr-ai
owaiskaifi
"2023-06-22T14:56:32Z"
30
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "controlnet", "image-to-image", "en", "license:openrail++", "endpoints_compatible", "region:us" ]
image-to-image
"2023-06-22T05:02:44Z"
---
tags:
- stable-diffusion
- controlnet
- image-to-image
license: openrail++
language:
- en
library_name: diffusers
pipeline_tag: image-to-image
---

# QR Code Conditioned ControlNet Models for Stable Diffusion 1.5

![1](https://www.dropbox.com/s/fxyuqpot2z2ftty/5.png?raw=1)

## Model Description

This repo holds the safetensors & diffusers versions of the QR code conditioned ControlNet for Stable Diffusion v1.5. The Stable Diffusion 2.1 version is marginally more effective, as it was developed to address my specific needs. However, this 1.5 version model was also trained on the same dataset for those who are using the older version.

## How to use with Diffusers

```bash
pip -q install diffusers transformers accelerate torch xformers
```

```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, DDIMScheduler
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "DionTimmer/controlnet_qrcode-control_v1p_sd15", torch_dtype=torch.float16
)

pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    safety_checker=None,
    torch_dtype=torch.float16,
)

pipe.enable_xformers_memory_efficient_attention()
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

def resize_for_condition_image(input_image: Image, resolution: int):
    # Resize so the short side matches `resolution`, rounded to multiples of 64
    input_image = input_image.convert("RGB")
    W, H = input_image.size
    k = float(resolution) / min(H, W)
    H *= k
    W *= k
    H = int(round(H / 64.0)) * 64
    W = int(round(W / 64.0)) * 64
    img = input_image.resize((W, H), resample=Image.LANCZOS)
    return img

# Play with guidance_scale, controlnet_conditioning_scale and strength to make a valid QR code image

# QR code image
source_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/6064e095abd8d3692e3e2ed6/A_RqHaAM6YHBodPLwqtjn.png")
# Initial image, anything
init_image = load_image("https://s3.amazonaws.com/moonup/production/uploads/noauth/KfMBABpOwIuNolv1pe3qX.jpeg")

condition_image = resize_for_condition_image(source_image, 768)
init_image = resize_for_condition_image(init_image, 768)
generator = torch.manual_seed(123121231)

output = pipe(
    prompt="a billboard in NYC with a qrcode",
    negative_prompt="ugly, disfigured, low quality, blurry, nsfw",
    image=init_image,
    control_image=condition_image,
    width=768,
    height=768,
    guidance_scale=20,
    controlnet_conditioning_scale=1.5,
    generator=generator,
    strength=0.9,
    num_inference_steps=150,
)
image = output.images[0]
```

## Performance and Limitations

These models perform quite well in most cases, but please note that they are not 100% accurate. In some instances, the QR code shape might not come through as expected. You can increase the ControlNet weight to emphasize the QR code shape; however, be cautious, as this might negatively impact the style of your output. **To optimize for scanning, please generate your QR codes with correction mode 'H' (30%)** (a generation sketch follows below).

To balance style and shape, a gentle fine-tuning of the control weight might be required based on the individual input and the desired output, as well as the right prompt. Some prompts do not work until you increase the weight by a lot. The process of finding the right balance between these factors is part art and part science.

For the best results, it is recommended to generate your artwork at a resolution of 768. This allows for a higher level of detail in the final product, enhancing the quality and effectiveness of the QR code-based artwork.
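Because scannability depends on the 'H' error-correction level mentioned above, it helps to generate the condition image programmatically. The sketch below uses the third-party `qrcode` package; the encoded URL and sizing parameters are just examples.

```python
import qrcode

# Error-correction level H (~30% redundancy) keeps codes scannable
# even after heavy stylization by the ControlNet.
qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_H,
    box_size=16,
    border=4,
)
qr.add_data("https://example.com")  # example payload
qr.make(fit=True)

img = qr.make_image(fill_color="black", back_color="white")
img.save("qr_condition.png")  # use as the ControlNet condition image
```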
## Installation

The simplest way to use this is to place the .safetensors model and its .yaml config file in the folder where your other ControlNet models are installed, which varies per application. For usage in auto1111 they can be placed in the webui/models/ControlNet folder. They can be loaded with the ControlNet webui extension, which you can install through the extensions tab in the webui (https://github.com/Mikubill/sd-webui-controlnet). Make sure to enable your ControlNet unit and set your input image as the QR code. Set the model to either the SD2.1 or 1.5 version depending on your base Stable Diffusion model, or loading will fail. No pre-processor is needed, though you can use the invert pre-processor for a different variation of results. 768 is the preferred resolution for generation since it allows for more detail. If you get stuck, look up additional information on how to use ControlNet; once you have the webui up and running, installing the ControlNet extension is straightforward as well.
RomainDarous/large_directFourEpoch_additivePooling_noisedInit_stsModel
RomainDarous
"2025-03-22T23:33:21Z"
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:51741", "loss:CoSENTLoss", "de", "en", "es", "fr", "it", "nl", "pl", "pt", "ru", "zh", "dataset:PhilipMay/stsb_multi_mt", "arxiv:1908.10084", "base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2025-03-22T23:32:39Z"
--- language: - de - en - es - fr - it - nl - pl - pt - ru - zh tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:51741 - loss:CoSENTLoss base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2 widget: - source_sentence: Starsza para azjatycka pozuje z noworodkiem przy stole obiadowym. sentences: - Koszykarz ma zamiar zdobyć punkty dla swojej drużyny. - Grupa starszych osób pozuje wokół stołu w jadalni. - Możliwe, że układ słoneczny taki jak nasz może istnieć poza galaktyką. - source_sentence: Englisch arbeitet überall mit Menschen, die Dinge kaufen und verkaufen, und in der Gastfreundschaft und im Tourismusgeschäft. sentences: - Ich bin in Maharashtra (einschließlich Mumbai) und Andhra Pradesh herumgereist, und ich hatte kein Problem damit, nur mit Englisch auszukommen. - 'Ein griechischsprachiger Sklave (δούλος, doulos) würde seinen Herrn, glaube ich, κύριος nennen (translit: kurios; Herr, Herr, Herr, Herr; Vokativform: κύριε).' - Das Paar lag auf dem Bett. - source_sentence: Si vous vous comprenez et comprenez votre ennemi, vous aurez beaucoup plus de chances de gagner n'importe quelle bataille. sentences: - 'Outre les probabilités de gagner une bataille théorique, cette citation a une autre signification : l''importance de connaître/comprendre les autres.' - Une femme et un chien se promènent ensemble. - Un homme joue de la guitare. - source_sentence: Un homme joue de la harpe. sentences: - Une femme joue de la guitare. - une femme a un enfant. - Un groupe de personnes est debout et assis sur le sol la nuit. - source_sentence: Dois cães a lutar na neve. sentences: - Dois cães brincam na neve. - Pode sempre perguntar, então é a escolha do autor a aceitar ou não. - Um gato está a caminhar sobre chão de madeira dura. 
datasets: - PhilipMay/stsb_multi_mt pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine model-index: - name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts eval type: sts-eval metrics: - type: pearson_cosine value: 0.8423180648713237 name: Pearson Cosine - type: spearman_cosine value: 0.8595850000432059 name: Spearman Cosine - type: pearson_cosine value: 0.8420181975402647 name: Pearson Cosine - type: spearman_cosine value: 0.8630073561241816 name: Spearman Cosine - type: pearson_cosine value: 0.8405171361303234 name: Pearson Cosine - type: spearman_cosine value: 0.8594948677596693 name: Spearman Cosine - type: pearson_cosine value: 0.8375312155777364 name: Pearson Cosine - type: spearman_cosine value: 0.8583531749722014 name: Spearman Cosine - type: pearson_cosine value: 0.8397619344296936 name: Pearson Cosine - type: spearman_cosine value: 0.8592894281053397 name: Spearman Cosine - type: pearson_cosine value: 0.8302450119489335 name: Pearson Cosine - type: spearman_cosine value: 0.8477495437950113 name: Spearman Cosine - type: pearson_cosine value: 0.8403036335437926 name: Pearson Cosine - type: spearman_cosine value: 0.8618318944578455 name: Spearman Cosine - type: pearson_cosine value: 0.838706056263606 name: Pearson Cosine - type: spearman_cosine value: 0.8574971366611375 name: Spearman Cosine - type: pearson_cosine value: 0.8413052113094718 name: Pearson Cosine - type: spearman_cosine value: 0.8611085200053895 name: Spearman Cosine - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts test type: sts-test metrics: - type: pearson_cosine value: 0.7456938524838218 name: Pearson Cosine - type: spearman_cosine value: 0.7483592546028903 name: Spearman Cosine - type: pearson_cosine value: 0.7237526314017121 name: Pearson Cosine - type: spearman_cosine value: 0.7169355021670776 name: Spearman Cosine - type: pearson_cosine value: 0.7669235794906317 name: Pearson Cosine - type: spearman_cosine value: 0.7631313253470643 name: Spearman Cosine - type: pearson_cosine value: 0.8298244150963187 name: Pearson Cosine - type: spearman_cosine value: 0.8324038122126458 name: Spearman Cosine - type: pearson_cosine value: 0.7166564070706897 name: Pearson Cosine - type: spearman_cosine value: 0.7227801582959456 name: Spearman Cosine - type: pearson_cosine value: 0.7855295239932334 name: Pearson Cosine - type: spearman_cosine value: 0.7934626158625494 name: Spearman Cosine - type: pearson_cosine value: 0.8386050236111093 name: Pearson Cosine - type: spearman_cosine value: 0.8275901416546908 name: Spearman Cosine - type: pearson_cosine value: 0.779112011887379 name: Pearson Cosine - type: spearman_cosine value: 0.7729611139511264 name: Spearman Cosine - type: pearson_cosine value: 0.7878478906763803 name: Pearson Cosine - type: spearman_cosine value: 0.7846990470347196 name: Spearman Cosine - type: pearson_cosine value: 0.7882844791307567 name: Pearson Cosine - type: spearman_cosine value: 0.7878180406501333 name: Spearman Cosine --- # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on the 
[multi_stsb_de](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_es](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_fr](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_it](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_nl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_pl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_pt](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt), [multi_stsb_ru](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) and [multi_stsb_zh](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 84fccfe766bcfd679e39efefe4ebf45af190ad2d --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Datasets:** - [multi_stsb_de](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) - [multi_stsb_es](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) - [multi_stsb_fr](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) - [multi_stsb_it](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) - [multi_stsb_nl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) - [multi_stsb_pl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) - [multi_stsb_pt](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) - [multi_stsb_ru](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) - [multi_stsb_zh](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) - **Languages:** de, en, es, fr, it, nl, pl, pt, ru, zh <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): MultiHeadGeneralizedPooling( (P): ModuleList( (0-7): 8 x Linear(in_features=768, out_features=96, bias=True) ) (W1): ModuleList( (0-7): 8 x Linear(in_features=96, out_features=384, bias=True) ) (W2): ModuleList( (0-7): 8 x Linear(in_features=384, out_features=96, bias=True) ) ) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("RomainDarous/large_directFourEpoch_additivePooling_noisedInit_stsModel") # Run inference sentences = [ 'Dois cães a lutar na neve.', 'Dois cães brincam na neve.', 'Pode sempre perguntar, então é a escolha do autor a aceitar ou não.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Datasets: `sts-eval`, `sts-test`, `sts-test`, `sts-test`, `sts-test`, `sts-test`, `sts-test`, `sts-test`, `sts-test`, `sts-test` and `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | sts-eval | sts-test | |:--------------------|:-----------|:-----------| | pearson_cosine | 0.8423 | 0.7883 | | **spearman_cosine** | **0.8596** | **0.7878** | #### Semantic Similarity * Dataset: `sts-eval` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:----------| | pearson_cosine | 0.842 | | **spearman_cosine** | **0.863** | #### Semantic Similarity * Dataset: `sts-eval` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8405 | | **spearman_cosine** | **0.8595** | #### Semantic Similarity * Dataset: `sts-eval` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8375 | | **spearman_cosine** | **0.8584** | #### Semantic Similarity * Dataset: `sts-eval` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8398 | | **spearman_cosine** | **0.8593** | #### Semantic Similarity * Dataset: `sts-eval` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8302 | | **spearman_cosine** | **0.8477** | #### Semantic Similarity * Dataset: `sts-eval` * Evaluated with 
[<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8403 | | **spearman_cosine** | **0.8618** | #### Semantic Similarity * Dataset: `sts-eval` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8387 | | **spearman_cosine** | **0.8575** | #### Semantic Similarity * Dataset: `sts-eval` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8413 | | **spearman_cosine** | **0.8611** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Datasets <details><summary>multi_stsb_de</summary> #### multi_stsb_de * Dataset: [multi_stsb_de](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 5 tokens</li><li>mean: 11.58 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.53 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:---------------------------------------------------------------|:--------------------------------------------------------------------------|:--------------------------------| | <code>Ein Flugzeug hebt gerade ab.</code> | <code>Ein Flugzeug hebt gerade ab.</code> | <code>1.0</code> | | <code>Ein Mann spielt eine große Flöte.</code> | <code>Ein Mann spielt eine Flöte.</code> | <code>0.7599999904632568</code> | | <code>Ein Mann streicht geriebenen Käse auf eine Pizza.</code> | <code>Ein Mann streicht geriebenen Käse auf eine ungekochte Pizza.</code> | <code>0.7599999904632568</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> <details><summary>multi_stsb_es</summary> #### multi_stsb_es * Dataset: [multi_stsb_es](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * 
Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 7 tokens</li><li>mean: 12.21 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 12.07 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:----------------------------------------------------------------|:----------------------------------------------------------------------|:--------------------------------| | <code>Un avión está despegando.</code> | <code>Un avión está despegando.</code> | <code>1.0</code> | | <code>Un hombre está tocando una gran flauta.</code> | <code>Un hombre está tocando una flauta.</code> | <code>0.7599999904632568</code> | | <code>Un hombre está untando queso rallado en una pizza.</code> | <code>Un hombre está untando queso rallado en una pizza cruda.</code> | <code>0.7599999904632568</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> <details><summary>multi_stsb_fr</summary> #### multi_stsb_fr * Dataset: [multi_stsb_fr](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 12.6 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.49 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:---------------------------------------------------------------------|:--------------------------------| | <code>Un avion est en train de décoller.</code> | <code>Un avion est en train de décoller.</code> | <code>1.0</code> | | <code>Un homme joue d'une grande flûte.</code> | <code>Un homme joue de la flûte.</code> | <code>0.7599999904632568</code> | | <code>Un homme étale du fromage râpé sur une pizza.</code> | <code>Un homme étale du fromage râpé sur une pizza non cuite.</code> | <code>0.7599999904632568</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> <details><summary>multi_stsb_it</summary> #### multi_stsb_it * Dataset: [multi_stsb_it](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at 
[3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 7 tokens</li><li>mean: 12.77 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 12.69 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:--------------------------------| | <code>Un aereo sta decollando.</code> | <code>Un aereo sta decollando.</code> | <code>1.0</code> | | <code>Un uomo sta suonando un grande flauto.</code> | <code>Un uomo sta suonando un flauto.</code> | <code>0.7599999904632568</code> | | <code>Un uomo sta spalmando del formaggio a pezzetti su una pizza.</code> | <code>Un uomo sta spalmando del formaggio a pezzetti su una pizza non cotta.</code> | <code>0.7599999904632568</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> <details><summary>multi_stsb_nl</summary> #### multi_stsb_nl * Dataset: [multi_stsb_nl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 11.67 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.55 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:--------------------------------------------------------|:--------------------------------------------------------------------|:--------------------------------| | <code>Er gaat een vliegtuig opstijgen.</code> | <code>Er gaat een vliegtuig opstijgen.</code> | <code>1.0</code> | | <code>Een man speelt een grote fluit.</code> | <code>Een man speelt fluit.</code> | <code>0.7599999904632568</code> | | <code>Een man smeert geraspte kaas op een pizza.</code> | <code>Een man strooit geraspte kaas op een ongekookte pizza.</code> | <code>0.7599999904632568</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" 
} ``` </details> <details><summary>multi_stsb_pl</summary> #### multi_stsb_pl * Dataset: [multi_stsb_pl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 5 tokens</li><li>mean: 12.2 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.11 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------------|:------------------------------------------------------------------------|:--------------------------------| | <code>Samolot wystartował.</code> | <code>Samolot wystartował.</code> | <code>1.0</code> | | <code>Człowiek gra na dużym flecie.</code> | <code>Człowiek gra na flecie.</code> | <code>0.7599999904632568</code> | | <code>Mężczyzna rozsiewa na pizzy rozdrobniony ser.</code> | <code>Mężczyzna rozsiewa rozdrobniony ser na niegotowanej pizzy.</code> | <code>0.7599999904632568</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> <details><summary>multi_stsb_pt</summary> #### multi_stsb_pt * Dataset: [multi_stsb_pt](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 7 tokens</li><li>mean: 12.33 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 12.29 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------| | <code>Um avião está a descolar.</code> | <code>Um avião aéreo está a descolar.</code> | <code>1.0</code> | | <code>Um homem está a tocar uma grande flauta.</code> | <code>Um homem está a tocar uma flauta.</code> | <code>0.7599999904632568</code> | | <code>Um homem está a espalhar queijo desfiado numa pizza.</code> | <code>Um homem está a espalhar queijo desfiado sobre uma pizza não cozida.</code> | <code>0.7599999904632568</code> | * Loss: 
[<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> <details><summary>multi_stsb_ru</summary> #### multi_stsb_ru * Dataset: [multi_stsb_ru](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 5 tokens</li><li>mean: 11.19 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.17 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:------------------------------------------------|:---------------------------------------------------------------------|:--------------------------------| | <code>Самолет взлетает.</code> | <code>Взлетает самолет.</code> | <code>1.0</code> | | <code>Человек играет на большой флейте.</code> | <code>Человек играет на флейте.</code> | <code>0.7599999904632568</code> | | <code>Мужчина разбрасывает сыр на пиццу.</code> | <code>Мужчина разбрасывает измельченный сыр на вареную пиццу.</code> | <code>0.7599999904632568</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> <details><summary>multi_stsb_zh</summary> #### multi_stsb_zh * Dataset: [multi_stsb_zh](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 5,749 training samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 10.7 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 10.79 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:------------------------------|:----------------------------------|:--------------------------------| | <code>一架飞机正在起飞。</code> | <code>一架飞机正在起飞。</code> | <code>1.0</code> | | <code>一个男人正在吹一支大笛子。</code> | <code>一个人在吹笛子。</code> | <code>0.7599999904632568</code> | | <code>一名男子正在比萨饼上涂抹奶酪丝。</code> | <code>一名男子正在将奶酪丝涂抹在未熟的披萨上。</code> | <code>0.7599999904632568</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: 
```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> ### Evaluation Datasets <details><summary>multi_stsb_de</summary> #### multi_stsb_de * Dataset: [multi_stsb_de](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 5 tokens</li><li>mean: 18.25 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 18.25 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-------------------------------------------------------------|:-----------------------------------------------------------|:-------------------------------| | <code>Ein Mann mit einem Schutzhelm tanzt.</code> | <code>Ein Mann mit einem Schutzhelm tanzt.</code> | <code>1.0</code> | | <code>Ein kleines Kind reitet auf einem Pferd.</code> | <code>Ein Kind reitet auf einem Pferd.</code> | <code>0.949999988079071</code> | | <code>Ein Mann verfüttert eine Maus an eine Schlange.</code> | <code>Der Mann füttert die Schlange mit einer Maus.</code> | <code>1.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> <details><summary>multi_stsb_es</summary> #### multi_stsb_es * Dataset: [multi_stsb_es](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 7 tokens</li><li>mean: 17.98 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.86 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:----------------------------------------------------------------------|:---------------------------------------------------------------------|:-------------------------------| | <code>Un hombre con un casco está bailando.</code> | <code>Un hombre con un casco está bailando.</code> | <code>1.0</code> | | <code>Un niño pequeño está montando a caballo.</code> | <code>Un niño está montando a caballo.</code> | <code>0.949999988079071</code> | | <code>Un hombre está alimentando a una serpiente con un ratón.</code> | <code>El hombre está alimentando a la serpiente con un 
ratón.</code> | <code>1.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> <details><summary>multi_stsb_fr</summary> #### multi_stsb_fr * Dataset: [multi_stsb_fr](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 19.7 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 19.65 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-------------------------------------------------------------------------|:----------------------------------------------------------------------------|:-------------------------------| | <code>Un homme avec un casque de sécurité est en train de danser.</code> | <code>Un homme portant un casque de sécurité est en train de danser.</code> | <code>1.0</code> | | <code>Un jeune enfant monte à cheval.</code> | <code>Un enfant monte à cheval.</code> | <code>0.949999988079071</code> | | <code>Un homme donne une souris à un serpent.</code> | <code>L'homme donne une souris au serpent.</code> | <code>1.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> <details><summary>multi_stsb_it</summary> #### multi_stsb_it * Dataset: [multi_stsb_it](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 18.42 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.43 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:------------------------------------------------------------------|:---------------------------------------------------------------|:-------------------------------| | <code>Un uomo con l'elmetto sta ballando.</code> | <code>Un uomo che indossa un elmetto sta ballando.</code> | <code>1.0</code> | | <code>Un bambino piccolo sta cavalcando un cavallo.</code> | <code>Un bambino sta 
cavalcando un cavallo.</code> | <code>0.949999988079071</code> | | <code>Un uomo sta dando da mangiare un topo a un serpente.</code> | <code>L'uomo sta dando da mangiare un topo al serpente.</code> | <code>1.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> <details><summary>multi_stsb_nl</summary> #### multi_stsb_nl * Dataset: [multi_stsb_nl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 5 tokens</li><li>mean: 17.88 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 17.71 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:-----------------------------------------------------|:-----------------------------------------------------|:-------------------------------| | <code>Een man met een helm is aan het dansen.</code> | <code>Een man met een helm is aan het dansen.</code> | <code>1.0</code> | | <code>Een jong kind rijdt op een paard.</code> | <code>Een kind rijdt op een paard.</code> | <code>0.949999988079071</code> | | <code>Een man voedt een muis aan een slang.</code> | <code>De man voert een muis aan de slang.</code> | <code>1.0</code> | * Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_cos_sim" } ``` </details> <details><summary>multi_stsb_pl</summary> #### multi_stsb_pl * Dataset: [multi_stsb_pl](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c) * Size: 1,500 evaluation samples * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | sentence1 | sentence2 | score | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 6 tokens</li><li>mean: 18.54 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 18.43 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> | * Samples: | sentence1 | sentence2 | score | |:---------------------------------------------------|:---------------------------------------------------|:-------------------------------| | <code>Tańczy mężczyzna w twardym kapeluszu.</code> | <code>Tańczy mężczyzna w twardym kapeluszu.</code> | <code>1.0</code> | | <code>Małe 
dziecko jedzie na koniu.</code> | <code>Dziecko jedzie na koniu.</code> | <code>0.949999988079071</code> |
  | <code>Człowiek karmi węża myszką.</code> | <code>Ten człowiek karmi węża myszką.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
</details>
<details><summary>multi_stsb_pt</summary>

#### multi_stsb_pt

* Dataset: [multi_stsb_pt](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence1 | sentence2 | score |
  |:--------|:----------|:----------|:------|
  | type    | string | string | float |
  | details | <ul><li>min: 7 tokens</li><li>mean: 18.22 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 18.11 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence1 | sentence2 | score |
  |:----------|:----------|:------|
  | <code>Um homem de chapéu duro está a dançar.</code> | <code>Um homem com um capacete está a dançar.</code> | <code>1.0</code> |
  | <code>Uma criança pequena está a montar a cavalo.</code> | <code>Uma criança está a montar a cavalo.</code> | <code>0.949999988079071</code> |
  | <code>Um homem está a alimentar um rato a uma cobra.</code> | <code>O homem está a alimentar a cobra com um rato.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
</details>
<details><summary>multi_stsb_ru</summary>

#### multi_stsb_ru

* Dataset: [multi_stsb_ru](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence1 | sentence2 | score |
  |:--------|:----------|:----------|:------|
  | type    | string | string | float |
  | details | <ul><li>min: 6 tokens</li><li>mean: 17.92 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 17.75 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence1 | sentence2 | score |
  |:----------|:----------|:------|
  | <code>Человек в твердой шляпе танцует.</code> | <code>Мужчина в твердой шляпе танцует.</code> | <code>1.0</code> |
  | <code>Маленький ребенок едет верхом на лошади.</code> | <code>Ребенок едет на лошади.</code> | <code>0.949999988079071</code> |
  | <code>Мужчина кормит мышь змее.</code> | <code>Человек кормит змею мышью.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
</details>
<details><summary>multi_stsb_zh</summary>

#### multi_stsb_zh

* Dataset: [multi_stsb_zh](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt) at [3acaa3d](https://huggingface.co/datasets/PhilipMay/stsb_multi_mt/tree/3acaa3dd8c91649e0b8e627ffad891f059e47c8c)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence1 | sentence2 | score |
  |:--------|:----------|:----------|:------|
  | type    | string | string | float |
  | details | <ul><li>min: 6 tokens</li><li>mean: 15.37 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.24 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.42</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence1 | sentence2 | score |
  |:----------|:----------|:------|
  | <code>一个戴着硬帽子的人在跳舞。</code> | <code>一个戴着硬帽的人在跳舞。</code> | <code>1.0</code> |
  | <code>一个小孩子在骑马。</code> | <code>一个孩子在骑马。</code> | <code>0.949999988079071</code> |
  | <code>一个人正在用老鼠喂蛇。</code> | <code>那人正在给蛇喂老鼠。</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
</details>

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch | Step  | Training Loss | multi stsb de loss | multi stsb es loss | multi stsb fr loss | multi stsb it loss | multi stsb nl loss | multi stsb pl loss | multi stsb pt loss | multi stsb ru loss | multi stsb zh loss | sts-eval_spearman_cosine | sts-test_spearman_cosine |
|:-----:|:-----:|:-------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------------:|:------------------------:|
| 4.0   | 12960 | 3.7859        | 6.5030             | 6.5739             | 6.7230             | 6.8049             | 6.6585             | 6.8389             | 6.6333             | 6.7102             | 6.3148             | 0.8611                   | -                        |
| -1    | -1    | -             | -                  | -                  | -                  | -                  | -                  | -                  | -                  | -                  | -                  | -                        | 0.7878                   |

### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.1.2+cu121
- Accelerate: 1.3.0
- Datasets: 2.16.1
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
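For reference, a minimal usage sketch for a model trained with this setup. The checkpoint path below is a placeholder, since this card does not state the final repository id; `encode` and `similarity` are standard Sentence Transformers (>= 3.0) API:

```python
from sentence_transformers import SentenceTransformer

# Placeholder path: substitute the actual Hub repository id or local
# directory where this fine-tuned multilingual STS model was saved.
model = SentenceTransformer("path/to/this-multilingual-stsb-model")

# Sentence pairs from two of the training languages (Portuguese and Russian);
# a multilingual STS model should place equivalent sentences close together.
sentences = [
    "Uma criança está a montar a cavalo.",
    "Ребенок едет на лошади.",
]

embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, embedding_dim)

# Pairwise cosine similarities; off-diagonal entries near 1.0 indicate
# near-equivalent meaning, matching the CoSENT training objective.
similarities = model.similarity(embeddings, embeddings)
print(similarities)
```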
Nitral-AI/L3.1_NN-8B-test2
Nitral-AI
"2025-03-31T21:18:03Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:Nitral-AI/8B-lora2-test", "base_model:merge:Nitral-AI/8B-lora2-test", "base_model:Nitral-AI/L3.1_NN-8B-test1", "base_model:merge:Nitral-AI/L3.1_NN-8B-test1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-31T21:14:17Z"
---
base_model:
- Nitral-AI/L3.1_NN-8B-test1
- Nitral-AI/8B-lora2-test
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.

### Models Merged

The following models were included in the merge:
* [Nitral-AI/L3.1_NN-8B-test1](https://huggingface.co/Nitral-AI/L3.1_NN-8B-test1) + [Nitral-AI/8B-lora2-test](https://huggingface.co/Nitral-AI/8B-lora2-test)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: Nitral-AI/L3.1_NN-8B-test1+Nitral-AI/8B-lora2-test
    layer_range: [0, 32]
  - model: Nitral-AI/L3.1_NN-8B-test1+Nitral-AI/8B-lora2-test
    layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/L3.1_NN-8B-test1+Nitral-AI/8B-lora2-test
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.420
dtype: bfloat16
```
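Mergekit performs the interpolation internally; the following is only a simplified sketch of what a SLERP merge does per weight tensor, not mergekit's actual code (the real implementation also handles the per-layer `t` schedules above, LoRA application, and tokenizer alignment):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors,
    treated as vectors in a high-dimensional space."""
    v0f, v1f = v0.flatten().float(), v1.flatten().float()
    # Angle between the two weight vectors.
    cos_omega = torch.dot(v0f, v1f) / (v0f.norm() * v1f.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0, 1.0))
    # Fall back to plain linear interpolation when nearly colinear,
    # where the sin(omega) denominator would be numerically unstable.
    if omega.abs() < 1e-4:
        return ((1.0 - t) * v0.float() + t * v1.float()).to(v0.dtype)
    sin_omega = torch.sin(omega)
    scale0 = torch.sin((1.0 - t) * omega) / sin_omega
    scale1 = torch.sin(t * omega) / sin_omega
    return (scale0 * v0.float() + scale1 * v1.float()).to(v0.dtype)

# Toy usage: midpoint interpolation of two random weight matrices.
merged = slerp(0.5, torch.randn(64, 64), torch.randn(64, 64))
```

Unlike a plain weighted average, SLERP preserves the norm-geometry of the interpolation path, and the `t` schedules in the config vary the interpolation factor with layer depth, separately for attention and MLP weights.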
pczarnik/herbert-base-ner
pczarnik
"2025-01-28T09:50:52Z"
905
4
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "bert", "token-classification", "pl", "dataset:wikiann", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-05-27T11:35:50Z"
---
license: cc-by-4.0
datasets:
- wikiann
language:
- pl
pipeline_tag: token-classification
widget:
- text: "Nazywam się Grzegorz Brzęszczyszczykiewicz, pochodzę z Chrząszczyżewoszczyc, pracuję w Łękołodzkim Urzędzie Powiatowym"
- text: "Jestem Krzysiek i pracuję w Ministerstwie Sportu"
- text: "Na imię jej Wiktoria, pracuje w Krakowie na AGH"
model-index:
- name: herbert-base-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: wikiann
      type: wikiann
      config: pl
      split: test
      args: pl
    metrics:
    - name: Precision
      type: precision
      value: 0.8857142857142857
    - name: Recall
      type: recall
      value: 0.9070532179048386
    - name: F1
      type: f1
      value: 0.896256755412619
    - name: Accuracy
      type: accuracy
      value: 0.9581463871961428
---

# herbert-base-ner

## Model description

**herbert-base-ner** is a fine-tuned HerBERT model that can be used for **Named Entity Recognition**. It has been trained to recognize three types of entities: person (PER), location (LOC), and organization (ORG).

Specifically, this model is an [*allegro/herbert-base-cased*](https://huggingface.co/allegro/herbert-base-cased) model that was fine-tuned on the Polish subset of the *wikiann* dataset.

### How to use

You can use this model with the Transformers *pipeline* for NER.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned model and its tokenizer from the Hub.
model_checkpoint = "pczarnik/herbert-base-ner"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForTokenClassification.from_pretrained(model_checkpoint)

# Build a token-classification pipeline and run it on an example sentence.
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "Nazywam się Grzegorz Brzęszczyszczykiewicz, pochodzę "\
          "z Chrząszczyżewoszczyc, pracuję w Łękołodzkim Urzędzie Powiatowym"

ner_results = nlp(example)
print(ner_results)
```
```python
[{'entity': 'B-PER', 'score': 0.99451494, 'index': 4, 'word': 'Grzegorz</w>', 'start': 12, 'end': 20},
 {'entity': 'I-PER', 'score': 0.99758506, 'index': 5, 'word': 'B', 'start': 21, 'end': 22},
 {'entity': 'I-PER', 'score': 0.99749386, 'index': 6, 'word': 'rzę', 'start': 22, 'end': 25},
 {'entity': 'I-PER', 'score': 0.9973041, 'index': 7, 'word': 'szczy', 'start': 25, 'end': 30},
 {'entity': 'I-PER', 'score': 0.99682057, 'index': 8, 'word': 'szczy', 'start': 30, 'end': 35},
 {'entity': 'I-PER', 'score': 0.9964832, 'index': 9, 'word': 'kiewicz</w>', 'start': 35, 'end': 42},
 {'entity': 'B-LOC', 'score': 0.99427444, 'index': 14, 'word': 'Chrzą', 'start': 55, 'end': 60},
 {'entity': 'I-LOC', 'score': 0.99143463, 'index': 15, 'word': 'szczy', 'start': 60, 'end': 65},
 {'entity': 'I-LOC', 'score': 0.9922201, 'index': 16, 'word': 'że', 'start': 65, 'end': 67},
 {'entity': 'I-LOC', 'score': 0.9918464, 'index': 17, 'word': 'wo', 'start': 67, 'end': 69},
 {'entity': 'I-LOC', 'score': 0.9900766, 'index': 18, 'word': 'szczy', 'start': 69, 'end': 74},
 {'entity': 'I-LOC', 'score': 0.98823845, 'index': 19, 'word': 'c</w>', 'start': 74, 'end': 75},
 {'entity': 'B-ORG', 'score': 0.6808262, 'index': 23, 'word': 'Łę', 'start': 87, 'end': 89},
 {'entity': 'I-ORG', 'score': 0.7763973, 'index': 24, 'word': 'ko', 'start': 89, 'end': 91},
 {'entity': 'I-ORG', 'score': 0.77731717, 'index': 25, 'word': 'ło', 'start': 91, 'end': 93},
 {'entity': 'I-ORG', 'score': 0.9108255, 'index': 26, 'word': 'dzkim</w>', 'start': 93, 'end': 98},
 {'entity': 'I-ORG', 'score': 0.98050755, 'index': 27, 'word': 'Urzędzie</w>', 'start': 99, 'end': 107},
 {'entity': 'I-ORG', 'score': 0.9789752, 'index': 28, 'word': 'Powiatowym</w>', 'start': 108, 'end': 118}]
```

### BibTeX entry and citation info

```
@inproceedings{mroczkowski-etal-2021-herbert,
    title = "{H}er{BERT}: Efficiently Pretrained Transformer-based Language Model for {P}olish",
    author = "Mroczkowski, Robert and
      Rybak, Piotr and
      Wr{\'o}blewska, Alina and
      Gawlik, Ireneusz",
    booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
    month = apr,
    year = "2021",
    address = "Kiyv, Ukraine",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.bsnlp-1.1",
    pages = "1--10",
}
```
```
@inproceedings{pan-etal-2017-cross,
    title = "Cross-lingual Name Tagging and Linking for 282 Languages",
    author = "Pan, Xiaoman and
      Zhang, Boliang and
      May, Jonathan and
      Nothman, Joel and
      Knight, Kevin and
      Ji, Heng",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P17-1178",
    doi = "10.18653/v1/P17-1178",
    pages = "1946--1958",
}
```
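The raw pipeline output above reports one prediction per subword token. If you want whole entity spans instead, the Transformers pipeline can group consecutive subwords for you; a short sketch using the standard `aggregation_strategy` argument (a generic Transformers feature, not specific to this model):

```python
from transformers import pipeline

# "simple" merges consecutive B-/I- subword predictions into whole entities,
# averaging their scores.
nlp = pipeline(
    "ner",
    model="pczarnik/herbert-base-ner",
    aggregation_strategy="simple",
)

example = "Jestem Krzysiek i pracuję w Ministerstwie Sportu"
for entity in nlp(example):
    # Aggregated results expose 'entity_group' instead of per-token 'entity'.
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```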