| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-25 06:27:54 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (495 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-25 06:24:22 |
| card | string (length) | 11 | 1.01M |
fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-83930416
fine-tuned
2024-05-28T18:53:40Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-83930416", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T18:53:06Z
---
license: apache-2.0
datasets:
- fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-83930416
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---

This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5). The intended use case was left unspecified.

## How to Use

This model can be integrated into your NLP pipeline for embedding-based tasks such as semantic search, clustering, and sentence similarity. Here's a simple example to get you started:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
    'fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-83930416',
    trust_remote_code=True
)
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-89953157
fine-tuned
2024-05-28T18:52:41Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-89953157", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T18:52:06Z
---
license: apache-2.0
datasets:
- fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-89953157
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---

This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5). The intended use case was left unspecified.

## How to Use

This model can be integrated into your NLP pipeline for embedding-based tasks such as semantic search, clustering, and sentence similarity. Here's a simple example to get you started:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
    'fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-89953157',
    trust_remote_code=True
)
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
ninaa510/distilbert-finetuned-medical-diagnosis
ninaa510
2024-05-28T18:52:11Z
69
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "medical", "en", "dataset:ninaa510/diagnosis-text", "base_model:distilbert/distilbert-base-cased", "base_model:finetune:distilbert/distilbert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T17:04:36Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
- medical
base_model: distilbert/distilbert-base-cased
model-index:
- name: distilbert-finetuned-medical-diagnosis
  results:
  - task:
      type: text-classification
      name: Text classification
    dataset:
      type: ninaa510/diagnosis-text
      name: Symptoms and diseases for classification
      split: test
    metrics:
    - type: accuracy
      value: 58.68
      name: Accuracy
datasets:
- ninaa510/diagnosis-text
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
widget:
- text: "I have had a persistent cough for the last three days. The cough sometimes includes blood. I am also suffering from fatigue and a loss of appetite."
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-finetuned-medical-diagnosis

This model is a fine-tuned version of [distilbert/distilbert-base-cased](https://huggingface.co/distilbert/distilbert-base-cased) on the [ninaa510/diagnosis-text](https://huggingface.co/datasets/ninaa510/diagnosis-text) dataset. It achieves an accuracy of 58.68% on the dataset's test split.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': 1.0, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1663, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.41.0
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
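The card ships no inference snippet. A minimal hedged sketch using the stock `transformers` pipeline API (the TensorFlow backend is inferred from the repo's `tf` tag; the example text reuses the card's widget input):

```python
from transformers import pipeline

# The repo only lists TF weights, so force the TensorFlow backend.
classifier = pipeline(
    "text-classification",
    model="ninaa510/distilbert-finetuned-medical-diagnosis",
    framework="tf",
)

text = (
    "I have had a persistent cough for the last three days. "
    "The cough sometimes includes blood."
)
print(classifier(text))  # e.g. [{'label': ..., 'score': ...}]
```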
odicem/tinyllama-cleantech-v1
odicem
2024-05-28T18:50:08Z
136
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T18:47:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mjun/tinyvit-musinsa-fashion-classification
mjun
2024-05-28T18:47:32Z
51
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T17:35:22Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mjun/tinyvit-musinsa-fashion-classification
  results:
  - task:
      name: Image Classification
      type: image-classification
    metric:
      name: Accuracy
      type: accuracy
      value: 0.7588454376163873
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mjun/tinyvit-musinsa-fashion-classification

This model is a fine-tuned version of [timm/tiny_vit_5m_224.dist_in22k_ft_in1k](https://huggingface.co/timm/tiny_vit_5m_224.dist_in22k_ft_in1k) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5282
- Accuracy: 0.7588

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 269 | 0.7828 | 0.6390 |
| 0.881 | 2.0 | 538 | 0.7235 | 0.6702 |
| 0.881 | 3.0 | 807 | 0.5733 | 0.7353 |
| 0.5813 | 4.0 | 1076 | 0.5362 | 0.7519 |
| 0.5813 | 5.0 | 1345 | 0.5282 | 0.7588 |

### Framework versions

- Transformers 4.8.1
- Pytorch 1.13.1+cu117
- Datasets 2.7.1
- Tokenizers 0.10.3
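No usage example is provided either. A hedged image-classification sketch, assuming the checkpoint loads with the standard `transformers` pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="mjun/tinyvit-musinsa-fashion-classification",
)
# "outfit.jpg" is a placeholder for any local image path or URL.
print(classifier("outfit.jpg"))  # top predicted fashion categories with scores
```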
BehradG/vit-base-patch16-224-in21k-finetuned-lora-food101
BehradG
2024-05-28T18:47:23Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T18:04:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ds28/llama2-causal
ds28
2024-05-28T18:47:20Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T14:40:19Z
--- license: apache-2.0 ---
Kovalev/m2m_100_kazparc
Kovalev
2024-05-28T18:40:35Z
161
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-28T18:37:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
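The card above is an unfilled template. Given the repo's `m2m_100` architecture tag and the KazParC-style name, a hedged translation sketch may help; the Kazakh-to-English direction and the tokenizer class are assumptions, not confirmed by the card:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("Kovalev/m2m_100_kazparc")
tokenizer = M2M100Tokenizer.from_pretrained("Kovalev/m2m_100_kazparc")

# Kazakh -> English is an assumption based on the "kazparc" name.
tokenizer.src_lang = "kk"
encoded = tokenizer("Сәлем, қалайсыз?", return_tensors="pt")
generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```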
Prahas10/shingles
Prahas10
2024-05-28T18:39:24Z
6
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-384", "base_model:finetune:google/vit-base-patch16-384", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-24T06:41:50Z
---
license: apache-2.0
base_model: google/vit-base-patch16-384
tags:
- generated_from_keras_callback
model-index:
- name: Prahas10/shingles
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Prahas10/shingles

This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0993
- Validation Loss: 0.6967
- Train Accuracy: 0.8166
- Epoch: 29

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 4e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 4e-05, 'decay_steps': 127899.75, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 10370.25, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.0001}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 5.2368 | 5.2154 | 0.0047 | 0 |
| 5.1655 | 5.1337 | 0.0113 | 1 |
| 5.0415 | 4.9860 | 0.0278 | 2 |
| 4.8179 | 4.7812 | 0.0781 | 3 |
| 4.4541 | 4.4703 | 0.1844 | 4 |
| 3.9330 | 4.0779 | 0.2841 | 5 |
| 3.3155 | 3.6691 | 0.3650 | 6 |
| 2.6546 | 3.3371 | 0.4313 | 7 |
| 2.0435 | 3.0037 | 0.4727 | 8 |
| 1.5258 | 2.7059 | 0.5193 | 9 |
| 1.1079 | 2.4174 | 0.5588 | 10 |
| 0.7989 | 2.3590 | 0.5532 | 11 |
| 0.5857 | 1.9721 | 0.6298 | 12 |
| 0.4337 | 1.7442 | 0.6896 | 13 |
| 0.3352 | 1.7334 | 0.6580 | 14 |
| 0.2641 | 1.6197 | 0.6670 | 15 |
| 0.2042 | 1.7021 | 0.6289 | 16 |
| 0.1642 | 1.3843 | 0.7070 | 17 |
| 0.1500 | 1.4422 | 0.6787 | 18 |
| 0.1251 | 1.2797 | 0.7098 | 19 |
| 0.1093 | 0.9233 | 0.8020 | 20 |
| 0.1215 | 0.9209 | 0.7977 | 21 |
| 0.1007 | 0.9143 | 0.7803 | 22 |
| 0.0811 | 0.7952 | 0.8090 | 23 |
| 0.0953 | 0.7678 | 0.8260 | 24 |
| 0.1033 | 0.8928 | 0.7705 | 25 |
| 0.0636 | 0.3480 | 0.9271 | 26 |
| 0.0880 | 0.5916 | 0.8669 | 27 |
| 0.0861 | 0.8892 | 0.7789 | 28 |
| 0.0993 | 0.6967 | 0.8166 | 29 |

### Framework versions

- Transformers 4.41.0
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
NikolayKozloff/Alpha-Ophiuchi-mini-128k-v0.1-Q5_0-GGUF
NikolayKozloff
2024-05-28T18:35:42Z
6
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "dataset:NobodyExistsOnTheInternet/ToxicQAFinal", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-28T18:35:33Z
---
license: mit
tags:
- llama-cpp
- gguf-my-repo
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
---

# NikolayKozloff/Alpha-Ophiuchi-mini-128k-v0.1-Q5_0-GGUF

This model was converted to GGUF format from [`fearlessdots/Alpha-Ophiuchi-mini-128k-v0.1`](https://huggingface.co/fearlessdots/Alpha-Ophiuchi-mini-128k-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/fearlessdots/Alpha-Ophiuchi-mini-128k-v0.1) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo NikolayKozloff/Alpha-Ophiuchi-mini-128k-v0.1-Q5_0-GGUF --model alpha-ophiuchi-mini-128k-v0.1-q5_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo NikolayKozloff/Alpha-Ophiuchi-mini-128k-v0.1-Q5_0-GGUF --model alpha-ophiuchi-mini-128k-v0.1-q5_0.gguf -c 2048
```

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo:

```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m alpha-ophiuchi-mini-128k-v0.1-q5_0.gguf -n 128
```
NikolayKozloff/Alpha-Ophiuchi-mini-128k-v0.1-Q4_0-GGUF
NikolayKozloff
2024-05-28T18:34:54Z
6
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "dataset:NobodyExistsOnTheInternet/ToxicQAFinal", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-28T18:34:46Z
---
license: mit
tags:
- llama-cpp
- gguf-my-repo
datasets:
- NobodyExistsOnTheInternet/ToxicQAFinal
---

# NikolayKozloff/Alpha-Ophiuchi-mini-128k-v0.1-Q4_0-GGUF

This model was converted to GGUF format from [`fearlessdots/Alpha-Ophiuchi-mini-128k-v0.1`](https://huggingface.co/fearlessdots/Alpha-Ophiuchi-mini-128k-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/fearlessdots/Alpha-Ophiuchi-mini-128k-v0.1) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo NikolayKozloff/Alpha-Ophiuchi-mini-128k-v0.1-Q4_0-GGUF --model alpha-ophiuchi-mini-128k-v0.1-q4_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo NikolayKozloff/Alpha-Ophiuchi-mini-128k-v0.1-Q4_0-GGUF --model alpha-ophiuchi-mini-128k-v0.1-q4_0.gguf -c 2048
```

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo:

```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m alpha-ophiuchi-mini-128k-v0.1-q4_0.gguf -n 128
```
TrungNV/CHATBOT2
TrungNV
2024-05-28T18:34:15Z
0
0
null
[ "tensorboard", "safetensors", "license:apache-2.0", "region:us" ]
null
2024-05-28T09:43:10Z
--- license: apache-2.0 ---
MrezaPRZ/codellama_synthetic_create_context_bigquery
MrezaPRZ
2024-05-28T18:32:29Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T18:29:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SecondNan/ppo-LunaLander-v2
SecondNan
2024-05-28T18:31:38Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-28T18:31:19Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 254.38 +/- 19.59
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

The original card leaves this as a TODO; a minimal loading sketch (the checkpoint filename is an assumption; check the repo's Files tab for the actual .zip name):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repo's Files tab for the actual .zip name.
checkpoint = load_from_hub("SecondNan/ppo-LunaLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
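To watch the loaded agent, a rollout loop along these lines should work, assuming a recent stable-baselines3 with Gymnasium support (all names below are standard SB3/Gymnasium usage, not from the original card; `LunarLander-v2` additionally needs the `box2d` extra):

```python
import gymnasium as gym

env = gym.make("LunarLander-v2", render_mode="human")
obs, _ = env.reset()
done = False
while not done:
    # deterministic=True takes the greedy policy action.
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```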
isaacchung/QwenPhi-7B-slerp
isaacchung
2024-05-28T18:26:50Z
0
0
null
[ "merge", "mergekit", "lazymergekit", "Qwen/Qwen1.5-7B-Chat", "microsoft/Phi-3-mini-128k-instruct", "base_model:Qwen/Qwen1.5-7B-Chat", "base_model:merge:Qwen/Qwen1.5-7B-Chat", "base_model:microsoft/Phi-3-mini-128k-instruct", "base_model:merge:microsoft/Phi-3-mini-128k-instruct", "region:us" ]
null
2024-05-28T18:26:49Z
---
tags:
- merge
- mergekit
- lazymergekit
- Qwen/Qwen1.5-7B-Chat
- microsoft/Phi-3-mini-128k-instruct
base_model:
- Qwen/Qwen1.5-7B-Chat
- microsoft/Phi-3-mini-128k-instruct
---

# QwenPhi-7B-slerp

QwenPhi-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat)
* [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: Qwen/Qwen1.5-7B-Chat
        layer_range: [0, 32]
      - model: microsoft/Phi-3-mini-128k-instruct
        layer_range: [0, 32]
merge_method: slerp
base_model: microsoft/Phi-3-mini-128k-instruct
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "isaacchung/QwenPhi-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Hanhpt23/whisper-base-vietmed-v1
Hanhpt23
2024-05-28T18:21:38Z
97
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "vi", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-27T23:57:13Z
---
language:
- vi
license: apache-2.0
base_model: openai/whisper-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-base
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# openai/whisper-base

This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the pphuc25/VietMed-split-8-2 dataset. It achieves the following results on the evaluation set:
- Loss: 1.0352
- Wer: 27.1975

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.7081 | 1.0 | 569 | 0.7147 | 32.6304 |
| 0.5097 | 2.0 | 1138 | 0.6779 | 30.7670 |
| 0.3642 | 3.0 | 1707 | 0.6890 | 30.5144 |
| 0.2242 | 4.0 | 2276 | 0.7389 | 31.4662 |
| 0.1221 | 5.0 | 2845 | 0.7970 | 32.5828 |
| 0.07 | 6.0 | 3414 | 0.8480 | 30.3240 |
| 0.0411 | 7.0 | 3983 | 0.8862 | 29.4380 |
| 0.0288 | 8.0 | 4552 | 0.9171 | 29.9066 |
| 0.0199 | 9.0 | 5121 | 0.9572 | 29.6321 |
| 0.0105 | 10.0 | 5690 | 0.9698 | 28.6473 |
| 0.0068 | 11.0 | 6259 | 0.9811 | 29.5881 |
| 0.0084 | 12.0 | 6828 | 0.9985 | 28.7424 |
| 0.0024 | 13.0 | 7397 | 0.9903 | 29.3355 |
| 0.003 | 14.0 | 7966 | 1.0112 | 27.6588 |
| 0.0017 | 15.0 | 8535 | 1.0137 | 28.7205 |
| 0.0004 | 16.0 | 9104 | 1.0185 | 27.2305 |
| 0.0002 | 17.0 | 9673 | 1.0257 | 27.2964 |
| 0.0006 | 18.0 | 10242 | 1.0282 | 27.2817 |
| 0.0002 | 19.0 | 10811 | 1.0336 | 27.1609 |
| 0.0001 | 20.0 | 11380 | 1.0352 | 27.1975 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
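The card documents training only. A hedged transcription sketch using the standard `transformers` pipeline (the audio path is a placeholder; decoding relies on ffmpeg being installed):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Hanhpt23/whisper-base-vietmed-v1",
)
# "sample.wav" is a placeholder for a Vietnamese medical-speech recording.
print(asr("sample.wav")["text"])
```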
imdatta0/meta_llama_3_MetaMathQA_40K_ortho
imdatta0
2024-05-28T18:21:37Z
5
0
peft
[ "peft", "safetensors", "unsloth", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
null
2024-05-28T18:21:33Z
---
license: llama3
library_name: peft
tags:
- unsloth
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B
model-index:
- name: meta_llama_3_MetaMathQA_40K_ortho
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# meta_llama_3_MetaMathQA_40K_ortho

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5219

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.02
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8807 | 0.0211 | 13 | 0.6706 |
| 0.6201 | 0.0421 | 26 | 0.6389 |
| 0.605 | 0.0632 | 39 | 0.6211 |
| 0.5929 | 0.0842 | 52 | 0.6119 |
| 0.5555 | 0.1053 | 65 | 0.6045 |
| 0.5689 | 0.1264 | 78 | 0.5980 |
| 0.5767 | 0.1474 | 91 | 0.5914 |
| 0.5584 | 0.1685 | 104 | 0.5886 |
| 0.5411 | 0.1896 | 117 | 0.5847 |
| 0.5417 | 0.2106 | 130 | 0.5829 |
| 0.5388 | 0.2317 | 143 | 0.5787 |
| 0.5473 | 0.2527 | 156 | 0.5748 |
| 0.5432 | 0.2738 | 169 | 0.5701 |
| 0.5402 | 0.2949 | 182 | 0.5677 |
| 0.5318 | 0.3159 | 195 | 0.5655 |
| 0.5155 | 0.3370 | 208 | 0.5627 |
| 0.5231 | 0.3580 | 221 | 0.5584 |
| 0.528 | 0.3791 | 234 | 0.5578 |
| 0.5372 | 0.4002 | 247 | 0.5545 |
| 0.5145 | 0.4212 | 260 | 0.5517 |
| 0.5246 | 0.4423 | 273 | 0.5487 |
| 0.5299 | 0.4633 | 286 | 0.5473 |
| 0.5297 | 0.4844 | 299 | 0.5445 |
| 0.5089 | 0.5055 | 312 | 0.5425 |
| 0.5208 | 0.5265 | 325 | 0.5409 |
| 0.5114 | 0.5476 | 338 | 0.5398 |
| 0.5092 | 0.5687 | 351 | 0.5384 |
| 0.4886 | 0.5897 | 364 | 0.5359 |
| 0.5121 | 0.6108 | 377 | 0.5337 |
| 0.5079 | 0.6318 | 390 | 0.5324 |
| 0.4996 | 0.6529 | 403 | 0.5310 |
| 0.505 | 0.6740 | 416 | 0.5301 |
| 0.5039 | 0.6950 | 429 | 0.5288 |
| 0.5073 | 0.7161 | 442 | 0.5275 |
| 0.4988 | 0.7371 | 455 | 0.5264 |
| 0.4857 | 0.7582 | 468 | 0.5260 |
| 0.4889 | 0.7793 | 481 | 0.5252 |
| 0.4836 | 0.8003 | 494 | 0.5244 |
| 0.5181 | 0.8214 | 507 | 0.5237 |
| 0.5052 | 0.8424 | 520 | 0.5231 |
| 0.4908 | 0.8635 | 533 | 0.5228 |
| 0.5136 | 0.8846 | 546 | 0.5225 |
| 0.493 | 0.9056 | 559 | 0.5223 |
| 0.4908 | 0.9267 | 572 | 0.5222 |
| 0.5066 | 0.9478 | 585 | 0.5221 |
| 0.5116 | 0.9688 | 598 | 0.5219 |
| 0.5073 | 0.9899 | 611 | 0.5219 |

### Framework versions

- PEFT 0.7.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
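Since this repo holds a PEFT adapter rather than full weights, loading requires the base model. A minimal sketch, assuming access to the gated meta-llama/Meta-Llama-3-8B checkpoint (the prompt is an illustrative MetaMathQA-style question):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Attach the LoRA adapter from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "imdatta0/meta_llama_3_MetaMathQA_40K_ortho")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

prompt = "What is 15% of 240?"  # placeholder math question
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```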
chirbard/ppo-SnowballTarget
chirbard
2024-05-28T18:21:02Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-04-28T09:31:27Z
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**

This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chirbard/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
DiederikMartens/gBERT_sa_cv_10_full_training
DiederikMartens
2024-05-28T18:18:30Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-german-cased", "base_model:finetune:google-bert/bert-base-german-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T15:58:23Z
---
license: mit
base_model: google-bert/bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gBERT_sa_cv_10_full_training
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gBERT_sa_cv_10_full_training

This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.5790
- F1: 0.6823

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 445 | 0.3761 | 0.5543 |
| 0.4212 | 2.0 | 890 | 0.4136 | 0.6501 |
| 0.2198 | 3.0 | 1335 | 0.5790 | 0.6823 |

### Framework versions

- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
amosp5/meta-llama3-8b-scrum
amosp5
2024-05-28T18:06:44Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "llama", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2024-05-28T05:45:13Z
---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: code-llama3-8b-text-to-s
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# code-llama3-8b-text-to-s

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3

### Training results

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.38.2
- Pytorch 2.3.0a0+40ec155e58.nv24.03
- Datasets 2.19.1
- Tokenizers 0.15.2
aspis/llama3_tutor_pt-br_v0.2
aspis
2024-05-28T18:05:09Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-28T18:04:55Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** aspis
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
DiederikMartens/eBERT_sa_cv_13_fold9
DiederikMartens
2024-05-28T18:04:33Z
111
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T17:52:49Z
---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: eBERT_sa_cv_13_fold9
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# eBERT_sa_cv_13_fold9

This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.6852
- F1: 0.5593

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4.47e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 325 | 0.6179 | 0.4328 |
| 0.6082 | 2.0 | 650 | 0.5883 | 0.4874 |
| 0.6082 | 3.0 | 975 | 0.6852 | 0.5593 |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
JawadC/pecorino-llava
JawadC
2024-05-28T18:03:26Z
2
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-28T17:40:21Z
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of PECORINO cheese
widget: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - JawadC/pecorino-llava

<Gallery />

## Model description

These are JawadC/pecorino-llava LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `a photo of PECORINO cheese` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format. [Download](https://huggingface.co/JawadC/pecorino-llava/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
# (a hedged sketch follows this card)
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
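The card's usage snippet is left as a TODO; a minimal hedged sketch using standard diffusers SDXL LoRA loading (not from the original card; base model and trigger phrase are taken from the card's metadata):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("JawadC/pecorino-llava")

# The trigger phrase comes from the card's instance prompt.
image = pipe("a photo of PECORINO cheese on a wooden board").images[0]
image.save("pecorino.png")
```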
enithgma/asogrocaima
enithgma
2024-05-28T17:59:01Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-28T17:59:01Z
--- license: apache-2.0 ---
lukarape/w2v-bert-2.0-acoustic-erebuni-commonvoice-v23-hyper2
lukarape
2024-05-28T17:54:27Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T17:54:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
doubledsbv/KafkaLM-Mixtral-8x7B-V0.2_DPO-AWQ
doubledsbv
2024-05-28T17:53:24Z
8
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-05-28T17:47:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DiederikMartens/eBERT_sa_cv_13_fold8
DiederikMartens
2024-05-28T17:52:42Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T16:27:46Z
--- license: apache-2.0 base_model: google-bert/bert-base-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: eBERT_sa_cv_13_fold8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eBERT_sa_cv_13_fold8 This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5854 - F1: 0.5584 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.5765 | 0.4529 | | 0.6339 | 2.0 | 650 | 0.5104 | 0.5005 | | 0.6339 | 3.0 | 975 | 0.5854 | 0.5584 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
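The hyperparameter list above maps directly onto a `TrainingArguments` object. As a minimal sketch (assumed, since the card itself ships no code; the tiny inline dataset and `num_labels=3` are stand-ins for the undisclosed training data and label set):

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "google-bert/bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# num_labels=3 is an assumption; the card does not state the label set.
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

# Placeholder data; the card only refers to "the None dataset".
ds = Dataset.from_dict({"text": ["great product", "terrible service"], "label": [2, 0]})
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="eBERT_sa_cv_13_fold8",
    learning_rate=4.47e-5,            # as listed above
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,                          # Adam betas/epsilon are left at their defaults
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds,
    eval_dataset=ds,
    tokenizer=tokenizer,              # enables padding via the default data collator
)
trainer.train()
```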
kamrr/tinyllama-1.1B_dolly-4.5k_lora
kamrr
2024-05-28T17:39:57Z
2
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "base_model:adapter:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2024-05-28T17:06:17Z
--- license: apache-2.0 library_name: peft tags: - axolotl - generated_from_trainer base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T model-index: - name: tinyllama-1.1B_dolly-4.5k_lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizer load_in_8bit: true load_in_4bit: false strict: false datasets: - path: kareemamrr/databricks-dolly-4.5k type: alpaca dataset_prepared_path: val_set_size: 0.05 output_dir: ./outputs/lora-out sequence_len: 4096 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true adapter: lora lora_model_dir: lora_r: 16 lora_alpha: 16 lora_dropout: 0.5 lora_target_linear: true lora_fan_in_fan_out: # wandb_project: tinyllama-dolly-axolotl # wandb_entity: kamr54 hub_model_id: kareemamrr/tinyllama-1.1B_dolly-4.5k_lora gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 4 optimizer: adamw_bnb_8bit lr_scheduler: learning_rate: 0.0004 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 4 saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: ``` </details><br> # tinyllama-1.1B_dolly-4.5k_lora This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7650 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8146 | 0.0317 | 1 | 2.1074 | | 1.7728 | 0.2540 | 8 | 1.8290 | | 1.9975 | 0.5079 | 16 | 1.7875 | | 1.7685 | 0.7619 | 24 | 1.7717 | | 1.8368 | 1.0159 | 32 | 1.7684 | | 1.768 | 1.2460 | 40 | 1.7622 | | 1.7774 | 1.5 | 48 | 1.7655 | | 1.7727 | 1.7540 | 56 | 1.7565 | | 1.7453 | 2.0079 | 64 | 1.7502 | | 1.5904 | 2.2381 | 72 | 1.7644 | | 1.5978 | 2.4921 | 80 | 1.7628 | | 1.7305 | 2.7460 | 88 | 1.7600 | | 1.4956 | 3.0 | 96 | 1.7582 | | 1.503 | 3.2222 | 104 | 1.7603 | | 1.6659 | 3.4762 | 112 | 1.7634 | | 1.734 | 3.7302 | 120 | 1.7650 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.1.2+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
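For completeness, here is a minimal inference sketch (not part of the card) that loads this LoRA adapter on top of the TinyLlama base model with PEFT; the alpaca-style prompt is an assumption based on the `type: alpaca` dataset setting in the config above:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)
# Adapter repo id taken from this model page.
model = PeftModel.from_pretrained(model, "kamrr/tinyllama-1.1B_dolly-4.5k_lora")

# Alpaca-style prompt format is assumed from the training config.
prompt = "### Instruction:\nName three primary colors.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```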
mchariar/ppo-Huggy
mchariar
2024-05-28T17:35:45Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-05-24T21:57:58Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: mchariar/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
atepeq/Mistral-7B-Instruct-v0.2_musk_r8
atepeq
2024-05-28T17:33:41Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T16:10:25Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
QuietImpostor/Llama-3-Refueled-Pruned
QuietImpostor
2024-05-28T17:31:08Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "en", "dataset:yahma/alpaca-cleaned", "base_model:refuelai/Llama-3-Refueled", "base_model:finetune:refuelai/Llama-3-Refueled", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T19:26:24Z
--- base_model: - refuelai/Llama-3-Refueled library_name: transformers tags: - mergekit - merge license: llama3 datasets: - yahma/alpaca-cleaned language: - en --- ### Pruning Details This is a prune of [Llama 3 Refueled](https://www.huggingface.co/refuelai/llama-3-refueled) using [mergekit](https://github.com/cg123/mergekit) and [PruneMe](https://www.github.com/arcee-ai/PruneMe). The model is semi-tested but still needs some debugging, namely with converting to GGUF, though I am working on that. Note: the [dataset](https://www.huggingface.co/yahma/alpaca-cleaned) was used only for evaluating which layers should be pruned. This model was **NOT** finetuned. ### Performance After only one test (limited compute and very long inference times on my 3060 Ti, 8GB), the model does show some interesting results. Here's the response after being prompted "Hi!" using the [example from Meta](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3). ```model_response vel tips and recommendations.user Hi!assistant Hi! I can help you find the best travel tips and recommendations for your next trip. Where you most interested to travel and what kind of activities you most to to the 9e sure, we can start and letiing 10e 11e 12e 13e 14e 15e 16e 17e 18e 19e 20e 21e 23e 24e 5e 6e 7e 8e 9e 10e 11e 12e 13e 14e 15e ``` Even without finetuning, the model still exhibits some degree of instruction following. Fine-tuning was planned, but it is no longer in progress due to issues with unsloth. However, I am working on a project that will hopefully make pruning models easier. ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: refuelai/Llama-3-Refueled layer_range: [0, 19] - sources: - model: refuelai/Llama-3-Refueled layer_range: [29, 32] merge_method: passthrough dtype: bfloat16 ```
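Since the prune only removes whole decoder layers and keeps the standard Llama architecture, loading should work through the usual transformers path. A minimal sketch, with dtype, device placement, and the presence of a chat template all assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "QuietImpostor/Llama-3-Refueled-Pruned"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumes the tokenizer ships a Llama-3-style chat template.
messages = [{"role": "user", "content": "Hi!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```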
DiederikMartens/mBERT_sa_cv_13_fold9
DiederikMartens
2024-05-28T17:29:47Z
115
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T17:06:58Z
--- license: apache-2.0 base_model: google-bert/bert-base-multilingual-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: mBERT_sa_cv_13_fold9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBERT_sa_cv_13_fold9 This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5084 - F1: 0.5983 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.5642 | 0.4782 | | 0.5411 | 2.0 | 650 | 0.5084 | 0.5983 | | 0.5411 | 3.0 | 975 | 0.6772 | 0.5917 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
fine-tuned/BAAI_bge-small-en-v1_5-2852024-6p16-webapp
fine-tuned
2024-05-28T17:24:05Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "NLP", "Machine Learning", "Text Analysis", "AI", "Computational Linguistics", "en", "dataset:fine-tuned/BAAI_bge-small-en-v1_5-2852024-6p16-webapp", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T17:24:00Z
--- license: apache-2.0 datasets: - fine-tuned/BAAI_bge-small-en-v1_5-2852024-6p16-webapp - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - NLP - Machine Learning - Text Analysis - AI - Computational Linguistics --- This model is a fine-tuned version of [**BAAI/bge-small-en-v1.5**](https://huggingface.co/BAAI/bge-small-en-v1.5) designed for the following use case: natural language processing ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/BAAI_bge-small-en-v1_5-2852024-6p16-webapp', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
dwb2023/paligemma_rlaifv-V-1
dwb2023
2024-05-28T17:23:21Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "paligemma", "generated_from_trainer", "base_model:google/paligemma-3b-pt-224", "base_model:adapter:google/paligemma-3b-pt-224", "license:gemma", "region:us" ]
null
2024-05-28T05:30:29Z
--- license: gemma library_name: peft tags: - generated_from_trainer base_model: google/paligemma-3b-pt-224 model-index: - name: paligemma_rlaifv-V-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paligemma_rlaifv-V-1 This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 8 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.42.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
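A minimal inference sketch (assumed, since the card omits usage code): PaliGemma pairs an image with a text prompt, and the PEFT adapter is applied on top of the base checkpoint. The image path and prompt are placeholders:

```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

base = "google/paligemma-3b-pt-224"
processor = AutoProcessor.from_pretrained(base)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    base, torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(model, "dwb2023/paligemma_rlaifv-V-1")

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(text="describe the image", images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(processor.decode(out[0], skip_special_tokens=True))
```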
BVRA/tf_efficientnetv2_b3.in1k_ft_df24m_384
BVRA
2024-05-28T17:22:51Z
10
0
DanishFungi
[ "DanishFungi", "pytorch", "image-classification", "ecology", "fungi", "FGVC", "license:cc-by-nc-4.0", "region:us" ]
image-classification
2024-05-15T06:17:51Z
--- tags: - image-classification - ecology - fungi - FGVC library_name: DanishFungi license: cc-by-nc-4.0 --- # Model card for BVRA/tf_efficientnetv2_b3.in1k_ft_df24m_384 ## Model Details - **Model Type:** Danish Fungi Classification - **Model Stats:** - Params (M): 13.2 - Image size: 384 x 384 - **Papers:** - **Original:** ?? - **Train Dataset:** DF24m --> https://sites.google.com/view/danish-fungi-dataset ## Model Usage ### Image Embeddings ```python import timm import torch import torchvision.transforms as T from PIL import Image model = timm.create_model("hf-hub:BVRA/tf_efficientnetv2_b3.in1k_ft_df24m_384", pretrained=True) model = model.eval() train_transforms = T.Compose([T.Resize((384, 384)), T.ToTensor(), T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) img = Image.open(PATH_TO_YOUR_IMAGE) output = model(train_transforms(img).unsqueeze(0)) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @InProceedings{Picek_2022_WACV, author = {Picek, Luk\'a\v{s} and \v{S}ulc, Milan and Matas, Ji\v{r}\'i and Jeppesen, Thomas S. and Heilmann-Clausen, Jacob and L{\ae}ss{\o}e, Thomas and Fr{\o}slev, Tobias}, title = {Danish Fungi 2020 - Not Just Another Image Recognition Dataset}, booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)}, month = {January}, year = {2022}, pages = {1525-1535} } ``` ```bibtex @article{picek2022automatic, title={Automatic Fungi Recognition: Deep Learning Meets Mycology}, author={Picek, Luk\'a\v{s} and \v{S}ulc, Milan and Matas, Ji\v{r}\'i and Heilmann-Clausen, Jacob and Jeppesen, Thomas S and Lind, Emil}, journal={Sensors}, volume={22}, number={2}, pages={633}, year={2022}, publisher={Multidisciplinary Digital Publishing Institute} } ```
morca/pegasus-ft
morca
2024-05-28T17:22:12Z
104
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-28T17:20:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
amosp5/llama3-8b-instruct-scrum
amosp5
2024-05-28T17:21:11Z
3
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2024-05-28T17:15:03Z
--- license: llama3 library_name: peft tags: - trl - sft - generated_from_trainer base_model: meta-llama/Meta-Llama-3-8B-Instruct datasets: - generator model-index: - name: llama3-8b-instruct-scrum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama3-8b-instruct-scrum This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.38.2 - Pytorch 2.3.0a0+40ec155e58.nv24.03 - Datasets 2.19.1 - Tokenizers 0.15.2
yoruneko-sama/homura_So-VITS-SVC_4.1_Model
yoruneko-sama
2024-05-28T17:20:22Z
0
0
null
[ "region:us" ]
null
2024-05-28T17:07:10Z
All model files and configuration files in this folder are based on so-vits-svc 4.1.
vuongnhathien/test-wrong-label
vuongnhathien
2024-05-28T17:14:29Z
192
0
transformers
[ "transformers", "tensorboard", "safetensors", "convnextv2", "image-classification", "generated_from_trainer", "base_model:facebook/convnextv2-base-22k-384", "base_model:finetune:facebook/convnextv2-base-22k-384", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T17:06:36Z
--- license: apache-2.0 base_model: facebook/convnextv2-base-22k-384 tags: - generated_from_trainer model-index: - name: test-wrong-label results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-wrong-label This model is a fine-tuned version of [facebook/convnextv2-base-22k-384](https://huggingface.co/facebook/convnextv2-base-22k-384) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 40 | 0.9315 | 0.7625 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
malerbe/q-FrozenLake-v1-4x4-noSlippery
malerbe
2024-05-28T17:11:50Z
0
0
null
[ "FrozenLake-v1-8x8-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-05-24T09:57:19Z
--- tags: - FrozenLake-v1-8x8-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-8x8-no_slippery type: FrozenLake-v1-8x8-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python # load_from_hub is the helper defined in the Hugging Face Deep RL Course notebook model = load_from_hub(repo_id="malerbe/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
momina296/flan-t5-base-imdb-text-classification
momina296
2024-05-28T17:09:18Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-02-11T16:54:00Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer metrics: - f1 model-index: - name: flan-t5-base-imdb-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-imdb-text-classification This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5742 - F1: 54.5455 - Gen Len: 2.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.38.1 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.2
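A minimal usage sketch (assumed): since the model is a text2text fine-tune, the sentiment label comes back as generated text. The exact prompt template used in training is not documented, so the instruction below is a guess:

```python
from transformers import pipeline

classifier = pipeline(
    "text2text-generation",
    model="momina296/flan-t5-base-imdb-text-classification",
)
# The prompt wording is an assumption; adjust it to match the training format.
print(classifier("Classify the sentiment of this movie review: A wonderful, heartfelt film."))
```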
DiederikMartens/tsBERT_sa_cv_10_full_training
DiederikMartens
2024-05-28T17:08:40Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:igorsterner/german-english-code-switching-bert", "base_model:finetune:igorsterner/german-english-code-switching-bert", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T15:58:36Z
--- license: mit base_model: igorsterner/german-english-code-switching-bert tags: - generated_from_trainer metrics: - f1 model-index: - name: tsBERT_sa_cv_10_full_training results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tsBERT_sa_cv_10_full_training This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5331 - F1: 0.6921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 445 | 0.3674 | 0.5516 | | 0.4118 | 2.0 | 890 | 0.4229 | 0.6564 | | 0.232 | 3.0 | 1335 | 0.5331 | 0.6921 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
abhiwebshar/lora_model
abhiwebshar
2024-05-28T17:08:36Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-28T17:08:25Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** abhiwebshar - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
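A minimal inference sketch (assumed, not from the card) using Unsloth's loader; `max_seq_length` and 4-bit loading are assumptions consistent with the unsloth/llama-3-8b-bnb-4bit base named above, and a CUDA device is required:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="abhiwebshar/lora_model",
    max_seq_length=2048,   # assumed; not stated in the card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster generation path

inputs = tokenizer(["What is the capital of France?"], return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```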
adinolfi/borse-lora
adinolfi
2024-05-28T17:07:43Z
1
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-05-23T14:01:28Z
--- license: creativeml-openrail-m library_name: diffusers tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - diffusers-training - lora base_model: runwayml/stable-diffusion-v1-5 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - adinolfi/borse-lora These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the C:/Users/PNP/Desktop/Nappi/Dataset_manuale dataset. You can find some example images in the following. ![img_0](./image_0.png) ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
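The card's `#### How to use` snippet is left as a TODO; a minimal sketch of what it would typically look like for a diffusers LoRA follows (the prompt and output filename are placeholders, and fp16 on CUDA is an assumption):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("adinolfi/borse-lora")

# "borse" is Italian for "bags", so a handbag prompt is a plausible placeholder.
image = pipe("a photo of a handbag", num_inference_steps=30).images[0]
image.save("borse_lora_sample.png")
```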
DiederikMartens/tsBERT_sa_cv_13_fold8
DiederikMartens
2024-05-28T17:06:54Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:igorsterner/german-english-code-switching-bert", "base_model:finetune:igorsterner/german-english-code-switching-bert", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T16:23:17Z
--- license: mit base_model: igorsterner/german-english-code-switching-bert tags: - generated_from_trainer metrics: - f1 model-index: - name: tsBERT_sa_cv_13_fold8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tsBERT_sa_cv_13_fold8 This model is a fine-tuned version of [igorsterner/german-english-code-switching-bert](https://huggingface.co/igorsterner/german-english-code-switching-bert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5081 - F1: 0.6678 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.4193 | 0.6050 | | 0.45 | 2.0 | 650 | 0.4256 | 0.6563 | | 0.45 | 3.0 | 975 | 0.5081 | 0.6678 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
DiederikMartens/mBERT_sa_cv_13_fold8
DiederikMartens
2024-05-28T17:06:49Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T16:25:04Z
--- license: apache-2.0 base_model: google-bert/bert-base-multilingual-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: mBERT_sa_cv_13_fold8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBERT_sa_cv_13_fold8 This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5520 - F1: 0.6271 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.4435 | 0.5245 | | 0.5315 | 2.0 | 650 | 0.4610 | 0.5868 | | 0.5315 | 3.0 | 975 | 0.5520 | 0.6271 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
DokHee/Llama-3-Open-Ko-8B-Instruct-alphaEdu100V3-gguf
DokHee
2024-05-28T17:05:19Z
16
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:beomi/Llama-3-Open-Ko-8B", "base_model:quantized:beomi/Llama-3-Open-Ko-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-28T16:32:40Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: beomi/Llama-3-Open-Ko-8B --- # Uploaded model - **Developed by:** DokHee - **License:** apache-2.0 - **Finetuned from model :** beomi/Llama-3-Open-Ko-8B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
pbruna/distilbert-base-uncased-finetuned-clinc
pbruna
2024-05-28T16:53:07Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T15:10:31Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3223 - Accuracy: 0.9458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.4054 | 1.0 | 318 | 2.5365 | 0.7294 | | 1.9574 | 2.0 | 636 | 1.2998 | 0.8655 | | 1.0087 | 3.0 | 954 | 0.7107 | 0.9135 | | 0.5584 | 4.0 | 1272 | 0.4784 | 0.9313 | | 0.3615 | 5.0 | 1590 | 0.3918 | 0.9368 | | 0.2731 | 6.0 | 1908 | 0.3560 | 0.9426 | | 0.2281 | 7.0 | 2226 | 0.3339 | 0.9465 | | 0.2039 | 8.0 | 2544 | 0.3295 | 0.9442 | | 0.1926 | 9.0 | 2862 | 0.3229 | 0.9468 | | 0.186 | 10.0 | 3180 | 0.3223 | 0.9458 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.15.2
Essacheez/gemma-7b-it-finetune-code-10k-gemma-style
Essacheez
2024-05-28T16:50:15Z
4
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T15:34:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
straenyagun/akilvedavranisbozukluklari-classification
straenyagun
2024-05-28T16:48:58Z
111
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T16:48:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vonvolous/tattoo_realism_before_LoRA
vonvolous
2024-05-28T16:46:35Z
9
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-25T04:12:15Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: In the style of TOK tattoo widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - vonvolous/tattoo_realism_LoRA <Gallery /> ## Model description These are vonvolous/tattoo_realism_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use In the style of TOK tattoo to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](vonvolous/tattoo_realism_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
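The "How to use" section of this card is still a TODO, so here is a minimal sketch of running the adapter with diffusers. The base model and trigger phrase come from the card above; the prompt, output file name, and hardware choice are illustrative assumptions, not code shipped with the repository.

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model the LoRA was trained against (per the card).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the LoRA adaptation weights from this repository.
pipe.load_lora_weights("vonvolous/tattoo_realism_before_LoRA")

# The card names "In the style of TOK tattoo" as the trigger phrase.
image = pipe("In the style of TOK tattoo, a rose on a forearm").images[0]
image.save("tattoo.png")  # output path is illustrative
```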
fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-591725
fine-tuned
2024-05-28T16:46:08Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-591725", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T16:45:17Z
--- license: apache-2.0 datasets: - fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-591725 - allenai/c4 language: - en - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for embedding-based tasks such as semantic search, clustering, and sentence-similarity scoring. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-591725', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf
RichardErkhov
2024-05-28T16:46:04Z
80
0
null
[ "gguf", "arxiv:2402.06332", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-28T04:54:44Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) internlm2-math-plus-7b - GGUF - Model creator: https://huggingface.co/internlm/ - Original model: https://huggingface.co/internlm/internlm2-math-plus-7b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [internlm2-math-plus-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q2_K.gguf) | Q2_K | 2.8GB | | [internlm2-math-plus-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.IQ3_XS.gguf) | IQ3_XS | 3.1GB | | [internlm2-math-plus-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.IQ3_S.gguf) | IQ3_S | 3.25GB | | [internlm2-math-plus-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q3_K_S.gguf) | Q3_K_S | 3.24GB | | [internlm2-math-plus-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.IQ3_M.gguf) | IQ3_M | 3.35GB | | [internlm2-math-plus-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q3_K.gguf) | Q3_K | 3.57GB | | [internlm2-math-plus-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q3_K_M.gguf) | Q3_K_M | 3.57GB | | [internlm2-math-plus-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q3_K_L.gguf) | Q3_K_L | 3.85GB | | [internlm2-math-plus-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.IQ4_XS.gguf) | IQ4_XS | 3.99GB | | [internlm2-math-plus-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q4_0.gguf) | Q4_0 | 4.15GB | | [internlm2-math-plus-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.IQ4_NL.gguf) | IQ4_NL | 4.19GB | | [internlm2-math-plus-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q4_K_S.gguf) | Q4_K_S | 4.18GB | | [internlm2-math-plus-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q4_K.gguf) | Q4_K | 4.39GB | | [internlm2-math-plus-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q4_K_M.gguf) | Q4_K_M | 4.39GB | | [internlm2-math-plus-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q4_1.gguf) | Q4_1 | 4.58GB | | [internlm2-math-plus-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q5_0.gguf) | Q5_0 | 5.0GB | | [internlm2-math-plus-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q5_K_S.gguf) | Q5_K_S | 5.0GB | | 
[internlm2-math-plus-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q5_K.gguf) | Q5_K | 5.13GB | | [internlm2-math-plus-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q5_K_M.gguf) | Q5_K_M | 5.13GB | | [internlm2-math-plus-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q5_1.gguf) | Q5_1 | 5.43GB | | [internlm2-math-plus-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q6_K.gguf) | Q6_K | 5.91GB | | [internlm2-math-plus-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf/blob/main/internlm2-math-plus-7b.Q8_0.gguf) | Q8_0 | 7.66GB | Original model description: --- pipeline_tag: text-generation license: other language: - en - zh tags: - math --- # InternLM-Math-Plus <div align="center"> <img src="https://raw.githubusercontent.com/InternLM/InternLM/main/assets/logo.svg" width="200"/> <div> </div> <div align="center"> <b><font size="5">InternLM-Math</font></b> <sup> <a href="https://internlm.intern-ai.org.cn/"> <i><font size="4">Plus</font></i> </a> </sup> <div> </div> </div> State-of-the-art bilingual open-source math reasoning LLMs. A **solver**, **prover**, **verifier**, **augmentor**. [💻 Github](https://github.com/InternLM/InternLM-Math) [🤗 Demo](https://huggingface.co/spaces/internlm/internlm2-math-7b) </div> # News - [2024.05.24] We release the updated InternLM2-Math-Plus in 4 sizes (1.8B, 7B, 20B, and 8x22B) with state-of-the-art performance. We improve informal math reasoning performance (chain-of-thought and code-interpreter) and formal math reasoning performance (LEAN 4 translation and LEAN 4 theorem proving) significantly. - [2024.02.10] We add tech reports and a citation reference. - [2024.01.31] We add MiniF2F results with evaluation codes! - [2024.01.29] We add checkpoints from ModelScope. Update results about majority voting and Code Interpreter. Tech report is on the way! - [2024.01.26] We add checkpoints from OpenXLab, which makes downloading easier for Chinese users! # Performance ## Formal Math Reasoning We evaluate the performance of InternLM2-Math-Plus on the formal math reasoning benchmark MiniF2F-test. The evaluation setting is the same as Llemma's, with LEAN 4. | Models | MiniF2F-test | | -------------------------------- | ------------ | | ReProver | 26.5 | | LLMStep | 27.9 | | GPT-F | 36.6 | | HTPS | 41.0 | | Llemma-7B | 26.2 | | Llemma-34B | 25.8 | | InternLM2-Math-7B-Base | 30.3 | | InternLM2-Math-20B-Base | 29.5 | | InternLM2-Math-Plus-1.8B | 38.9 | | InternLM2-Math-Plus-7B | **43.4** | | InternLM2-Math-Plus-20B | 42.6 | | InternLM2-Math-Plus-Mixtral8x22B | 37.3 | ## Informal Math Reasoning We evaluate the performance of InternLM2-Math-Plus on the informal math reasoning benchmarks MATH and GSM8K. InternLM2-Math-Plus-1.8B outperforms MiniCPM-2B in the smallest size setting. InternLM2-Math-Plus-7B outperforms Deepseek-Math-7B-RL, the state-of-the-art open-source math reasoning model. InternLM2-Math-Plus-Mixtral8x22B achieves 68.5 on MATH (with Python) and 91.8 on GSM8K.
| Model | MATH | MATH-Python | GSM8K | | -------------------------------- | -------- | ----------- | -------- | | MiniCPM-2B | 10.2 | - | 53.8 | | InternLM2-Math-Plus-1.8B | **37.0** | **41.5** | **58.8** | | InternLM2-Math-7B | 34.6 | 50.9 | 78.1 | | Deepseek-Math-7B-RL | 51.7 | 58.8 | **88.2** | | InternLM2-Math-Plus-7B | **53.0** | **59.7** | 85.8 | | InternLM2-Math-20B | 37.7 | 54.3 | 82.6 | | InternLM2-Math-Plus-20B | **53.8** | **61.8** | **87.7** | | Mixtral8x22B-Instruct-v0.1 | 41.8 | - | 78.6 | | Eurux-8x22B-NCA | 49.0 | - | - | | InternLM2-Math-Plus-Mixtral8x22B | **58.1** | **68.5** | **91.8** | We also evaluate models on [MathBench-A](https://github.com/open-compass/MathBench). InternLM2-Math-Plus-Mixtral8x22B performs comparably to Claude 3 Opus. | Model | Arithmetic | Primary | Middle | High | College | Average | | -------------------------------- | ---------- | ------- | ------ | ---- | ------- | ------- | | GPT-4o-0513 | 77.7 | 87.7 | 76.3 | 59.0 | 54.0 | 70.9 | | Claude 3 Opus | 85.7 | 85.0 | 58.0 | 42.7 | 43.7 | 63.0 | | Qwen-Max-0428 | 72.3 | 86.3 | 65.0 | 45.0 | 27.3 | 59.2 | | Qwen-1.5-110B | 70.3 | 82.3 | 64.0 | 47.3 | 28.0 | 58.4 | | Deepseek-V2 | 82.7 | 89.3 | 59.0 | 39.3 | 29.3 | 59.9 | | Llama-3-70B-Instruct | 70.3 | 86.0 | 53.0 | 38.7 | 34.7 | 56.5 | | InternLM2-Math-Plus-Mixtral8x22B | 77.5 | 82.0 | 63.6 | 50.3 | 36.8 | 62.0 | | InternLM2-Math-20B | 58.7 | 70.0 | 43.7 | 24.7 | 12.7 | 42.0 | | InternLM2-Math-Plus-20B | 65.8 | 79.7 | 59.5 | 47.6 | 24.8 | 55.5 | | Llama3-8B-Instruct | 54.7 | 71.0 | 25.0 | 19.0 | 14.0 | 36.7 | | InternLM2-Math-7B | 53.7 | 67.0 | 41.3 | 18.3 | 8.0 | 37.7 | | Deepseek-Math-7B-RL | 68.0 | 83.3 | 44.3 | 33.0 | 23.0 | 50.3 | | InternLM2-Math-Plus-7B | 61.4 | 78.3 | 52.5 | 40.5 | 21.7 | 50.9 | | MiniCPM-2B | 49.3 | 51.7 | 18.0 | 8.7 | 3.7 | 26.3 | | InternLM2-Math-Plus-1.8B | 43.0 | 43.3 | 25.4 | 18.9 | 4.7 | 27.1 | # Citation and Tech Report ``` @misc{ying2024internlmmath, title={InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning}, author={Huaiyuan Ying and Shuo Zhang and Linyang Li and Zhejian Zhou and Yunfan Shao and Zhaoye Fei and Yichuan Ma and Jiawei Hong and Kuikun Liu and Ziyi Wang and Yudong Wang and Zijian Wu and Shuaibin Li and Fengzhe Zhou and Hongwei Liu and Songyang Zhang and Wenwei Zhang and Hang Yan and Xipeng Qiu and Jiayu Wang and Kai Chen and Dahua Lin}, year={2024}, eprint={2402.06332}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
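None of the GGUF files in the table above ship with a usage snippet. Following the llama.cpp invocation pattern used by other GGUF cards in this dump, a run of the Q4_K_M quant might look like this; the quant choice and prompt are illustrative.

```bash
# Fetch the chosen quant from the Hub and run it with llama.cpp.
llama-cli --hf-repo RichardErkhov/internlm_-_internlm2-math-plus-7b-gguf \
  --model internlm2-math-plus-7b.Q4_K_M.gguf \
  -p "Solve step by step: what is the sum of the first 100 positive integers?"
```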
pchopalli/whisper-small-or-en
pchopalli
2024-05-28T16:44:36Z
93
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "or", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-28T16:43:31Z
--- language: - or license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small Oriya Translate - Prashant C results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: or split: test args: 'config: bg, split: test' metrics: - name: Wer type: wer value: 26.790595954073265 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Oriya Translate - Prashant C This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3157 - Wer Ortho: 60.6530 - Wer: 26.7906 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:------:|:----:|:---------------:|:---------:|:-------:| | 0.0106 | 9.6154 | 500 | 0.3157 | 60.6530 | 26.7906 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
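The card leaves its usage sections empty; a minimal sketch of transcribing audio with this checkpoint through the transformers pipeline might look like the following (the audio file path is a placeholder).

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for automatic speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="pchopalli/whisper-small-or-en",
)

# Transcribe a local audio file (path is illustrative).
result = asr("sample_odia_clip.wav")
print(result["text"])
```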
javidanaslanli/tiny-az-tokenizer-13k
javidanaslanli
2024-05-28T16:40:10Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-28T16:40:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Klevin/DECYPHERS-TEST-2.0
Klevin
2024-05-28T16:35:30Z
138
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T16:28:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf
RichardErkhov
2024-05-28T16:32:30Z
5
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-28T12:59:01Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) zephyr-alpha-Nebula-v2-7B - GGUF - Model creator: https://huggingface.co/Weyaxi/ - Original model: https://huggingface.co/Weyaxi/zephyr-alpha-Nebula-v2-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [zephyr-alpha-Nebula-v2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [zephyr-alpha-Nebula-v2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [zephyr-alpha-Nebula-v2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [zephyr-alpha-Nebula-v2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [zephyr-alpha-Nebula-v2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [zephyr-alpha-Nebula-v2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [zephyr-alpha-Nebula-v2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [zephyr-alpha-Nebula-v2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [zephyr-alpha-Nebula-v2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [zephyr-alpha-Nebula-v2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [zephyr-alpha-Nebula-v2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [zephyr-alpha-Nebula-v2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [zephyr-alpha-Nebula-v2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [zephyr-alpha-Nebula-v2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [zephyr-alpha-Nebula-v2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [zephyr-alpha-Nebula-v2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | [zephyr-alpha-Nebula-v2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | 
| [zephyr-alpha-Nebula-v2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [zephyr-alpha-Nebula-v2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [zephyr-alpha-Nebula-v2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [zephyr-alpha-Nebula-v2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q6_K.gguf) | Q6_K | 5.53GB | | [zephyr-alpha-Nebula-v2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_zephyr-alpha-Nebula-v2-7B-gguf/blob/main/zephyr-alpha-Nebula-v2-7B.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: cc-by-nc-4.0 datasets: - garage-bAInd/Open-Platypus language: - en --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/cKySe1S5IW_KnbZpKmozQ.png) <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # zephyr-alpha-Nebula-v2-7B zephyr-alpha-Nebula-v2-7B is a merge of [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) and [PulsarAI/Nebula-v2-7B-Lora](https://huggingface.co/PulsarAI/Nebula-v2-7B-Lora) # Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)) | Metric | Value | |-----------------------|-----------| | Avg. | | | ARC (25-shot) | | | HellaSwag (10-shot) | | | MMLU (5-shot) | | | TruthfulQA (0-shot) | | | Winogrande (5-shot) | | | GSM8K (5-shot) | | | DROP (3-shot) | |
ClaudioItaly/TopEvolution-Q8_0-GGUF
ClaudioItaly
2024-05-28T16:30:34Z
1
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:merge:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:mergekit-community/mergekit-slerp-ebgdloh", "base_model:merge:mergekit-community/mergekit-slerp-ebgdloh", "endpoints_compatible", "region:us" ]
null
2024-05-28T16:30:15Z
--- library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo base_model: - NousResearch/Hermes-2-Pro-Mistral-7B - mergekit-community/mergekit-slerp-ebgdloh --- # ClaudioItaly/TopEvolution-Q8_0-GGUF This model was converted to GGUF format from [`mergekit-community/TopEvolution`](https://huggingface.co/mergekit-community/TopEvolution) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mergekit-community/TopEvolution) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo ClaudioItaly/TopEvolution-Q8_0-GGUF --model topevolution-q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo ClaudioItaly/TopEvolution-Q8_0-GGUF --model topevolution-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && \ cd llama.cpp && \ make && \ ./main -m topevolution-q8_0.gguf -n 128 ```
DiederikMartens/gBERT_sa_cv_13_fold8
DiederikMartens
2024-05-28T16:30:24Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-german-cased", "base_model:finetune:google-bert/bert-base-german-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T16:10:33Z
--- license: mit base_model: google-bert/bert-base-german-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: gBERT_sa_cv_13_fold8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gBERT_sa_cv_13_fold8 This model is a fine-tuned version of [google-bert/bert-base-german-cased](https://huggingface.co/google-bert/bert-base-german-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6000 - F1: 0.6661 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.4279 | 0.6323 | | 0.4358 | 2.0 | 650 | 0.4908 | 0.6479 | | 0.4358 | 3.0 | 975 | 0.6000 | 0.6661 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
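As with the other auto-generated trainer cards in this dump, no usage snippet is given. A minimal sketch for scoring German text with this classifier follows; the example sentence is an assumption, and the card does not document what the predicted labels mean.

```python
from transformers import pipeline

# Load the fine-tuned German BERT sentiment classifier.
classifier = pipeline(
    "text-classification",
    model="DiederikMartens/gBERT_sa_cv_13_fold8",
)

# Label names/ids are not documented in the card above.
print(classifier("Der Service war ausgezeichnet."))
```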
RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf
RichardErkhov
2024-05-28T16:29:31Z
18
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-28T12:47:10Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) OpenHermes-2.5-Nebula-v2-7B - GGUF - Model creator: https://huggingface.co/Weyaxi/ - Original model: https://huggingface.co/Weyaxi/OpenHermes-2.5-Nebula-v2-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [OpenHermes-2.5-Nebula-v2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [OpenHermes-2.5-Nebula-v2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [OpenHermes-2.5-Nebula-v2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [OpenHermes-2.5-Nebula-v2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [OpenHermes-2.5-Nebula-v2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [OpenHermes-2.5-Nebula-v2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [OpenHermes-2.5-Nebula-v2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [OpenHermes-2.5-Nebula-v2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [OpenHermes-2.5-Nebula-v2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [OpenHermes-2.5-Nebula-v2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [OpenHermes-2.5-Nebula-v2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [OpenHermes-2.5-Nebula-v2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [OpenHermes-2.5-Nebula-v2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [OpenHermes-2.5-Nebula-v2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [OpenHermes-2.5-Nebula-v2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [OpenHermes-2.5-Nebula-v2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | 
[OpenHermes-2.5-Nebula-v2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [OpenHermes-2.5-Nebula-v2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [OpenHermes-2.5-Nebula-v2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [OpenHermes-2.5-Nebula-v2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [OpenHermes-2.5-Nebula-v2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q6_K.gguf) | Q6_K | 5.53GB | | [OpenHermes-2.5-Nebula-v2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf/blob/main/OpenHermes-2.5-Nebula-v2-7B.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: cc-by-nc-4.0 datasets: - garage-bAInd/Open-Platypus language: - en --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/cKySe1S5IW_KnbZpKmozQ.png) <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # OpenHermes-2.5-Nebula-v2-7B OpenHermes-2.5-Nebula-v2-7B is a merge of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and [PulsarAI/Nebula-v2-7B-Lora](https://huggingface.co/PulsarAI/Nebula-v2-7B-Lora) # Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)) | Metric | Value | |-----------------------|-----------| | Avg. | | | ARC (25-shot) | | | HellaSwag (10-shot) | | | MMLU (5-shot) | | | TruthfulQA (0-shot) | | | Winogrande (5-shot) | | | GSM8K (5-shot) | | | DROP (3-shot) | |
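To pull a single quant from the table above instead of cloning the whole repository, the standard Hugging Face CLI download pattern applies; the choice of Q4_K_M and the follow-up llama.cpp invocation are illustrative.

```bash
# Download one GGUF file from the repo, then run it locally with llama.cpp.
huggingface-cli download RichardErkhov/Weyaxi_-_OpenHermes-2.5-Nebula-v2-7B-gguf \
  OpenHermes-2.5-Nebula-v2-7B.Q4_K_M.gguf --local-dir .
llama-cli -m OpenHermes-2.5-Nebula-v2-7B.Q4_K_M.gguf -p "Hello" -n 64
```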
DiederikMartens/eBERT_sa_cv_13_fold7
DiederikMartens
2024-05-28T16:27:39Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T16:05:43Z
--- license: apache-2.0 base_model: google-bert/bert-base-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: eBERT_sa_cv_13_fold7 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eBERT_sa_cv_13_fold7 This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5384 - F1: 0.5179 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.5761 | 0.4519 | | 0.6575 | 2.0 | 650 | 0.5185 | 0.4671 | | 0.6575 | 3.0 | 975 | 0.5384 | 0.5179 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
zoedc/resume_model_3labels_final
zoedc
2024-05-28T16:24:29Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T15:47:05Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: resume_model_3labels_final results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resume_model_3labels_final This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3759 - Accuracy: 0.8333 - F1 Weighted: 0.7882 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Weighted | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:| | 1.0074 | 1.0 | 60 | 0.7552 | 0.7667 | 0.6835 | | 0.693 | 2.0 | 120 | 0.6421 | 0.7333 | 0.6505 | | 0.5233 | 3.0 | 180 | 0.3900 | 0.8333 | 0.7882 | | 0.3459 | 4.0 | 240 | 0.3759 | 0.8333 | 0.7882 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
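The résumé classifier's card also omits usage. A sketch with the raw model (rather than pipeline) shows the softmax over the three labels; everything except the repository id is an assumption, since the card does not document the label set.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "zoedc/resume_model_3labels_final"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Score one resume snippet (text is illustrative).
inputs = tokenizer(
    "Five years of experience in backend development with Python.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    logits = model(**inputs).logits

# The card does not document what the three labels mean.
probs = logits.softmax(dim=-1).squeeze()
for i, p in enumerate(probs.tolist()):
    print(f"LABEL_{i}: {p:.3f}")
```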
asraar7/gemma-Finetuned
asraar7
2024-05-28T16:24:26Z
137
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-28T16:16:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ybelkada/test-gguf-trainer-Q8_0-GGUF
ybelkada
2024-05-28T16:23:40Z
3
0
null
[ "gguf", "trl", "sft", "gguf_generated_from_trainer", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "base_model:ybelkada/tiny-random-llama", "base_model:quantized:ybelkada/tiny-random-llama", "endpoints_compatible", "region:us" ]
null
2024-05-28T16:14:45Z
--- tags: - trl - sft - gguf_generated_from_trainer - generated_from_trainer - llama-cpp - gguf-my-repo base_model: ybelkada/tiny-random-llama model-index: - name: test-gguf-trainer results: [] --- # ybelkada/test-gguf-trainer-Q8_0-GGUF This model was converted to GGUF format from [`ybelkada/test-gguf-trainer`](https://huggingface.co/ybelkada/test-gguf-trainer) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ybelkada/test-gguf-trainer) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo ybelkada/test-gguf-trainer-Q8_0-GGUF --model test-gguf-trainer.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo ybelkada/test-gguf-trainer-Q8_0-GGUF --model test-gguf-trainer.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m test-gguf-trainer.Q8_0.gguf -n 128 ```
RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf
RichardErkhov
2024-05-28T16:23:16Z
6
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-05-28T12:50:28Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SynthIA-v1.3-Nebula-v2-7B - GGUF - Model creator: https://huggingface.co/Weyaxi/ - Original model: https://huggingface.co/Weyaxi/SynthIA-v1.3-Nebula-v2-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [SynthIA-v1.3-Nebula-v2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [SynthIA-v1.3-Nebula-v2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [SynthIA-v1.3-Nebula-v2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [SynthIA-v1.3-Nebula-v2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [SynthIA-v1.3-Nebula-v2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [SynthIA-v1.3-Nebula-v2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [SynthIA-v1.3-Nebula-v2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [SynthIA-v1.3-Nebula-v2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [SynthIA-v1.3-Nebula-v2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [SynthIA-v1.3-Nebula-v2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [SynthIA-v1.3-Nebula-v2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [SynthIA-v1.3-Nebula-v2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [SynthIA-v1.3-Nebula-v2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [SynthIA-v1.3-Nebula-v2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [SynthIA-v1.3-Nebula-v2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [SynthIA-v1.3-Nebula-v2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | [SynthIA-v1.3-Nebula-v2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | 
| [SynthIA-v1.3-Nebula-v2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [SynthIA-v1.3-Nebula-v2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [SynthIA-v1.3-Nebula-v2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [SynthIA-v1.3-Nebula-v2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q6_K.gguf) | Q6_K | 5.53GB | | [SynthIA-v1.3-Nebula-v2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf/blob/main/SynthIA-v1.3-Nebula-v2-7B.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: cc-by-nc-4.0 datasets: - garage-bAInd/Open-Platypus language: - en --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/cKySe1S5IW_KnbZpKmozQ.png) <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # SynthIA-v1.3-Nebula-v2-7B SynthIA-v1.3-Nebula-v2-7B is a merge of [migtissera/SynthIA-7B-v1.3](https://huggingface.co/migtissera/SynthIA-7B-v1.3) and [PulsarAI/Nebula-v2-7B-Lora](https://huggingface.co/PulsarAI/Nebula-v2-7B-Lora) # Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)) | Metric | Value | |-----------------------|-----------| | Avg. | | | ARC (25-shot) | | | HellaSwag (10-shot) | | | MMLU (5-shot) | | | TruthfulQA (0-shot) | | | Winogrande (5-shot) | | | GSM8K (5-shot) | | | DROP (3-shot) | |
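The card above lists the quant files but gives no loading instructions; here is a minimal sketch for running one of them, assuming the `huggingface_hub` and `llama-cpp-python` packages (the Q4_K_M file name and size come from the table above; the prompt and context size are illustrative):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the quants listed in the table above; Q4_K_M (4.07 GB)
# is a common quality/size trade-off.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Weyaxi_-_SynthIA-v1.3-Nebula-v2-7B-gguf",
    filename="SynthIA-v1.3-Nebula-v2-7B.Q4_K_M.gguf",
)

# n_ctx is an assumption; raise or lower it to fit your memory budget.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Briefly explain GGUF quantization:", max_tokens=64)
print(out["choices"][0]["text"])
```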
Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2
Zoyd
2024-05-28T16:15:27Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dpo", "dataset:mlabonne/orpo-dpo-mix-40k", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-28T15:21:22Z
--- license: other datasets: - mlabonne/orpo-dpo-mix-40k tags: - dpo --- **Exllamav2** quant (**exl2** / **3.5 bpw**) made with ExLlamaV2 v0.1.1 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2)**</center> | <center>4310 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_25bpw_exl2)**</center> | <center>4931 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_5bpw_exl2)**</center> | <center>6903 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-8_0bpw_exl2)**</center> | <center>8157 MB</center> | <center>8</center> | # NeuralDaredevil-8B-abliterated ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/gFEhcIDSKa3AWpkNfH91q.jpeg) This is a DPO fine-tune of [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) trained on one epoch of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k). ## 🏆 Evaluation ### Open LLM Leaderboard TBD. ### Nous Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard). 
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** | | [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 | | [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 | | [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 | | [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [📄](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 | | [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 | | [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 | ## 🌳 Model family tree ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/ekwRGgnjzEOyprT8sEBFt.png)
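Because the card never shows how to actually load an EXL2 quant, here is a minimal sketch assuming the `exllamav2` Python package; class and method names follow its published v0.1.x examples (matching the ExLlamaV2 v0.1.1 noted above) and may differ in other releases:

```python
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Fetch this 3.5 bpw quant (~4310 MB per the table above).
model_dir = snapshot_download("Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2")

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocated as layers are loaded
model.load_autosplit(cache)               # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # sampling values here are illustrative

print(generator.generate_simple("The meaning of life is", settings, 64))
```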
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-986812
fine-tuned
2024-05-28T16:15:16Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-986812", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T16:14:17Z
--- license: apache-2.0 datasets: - fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-986812 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-986812', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-437825
fine-tuned
2024-05-28T16:15:08Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-437825", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T16:14:14Z
--- license: apache-2.0 datasets: - fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-437825 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-437825', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2
Zoyd
2024-05-28T16:14:40Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dpo", "dataset:mlabonne/orpo-dpo-mix-40k", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-28T15:09:35Z
--- license: other datasets: - mlabonne/orpo-dpo-mix-40k tags: - dpo --- **Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.1.1 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2)**</center> | <center>4310 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_25bpw_exl2)**</center> | <center>4931 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_5bpw_exl2)**</center> | <center>6903 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-8_0bpw_exl2)**</center> | <center>8157 MB</center> | <center>8</center> | # NeuralDaredevil-8B-abliterated ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/gFEhcIDSKa3AWpkNfH91q.jpeg) This is a DPO fine-tune of [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) trained on one epoch of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k). ## 🏆 Evaluation ### Open LLM Leaderboard TBD. ### Nous Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard). 
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** | | [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 | | [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 | | [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 | | [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [📄](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 | | [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 | | [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 | ## 🌳 Model family tree ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/ekwRGgnjzEOyprT8sEBFt.png)
fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-859511
fine-tuned
2024-05-28T16:14:29Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-859511", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T16:13:34Z
--- license: apache-2.0 datasets: - fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-859511 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-859511', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-8_0bpw_exl2
Zoyd
2024-05-28T16:14:12Z
21
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dpo", "dataset:mlabonne/orpo-dpo-mix-40k", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
text-generation
2024-05-28T15:59:32Z
--- license: other datasets: - mlabonne/orpo-dpo-mix-40k tags: - dpo --- **Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.1.1 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2)**</center> | <center>4310 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_25bpw_exl2)**</center> | <center>4931 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_5bpw_exl2)**</center> | <center>6903 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-8_0bpw_exl2)**</center> | <center>8157 MB</center> | <center>8</center> | # NeuralDaredevil-8B-abliterated ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/gFEhcIDSKa3AWpkNfH91q.jpeg) This is a DPO fine-tune of [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) trained on one epoch of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k). ## 🏆 Evaluation ### Open LLM Leaderboard TBD. ### Nous Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard). 
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** | | [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 | | [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 | | [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 | | [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [📄](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 | | [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 | | [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 | ## 🌳 Model family tree ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/ekwRGgnjzEOyprT8sEBFt.png)
Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_0bpw_exl2
Zoyd
2024-05-28T16:14:03Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dpo", "dataset:mlabonne/orpo-dpo-mix-40k", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2024-05-28T15:53:19Z
--- license: other datasets: - mlabonne/orpo-dpo-mix-40k tags: - dpo --- **Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.1.1 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2)**</center> | <center>4310 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_25bpw_exl2)**</center> | <center>4931 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_5bpw_exl2)**</center> | <center>6903 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-8_0bpw_exl2)**</center> | <center>8157 MB</center> | <center>8</center> | # NeuralDaredevil-8B-abliterated ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/gFEhcIDSKa3AWpkNfH91q.jpeg) This is a DPO fine-tune of [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) trained on one epoch of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k). ## 🏆 Evaluation ### Open LLM Leaderboard TBD. ### Nous Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard). 
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** | | [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 | | [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 | | [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 | | [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [📄](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 | | [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 | | [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 | ## 🌳 Model family tree ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/ekwRGgnjzEOyprT8sEBFt.png)
roscazo/vih_explainability3
roscazo
2024-05-28T16:13:39Z
109
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:PlanTL-GOB-ES/bsc-bio-ehr-es", "base_model:finetune:PlanTL-GOB-ES/bsc-bio-ehr-es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T16:13:21Z
--- license: apache-2.0 base_model: PlanTL-GOB-ES/bsc-bio-ehr-es tags: - generated_from_trainer metrics: - precision - recall - f1 model-index: - name: vih_explainability3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vih_explainability3 This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on an unnamed dataset. It achieves the following results on the evaluation set: - Loss: 0.3951 - Roc Auc: 0.8213 - Ap Score: 0.7049 - Precision: 0.9836 - Recall: 0.6452 - F1: 0.7792 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Roc Auc | Ap Score | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:-------:|:--------:|:---------:|:------:|:------:| | 0.4261 | 0.8475 | 100 | 0.3832 | 0.6129 | 0.3793 | 1.0 | 0.2258 | 0.3684 | | 0.2405 | 1.6949 | 200 | 0.4736 | 0.6344 | 0.4138 | 1.0 | 0.2688 | 0.4237 | | 0.2088 | 2.5424 | 300 | 0.3452 | 0.7729 | 0.6274 | 0.9808 | 0.5484 | 0.7034 | | 0.2196 | 3.3898 | 400 | 0.3644 | 0.7151 | 0.5431 | 1.0 | 0.4301 | 0.6015 | | 0.2068 | 4.2373 | 500 | 0.5156 | 0.6344 | 0.4138 | 1.0 | 0.2688 | 0.4237 | | 0.1374 | 5.0847 | 600 | 0.3988 | 0.7944 | 0.6619 | 0.9821 | 0.5914 | 0.7383 | | 0.1098 | 5.9322 | 700 | 0.3629 | 0.8051 | 0.6791 | 0.9828 | 0.6129 | 0.7550 | | 0.0914 | 6.7797 | 800 | 0.3394 | 0.8240 | 0.6934 | 0.9531 | 0.6559 | 0.7771 | | 0.088 | 7.6271 | 900 | 0.3612 | 0.8334 | 0.7009 | 0.9403 | 0.6774 | 0.7875 | | 0.0787 | 8.4746 | 1000 | 0.3801 | 0.8213 | 0.7049 | 0.9836 | 0.6452 | 0.7792 | | 0.0588 | 9.3220 | 1100 | 0.3951 | 0.8213 | 0.7049 | 0.9836 | 0.6452 | 0.7792 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
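For readers who want to reproduce the run, the hyperparameters listed above map one-to-one onto `transformers.TrainingArguments`. This is a sketch only: the output directory name is an assumption, and the Adam betas/epsilon shown above are the library defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Mirrors the "Training hyperparameters" section of this card.
args = TrainingArguments(
    output_dir="vih_explainability3",   # assumed; any local path works
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```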
Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2
Zoyd
2024-05-28T16:13:37Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dpo", "dataset:mlabonne/orpo-dpo-mix-40k", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "exl2", "region:us" ]
text-generation
2024-05-28T15:16:51Z
--- license: other datasets: - mlabonne/orpo-dpo-mix-40k tags: - dpo --- **Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.1.1 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_0bpw_exl2)**</center> | <center>3895 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_5bpw_exl2)**</center> | <center>4310 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_0bpw_exl2)**</center> | <center>4727 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-4_25bpw_exl2)**</center> | <center>4931 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-5_0bpw_exl2)**</center> | <center>5559 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-6_5bpw_exl2)**</center> | <center>6903 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/mlabonne_NeuralDaredevil-8B-abliterated-8_0bpw_exl2)**</center> | <center>8157 MB</center> | <center>8</center> | # NeuralDaredevil-8B-abliterated ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/gFEhcIDSKa3AWpkNfH91q.jpeg) This is a DPO fine-tune of [mlabonne/Daredevil-8-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) trained on one epoch of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k). ## 🏆 Evaluation ### Open LLM Leaderboard TBD. ### Nous Evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval). See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard). 
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**mlabonne/NeuralDaredevil-8B-abliterated**](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/ae0bf16936cef900b72964b33c99edbc) | **55.87** | **43.73** | **73.6** | **59.36** | **46.8** | | [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B) [📄](https://gist.github.com/mlabonne/080f9c5f153ea57a7ab7d932cf896f21) | 55.87 | 44.13 | 73.52 | 59.05 | 46.77 | | [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) [📄](https://gist.github.com/mlabonne/32cdd8460804662c856bcb2a20acd49e) | 55.06 | 43.29 | 73.33 | 57.47 | 46.17 | | [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/5df2a3051dd6eb3368a77b684635dc05) | 54.28 | 43.9 | 72.62 | 56.36 | 44.23 | | [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) [📄](https://gist.github.com/mlabonne/95eef8e8d26b7b17910dcb78e1c95f4a) | 53.49 | 44.03 | 73.67 | 49.78 | 46.48 | | [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 | | [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 | ## 🌳 Model family tree ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/ekwRGgnjzEOyprT8sEBFt.png)
fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-110174
fine-tuned
2024-05-28T16:13:23Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-110174", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T16:12:27Z
--- license: apache-2.0 datasets: - fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-110174 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-m3**](https://huggingface.co/BAAI/bge-m3) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-110174', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
rileybol/autotrain-mb2mv-qdf75
rileybol
2024-05-28T16:12:05Z
192
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "autotrain", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T16:03:02Z
--- tags: - autotrain - image-classification widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 0.011031342670321465 f1_macro: 1.0 f1_micro: 1.0 f1_weighted: 1.0 precision_macro: 1.0 precision_micro: 1.0 precision_weighted: 1.0 recall_macro: 1.0 recall_micro: 1.0 recall_weighted: 1.0 accuracy: 1.0
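A hedged inference sketch using the standard `transformers` pipeline API; the image URL is one of the widget examples from the card's metadata and is purely illustrative:

```python
from transformers import pipeline

# Load the AutoTrain checkpoint named in this card.
clf = pipeline("image-classification", model="rileybol/autotrain-mb2mv-qdf75")

# Any local path, PIL image, or URL works; this URL is a widget example above.
preds = clf("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds)  # list of {"label": ..., "score": ...} dicts
```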
Varine/distilhubert-finetuned-gtzan
Varine
2024-05-28T16:10:36Z
160
0
transformers
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-05-28T14:36:00Z
--- license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.84 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5946 - Accuracy: 0.84 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9378 | 1.0 | 113 | 1.8274 | 0.51 | | 1.1834 | 2.0 | 226 | 1.2481 | 0.57 | | 1.0385 | 3.0 | 339 | 0.9500 | 0.73 | | 0.6567 | 4.0 | 452 | 0.8293 | 0.74 | | 0.5658 | 5.0 | 565 | 0.6914 | 0.81 | | 0.4314 | 6.0 | 678 | 0.6027 | 0.81 | | 0.2145 | 7.0 | 791 | 0.5902 | 0.81 | | 0.1052 | 8.0 | 904 | 0.6030 | 0.8 | | 0.1014 | 9.0 | 1017 | 0.6204 | 0.83 | | 0.0866 | 10.0 | 1130 | 0.5946 | 0.84 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.0.1 - Datasets 2.19.1 - Tokenizers 0.19.1
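A hedged inference sketch using the `transformers` audio-classification pipeline; the clip path is a placeholder for any short music excerpt, and the printed output is illustrative (the head predicts GTZAN genres such as "rock" or "jazz"):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from this card.
clf = pipeline("audio-classification", model="Varine/distilhubert-finetuned-gtzan")

# Pass any audio file path or array; GTZAN clips are ~30 s music excerpts.
preds = clf("my_clip.wav")  # "my_clip.wav" is a placeholder
print(preds)  # e.g. [{"label": "rock", "score": 0.91}, ...] (illustrative)
```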
Toshifumi/Llama3-IMDB_20240528v1
Toshifumi
2024-05-28T16:08:11Z
4
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-28T16:02:49Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** Toshifumi - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
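Since this repo ships GGUF weights but the card gives no loading example, here is a minimal sketch assuming `llama-cpp-python`; the exact `.gguf` file name inside the repo isn't stated on the card, so the glob pattern below is an assumption:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Llama.from_pretrained downloads a matching GGUF file from the Hub;
# the "*.gguf" glob is an assumption -- replace it with the real file name.
llm = Llama.from_pretrained(
    repo_id="Toshifumi/Llama3-IMDB_20240528v1",
    filename="*.gguf",
)
print(llm("This movie was", max_tokens=32)["choices"][0]["text"])
```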
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_Lw7mhgaY
MoTHer-VTHR
2024-05-28T16:07:29Z
166
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T16:07:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_ZsMxZM3p
MoTHer-VTHR
2024-05-28T16:06:49Z
168
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T16:06:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-418918
fine-tuned
2024-05-28T16:06:47Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-418918", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T16:06:12Z
--- license: apache-2.0 datasets: - fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-418918 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-418918', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
ybelkada/tiny-random-llama-Q6_K-GGUF
ybelkada
2024-05-28T16:06:31Z
6
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "endpoints_compatible", "region:us" ]
null
2024-05-28T16:06:30Z
--- library_name: transformers tags: - llama-cpp - gguf-my-repo --- # ybelkada/tiny-random-llama-Q6_K-GGUF This model was converted to GGUF format from [`ybelkada/tiny-random-llama`](https://huggingface.co/ybelkada/tiny-random-llama) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ybelkada/tiny-random-llama) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo ybelkada/tiny-random-llama-Q6_K-GGUF --model tiny-random-llama.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo ybelkada/tiny-random-llama-Q6_K-GGUF --model tiny-random-llama.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-random-llama.Q6_K.gguf -n 128 ```
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_ehobdK3q
MoTHer-VTHR
2024-05-28T16:05:59Z
166
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:48:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_Kb6teTEK
MoTHer-VTHR
2024-05-28T16:05:52Z
166
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:48:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DiederikMartens/eBERT_sa_cv_13_fold6
DiederikMartens
2024-05-28T16:05:35Z
110
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-28T15:43:39Z
--- license: apache-2.0 base_model: google-bert/bert-base-cased tags: - generated_from_trainer metrics: - f1 model-index: - name: eBERT_sa_cv_13_fold6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eBERT_sa_cv_13_fold6 This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 0.6516 - F1: 0.5892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.47e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 325 | 0.5637 | 0.4047 | | 0.6115 | 2.0 | 650 | 0.5408 | 0.4896 | | 0.6115 | 3.0 | 975 | 0.6516 | 0.5892 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
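The card above omits a quick-start snippet, so here is a minimal sketch of running this checkpoint for inference with the standard transformers pipeline; the example sentence is a placeholder, and the label names are whatever the checkpoint ships with, since the card does not list a class mapping:

```python
from transformers import pipeline

# Sketch only: the card does not document the label mapping, so the
# labels printed below are the raw ids/names stored in the checkpoint.
classifier = pipeline(
    "text-classification",
    model="DiederikMartens/eBERT_sa_cv_13_fold6",
)
print(classifier("The service was surprisingly good."))
```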
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_1-Node_nTztYgyb
MoTHer-VTHR
2024-05-28T16:05:28Z
169
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:47:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_w8zptskb
MoTHer-VTHR
2024-05-28T16:05:20Z
166
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:46:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_Htd3LHVr
MoTHer-VTHR
2024-05-28T16:05:13Z
166
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:46:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-625238
fine-tuned
2024-05-28T16:05:10Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-625238", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T16:04:40Z
--- license: apache-2.0 datasets: - fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-625238 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as semantic search, sentence similarity, and clustering. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-625238', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
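Since the dataset this checkpoint was tuned on is NFCorpus-derived, a natural extension of the card's snippet is a small retrieval loop. This is a sketch under the same assumptions as the card's own example; the query and documents below are invented placeholders:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer(
    "fine-tuned/NFCorpus-512-192-gpt-4o-2024-05-13-625238",
    trust_remote_code=True,
)

# Placeholder corpus; in practice these would be NFCorpus-style documents.
query = "dietary fiber and heart disease risk"
docs = [
    "Fiber intake is associated with lower cardiovascular risk.",
    "The history of the printing press in early modern Europe.",
]

q_emb = model.encode(query)
d_embs = model.encode(docs)
scores = cos_sim(q_emb, d_embs)[0]

# Rank documents by cosine similarity to the query.
for score, doc in sorted(zip(scores.tolist(), docs), reverse=True):
    print(f"{score:.3f}  {doc}")
```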
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_AhUK6Fzg
MoTHer-VTHR
2024-05-28T16:05:04Z
167
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:46:14Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_yrMJSNsx
MoTHer-VTHR
2024-05-28T16:04:40Z
169
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:45:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-143735
fine-tuned
2024-05-28T16:04:27Z
4
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-143735", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T16:03:56Z
--- license: apache-2.0 datasets: - fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-143735 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as semantic search, sentence similarity, and clustering. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/before-finetuning-512-192-gpt-4o-2024-05-13-143735', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_E7wc5aR2
MoTHer-VTHR
2024-05-28T16:04:26Z
166
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:44:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_2-Node_jMDgYmhU
MoTHer-VTHR
2024-05-28T16:04:18Z
166
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:44:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_1-Node_SBcz5fSi
MoTHer-VTHR
2024-05-28T16:04:10Z
166
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:43:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_4-Depth_0-Node_fUpzZmga
MoTHer-VTHR
2024-05-28T16:04:01Z
123
0
transformers
[ "transformers", "safetensors", "vit_msn", "image-feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-feature-extraction
2024-05-28T15:43:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
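Since this record's card leaves the getting-started section empty, here is a minimal usage sketch for a ViT-MSN image-feature-extraction checkpoint, assuming it follows the stock transformers `vit_msn` API. The checkpoint name below is a stand-in for this record's modelId (not shown in this excerpt), and the COCO image URL is illustrative only:

```python
# Sketch: extract patch-level image features with a ViT-MSN backbone.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

checkpoint = "facebook/vit-msn-small"  # placeholder; substitute the record's modelId

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# Illustrative test image from the COCO validation set.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# (batch, num_patches + 1, hidden_size) token embeddings; the CLS token
# at index 0 is a common choice for a single image-level feature vector.
features = outputs.last_hidden_state
print(features.shape, features[:, 0].shape)
```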
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_3-Depth_2-Node_pFP4EkPc
MoTHer-VTHR
2024-05-28T16:03:54Z
167
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:43:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
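The card above leaves its getting-started section as [More Information Needed]. A hedged sketch follows, assuming this ViT checkpoint loads through the standard transformers image-classification classes (the image path is a placeholder, and the label set is whatever the checkpoint's config declares):

```python
# Sketch: single-image inference with the ViT classifier above.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "MoTHer-VTHR/VTHR-LoRA-V-ModelTree_3-Depth_2-Node_pFP4EkPc"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder path

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
# id2label comes from the checkpoint's config; the card does not document it.
print(model.config.id2label.get(predicted_id, predicted_id))
```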
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_3-Depth_2-Node_XReBUogr
MoTHer-VTHR
2024-05-28T16:03:47Z
168
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:42:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
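This sibling checkpoint differs from the previous record only in its repository id. As an alternative to the explicit processor/model pattern shown above, the transformers pipeline API collapses the same steps into a few lines; a sketch, untested against this specific repo, with a placeholder image path:

```python
from transformers import pipeline

# Build an image-classification pipeline directly from the Hub repo id.
classifier = pipeline(
    "image-classification",
    model="MoTHer-VTHR/VTHR-LoRA-V-ModelTree_3-Depth_2-Node_XReBUogr",
)

# Accepts a local path, URL, or PIL image; returns a list of {label, score} dicts.
print(classifier("example.jpg"))
```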
fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-452456
fine-tuned
2024-05-28T16:03:39Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-452456", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-28T16:03:09Z
--- license: apache-2.0 datasets: - fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-452456 - allenai/c4 language: - en - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model integrates easily into your NLP pipeline for embedding-based tasks such as semantic search, clustering, and sentence similarity. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-452456', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
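The card's snippet stops at a single pairwise cosine score; the same embeddings also support top-k retrieval via sentence-transformers' built-in `util.semantic_search` helper. A sketch follows; the corpus strings and query are invented placeholders:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer(
    'fine-tuned/SCIDOCS-512-192-gpt-4o-2024-05-13-452456',
    trust_remote_code=True
)

# Placeholder corpus; substitute your own documents.
corpus = [
    'Graph-based methods for citation recommendation.',
    'A survey of transformer architectures for NLP.',
    'Deep learning approaches to protein structure prediction.',
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode('recommending papers to cite', convert_to_tensor=True)

# Returns one ranked hit list per query; each hit has corpus_id and score.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```

Passing `convert_to_tensor=True` keeps the embeddings as torch tensors, which is the form `semantic_search` scores directly without an extra conversion.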
MoTHer-VTHR/VTHR-LoRA-V-ModelTree_3-Depth_2-Node_8V6dLssx
MoTHer-VTHR
2024-05-28T16:03:34Z
166
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-28T15:42:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
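For the third sibling checkpoint, the same loading pattern extends naturally to batched inference. A sketch assuming the standard transformers classes, with placeholder file paths and a top-k readout clamped to the checkpoint's (undocumented) label count:

```python
# Sketch: batched classification with top-k probabilities.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "MoTHer-VTHR/VTHR-LoRA-V-ModelTree_3-Depth_2-Node_8V6dLssx"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

# Placeholder paths; replace with real image files.
images = [Image.open(p).convert("RGB") for p in ["a.jpg", "b.jpg"]]
inputs = processor(images=images, return_tensors="pt")

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# The card does not document the label set, so clamp k to whatever the
# checkpoint's config actually declares.
k = min(5, probs.shape[-1])
scores, ids = probs.topk(k, dim=-1)
for row_scores, row_ids in zip(scores, ids):
    print([(model.config.id2label[i.item()], round(s.item(), 3))
           for s, i in zip(row_scores, row_ids)])
```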