Dataset schema (column types and value ranges):

| Column | Type | Values |
|:--|:--|:--|
| `pipeline_tag` | string (categorical) | 48 distinct values |
| `library_name` | string (categorical) | 205 distinct values |
| `text` | string | lengths 0 to 18.3M |
| `metadata` | string | lengths 2 to 1.07B |
| `id` | string | lengths 5 to 122 |
| `last_modified` | null | all null |
| `tags` | sequence | 1 to 1.84k items |
| `sha` | null | all null |
| `created_at` | string | length 25 (fixed) |
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
yashdkadam/jsontraining
null
[ "transformers", "safetensors", "phi3", "text-generation", "trl", "sft", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-30T11:37:09+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
rasika00/testing-pushing
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T11:37:56+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # events-mem-large This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0004 | 1 | nan | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
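A minimal inference sketch with the `transformers` pipeline is below; given the reported validation loss of `nan` and the single training step, this checkpoint may not produce meaningful output, and the input string is purely illustrative.

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="eddieman78/events-mem-large")
# Illustrative input only; the actual training data is undocumented above.
print(generator("Summarize: the quarterly meeting was moved to Friday.")[0]["generated_text"])
```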
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/flan-t5-large", "model-index": [{"name": "events-mem-large", "results": []}]}
eddieman78/events-mem-large
null
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T11:38:23+00:00
text-generation
transformers
# mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit This model was converted to MLX format from [`llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0`](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) using mlx-lm version **0.12.0**. Refer to the [original model card](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
{"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mlx"], "datasets": ["databricks/databricks-dolly-15k", "llm-jp/databricks-dolly-15k-ja", "llm-jp/oasst1-21k-en", "llm-jp/oasst1-21k-ja", "llm-jp/oasst2-33k-en", "llm-jp/oasst2-33k-ja"], "programming_language": ["C", "C++", "C#", "Go", "Java", "JavaScript", "Lua", "PHP", "Python", "Ruby", "Rust", "Scala", "TypeScript"], "pipeline_tag": "text-generation", "inference": false}
mlx-community/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit
null
[ "transformers", "safetensors", "llama", "text-generation", "mlx", "conversational", "en", "ja", "dataset:databricks/databricks-dolly-15k", "dataset:llm-jp/databricks-dolly-15k-ja", "dataset:llm-jp/oasst1-21k-en", "dataset:llm-jp/oasst1-21k-ja", "dataset:llm-jp/oasst2-33k-en", "dataset:llm-jp/oasst2-33k-ja", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T11:38:58+00:00
null
null
{"license": "mit"}
tsikuzro/sdm-s24
null
[ "license:mit", "region:us" ]
null
2024-04-30T11:39:20+00:00
object-detection
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tabletransstructrecog_finetuned_pubt1m_lstabletransstrucrecogv1_session2 This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1500 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.0.1 - Datasets 2.18.0 - Tokenizers 0.19.1
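Since the card omits a usage snippet, here is a hedged inference sketch using the standard Table Transformer API in `transformers`; the local image path is an assumption, and if this repo lacks a preprocessor config, the processor may need to be loaded from `microsoft/table-transformer-structure-recognition` instead.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

repo = "nsugianto/tabletransstructrecog_finetuned_pubt1m_lstabletransstrucrecogv1_session2"
processor = AutoImageProcessor.from_pretrained(repo)
model = TableTransformerForObjectDetection.from_pretrained(repo)

image = Image.open("table.png").convert("RGB")  # assumed local test image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into labeled detections at a 0.7 confidence cutoff.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```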
{"tags": ["generated_from_trainer"], "model-index": [{"name": "tabletransstructrecog_finetuned_pubt1m_lstabletransstrucrecogv1_session2", "results": []}]}
nsugianto/tabletransstructrecog_finetuned_pubt1m_lstabletransstrucrecogv1_session2
null
[ "transformers", "tensorboard", "safetensors", "table-transformer", "object-detection", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-04-30T11:39:40+00:00
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python
import gymnasium as gym  # older course notebooks use `import gym` instead

# `load_from_hub` is the Deep RL course helper that downloads and unpickles
# the saved model dict (Q-table, env_id, and related metadata).
model = load_from_hub(repo_id="ilanasto/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
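Continuing from the snippet above, a minimal greedy rollout might look like the following; note that the `qtable` key is an assumption based on the Deep RL course's pickle layout and should be verified against the downloaded file.

```python
import numpy as np

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```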
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
ilanasto/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-30T11:39:56+00:00
null
null
## Example model file tracked with [git-theta](https://github.com/r-three/git-theta) ![image.png](https://cdn-uploads.huggingface.co/production/uploads/5dd96eb166059660ed1ee413/CtvjS0aShaNPGxCN8Ya0_.png)
{"license": "apache-2.0", "tags": ["git-theta"]}
julien-c/git-theta-small-pytorch
null
[ "pytorch", "git-theta", "license:apache-2.0", "region:us" ]
null
2024-04-30T11:41:39+00:00
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python
import gymnasium as gym  # older course notebooks use `import gym` instead

# `load_from_hub` is the Deep RL course helper that downloads and unpickles
# the saved model dict (Q-table, env_id, and related metadata).
model = load_from_hub(repo_id="ilanasto/taxi-RL", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
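The same greedy-rollout pattern from the FrozenLake card applies here; as before, the `qtable` key is assumed from the course's pickle layout.

```python
import numpy as np

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```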
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "taxi-RL", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.54 +/- 2.74", "name": "mean_reward", "verified": false}]}]}]}
ilanasto/taxi-RL
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-30T11:41:48+00:00
null
null
{}
Fomin/sn25-3-4
null
[ "region:us" ]
null
2024-04-30T11:43:02+00:00
feature-extraction
transformers
{}
riccorl/aida-e5-base-topics-from-blink-1M-32words
null
[ "transformers", "pytorch", "bert", "feature-extraction", "custom_code", "region:us" ]
null
2024-04-30T11:43:21+00:00
sentence-similarity
sentence-transformers
# Andreas95/test This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('Andreas95/test')
embeddings = model.encode(sentences)
print(embeddings)
``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Andreas95/test) ## Full Model Architecture ```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
``` ## Citing & Authors <!--- Describe where people can find more information -->
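As a follow-up to the usage snippet, a small semantic-search sketch with `sentence_transformers.util`; the query and documents are purely illustrative.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Andreas95/test')
query_emb = model.encode("How do I reset my password?")
doc_embs = model.encode([
    "Click 'Forgot password' on the login page.",
    "Our office opens at 9am on weekdays.",
])
print(util.cos_sim(query_emb, doc_embs))  # higher score = more semantically similar
```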
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
Andreas95/test
null
[ "sentence-transformers", "safetensors", "xlm-roberta", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
null
2024-04-30T11:44:01+00:00
null
null
{}
riccorl/index-aida-e5-base-topics-from-blink-1M-32words
null
[ "region:us" ]
null
2024-04-30T11:44:13+00:00
null
null
{}
onionqqq/ft_gemma_1.1_7b_it
null
[ "region:us" ]
null
2024-04-30T11:44:50+00:00
text-generation
transformers
{}
yirenc/Meta-Llama-3-8B-on-llama_gsm8k_public
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T11:44:53+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
codevang/fast_small_14
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T11:45:19+00:00
null
null
{}
pupoh/arisu
null
[ "region:us" ]
null
2024-04-30T11:46:56+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biomistral-7b-dpo-full-sft-wo-kqa_silver_wogold This model is a fine-tuned version of [Minbyul/biomistral-7b-wo-kqa_silver_wogold-sft](https://huggingface.co/Minbyul/biomistral-7b-wo-kqa_silver_wogold-sft) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.3493 - Rewards/chosen: -0.0378 - Rewards/rejected: -1.3358 - Rewards/accuracies: 1.0 - Rewards/margins: 1.2980 - Logps/rejected: -740.3141 - Logps/chosen: -69.0360 - Logits/rejected: -3.3845 - Logits/chosen: -3.7481 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.0911 | 0.83 | 100 | 0.3514 | -0.0382 | -1.3191 | 1.0 | 1.2809 | -738.6374 | -69.0788 | -3.3885 | -3.7525 | ### Framework versions - Transformers 4.39.0.dev0 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.2
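As a rough guide to reproducing this setup, the hyperparameters above map onto `trl` roughly as follows; this is a sketch rather than the exact alignment-handbook recipe, and the `DPOConfig` class (recent `trl` versions), the `output_dir` value, and the `bf16` flag are all assumptions.

```python
from trl import DPOConfig

# Per-device train batch 8 x 4 GPUs x gradient accumulation 2 = total batch 64,
# matching the totals reported above.
args = DPOConfig(
    output_dir="dpo-out",  # hypothetical
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
    bf16=True,  # assumption; precision is not stated in the card
)
```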
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/biomistral-7b-wo-kqa_silver_wogold-sft", "model-index": [{"name": "biomistral-7b-dpo-full-sft-wo-kqa_silver_wogold", "results": []}]}
Minbyul/biomistral-7b-dpo-full-sft-wo-kqa_silver_wogold
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:Minbyul/biomistral-7b-wo-kqa_silver_wogold-sft", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T11:47:51+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - zhenjili/cat_dreambooth This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cat using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: True. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
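Until the TODO above is filled in, here is a hedged sketch of running this checkpoint with `diffusers`; the `sks` instance prompt comes from the training config, while the fp16/CUDA choices are assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("zhenjili/cat_dreambooth", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # assumption: a CUDA GPU is available

# "sks" is the rare-token identifier this DreamBooth run was trained on.
image = pipe("a photo of sks cat wearing a tiny hat").images[0]
image.save("sks_cat.png")
```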
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "CompVis/stable-diffusion-v1-4", "inference": true, "instance_prompt": "a photo of sks cat"}
zhenjili/cat_dreambooth
null
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-30T11:48:31+00:00
null
null
{}
amirafshari/my_awesome_model
null
[ "region:us" ]
null
2024-04-30T11:51:31+00:00
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/DevsDoCode/Llama-3-8B-Instruct-1048k <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-1048k-GGUF/resolve/main/Llama-3-8B-Instruct-1048k.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
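For a concrete starting point, the snippet below fetches one quant with `huggingface_hub` and prints its local path, which can then be handed to any GGUF runtime such as llama.cpp; the Q4_K_M choice simply follows the "fast, recommended" note in the table above.

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant listed in the table above (~5 GB).
path = hf_hub_download(
    repo_id="mradermacher/Llama-3-8B-Instruct-1048k-GGUF",
    filename="Llama-3-8B-Instruct-1048k.Q4_K_M.gguf",
)
print(path)  # pass this path to your GGUF runtime of choice
```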
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "library_name": "transformers", "base_model": "DevsDoCode/Llama-3-8B-Instruct-1048k", "quantized_by": "mradermacher"}
mradermacher/Llama-3-8B-Instruct-1048k-GGUF
null
[ "transformers", "gguf", "en", "base_model:DevsDoCode/Llama-3-8B-Instruct-1048k", "endpoints_compatible", "region:us" ]
null
2024-04-30T11:53:41+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # first_test_trainer This model is a fine-tuned version of [llava-hf/llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00017 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 20 - total_train_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
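The card gives no usage snippet; below is a hedged sketch for loading this LoRA adapter onto its base model. `LlavaNextForConditionalGeneration` and `LlavaNextProcessor` match llava-v1.6 in recent `transformers` releases, and whether this adapter applies cleanly to them is an assumption.

```python
import torch
from peft import PeftModel
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

base_id = "llava-hf/llava-v1.6-mistral-7b-hf"
processor = LlavaNextProcessor.from_pretrained(base_id)
base = LlavaNextForConditionalGeneration.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach the adapter weights produced by this training run.
model = PeftModel.from_pretrained(base, "matchmaking/first_test_trainer")
```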
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-v1.6-mistral-7b-hf", "model-index": [{"name": "first_test_trainer", "results": []}]}
matchmaking/first_test_trainer
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:llava-hf/llava-v1.6-mistral-7b-hf", "region:us" ]
null
2024-04-30T11:55:02+00:00
text-classification
setfit
# SetFit with intfloat/multilingual-e5-large This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 7 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 6 | <ul><li>'What kind of promotions generally lead to higher cannibalization?'</li><li>'Which Skus has higher Canninibalization in Natural Juices for 2023?'</li><li>'Which two Product can have simultaneous Promotions?'</li></ul> | | 2 | <ul><li>'Which Promotions contributred the most lift Change between 2022 and 2023?'</li><li>'Which category x brand has seen major decline in Volume Lift for 2023?'</li><li>'What actions were taken to increase the volume lift for MEGAMART in 2023?'</li></ul> | | 3 | <ul><li>'What types of promotions within the FIZZY DRINKS category are best suited for offering high discounts?'</li><li>'Which promotion types are better for high discounts in Hydra category for 2022?'</li><li>'Which promotion types in are better for low discounts in FIZZY DRINKS category?'</li></ul> | | 5 | <ul><li>'How will increasing the discount by 50 percent on Brand BREEZEFIZZ affect the incremental volume lift?'</li><li>'How will the introduction of a 20% discount promotion for Rice Krispies in August affect incremental volume and ROI?'</li><li>'If I raise the discount by 20% on Brand BREEZEFIZZ, what will be the incremental roi?'</li></ul> | | 0 | <ul><li>'For which category MULTISAVING type of promotions worked best for WorldMart in 2022?'</li><li>'What type of promotions worked best for WorldMart in 2022?'</li><li>'Which subcategory have the highest ROI in 2022?'</li></ul> | | 4 | <ul><li>'Suggest a better investment strategy to gain better ROI in 2023 for FIZZY 
DRINKS'</li><li>'Which promotions have scope for higher investment to drive more ROIs in UrbanHub ?'</li><li>'What promotions in FIZZY DRINKS have shown declining effectiveneHydra and can be discontinued?'</li></ul> | | 1 | <ul><li>'How do the performance metrics of brands in the FIZZY DRINKS category compare to those in HYDRA and NATURAL JUICES concerning ROI change between 2021 to 2022?'</li><li>'Can you identify the specific factors or challenges that contributed to the decline in ROI within ULTRASTORE in 2022 compared to 2021?'</li><li>'What are the main reasons for ROI decline in 2022 compared to 2021?'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 1.0 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("vgarg/promo_prescriptive_gpt_30_04_2024") # Run inference preds = model("Which promotion types are better for low discounts for Zucaritas ?") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 7 | 14.6667 | 27 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 10 | | 1 | 10 | | 2 | 10 | | 3 | 10 | | 4 | 10 | | 5 | 10 | | 6 | 9 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (3, 3) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0058 | 1 | 0.3528 | - | | 0.2890 | 50 | 0.0485 | - | | 0.5780 | 100 | 0.0052 | - | | 0.8671 | 150 | 0.0014 | - | | 1.1561 | 200 | 0.0006 | - | | 1.4451 | 250 | 0.0004 | - | | 1.7341 | 300 | 0.0005 | - | | 2.0231 | 350 | 0.0004 | - | | 2.3121 | 400 | 0.0004 | - | | 2.6012 | 450 | 0.0005 | - | | 2.8902 | 500 | 0.0004 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.0 - PyTorch: 2.2.1+cu121 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons 
Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "intfloat/multilingual-e5-large", "widget": [{"text": "What promotions in RTEC have shown declining effectiveness and can be discontinued?"}, {"text": "What are my priority brands in RTEC to get positive Lift Change in 2022?"}, {"text": "What would be the expected incremental volume lift if the discount on Brand Zucaritas is raised by 5%?"}, {"text": "Which promotion types are better for low discounts for Zucaritas ?"}, {"text": "Which Promotions contributred the most ROI Change between 2022 and 2023?"}], "pipeline_tag": "text-classification", "inference": true, "model-index": [{"name": "SetFit with intfloat/multilingual-e5-large", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]}
vgarg/promo_prescriptive_gpt_30_04_2024
null
[ "setfit", "safetensors", "xlm-roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:intfloat/multilingual-e5-large", "model-index", "region:us" ]
null
2024-04-30T11:55:15+00:00
text-generation
transformers
{}
yirenc/llama-7b-on-llama_gsm8k_public
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T11:55:23+00:00
null
null
{}
Amrit05/output_dir
null
[ "region:us" ]
null
2024-04-30T11:55:28+00:00
sentence-similarity
sentence-transformers
# luiz-and-robert-thesis/mpnet-frozen-newtriplets-v2-lr-1e-8-m-1-e-3 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('luiz-and-robert-thesis/mpnet-frozen-newtriplets-v2-lr-1e-8-m-1-e-3') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=luiz-and-robert-thesis/mpnet-frozen-newtriplets-v2-lr-1e-8-m-1-e-3) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 5885 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.COSINE', 'triplet_margin': 1} ``` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 1e-08 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 2648, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
luiz-and-robert-thesis/mpnet-frozen-newtriplets-v2-lr-1e-8-m-1-e-3
null
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
null
2024-04-30T11:55:35+00:00
null
null
{}
Fomin/sn25-2-5
null
[ "region:us" ]
null
2024-04-30T11:56:07+00:00
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) ```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The checkpoint filename is an assumption; check the repo's file list for
# the actual .zip name.
checkpoint = load_from_hub(repo_id="David0702/ppo-LunarLander-v2-1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
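Continuing from the loading snippet above, a quick evaluation sketch to sanity-check the reported mean reward (assuming an sb3 version built against Gymnasium, with `gymnasium[box2d]` installed):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

eval_env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```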
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "260.68 +/- 22.20", "name": "mean_reward", "verified": false}]}]}]}
David0702/ppo-LunarLander-v2-1
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-30T11:57:08+00:00
text-generation
transformers
# mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit This model was converted to MLX format from [`llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0`](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) using mlx-lm version **0.12.0**. Refer to the [original model card](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
{"language": ["en", "ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mlx"], "datasets": ["databricks/databricks-dolly-15k", "llm-jp/databricks-dolly-15k-ja", "llm-jp/oasst1-21k-en", "llm-jp/oasst1-21k-ja", "llm-jp/oasst2-33k-en", "llm-jp/oasst2-33k-ja"], "programming_language": ["C", "C++", "C#", "Go", "Java", "JavaScript", "Lua", "PHP", "Python", "Ruby", "Rust", "Scala", "TypeScript"], "pipeline_tag": "text-generation", "inference": false}
mlx-community/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0-4bit
null
[ "transformers", "safetensors", "llama", "text-generation", "mlx", "conversational", "en", "ja", "dataset:databricks/databricks-dolly-15k", "dataset:llm-jp/databricks-dolly-15k-ja", "dataset:llm-jp/oasst1-21k-en", "dataset:llm-jp/oasst1-21k-ja", "dataset:llm-jp/oasst2-33k-en", "dataset:llm-jp/oasst2-33k-ja", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T11:58:35+00:00
text-generation
transformers
# This is a [DADA tuned](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-DADA-8x7B) version of [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B) ### As always, experimental model, results may be unpredictable. ![](https://files.catbox.moe/1yl1vq.jpg) The intermediate merge model was finetuned specifically for the original model and then merged onto it via a 50/50 SLERP merge. As far as RP goes, my current testing setup is having difficulties with end-of-sequence handling, and I found the prompt templates provided with Poppy Porpoise were not yielding desirable results on either the original model or the DADA tune, so I opted instead to use Libra-style Llama-3 prompt templates and simple sampling. The results, albeit subjective, I thought were good. ## Whether or not the tune had any effect on multi-modal capabilities was not part of the test; focus was entirely on creative writing. Test: a simple instruct prompt utilizing deterministic samplers. **PROMPT** ``` Write me a 'beauty and the beast' gothic metal duet about a couple fighting over which brand of ketchup to purchase. ``` **Poppy Porpoise Original:** ``` (Verse 1 - Beauty) In the land of condiments, where flavors reign supreme, A tale of love and strife, a battle of the brands it seems. I stand before the shelves, my heart aflame with desire, For the perfect ketchup to complement my fries. (Chorus) But alas, my love, you stand beside me, with a different view, Your heart set on [Brand 1 redacted], while mine beats for [Brand 2 redacted], it's true. We clash in this aisle, our passions running high, A Gothic metal duet, our love and hate collide. (Verse 2 - Beast) I've tasted both, and I know, [Brand 1 redacted] is the one for me, Its tangy sweetness, a symphony, a harmony. But you, my love, you cling to Hunt's, its flavor so bland, A cacophony of mediocrity, a discordant band. (Chorus) But alas, my love, you stand beside me, with a different view, Your heart set on [Brand 2 redacted], while mine beats for [Brand 1 redacted], it's true. We clash in this aisle, our passions running high, A Gothic metal duet, our love and hate collide. (Bridge) We're lost in this supermarket, a Gothic metal dream, Our love and hate entwined, a twisted, tortured theme. The ketchup bottles loom, like tombstones in the night, A monument to our love, a testament to our fight. (Chorus) But alas, my love, you stand beside me, with a different view, Your heart set on [Brand 2 redacted], while mine beats for [Brand 1 redacted], it's true. We clash in this aisle, our passions running high, A Gothic metal duet, our love and hate collide. (Outro) And so we stand, in this aisle of despair, Our love and hate, a Gothic metal prayer. For in the end, it's not the ketchup that matters, But the love we share, in this twisted, tortured heart. ``` **Poppy Porpoise DADA:** ``` (Verse 1 - Beauty) In the land of condiments, where the sauces reign supreme, A battle rages on, a war of ketchup dreams. I stand by [Brand 1 redacted], the classic, the original, the best, But my love, he's a rebel, he's a [Brand 2 redacted] pest. (Chorus) Oh, the ketchup wars, they rage on and on, A tale of love and sauce, a battle to be won. Will our love prevail, or will it be undone? By the brand of ketchup, our hearts are torn asunder. (Verse 2 - Beast) [Brand 2 redacted], the new kid, the one with the flair, A sweeter taste, a richer flavor, beyond compare. But Beauty, she's a traditionalist, she won't budge, She clings to [Brand 1 redacted], her heart, her soul, her blood. (Chorus) Oh, the ketchup wars, they rage on and on, A tale of love and sauce, a battle to be won. Will our love prevail, or will it be undone? By the brand of ketchup, our hearts are torn asunder. (Bridge - Both) We stand, divided, by our condiment creed, Our love, a casualty, in this ketchup war we lead. But can we find a middle ground, a compromise, a truce? Or will our love be lost, in this battle of the sauces? (Chorus) Oh, the ketchup wars, they rage on and on, A tale of love and sauce, a battle to be won. Will our love prevail, or will it be undone? By the brand of ketchup, our hearts are torn asunder. (Outro - Both) In the end, we find a way, to put aside our pride, And together, we stand, united, side by side. For love, for ketchup, for the sauce that binds, We'll find a way, to make our love, forever shine. ``` As you can clearly see, the cadence and overall word selection of the DADA model are more copacetic with what would be expected from a gothic metal duet, indicating that the IQ of the model has increased by at least 100 points. Training was done using [qlora-pipe](https://github.com/tdrussell/qlora-pipe). GGUFs care of [Quant Cartel](https://huggingface.co/Quant-Cartel/Poppy_Porpoise-DADA-8B-iMat-GGUF).
{"license": "cc-by-nc-4.0"}
Envoid/Poppy_Porpoise-DADA-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T11:59:00+00:00
null
null
{}
LuyangZ/FloorAI_Controlnet
null
[ "region:us" ]
null
2024-04-30T11:59:08+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Anas989898/llava-mistral-7b-finetuned
null
[ "transformers", "safetensors", "llava_mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:01:33+00:00
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal usage sketch (the checkpoint filename below is an assumption based on the usual naming convention; check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption
checkpoint = load_from_hub(repo_id="ArnavModanwal/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "252.38 +/- 20.26", "name": "mean_reward", "verified": false}]}]}]}
ArnavModanwal/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-30T12:02:29+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/adapter-llama-adapterhappy2sad-1k-search-noshuffle-50-0.003
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:02:59+00:00
automatic-speech-recognition
transformers
{}
racheltong/whisper-smallMOE_chinese
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:03:06+00:00
null
null
## [Circuitbentphotography](https://d14i3advvh2bvd.cloudfront.net/d7ec7ae7eb05398000dac393417202696950ae2f3f9e96a6f624a28e70fbd5cd.tar) LoRA trained on [Eden.art](https://eden.art) by [dawnia](https://app.eden.art/creators/dawnia) on 11 images. * [How to train Concepts (LoRAs) on Eden](https://docs.eden.art/docs/guides/concepts) * [How to export LoRAs from Eden](https://docs.eden.art/docs/guides/concepts#exporting-loras-for-use-in-other-tools) ![Samples](https://d14i3advvh2bvd.cloudfront.net/ef24f72598a41df92f3a369d2230eb7e1b9f39b1ed98d2f7bacdc9c9854e06f1.jpg)
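A hedged loading sketch with diffusers, assuming the exported LoRA is unpacked to a diffusers-compatible safetensors file (the card only links a .tar export, so the local path below is hypothetical):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base model taken from this card's metadata
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Hypothetical path: extract the downloaded .tar and point at the LoRA weights inside
pipe.load_lora_weights("./Circuitbentphotography/lora.safetensors")
image = pipe("circuit-bent photography of a neon street at night").images[0]
image.save("sample.png")
```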
{"language": ["en"], "thumbnail": "https://d14i3advvh2bvd.cloudfront.net/ef24f72598a41df92f3a369d2230eb7e1b9f39b1ed98d2f7bacdc9c9854e06f1.jpg", "base_model": "stabilityai/stable-diffusion-xl-base-1.0"}
eden-art/Circuitbentphotography
null
[ "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
null
2024-04-30T12:03:14+00:00
text2text-generation
transformers
{}
shenkha/DGSlow_T5-small_ED
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T12:03:21+00:00
null
null
{}
Oppabank/Llama3Tuna
null
[ "safetensors", "gguf", "region:us" ]
null
2024-04-30T12:03:34+00:00
null
null
{}
Fomin/sn25-3-5
null
[ "region:us" ]
null
2024-04-30T12:03:37+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tweet_sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0486 - Accuracy: 0.9902 - F1: 0.9863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
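A minimal inference sketch using the standard transformers pipeline (the input tweet is illustrative):

```python
from transformers import pipeline

# Loads the fine-tuned sentiment classifier from the Hub
classifier = pipeline("text-classification", model="nischalsingh/tweet_sentiment")
print(classifier("What a great day!"))  # e.g. [{'label': '...', 'score': 0.99}]
```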
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "tweet_sentiment", "results": []}]}
nischalsingh/tweet_sentiment
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:03:49+00:00
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
jorgefg03/mdeberta-v3-base-autext2024
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:04:17+00:00
feature-extraction
transformers
# phospho-small This is a SetFit model that can be used for Text Classification on CPU. The model has been trained using an efficient few-shot learning technique. ## Usage ```python from setfit import SetFitModel model = SetFitModel.from_pretrained("phospho-small-f75baf6") outputs = model.predict(["This is a sentence to classify", "Another sentence"]) # tensor([1, 0]) ``` ## References This work was possible thanks to the SetFit library and the work of: Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts. ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
{"language": "en", "license": "apache-2.0"}
phospho-app/phospho-small-f75baf6
null
[ "transformers", "safetensors", "mpnet", "feature-extraction", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:04:44+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
salmanjara/llavatuned
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:05:25+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_finetued_on_scigen_v3 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 128 - total_train_batch_size: 16384 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 30 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.19.0 - Tokenizers 0.19.1
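A hedged loading sketch for this adapter, assuming it applies on top of the base model named above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the PEFT adapter from this repository
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "moetezsa/mistral_finetued_on_scigen_v3")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```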
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral_finetued_on_scigen_v3", "results": []}]}
moetezsa/mistral_finetued_on_scigen_v3
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-30T12:05:35+00:00
text-generation
transformers
# Model details This is a quick experiment on llamafied phi-3 with only 1000 ORPO steps from an AzureML-translated German Orca binarized dataset (johannhartmann/mistralorpo), using the original phi-3 prompt template. The immediate result is not really good, but also not bad enough to discourage further experiments. # Benchmark results This was an experiment on a German dataset snippet which, as expected, worsened results on English benchmarks: | Metric |Value| |---------------------------------|----:| |Avg. |64.40| |AI2 Reasoning Challenge (25-Shot)|60.41| |HellaSwag (10-Shot) |78.37| |MMLU (5-Shot) |65.26| |TruthfulQA (0-shot) |49.76| |Winogrande (5-shot) |70.24| |GSM8k (5-shot) |62.32| On German EQ-Bench (v2_de) it scores 51.82 (an insignificant gain over 51.41 for the original llamafied model, but significantly better than the intermediate cstr/phi-3-orpo-v8_16, which achieved 46.38 after its initial 150 test steps), though still with only 164/171 answers correctly parsed. Note: parsing correctness can be improved, among other things, with only a few SFT steps, as shown by cas/phi3-mini-4k-llamafied-sft-v3 (170/171 correct, but then only a 39.46 score in v2_de; that run was also an experiment in changing the prompt template). All of this was done quickly with bnb and q4 quants only, which might, in theory, affect such small dense models especially strongly. But it served its purpose for both proof-of-concept experiments. Results could probably be improved further, but that would take some time and compute. # Training setup This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
{"language": ["en", "de"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "orpo"], "base_model": "cstr/phi-3-orpo-v8_16"}
cstr/phi-3-orpo-v9_16
null
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "orpo", "conversational", "en", "de", "base_model:cstr/phi-3-orpo-v8_16", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:05:40+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trained_serbian This model is a fine-tuned version of [distilbert/distilbert-base-multilingual-cased](https://huggingface.co/distilbert/distilbert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0786 - Precision: 0.8138 - Recall: 0.8795 - F1: 0.8454 - Accuracy: 0.9774 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 104 | 0.1030 | 0.7317 | 0.7706 | 0.7507 | 0.9667 | | No log | 2.0 | 208 | 0.0814 | 0.8001 | 0.8642 | 0.8309 | 0.9757 | | No log | 3.0 | 312 | 0.0786 | 0.8138 | 0.8795 | 0.8454 | 0.9774 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2+cpu - Datasets 2.18.0 - Tokenizers 0.15.2
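A minimal inference sketch with the transformers token-classification pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges subword tokens into whole entities
ner = pipeline("token-classification", model="annamariagnat/trained_serbian", aggregation_strategy="simple")
print(ner("Novak Đoković je rođen u Beogradu."))
```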
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "distilbert/distilbert-base-multilingual-cased", "model-index": [{"name": "trained_serbian", "results": []}]}
annamariagnat/trained_serbian
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:05:59+00:00
text-generation
transformers
## Exllama v2 Quantizations of Llama-3-8B-Ultra-Instruct Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization. <b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b> Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions. Original model: https://huggingface.co/elinas/Llama-3-8B-Ultra-Instruct ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Available sizes | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Llama-3-8B-Ultra-Instruct-exl2 Llama-3-8B-Ultra-Instruct-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch: Linux: ```shell huggingface-cli download bartowski/Llama-3-8B-Ultra-Instruct-exl2 --revision 6_5 --local-dir Llama-3-8B-Ultra-Instruct-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell huggingface-cli download bartowski/Llama-3-8B-Ultra-Instruct-exl2 --revision 6_5 --local-dir Llama-3-8B-Ultra-Instruct-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
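A rough Python loading sketch, modeled on the exllamav2 example scripts of that era (v0.0.20); class and method names reflect those examples and may differ in newer releases, so treat this as an assumption-laden sketch:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Llama-3-8B-Ultra-Instruct-exl2-6_5"  # local clone of a branch (see above)
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
print(generator.generate_simple("Why is the sky blue?", settings, 128))
```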
{"license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Undi95/Meta-Llama-3-8B-Instruct-hf"], "quantized_by": "bartowski", "pipeline_tag": "text-generation"}
bartowski/Llama-3-8B-Ultra-Instruct-exl2
null
[ "transformers", "mergekit", "merge", "text-generation", "base_model:Undi95/Meta-Llama-3-8B-Instruct-hf", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:06:43+00:00
null
null
{"language": ["de", "en"], "license": "apache-2.0"}
hoschidude/Mixtral-8x7B-Instruct-v0.1-Q6_K-GGUF
null
[ "gguf", "de", "en", "license:apache-2.0", "region:us" ]
null
2024-04-30T12:07:13+00:00
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/elinas/Llama-3-8B-Ultra-Instruct You should use `--override-kv tokenizer.ggml.pre=str:llama3` and a current llama.cpp version to work around a bug in llama.cpp that made these quants. (see https://old.reddit.com/r/LocalLLaMA/comments/1cg0z1i/bpe_pretokenization_support_is_now_merged_llamacpp/?share_id=5dBFB9x0cOJi8vbr-Murh) <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Ultra-Instruct-GGUF/resolve/main/Llama-3-8B-Ultra-Instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the 
matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
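A hedged command-line sketch for llama.cpp of that period, applying the workaround flag mentioned above (the binary and quant filenames are assumptions; adjust to your build and chosen quant):

```shell
./main -m Llama-3-8B-Ultra-Instruct.Q4_K_M.gguf \
  --override-kv tokenizer.ggml.pre=str:llama3 \
  -p "Why is the sky blue?" -n 256
```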
{"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "elinas/Llama-3-8B-Ultra-Instruct", "quantized_by": "mradermacher"}
mradermacher/Llama-3-8B-Ultra-Instruct-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:elinas/Llama-3-8B-Ultra-Instruct", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:07:34+00:00
null
null
{}
GraydientPlatformAPI/loras-april30
null
[ "region:us" ]
null
2024-04-30T12:07:41+00:00
null
transformers
# Uploaded model - **Developed by:** CarlosFersoft - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"}
CarlosFersoft/GPBusiness0001_LoRA
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:08:06+00:00
text-generation
transformers
Quantizations of https://huggingface.co/PygmalionAI/mythalion-13b # From original readme ## Prompting This model can be prompted using both the Alpaca and [Pygmalion formatting](https://huggingface.co/PygmalionAI/pygmalion-2-13b#prompting). **Alpaca formatting**: ``` ### Instruction: <prompt> ### Response: <leave a newline blank for model to respond> ``` **Pygmalion/Metharme formatting**: ``` <|system|>Enter RP mode. Pretend to be {{char}} whose persona follows: {{persona}} You shall reply to the user while staying in character, and generate long responses. <|user|>Hello!<|model|>{model's response goes here} ``` The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`. The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history.
{"language": ["en"], "license": "other", "tags": ["transformers", "gguf", "imatrix", "mythalion-13b"], "pipeline_tag": "text-generation", "inference": false}
duyntnet/mythalion-13b-imatrix-GGUF
null
[ "transformers", "gguf", "imatrix", "mythalion-13b", "text-generation", "en", "license:other", "region:us" ]
null
2024-04-30T12:08:24+00:00
null
null
{"license": "llama3"}
l3utterfly/Einstein-v6.1-Llama3-8B-GGUF
null
[ "gguf", "license:llama3", "region:us" ]
null
2024-04-30T12:08:49+00:00
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Minbyul/meditron-7b-dpo-full-sft-wo-kqa_golden <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.IQ3_XS.gguf) | IQ3_XS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q3_K_S.gguf) | Q3_K_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.IQ3_M.gguf) | IQ3_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q3_K_L.gguf) | Q3_K_L | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.IQ4_XS.gguf) | IQ4_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q5_K_S.gguf) | Q5_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q5_K_M.gguf) | Q5_K_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q6_K.gguf) | Q6_K | 5.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF/resolve/main/meditron-7b-dpo-full-sft-wo-kqa_golden.f16.gguf) | f16 | 13.6 
| 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "license": "llama2", "library_name": "transformers", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "Minbyul/meditron-7b-dpo-full-sft-wo-kqa_golden", "quantized_by": "mradermacher"}
mradermacher/meditron-7b-dpo-full-sft-wo-kqa_golden-GGUF
null
[ "transformers", "gguf", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "en", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:Minbyul/meditron-7b-dpo-full-sft-wo-kqa_golden", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:09:16+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
{"library_name": "peft", "base_model": "unsloth/llama-3-8b"}
sravaniayyagari/lora_model
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/llama-3-8b", "region:us" ]
null
2024-04-30T12:09:20+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llava-1.5-7b-hf-ft-museum-updated This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on a dataset drawn from [The Metropolitan Museum of Art collection](https://www.metmuseum.org/art/collection). ## Model description This project is designed to enrich museum experiences by enabling users to explore and comprehend museum artifacts through image-based queries. ## Intended uses & limitations **Intended uses:** 1. Enhance museum visitor experiences through interactive exploration via image-based queries. 2. Aid researchers and curators in retrieving information from museum artifacts. 3. Democratize access to museum knowledge by catering to a broader audience. **Limitation:** The model is fine-tuned on a subset of data from the Met's collection that pertains to Indian heritage. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.4e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.19.1
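A hedged inference sketch, assuming the adapter in this repository loads onto the base model with PEFT (the image URL is a placeholder):

```python
import requests
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, LlavaForConditionalGeneration

base = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")
model = PeftModel.from_pretrained(base, "somnathsingh31/llava-1.5-7b-hf-ft-museum")
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

# Placeholder image URL; substitute a photo of a museum artifact
image = Image.open(requests.get("https://example.com/artifact.jpg", stream=True).raw)
prompt = "USER: <image>\nDescribe this artifact and its origin. ASSISTANT:"
inputs = processor(text=prompt, images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=120)
print(processor.decode(out[0], skip_special_tokens=True))
```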
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "llava-hf/llava-1.5-7b-hf", "model-index": [{"name": "llava-1.5-7b-hf-ft-museum-updated", "results": []}]}
somnathsingh31/llava-1.5-7b-hf-ft-museum
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:llava-hf/llava-1.5-7b-hf", "has_space", "region:us" ]
null
2024-04-30T12:09:56+00:00
feature-extraction
transformers
# phospho-small This is a SetFit model that can be used for Text Classification on CPU. The model has been trained using an efficient few-shot learning technique. ## Usage ```python from setfit import SetFitModel model = SetFitModel.from_pretrained("phospho-small-2502093") outputs = model.predict(["This is a sentence to classify", "Another sentence"]) # tensor([1, 0]) ``` ## References This work was possible thanks to the SetFit library and the work of: Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts. ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
{"language": "en", "license": "apache-2.0"}
phospho-app/phospho-small-2502093
null
[ "transformers", "safetensors", "mpnet", "feature-extraction", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:10:05+00:00
feature-extraction
transformers
# phospho-small This is a SetFit model that can be used for Text Classification on CPU. The model has been trained using an efficient few-shot learning technique. ## Usage ```python from setfit import SetFitModel model = SetFitModel.from_pretrained("phospho-small-bcfa83c") outputs = model.predict(["This is a sentence to classify", "Another sentence"]) # tensor([1, 0]) ``` ## References This work was possible thanks to the SetFit library and the work of: Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren (2022). Efficient Few-Shot Learning Without Prompts. ArXiv: [https://doi.org/10.48550/arxiv.2209.11055](https://doi.org/10.48550/arxiv.2209.11055)
{"language": "en", "license": "apache-2.0"}
phospho-app/phospho-small-bcfa83c
null
[ "transformers", "safetensors", "mpnet", "feature-extraction", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:10:16+00:00
null
null
{}
MRTTYRN/1234
null
[ "region:us" ]
null
2024-04-30T12:12:13+00:00
text-generation
transformers
# Uploaded model - **Developed by:** cstr - **License:** apache-2.0 - **Finetuned from model :** cstr/phi-3-orpo-v8_16 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "orpo"], "base_model": "cstr/phi-3-orpo-v8_16"}
cstr/phi-3-orpo-v9_4
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "orpo", "conversational", "en", "base_model:cstr/phi-3-orpo-v8_16", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-30T12:13:31+00:00
null
null
{}
itay-nakash/model_d582214a69
null
[ "region:us" ]
null
2024-04-30T12:13:46+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Narkantak/phi3-Intent-entity-Classifier-Ashu
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:13:54+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # working This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6374 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6546 | 0.92 | 6 | 1.7189 | | 1.2076 | 2.0 | 13 | 0.8973 | | 0.7157 | 2.92 | 19 | 0.5511 | | 0.4138 | 4.0 | 26 | 0.4499 | | 0.4018 | 4.92 | 32 | 0.4044 | | 0.3034 | 6.0 | 39 | 0.3793 | | 0.3186 | 6.92 | 45 | 0.3645 | | 0.2451 | 8.0 | 52 | 0.3590 | | 0.2556 | 8.92 | 58 | 0.3660 | | 0.1937 | 10.0 | 65 | 0.3825 | | 0.1993 | 10.92 | 71 | 0.3782 | | 0.1511 | 12.0 | 78 | 0.4275 | | 0.1487 | 12.92 | 84 | 0.4234 | | 0.1098 | 14.0 | 91 | 0.4876 | | 0.1121 | 14.92 | 97 | 0.4675 | | 0.0846 | 16.0 | 104 | 0.5187 | | 0.0869 | 16.92 | 110 | 0.5365 | | 0.0677 | 18.0 | 117 | 0.5372 | | 0.0729 | 18.92 | 123 | 0.5639 | | 0.0587 | 20.0 | 130 | 0.5773 | | 0.0623 | 20.92 | 136 | 0.6006 | | 0.0524 | 22.0 | 143 | 0.6098 | | 0.0599 | 22.92 | 149 | 0.6101 | | 0.0495 | 24.0 | 156 | 0.6204 | | 0.0571 | 24.92 | 162 | 0.6297 | | 0.0475 | 26.0 | 169 | 0.6353 | | 0.0551 | 26.92 | 175 | 0.6374 | | 0.0455 | 27.69 | 180 | 0.6374 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/Phi-3-mini-128k-instruct", "model-index": [{"name": "working", "results": []}]}
Narkantak/working
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/Phi-3-mini-128k-instruct", "license:mit", "region:us" ]
null
2024-04-30T12:13:56+00:00
null
null
{}
mozksoft/realcartoonAnime-v11-coreml-q6
null
[ "region:us" ]
null
2024-04-30T12:14:17+00:00
null
null
{"license": "apache-2.0"}
kundansai/NLT
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-30T12:14:41+00:00
image-to-3d
fasttext
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"language": ["fr", "en"], "license": "apache-2.0", "library_name": "fasttext", "tags": ["biology"], "datasets": ["HuggingFaceFW/fineweb"], "metrics": ["charcut_mt"], "pipeline_tag": "image-to-3d"}
hjebuoebduede/BRICK
null
[ "fasttext", "biology", "image-to-3d", "fr", "en", "dataset:HuggingFaceFW/fineweb", "arxiv:1910.09700", "license:apache-2.0", "region:us" ]
null
2024-04-30T12:14:43+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
jiuhai/llama-3-1425
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T12:14:50+00:00
null
null
{}
almazafa/Emotion_final_project
null
[ "region:us" ]
null
2024-04-30T12:17:28+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"language": ["yo"], "library_name": "transformers"}
oyemade/mms-tts-yor-fine-tuned
null
[ "transformers", "safetensors", "vits", "yo", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:17:47+00:00
null
null
{}
KK4210/chalicethorn
null
[ "region:us" ]
null
2024-04-30T12:18:43+00:00
text-classification
transformers
# Model Trained Using AutoTrain

- Problem type: Text Classification

## Validation Metrics

- loss: 0.4074123501777649
- f1_macro: 0.8845669198610375
- f1_micro: 0.8388888888888889
- f1_weighted: 0.8391621032797504
- precision_macro: 0.884004884004884
- precision_micro: 0.8388888888888889
- precision_weighted: 0.8399267399267399
- recall_macro: 0.88548463356974
- recall_micro: 0.8388888888888889
- recall_weighted: 0.8388888888888889
- accuracy: 0.8388888888888889

## Data

- train split: df_1to1000_bewertet_train_and_postitiv_TRAIN_HF
- validation split: df_1too1200_val_HF
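The card stops at the raw metrics, so here is a minimal usage sketch (untested) with the 🤗 Transformers pipeline API; the input is the widget example from this card's metadata, and the returned label names depend on the training data:

```python
from transformers import pipeline

# load the fine-tuned BERT text classifier from the Hub
classifier = pipeline("text-classification", model="Zerithas/post-auto-v4")

# widget example shipped with this card; label names come from the training data
print(classifier("I love AutoTrain"))
```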
{"tags": ["autotrain", "text-classification"], "datasets": ["post-auto-v4/autotrain-data"], "widget": [{"text": "I love AutoTrain"}]}
Zerithas/post-auto-v4
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain", "dataset:post-auto-v4/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:18:46+00:00
null
null
{}
yliu315/Buk
null
[ "region:us" ]
null
2024-04-30T12:20:14+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOOwO/finalnew3
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:20:24+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
EldritchLavender/SponkiTwo
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T12:20:32+00:00
null
null
{}
yliu315/cot
null
[ "region:us" ]
null
2024-04-30T12:21:42+00:00
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/AwanLLM/Llama-3-8B-Cumulus-v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF/resolve/main/Llama-3-8B-Cumulus-v0.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model 
quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
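As a small supplement to the usage notes above, a minimal sketch (untested) of fetching one of the listed quants programmatically with `huggingface_hub`; the filename is taken from the table, and any GGUF runtime (llama.cpp and friends) can load the downloaded file:

```python
from huggingface_hub import hf_hub_download

# download the Q4_K_M quant marked "fast, recommended" in the table above
path = hf_hub_download(
    repo_id="mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF",
    filename="Llama-3-8B-Cumulus-v0.1.Q4_K_M.gguf",
)
print(path)  # local path to the GGUF file
```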
{"language": ["en"], "license": "llama3", "library_name": "transformers", "base_model": "AwanLLM/Llama-3-8B-Cumulus-v0.1", "quantized_by": "mradermacher"}
mradermacher/Llama-3-8B-Cumulus-v0.1-GGUF
null
[ "transformers", "gguf", "en", "base_model:AwanLLM/Llama-3-8B-Cumulus-v0.1", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:21:57+00:00
null
null
## Example of [Safetensors](https://huggingface.co/docs/safetensors) model file tracked with [git-theta](https://github.com/r-three/git-theta) ![image.png](https://cdn-uploads.huggingface.co/production/uploads/5dd96eb166059660ed1ee413/CtvjS0aShaNPGxCN8Ya0_.png)
{"license": "apache-2.0", "tags": ["git-theta"]}
julien-c/git-theta-safetensors
null
[ "safetensors", "git-theta", "license:apache-2.0", "region:us" ]
null
2024-04-30T12:22:35+00:00
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - zhenjili/cat_dreambooth_lora_xl <Gallery /> ## Model description These are zhenjili/cat_dreambooth_lora_xl LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sks cat` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/zhenjili/cat_dreambooth_lora_xl/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use A minimal sketch (untested), assuming the standard 🤗 Diffusers LoRA-loading API:
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# the card notes madebyollin/sdxl-vae-fp16-fix was used during training,
# so load it here as well for stable fp16 inference
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("zhenjili/cat_dreambooth_lora_xl")

# use the trigger phrase from this card
image = pipe("a photo of sks cat", num_inference_steps=25).images[0]
image.save("sks_cat.png")
```
#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of sks cat", "widget": []}
zhenjili/cat_dreambooth_lora_xl
null
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-30T12:22:39+00:00
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MiniLMv2-L6-H384_R-simplewiki-fineweb-100k_en-med_512-vN This model is a fine-tuned version of [pszemraj/MiniLMv2-L6-H384_R-simplewiki](https://huggingface.co/pszemraj/MiniLMv2-L6-H384_R-simplewiki) on the BEE-spoke-data/fineweb-100k_en-med dataset. It achieves the following results on the evaluation set: - Loss: 4.0206 - Accuracy: 0.3783 - Num Input Tokens Seen: 162790400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1792 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_steps: 100 - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:| | 4.6583 | 0.1208 | 150 | 4.5052 | 0.3406 | 9830400 | | 4.5365 | 0.2415 | 300 | 4.3712 | 0.3525 | 19660800 | | 4.4621 | 0.3623 | 450 | 4.2810 | 0.3575 | 29491200 | | 4.4116 | 0.4831 | 600 | 4.2466 | 0.3615 | 39321600 | | 4.3487 | 0.6038 | 750 | 4.1795 | 0.3661 | 49152000 | | 4.338 | 0.7246 | 900 | 4.1874 | 0.3663 | 58982400 | | 4.342 | 0.8454 | 1050 | 4.1475 | 0.3695 | 68812800 | | 4.268 | 0.9661 | 1200 | 4.1215 | 0.3714 | 78643200 | | 4.2185 | 1.0869 | 1350 | 4.1032 | 0.3725 | 88472576 | | 4.2645 | 1.2077 | 1500 | 4.0859 | 0.3757 | 98302976 | | 4.2542 | 1.3284 | 1650 | 4.0730 | 0.3750 | 108133376 | | 4.2614 | 1.4492 | 1800 | 4.0682 | 0.3749 | 117963776 | | 4.1928 | 1.5700 | 1950 | 4.0596 | 0.3758 | 127794176 | | 4.1971 | 1.6907 | 2100 | 4.0505 | 0.3777 | 137624576 | | 4.1966 | 1.8115 | 2250 | 4.0163 | 0.3787 | 147454976 | | 4.16 | 1.9323 | 2400 | 4.0352 | 0.3774 | 157285376 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
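Since the card gives no inference example, a minimal sketch (untested): the checkpoint is RoBERTa-style, so the fill-mask pipeline uses the `<mask>` token; the example sentence is illustrative only:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="pszemraj/MiniLMv2-L6-H384_R-fineweb-100k")

# RoBERTa-style tokenizer, so the mask token is <mask>
for pred in fill("The capital of France is <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```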
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["BEE-spoke-data/fineweb-100k_en-med"], "metrics": ["accuracy"], "base_model": "pszemraj/MiniLMv2-L6-H384_R-simplewiki"}
pszemraj/MiniLMv2-L6-H384_R-fineweb-100k
null
[ "transformers", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "en", "dataset:BEE-spoke-data/fineweb-100k_en-med", "base_model:pszemraj/MiniLMv2-L6-H384_R-simplewiki", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:24:11+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
mohammedriza-rahman/unsloth_techzonebot
null
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:24:11+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_default_kl0.01-epoch1
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T12:25:42+00:00
null
transformers
# Uploaded model - **Developed by:** mohammedriza-rahman - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
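The card does not include a loading snippet; if this repository holds LoRA adapter weights (plausible given the Unsloth/TRL setup described above, but unverified), a hedged sketch with PEFT might look like this:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# assumption: this repo contains a LoRA adapter for the 4-bit base model named in the card
base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
model = PeftModel.from_pretrained(base, "mohammedriza-rahman/lora_model")
tokenizer = AutoTokenizer.from_pretrained("mohammedriza-rahman/lora_model")
```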
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
mohammedriza-rahman/lora_model
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:26:55+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-plm-nsp-100000 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3012 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.1655 | 1.0 | 782 | 0.1216 | | 0.0776 | 2.0 | 1564 | 0.1362 | | 0.0523 | 3.0 | 2346 | 0.1356 | | 0.0394 | 4.0 | 3128 | 0.1604 | | 0.0338 | 5.0 | 3910 | 0.1738 | | 0.0315 | 6.0 | 4692 | 0.2066 | | 0.0305 | 7.0 | 5474 | 0.1757 | | 0.0267 | 8.0 | 6256 | 0.2052 | | 0.0228 | 9.0 | 7038 | 0.2014 | | 0.0206 | 10.0 | 7820 | 0.2132 | | 0.0184 | 11.0 | 8602 | 0.1986 | | 0.0183 | 12.0 | 9384 | 0.2099 | | 0.0171 | 13.0 | 10166 | 0.2311 | | 0.0157 | 14.0 | 10948 | 0.2681 | | 0.0141 | 15.0 | 11730 | 0.2320 | | 0.0132 | 16.0 | 12512 | 0.3015 | | 0.0128 | 17.0 | 13294 | 0.2730 | | 0.012 | 18.0 | 14076 | 0.3014 | | 0.0119 | 19.0 | 14858 | 0.3021 | | 0.0117 | 20.0 | 15640 | 0.3012 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "roberta-large", "model-index": [{"name": "roberta-large-plm-nsp-100000", "results": []}]}
mhr2004/roberta-large-plm-nsp-100000
null
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:27:00+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cdp-multi-classifier-weighted This model is a fine-tuned version of [alex-miller/ODABert](https://huggingface.co/alex-miller/ODABert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8564 - Accuracy: 0.9716 - F1: 0.8484 - Precision: 0.7788 - Recall: 0.9316 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.0497 | 1.0 | 11302 | 1.5640 | 0.9621 | 0.8011 | 0.7244 | 0.8958 | | 0.9103 | 2.0 | 22604 | 1.4417 | 0.9663 | 0.8203 | 0.7522 | 0.9021 | | 0.7629 | 3.0 | 33906 | 0.9562 | 0.9661 | 0.8235 | 0.7406 | 0.9272 | | 0.6321 | 4.0 | 45208 | 0.9106 | 0.9697 | 0.8376 | 0.7720 | 0.9153 | | 0.5464 | 5.0 | 56510 | 0.9811 | 0.9705 | 0.8419 | 0.7760 | 0.9200 | | 0.5043 | 6.0 | 67812 | 0.9484 | 0.9700 | 0.8409 | 0.7677 | 0.9296 | | 0.4647 | 7.0 | 79114 | 0.8569 | 0.9713 | 0.8465 | 0.7781 | 0.9281 | | 0.4215 | 8.0 | 90416 | 0.8620 | 0.9703 | 0.8430 | 0.7682 | 0.9338 | | 0.3794 | 9.0 | 101718 | 0.8569 | 0.9704 | 0.8437 | 0.7682 | 0.9357 | | 0.344 | 10.0 | 113020 | 0.8305 | 0.9708 | 0.8448 | 0.7720 | 0.9328 | | 0.3247 | 11.0 | 124322 | 0.7900 | 0.9707 | 0.8446 | 0.7709 | 0.9338 | | 0.3159 | 12.0 | 135624 | 0.7838 | 0.9711 | 0.8463 | 0.7734 | 0.9344 | | 0.3166 | 13.0 | 146926 | 0.8381 | 0.9710 | 0.8462 | 0.7727 | 0.9351 | | 0.279 | 14.0 | 158228 | 0.8694 | 0.9718 | 0.8487 | 0.7821 | 0.9277 | | 0.281 | 15.0 | 169530 | 0.8564 | 0.9716 | 0.8484 | 0.7788 | 0.9316 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.0.1 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "alex-miller/ODABert", "model-index": [{"name": "cdp-multi-classifier-weighted", "results": []}]}
alex-miller/cdp-multi-classifier-weighted
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:alex-miller/ODABert", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:29:21+00:00
text-to-image
diffusers
{}
GraydientPlatformAPI/toonify-xl
null
[ "diffusers", "safetensors", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-04-30T12:29:22+00:00
text-generation
transformers
{}
itay-nakash/model_1c75e9a0b8
null
[ "transformers", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T12:31:16+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_default_kl0.01-epoch2
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T12:33:21+00:00
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Amrit05/swag_bert_finetuned This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3440 - Validation Loss: 0.5666 - Train Accuracy: 0.7995 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 9192, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.8207 | 0.5675 | 0.7803 | 0 | | 0.3440 | 0.5666 | 0.7995 | 1 | ### Framework versions - Transformers 4.39.3 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
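No usage code is included in the card; below is a hedged TensorFlow sketch for a SWAG-style multiple-choice head. The prompt and candidate endings are invented, and the (batch, num_choices, seq_len) input layout is the standard convention for this head rather than anything the card specifies.

```python
# Hypothetical TF inference sketch for a SWAG-style multiple-choice model.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMultipleChoice

repo = "Amrit05/swag_bert_finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForMultipleChoice.from_pretrained(repo)

prompt = "She opened the umbrella because"
endings = ["it started to rain.", "the oven was hot.",
           "the book was long.", "the car was parked."]
enc = tokenizer([prompt] * len(endings), endings, return_tensors="tf", padding=True)
# Add a leading batch axis so the model sees (1, num_choices, seq_len).
inputs = {k: tf.expand_dims(v, 0) for k, v in enc.items()}
logits = model(**inputs).logits  # shape (1, num_choices)
print(int(tf.argmax(logits, axis=-1)[0]))  # index of the predicted ending
```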
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "Amrit05/swag_bert_finetuned", "results": []}]}
Amrit05/swag_bert_finetuned
null
[ "transformers", "tf", "bert", "multiple-choice", "generated_from_keras_callback", "base_model:google-bert/bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:33:55+00:00
null
null
{}
yliu315/blow
null
[ "region:us" ]
null
2024-04-30T12:34:04+00:00
text-generation
transformers
{}
itay-nakash/model_6c2fcc454a
null
[ "transformers", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T12:35:21+00:00
null
transformers
## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/DavidAU/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best 
quality | | [GGUF](https://huggingface.co/mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF/resolve/main/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
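For readers who prefer Python over the llama.cpp CLI, here is a hedged local-inference sketch via llama-cpp-python. The repo and filename come from the table above; the context size, the prompt, and the choice of the Q4_K_M file (the "fast, recommended" row) are assumptions.

```python
# Hypothetical sketch using llama-cpp-python (pip install llama-cpp-python huggingface_hub).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF",
    filename="D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO.Q4_K_M.gguf",  # "fast, recommended"
)
llm = Llama(model_path=path, n_ctx=4096)  # raise n_ctx to use more of the 128k range
out = llm("Write a two-sentence scene set on a rainy space station.", max_tokens=128)
print(out["choices"][0]["text"])
```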
{"language": ["en"], "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit", "Eric111/Yarn-Mistral-7b-128k-DPO", "saishf/Multi-Verse-RP-7B"], "base_model": "DavidAU/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO", "quantized_by": "mradermacher"}
mradermacher/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO-GGUF
null
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Eric111/Yarn-Mistral-7b-128k-DPO", "saishf/Multi-Verse-RP-7B", "en", "base_model:DavidAU/D_AU-Multi-Verse-RP-Yarn-Mistral-7b-128k-DPO", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:35:30+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_default_kl0.01-epoch3
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T12:36:41+00:00
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
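The card does not record the network architecture or how to load the checkpoint, so the sketch below is the standard Unit 4 pattern (a small MLP policy with a softmax head) written from scratch, not the exact contents of this repo; the layer sizes are assumptions.

```python
# From-scratch sketch of the course's Reinforce policy pattern; hyperparameters are guesses.
import gymnasium as gym
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    def __init__(self, s_size=4, a_size=2, h_size=16):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        return F.softmax(self.fc2(F.relu(self.fc1(x))), dim=-1)

    def act(self, state):
        probs = self.forward(torch.from_numpy(state).float().unsqueeze(0))
        return torch.distributions.Categorical(probs).sample().item()

# One evaluation roll-out (here with a freshly initialised, untrained policy).
env = gym.make("CartPole-v1")
policy = Policy()
state, _ = env.reset()
total, done = 0.0, False
while not done:
    state, reward, terminated, truncated, _ = env.step(policy.act(state))
    total += reward
    done = terminated or truncated
print(total)
```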
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-cartpolev1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "422.30 +/- 72.49", "name": "mean_reward", "verified": false}]}]}]}
pietroorlandi/Reinforce-cartpolev1
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-30T12:36:41+00:00
translation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vinai_EnglistToVietnamese_v2 This model is a fine-tuned version of [vinai/vinai-translate-en2vi-v2](https://huggingface.co/vinai/vinai-translate-en2vi-v2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
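The card omits inference code; the sketch below mirrors generic MBart-style seq2seq usage (the repo tags list "mbart" and "text2text-generation"). The beam size, length limit, and example sentence are assumptions, since the card does not document generation settings.

```python
# Hypothetical en->vi translation sketch; generation settings are guesses.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "TammyTrinh/vinai_EnglistToVietnamese_v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
out = model.generate(**inputs, num_beams=5, max_length=128)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```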
{"tags": ["translation", "generated_from_trainer"], "base_model": "vinai/vinai-translate-en2vi-v2", "model-index": [{"name": "vinai_EnglistToVietnamese_v2", "results": []}]}
TammyTrinh/vinai_EnglistToVietnamese_v2
null
[ "transformers", "safetensors", "mbart", "text2text-generation", "translation", "generated_from_trainer", "base_model:vinai/vinai-translate-en2vi-v2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:37:21+00:00
feature-extraction
transformers
# fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564 ## Model Description fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564 is a fine-tuned version of jinaai/jina-embeddings-v2-small-en designed for a specific domain. ## Use Case This model is designed to support various applications in natural language processing and understanding. ## Associated Dataset The dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564). ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from transformers import AutoModel, AutoTokenizer llm_name = "fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564" tokenizer = AutoTokenizer.from_pretrained(llm_name, trust_remote_code=True) model = AutoModel.from_pretrained(llm_name, trust_remote_code=True) tokens = tokenizer("Your text here", return_tensors="pt") embedding = model(**tokens).last_hidden_state.mean(dim=1) ``` Note that `trust_remote_code=True` is required because this architecture ships custom modeling code, and the final line mean-pools the token states into a single sentence vector.
{}
fine-tuned/medical-20-0-16-jinaai_jina-embeddings-v2-small-en-100-gpt-3.5-turbo-0_9062874564
null
[ "transformers", "safetensors", "bert", "feature-extraction", "custom_code", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:38:21+00:00
feature-extraction
transformers
{}
Mihaiii/test18
null
[ "transformers", "safetensors", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:38:22+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-2.0-tamil-gpu-custom_preprocessed_v2 This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: inf - Wer: 0.4310 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.53567e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.2473 | 0.24 | 300 | inf | 0.4771 | | 0.7121 | 0.49 | 600 | inf | 0.3487 | | 0.552 | 0.73 | 900 | inf | 0.3140 | | 0.4973 | 0.97 | 1200 | inf | 0.3202 | | 0.499 | 1.22 | 1500 | inf | 0.2678 | | 0.4667 | 1.46 | 1800 | inf | 0.2784 | | 0.5909 | 1.71 | 2100 | inf | 0.3930 | | 1.411 | 1.95 | 2400 | inf | 0.3839 | | 2.1124 | 2.19 | 2700 | inf | 0.4063 | | 2.2398 | 2.44 | 3000 | inf | 0.4310 | | 2.3058 | 2.68 | 3300 | inf | 0.4310 | | 2.262 | 2.92 | 3600 | inf | 0.4310 | | 2.2588 | 3.17 | 3900 | inf | 0.4310 | | 2.3649 | 3.41 | 4200 | inf | 0.4310 | | 2.2835 | 3.66 | 4500 | inf | 0.4310 | | 2.3228 | 3.9 | 4800 | inf | 0.4310 | | 2.2322 | 4.14 | 5100 | inf | 0.4310 | | 2.3131 | 4.39 | 5400 | inf | 0.4310 | | 2.2916 | 4.63 | 5700 | inf | 0.4310 | | 2.3239 | 4.87 | 6000 | inf | 0.4310 | | 2.3533 | 5.12 | 6300 | inf | 0.4310 | | 2.2787 | 5.36 | 6600 | inf | 0.4310 | | 2.2776 | 5.61 | 6900 | inf | 0.4310 | | 2.3143 | 5.85 | 7200 | inf | 0.4310 | | 2.3105 | 6.09 | 7500 | inf | 0.4310 | | 2.2639 | 6.34 | 7800 | inf | 0.4310 | | 2.3211 | 6.58 | 8100 | inf | 0.4310 | | 2.2755 | 6.82 | 8400 | inf | 0.4310 | | 2.3074 | 7.07 | 8700 | inf | 0.4310 | | 2.2627 | 7.31 | 9000 | inf | 0.4310 | | 2.2756 | 7.55 | 9300 | inf | 0.4310 | | 2.2594 | 7.8 | 9600 | inf | 0.4310 | | 2.2221 | 8.04 | 9900 | inf | 0.4310 | | 2.2932 | 8.29 | 10200 | inf | 0.4310 | | 2.2978 | 8.53 | 10500 | inf | 0.4310 | | 2.2958 | 8.77 | 10800 | inf | 0.4310 | | 2.3239 | 9.02 | 11100 | inf | 0.4310 | | 2.281 | 9.26 | 11400 | inf | 0.4310 | | 2.272 | 9.5 | 11700 | inf | 0.4310 | | 2.2544 | 9.75 | 12000 | inf | 0.4310 | | 2.3103 | 9.99 | 12300 | inf | 0.4310 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
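The card has no usage section; a hedged transcription sketch follows. It assumes the checkpoint is a CTC head on w2v-BERT 2.0 with a processor saved in the repo, and `sample_ta.wav` is a hypothetical 16 kHz mono recording, not a file shipped with the model.

```python
# Hypothetical Tamil ASR sketch; the audio file name is a placeholder.
import soundfile as sf
import torch
from transformers import AutoProcessor, Wav2Vec2BertForCTC

repo = "Sajjo/w2v-bert-2.0-tamil-gpu-custom_preprocessed_v2"
processor = AutoProcessor.from_pretrained(repo)
model = Wav2Vec2BertForCTC.from_pretrained(repo)

audio, sr = sf.read("sample_ta.wav")  # expects 16 kHz mono float audio
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    ids = model(**inputs).logits.argmax(dim=-1)
print(processor.batch_decode(ids)[0])
```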
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/w2v-bert-2.0", "model-index": [{"name": "w2v-bert-2.0-tamil-gpu-custom_preprocessed_v2", "results": []}]}
Sajjo/w2v-bert-2.0-tamil-gpu-custom_preprocessed_v2
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/w2v-bert-2.0", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-30T12:40:55+00:00
null
null
# int2eh/deepseek-coder-6.7b-instruct-Q6_K-GGUF This model was converted to GGUF format from [`deepseek-ai/deepseek-coder-6.7b-instruct`](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew: ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo int2eh/deepseek-coder-6.7b-instruct-Q6_K-GGUF --model deepseek-coder-6.7b-instruct.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo int2eh/deepseek-coder-6.7b-instruct-Q6_K-GGUF --model deepseek-coder-6.7b-instruct.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ```bash git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m deepseek-coder-6.7b-instruct.Q6_K.gguf -n 128 ```
{"license": "other", "tags": ["llama-cpp", "gguf-my-repo"], "license_name": "deepseek", "license_link": "LICENSE"}
int2eh/deepseek-coder-6.7b-instruct-Q6_K-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "license:other", "region:us" ]
null
2024-04-30T12:41:14+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Ahjeong/dpo_gemma_7b_bf16_lr5e-7_origindset_beta0.5_kl0.01-epoch1
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T12:41:17+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # python-gpt2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1448 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 9.2956 | 0.0138 | 25 | 7.9483 | | 6.8319 | 0.0275 | 50 | 6.0463 | | 5.653 | 0.0413 | 75 | 5.3905 | | 5.0998 | 0.0551 | 100 | 5.0523 | | 4.7296 | 0.0688 | 125 | 4.7295 | | 4.4676 | 0.0826 | 150 | 4.4801 | | 4.2285 | 0.0964 | 175 | 4.2580 | | 4.0335 | 0.1101 | 200 | 4.0891 | | 3.8654 | 0.1239 | 225 | 3.9376 | | 3.7442 | 0.1377 | 250 | 3.8222 | | 3.6155 | 0.1514 | 275 | 3.7006 | | 3.4805 | 0.1652 | 300 | 3.5997 | | 3.3804 | 0.1790 | 325 | 3.4840 | | 3.3074 | 0.1927 | 350 | 3.3887 | | 3.1737 | 0.2065 | 375 | 3.2711 | | 3.0593 | 0.2203 | 400 | 3.1535 | | 2.9634 | 0.2340 | 425 | 3.0443 | | 2.887 | 0.2478 | 450 | 2.9574 | | 2.7808 | 0.2616 | 475 | 2.8775 | | 2.7117 | 0.2753 | 500 | 2.8190 | | 2.6611 | 0.2891 | 525 | 2.7515 | | 2.6141 | 0.3029 | 550 | 2.7097 | | 2.5752 | 0.3167 | 575 | 2.6704 | | 2.5038 | 0.3304 | 600 | 2.6307 | | 2.4852 | 0.3442 | 625 | 2.6004 | | 2.4638 | 0.3580 | 650 | 2.5696 | | 2.4362 | 0.3717 | 675 | 2.5343 | | 2.3896 | 0.3855 | 700 | 2.5131 | | 2.3669 | 0.3993 | 725 | 2.4886 | | 2.3174 | 0.4130 | 750 | 2.4695 | | 2.3152 | 0.4268 | 775 | 2.4478 | | 2.2916 | 0.4406 | 800 | 2.4271 | | 2.2743 | 0.4543 | 825 | 2.4166 | | 2.2555 | 0.4681 | 850 | 2.3959 | | 2.2545 | 0.4819 | 875 | 2.3794 | | 2.2291 | 0.4956 | 900 | 2.3645 | | 2.2032 | 0.5094 | 925 | 2.3499 | | 2.1842 | 0.5232 | 950 | 2.3382 | | 2.1505 | 0.5369 | 975 | 2.3263 | | 2.1668 | 0.5507 | 1000 | 2.3147 | | 2.1649 | 0.5645 | 1025 | 2.3072 | | 2.1427 | 0.5782 | 1050 | 2.2926 | | 2.1051 | 0.5920 | 1075 | 2.2799 | | 2.0792 | 0.6058 | 1100 | 2.2708 | | 2.1171 | 0.6195 | 1125 | 2.2570 | | 2.1012 | 0.6333 | 1150 | 2.2470 | | 2.0853 | 0.6471 | 1175 | 2.2405 | | 2.0786 | 0.6608 | 1200 | 2.2312 | | 2.0664 | 0.6746 | 1225 | 2.2238 | | 2.0706 | 0.6884 | 1250 | 2.2183 | | 2.0557 | 0.7021 | 1275 | 2.2102 | | 2.0404 | 0.7159 | 1300 | 2.2042 | | 2.0493 | 0.7297 | 1325 | 2.1978 | | 2.0373 | 0.7434 | 1350 | 2.1907 | | 2.0093 | 0.7572 | 1375 | 2.1837 | | 2.0228 | 0.7710 | 1400 | 2.1819 | | 2.0147 | 0.7847 | 1425 | 2.1739 | | 2.0206 | 0.7985 | 1450 | 2.1694 | | 2.0156 | 0.8123 | 1475 | 2.1671 | | 2.0126 | 0.8260 | 1500 | 2.1622 | | 1.9834 | 0.8398 | 1525 | 2.1598 | | 2.0182 | 0.8536 | 1550 | 2.1558 | | 1.9876 | 0.8674 | 1575 | 2.1543 | | 1.9914 | 0.8811 | 1600 | 2.1515 | | 1.9933 | 0.8949 | 1625 | 2.1498 | | 1.9945 | 0.9087 | 1650 | 2.1483 | | 1.9733 | 0.9224 | 1675 | 2.1470 | | 1.9778 | 0.9362 | 1700 | 2.1467 | | 1.983 | 0.9500 | 1725 | 2.1454 | | 1.9716 | 0.9637 | 1750 | 2.1453 | | 
1.9668 | 0.9775 | 1775 | 2.1449 | | 1.9733 | 0.9913 | 1800 | 2.1448 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
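Since the card stops at the training log, here is a hedged generation sketch. The Python-code prompt is a guess based on the repo name; the card never states what corpus the model was tuned on, and the sampling settings are arbitrary.

```python
# Hypothetical generation sketch; prompt and sampling settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MadMarx37/python-gpt2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95,
                     pad_token_id=tokenizer.eos_token_id)  # GPT-2 has no pad token
print(tokenizer.decode(out[0], skip_special_tokens=True))
```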
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "python-gpt2", "results": []}]}
MadMarx37/python-gpt2
null
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-30T12:42:02+00:00