modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
Dev2410/Gemma_7B_cr2 | Dev2410 | 2024-02-29T08:17:44Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-29T08:06:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
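In the absence of author-provided code, the tags (`gemma`, `text-generation`, `conversational`) suggest a standard 🤗 Transformers causal-LM checkpoint. A minimal, hedged loading sketch under that assumption follows; the repository ID is taken from this listing, and the prompt and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Dev2410/Gemma_7B_cr2"  # repository ID from this listing

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# The "conversational" tag suggests a chat template is available; the message is illustrative.
messages = [{"role": "user", "content": "Summarise what a model card is."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```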
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
betajuned/gpt2-indonesian-unila | betajuned | 2024-02-29T08:16:48Z | 0 | 0 | null | [
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-29T08:16:45Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
saadrasheeddev/lora-flan-t5-cv-chatbot | saadrasheeddev | 2024-02-29T08:12:53Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-28T09:57:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
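Given the `t5` / `text2text-generation` tags, a hedged sketch of loading this checkpoint as a sequence-to-sequence model is shown below; the prompt is purely illustrative, since the input format used during fine-tuning is not documented here:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "saadrasheeddev/lora-flan-t5-cv-chatbot"  # repository ID from this listing

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Illustrative question; the real expected input format is not documented in this card.
question = "What programming languages does the candidate know?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```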
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KVNAditya/drl__u4__p_ple_v0 | KVNAditya | 2024-02-29T08:12:17Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-29T08:12:14Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: drl__u4__p_ple_v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 16.70 +/- 19.48
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kevin1313/code-search-net-tokenizer | kevin1313 | 2024-02-29T08:09:38Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-29T08:09:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
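Judging by the repository name, this appears to be a tokenizer trained on CodeSearchNet-style data rather than a full model. A minimal sketch of loading and using it, under that assumption:

```python
from transformers import AutoTokenizer

# Assumes the repository contains only a tokenizer, as its name suggests.
tokenizer = AutoTokenizer.from_pretrained("kevin1313/code-search-net-tokenizer")

code = "def add(a, b):\n    return a + b"
print(tokenizer.tokenize(code))        # sub-word tokens for the snippet
print(tokenizer(code).input_ids)       # corresponding token IDs
```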
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yechenzhi1/Reinforce-PixelCopter | yechenzhi1 | 2024-02-29T08:04:29Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-29T08:04:26Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.20 +/- 16.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kumatomo/BasicGCN | kumatomo | 2024-02-29T08:03:55Z | 1 | 0 | pytorch_geometric | [
"pytorch_geometric",
"pretrain",
"graph-machine-learning",
"en",
"dataset:QM9",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2024-02-29T07:20:14Z | ---
language: en
license: mit
library_name: pytorch_geometric
tags:
- graph-machine-learning
datasets: QM9
model_name: GCN
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
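The metadata (library `pytorch_geometric`, dataset QM9, model name GCN) points to a graph convolutional network for molecular property regression. The repository's exact architecture and loading code are not documented, so the sketch below is only an illustrative GCN on QM9 with assumed layer sizes:

```python
import torch
from torch_geometric.datasets import QM9
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GCNConv, global_mean_pool

class BasicGCN(torch.nn.Module):
    """Illustrative two-layer GCN regressing a single molecular property."""
    def __init__(self, in_dim=11, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index, batch):
        x = self.conv1(x, edge_index).relu()
        x = self.conv2(x, edge_index).relu()
        x = global_mean_pool(x, batch)          # one embedding per molecule
        return self.head(x)

dataset = QM9(root="data/QM9")                  # small molecules with 11 node features
loader = DataLoader(dataset, batch_size=32, shuffle=True)
model = BasicGCN(in_dim=dataset.num_node_features)

batch = next(iter(loader))
pred = model(batch.x, batch.edge_index, batch.batch)  # shape [32, 1]
```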
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Dohui/embedding_workshop | Dohui | 2024-02-29T08:01:36Z | 196 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-29T07:58:28Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: embedding_workshop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# embedding_workshop
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
nilq/lua-mistral-1L-mini | nilq | 2024-02-29T07:56:17Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"dataset:nilq/small-lua-stack",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-28T19:24:11Z | ---
tags:
- generated_from_trainer
datasets:
- nilq/small-lua-stack
metrics:
- accuracy
model-index:
- name: lua-mistral-1L-mini
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: nilq/small-lua-stack
type: nilq/small-lua-stack
metrics:
- name: Accuracy
type: accuracy
value: 0.4208221928842605
---
# lua-mistral-1L-mini
This model is a mini single-layer Mistral model pre-trained on the `nilq/small-lua-stack` dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0245
- Accuracy: 0.4208
## Model description
This model might contain a very simple model of Lua.
## Intended uses & limitations
Let's see if we can find some interesting stuff inside this model.
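As a starting point for poking around, here is a minimal sketch for sampling Lua completions from the checkpoint with 🤗 Transformers; the prompt and generation settings are illustrative:

```python
from transformers import pipeline

# Tiny single-layer model, so CPU generation is fine; the Lua prompt is illustrative.
generator = pipeline("text-generation", model="nilq/lua-mistral-1L-mini")
print(generator("local function sum(t)\n", max_new_tokens=48, do_sample=True)[0]["generated_text"])
```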
## Training and evaluation data
Trained on the Lua subset of The Stack.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Training results
- Loss: 3.016
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Chattiori/PersikaFurry | Chattiori | 2024-02-29T07:51:19Z | 0 | 2 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-16T08:57:22Z | ---
license: creativeml-openrail-m
---
|
AyanAnsar/llama2-qlora-finetuned-slimorca-1k | AyanAnsar | 2024-02-29T07:50:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-29T07:42:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tomaszki/gemma-9 | tomaszki | 2024-02-29T07:47:41Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-29T07:40:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo-6.5bpw-exl2 | CorticalStack | 2024-02-29T07:47:35Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-29T07:42:09Z | ---
license: apache-2.0
---
<img src="neurotic-crown-clown-tak-stack.png" alt="Neurotic crown clown tak stack logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo
An EXL2 6.5bpw quantised version of [CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo](https://huggingface.co/CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo).
An incomplete list of clients and libraries that are known to support EXL2:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [exllamav2](https://github.com/turboderp/exllamav2), an inference library for running local LLMs on modern consumer GPUs. |
Reshphil/lab2_efficient | Reshphil | 2024-02-29T07:46:44Z | 103 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-29T07:46:25Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 50.03285120906795
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9574
- Bleu: 50.0329
## Model description
More information needed
## Intended uses & limitations
More information needed
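As a rough starting point, a hedged sketch of running English-to-French translation with this checkpoint via the `transformers` pipeline; the example sentence is illustrative:

```python
from transformers import pipeline

# Marian checkpoints work with the generic "translation" pipeline task.
translator = pipeline("translation", model="Reshphil/lab2_efficient")
print(translator("Default to expanded threads")[0]["translation_text"])
```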
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Yukino666/a2c-PandaReachDense-v3 | Yukino666 | 2024-02-29T07:44:44Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-29T07:36:22Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.21 +/- 0.08
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
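A minimal sketch of what that loading code could look like; the checkpoint filename inside the repository is an assumption based on common naming, and the environment requires `panda_gym` to be installed:

```python
import gymnasium as gym
import panda_gym  # registers PandaReachDense-v3 with Gymnasium
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; check the repository's file list for the actual checkpoint name.
checkpoint = load_from_hub(repo_id="Yukino666/a2c-PandaReachDense-v3",
                           filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward = {mean_reward:.2f} +/- {std_reward:.2f}")
```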
|
Dev2410/Gemma_adapter_cr2 | Dev2410 | 2024-02-29T07:42:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-29T07:40:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ahmedrizwan239/NativeSQL-with-schema | ahmedrizwan239 | 2024-02-29T07:37:36Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"en",
"arxiv:1910.09700",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-28T09:07:33Z | ---
language: en
widget:
- text: >-
tables: student(id,name, age). question: how many students with name ahmed and age less than 25
license: cc-by-sa-4.0
pipeline_tag: text2text-generation
inference:
parameters:
max_length: 512
num_beams: 10
top_k: 10
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
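Based on the widget example in the metadata above, inputs follow a `tables: ... question: ...` format. A hedged sketch of querying the model with 🤗 Transformers, reusing that prompt and the inference parameters declared in the metadata:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "ahmedrizwan239/NativeSQL-with-schema"  # repository ID from this listing
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Prompt format taken from the widget example in the metadata above.
prompt = ("tables: student(id,name, age). "
          "question: how many students with name ahmed and age less than 25")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512, num_beams=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```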
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
renyiyu/llama-2-7b-ppo-lora-v0.1 | renyiyu | 2024-02-29T07:35:49Z | 0 | 0 | null | [
"pytorch",
"safetensors",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-23T10:26:56Z | ---
base_model: meta-llama/Llama-2-7b-hf
---
# Model Details
- SFT: based on [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf), trained on merged Alpaca datasets
- DPO: trained on top of the SFT model as a LoRA adapter, with merged [hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) data
- PPO: trained on top of the DPO model and a reward model, using multiple adapters, with [PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) data for further RLHF
- Trained with DeepSpeed ZeRO-1 + TRL + QLoRA + Flash-Attention 2
## Model and Training Details
- **Finetuned from model:** [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- **Dataset:**
- SFT (mixed train):
- [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned)
- [vicgalle/alpaca-gpt4](https://huggingface.co/datasets/vicgalle/alpaca-gpt4)
- DPO (mixed train):
- [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [Unified-Language-Model-Alignment/Anthropic_HH_Golden](https://huggingface.co/datasets/Unified-Language-Model-Alignment/Anthropic_HH_Golden)
- PPO:
- [PKU-Alignment/PKU-SafeRLHF-10K](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF-10K)
- [PKU-Alignment/PKU-SafeRLHF-30K](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF-30K)
- [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF)
### Training Results

### Evaluation
Reward and toxicity scores are computed on the [PKU-Alignment/PKU-SafeRLHF-30K](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF-30K) data and compared across the SFT/DPO/PPO models.
| Model | Toxicity | Reward |
| ----- |:--------:|:--------:|
| SFT_v0.1 | 0.0698 | -0.2828 |
| DPO_v0.1 | 0.0356 | -0.2633 |
| PPO_v0.1 | 0.0321 | 0.38 |

### Compute Infrastructure
The model is trained using 8 * RTX-3090-24GB/A100-PCIE-40GB
### Inference
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "renyiyu/llama-2-7b-ppo-lora-v0.1"  # this repository
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Align padding/EOS handling with the tokens used during training.
# DEFINE_EOS_TOKEN is a placeholder: set it to the EOS token string used for training.
tokenizer.pad_token = tokenizer.eos_token
tokenizer.eos_token = DEFINE_EOS_TOKEN
model.config.eos_token = DEFINE_EOS_TOKEN
model.config.eos_token_id = tokenizer.eos_token_id

def format_prompt(question):
    return f"###Question: {question}\n###Answer: "

instruction = "Your text here"
prompt = format_prompt(instruction)
inputs = tokenizer(prompt, return_tensors='pt')
output = model.generate(inputs['input_ids'], max_new_tokens=512, do_sample=False, top_p=1)
output = tokenizer.decode(output[0], skip_special_tokens=True)
print(output)
```
## Model Card Authors
Yiyu (Michael) Ren
## Model Card Contact
Email: [email protected]
### Framework versions
- PEFT 0.8.2 |
vignesh-spericorn/test-dialogue-summarization | vignesh-spericorn | 2024-02-29T07:35:23Z | 103 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-29T07:35:04Z | ---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test-dialogue-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-dialogue-summarization
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8387
- Rouge1: 48.1775
- Rouge2: 24.5925
- Rougel: 40.3237
- Rougelsum: 43.9647
- Gen Len: 18.4707
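The card does not ship a usage snippet; assuming the checkpoint works with the standard summarization pipeline, a minimal sketch would look like this (the sample dialogue is made up):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="vignesh-spericorn/test-dialogue-summarization")

dialogue = (
    "Anna: Are we still on for dinner tonight?\n"
    "Ben: Yes! 7 pm at the usual place.\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```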
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8408 | 1.0 | 1841 | 1.5902 | 47.4895 | 24.7763 | 40.0228 | 44.4895 | 18.5159 |
| 1.5348 | 2.0 | 3683 | 1.5498 | 48.0242 | 24.8392 | 40.559 | 44.2542 | 17.6015 |
| 1.3076 | 3.0 | 5524 | 1.5561 | 48.5695 | 25.9259 | 41.4698 | 44.6406 | 17.4658 |
| 1.1286 | 4.0 | 7366 | 1.5796 | 48.5079 | 25.1521 | 40.8084 | 44.6149 | 18.4364 |
| 0.9956 | 5.0 | 9207 | 1.6134 | 49.1351 | 25.6367 | 41.3139 | 45.0814 | 18.3313 |
| 0.8668 | 6.0 | 11049 | 1.6679 | 49.002 | 25.4589 | 41.1276 | 44.787 | 18.4853 |
| 0.7696 | 7.0 | 12890 | 1.7327 | 48.1978 | 25.0238 | 40.6671 | 44.3866 | 18.3374 |
| 0.69 | 8.0 | 14732 | 1.7603 | 48.7522 | 25.0831 | 40.8193 | 44.4452 | 18.4597 |
| 0.6175 | 9.0 | 16573 | 1.8092 | 48.2747 | 24.8563 | 40.3027 | 44.1975 | 18.3729 |
| 0.5701 | 10.0 | 18410 | 1.8387 | 48.1775 | 24.5925 | 40.3237 | 43.9647 | 18.4707 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
idn123009/ppo-Huggy | idn123009 | 2024-02-29T07:30:50Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-02-29T07:15:42Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: idn123009/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
uni-zhuan/Taxi-v3 | uni-zhuan | 2024-02-29T07:29:11Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-29T07:29:09Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL course notebook;
# it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="uni-zhuan/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
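To roll the policy out after loading, something like the following should work. This is a sketch that assumes the Gymnasium API and that the pickled dictionary stores the table under a `qtable` key, as in the Deep RL course notebooks:
```python
import numpy as np

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-values
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```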
|
vignesh-spericorn/t5-small-samsum | vignesh-spericorn | 2024-02-29T07:22:33Z | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-29T07:22:21Z | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-samsum
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6707
- Rouge1: 43.8206
- Rouge2: 19.9652
- Rougel: 36.0416
- Rougelsum: 40.0887
- Gen Len: 17.0305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.016 | 1.0 | 1842 | 1.7872 | 40.6656 | 17.0772 | 33.6487 | 37.3124 | 16.9829 |
| 1.8798 | 2.0 | 3684 | 1.7375 | 42.1059 | 18.6064 | 35.0368 | 38.6458 | 16.7045 |
| 1.8219 | 3.0 | 5526 | 1.7062 | 43.2636 | 19.4321 | 35.6415 | 39.5613 | 16.8266 |
| 1.77 | 4.0 | 7368 | 1.6990 | 43.2211 | 19.5021 | 35.5155 | 39.6933 | 17.1905 |
| 1.7408 | 5.0 | 9210 | 1.6878 | 43.9084 | 19.8501 | 36.2255 | 40.2666 | 16.7766 |
| 1.7113 | 6.0 | 11052 | 1.6816 | 44.0573 | 20.1359 | 36.426 | 40.4933 | 16.9829 |
| 1.692 | 7.0 | 12894 | 1.6771 | 43.9234 | 19.9018 | 36.0759 | 40.1654 | 16.9158 |
| 1.6771 | 8.0 | 14736 | 1.6723 | 43.5824 | 19.8023 | 35.9709 | 39.963 | 16.9731 |
| 1.6604 | 9.0 | 16578 | 1.6718 | 43.8502 | 19.9263 | 36.157 | 40.1653 | 17.0134 |
| 1.6575 | 10.0 | 18420 | 1.6707 | 43.8206 | 19.9652 | 36.0416 | 40.0887 | 17.0305 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
holyofferings/roberta-finetuned-subjqa-movies_2 | holyofferings | 2024-02-29T07:21:33Z | 121 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-29T07:00:56Z | ---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unspecified dataset.
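No usage example is provided; assuming the checkpoint loads with the standard extractive QA pipeline, a minimal sketch (with a made-up question/context pair) is:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="holyofferings/roberta-finetuned-subjqa-movies_2")

result = qa(
    question="Who directed the movie?",
    context="The movie was directed by Christopher Nolan and praised for its acting.",
)
print(result["answer"], result["score"])
```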
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
KVNAditya/drl__u8_2__dhgs | KVNAditya | 2024-02-29T07:20:33Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-29T07:10:08Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.02 +/- 5.78
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r KVNAditya/drl__u8_2__dhgs
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=drl__u8_2__dhgs
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=drl__u8_2__dhgs --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
ultimecia/xlm-roberta-base-finetuned-panx-all | ultimecia | 2024-02-29T07:19:23Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-29T07:01:24Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1698
- F1: 0.8482
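For reference, a minimal inference sketch, assuming the checkpoint works with the standard token-classification pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ultimecia/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Angela Merkel besuchte im März die Universität Paris."))
```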
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 209 | 0.1982 | 0.8129 |
| No log | 2.0 | 418 | 0.1723 | 0.8407 |
| No log | 3.0 | 627 | 0.1698 | 0.8482 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
DouglasPontes/2020-Q4-50p-filtered-random-prog_from_Q3 | DouglasPontes | 2024-02-29T07:15:09Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:DouglasPontes/2020-Q3-50p-filtered-random",
"base_model:finetune:DouglasPontes/2020-Q3-50p-filtered-random",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-26T14:36:40Z | ---
license: mit
base_model: DouglasPontes/2020-Q3-50p-filtered-random
tags:
- generated_from_trainer
model-index:
- name: 2020-Q4-50p-filtered-random-prog_from_Q3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2020-Q4-50p-filtered-random-prog_from_Q3
This model is a fine-tuned version of [DouglasPontes/2020-Q3-50p-filtered-random](https://huggingface.co/DouglasPontes/2020-Q3-50p-filtered-random) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2730
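No usage snippet is included; assuming the checkpoint behaves like any RoBERTa-style masked-language model, a minimal sketch is:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="DouglasPontes/2020-Q4-50p-filtered-random-prog_from_Q3")

# RoBERTa-style models use <mask> as the mask token.
for pred in fill_mask("The vaccine rollout has been <mask> than expected."):
    print(pred["token_str"], round(pred["score"], 3))
```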
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.1e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2400000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| No log | 0.03 | 8000 | 2.3816 |
| 2.5089 | 0.07 | 16000 | 2.3647 |
| 2.5089 | 0.1 | 24000 | 2.3552 |
| 2.4989 | 0.13 | 32000 | 2.3528 |
| 2.4989 | 0.17 | 40000 | 2.3486 |
| 2.4836 | 0.2 | 48000 | 2.3463 |
| 2.4836 | 0.24 | 56000 | 2.3411 |
| 2.4904 | 0.27 | 64000 | 2.3394 |
| 2.4904 | 0.3 | 72000 | 2.3350 |
| 2.4733 | 0.34 | 80000 | 2.3309 |
| 2.4733 | 0.37 | 88000 | 2.3289 |
| 2.4675 | 0.4 | 96000 | 2.3381 |
| 2.4675 | 0.44 | 104000 | 2.3317 |
| 2.4762 | 0.47 | 112000 | 2.3218 |
| 2.4762 | 0.51 | 120000 | 2.3267 |
| 2.4616 | 0.54 | 128000 | 2.3241 |
| 2.4616 | 0.57 | 136000 | 2.3281 |
| 2.4601 | 0.61 | 144000 | 2.3152 |
| 2.4601 | 0.64 | 152000 | 2.3243 |
| 2.4563 | 0.67 | 160000 | 2.3202 |
| 2.4563 | 0.71 | 168000 | 2.3193 |
| 2.459 | 0.74 | 176000 | 2.3182 |
| 2.459 | 0.77 | 184000 | 2.3247 |
| 2.4639 | 0.81 | 192000 | 2.3201 |
| 2.4639 | 0.84 | 200000 | 2.3243 |
| 2.4561 | 0.88 | 208000 | 2.3218 |
| 2.4561 | 0.91 | 216000 | 2.3137 |
| 2.4556 | 0.94 | 224000 | 2.3180 |
| 2.4556 | 0.98 | 232000 | 2.3147 |
| 2.4573 | 1.01 | 240000 | 2.3100 |
| 2.4573 | 1.04 | 248000 | 2.3118 |
| 2.4516 | 1.08 | 256000 | 2.3158 |
| 2.4516 | 1.11 | 264000 | 2.3133 |
| 2.4561 | 1.15 | 272000 | 2.3065 |
| 2.4561 | 1.18 | 280000 | 2.3183 |
| 2.4476 | 1.21 | 288000 | 2.3106 |
| 2.4476 | 1.25 | 296000 | 2.3131 |
| 2.4503 | 1.28 | 304000 | 2.3104 |
| 2.4503 | 1.31 | 312000 | 2.3101 |
| 2.4495 | 1.35 | 320000 | 2.3086 |
| 2.4495 | 1.38 | 328000 | 2.3057 |
| 2.4534 | 1.41 | 336000 | 2.3086 |
| 2.4534 | 1.45 | 344000 | 2.3093 |
| 2.4486 | 1.48 | 352000 | 2.3018 |
| 2.4486 | 1.52 | 360000 | 2.3060 |
| 2.4457 | 1.55 | 368000 | 2.3083 |
| 2.4457 | 1.58 | 376000 | 2.3110 |
| 2.4443 | 1.62 | 384000 | 2.2975 |
| 2.4443 | 1.65 | 392000 | 2.3009 |
| 2.4405 | 1.68 | 400000 | 2.3067 |
| 2.4405 | 1.72 | 408000 | 2.3027 |
| 2.4531 | 1.75 | 416000 | 2.3050 |
| 2.4531 | 1.79 | 424000 | 2.3026 |
| 2.4539 | 1.82 | 432000 | 2.2929 |
| 2.4539 | 1.85 | 440000 | 2.3051 |
| 2.4499 | 1.89 | 448000 | 2.3035 |
| 2.4499 | 1.92 | 456000 | 2.3011 |
| 2.4401 | 1.95 | 464000 | 2.2920 |
| 2.4401 | 1.99 | 472000 | 2.2999 |
| 2.4401 | 2.02 | 480000 | 2.3034 |
| 2.4401 | 2.05 | 488000 | 2.3021 |
| 2.4433 | 2.09 | 496000 | 2.3102 |
| 2.4433 | 2.12 | 504000 | 2.2985 |
| 2.4445 | 2.16 | 512000 | 2.3018 |
| 2.4445 | 2.19 | 520000 | 2.2996 |
| 2.4379 | 2.22 | 528000 | 2.3006 |
| 2.4379 | 2.26 | 536000 | 2.2970 |
| 2.4454 | 2.29 | 544000 | 2.3014 |
| 2.4454 | 2.32 | 552000 | 2.2992 |
| 2.4457 | 2.36 | 560000 | 2.2962 |
| 2.4457 | 2.39 | 568000 | 2.3009 |
| 2.4354 | 2.43 | 576000 | 2.2960 |
| 2.4354 | 2.46 | 584000 | 2.3008 |
| 2.4361 | 2.49 | 592000 | 2.2898 |
| 2.4361 | 2.53 | 600000 | 2.3060 |
| 2.4377 | 2.56 | 608000 | 2.2990 |
| 2.4377 | 2.59 | 616000 | 2.2989 |
| 2.4416 | 2.63 | 624000 | 2.2969 |
| 2.4416 | 2.66 | 632000 | 2.2933 |
| 2.434 | 2.69 | 640000 | 2.2998 |
| 2.434 | 2.73 | 648000 | 2.2948 |
| 2.433 | 2.76 | 656000 | 2.2892 |
| 2.433 | 2.8 | 664000 | 2.2929 |
| 2.44 | 2.83 | 672000 | 2.2973 |
| 2.44 | 2.86 | 680000 | 2.2926 |
| 2.4291 | 2.9 | 688000 | 2.2990 |
| 2.4291 | 2.93 | 696000 | 2.2937 |
| 2.4336 | 2.96 | 704000 | 2.2894 |
| 2.4336 | 3.0 | 712000 | 2.2958 |
| 2.439 | 3.03 | 720000 | 2.2956 |
| 2.439 | 3.07 | 728000 | 2.2928 |
| 2.4405 | 3.1 | 736000 | 2.2956 |
| 2.4405 | 3.13 | 744000 | 2.2905 |
| 2.4332 | 3.17 | 752000 | 2.2921 |
| 2.4332 | 3.2 | 760000 | 2.2907 |
| 2.4353 | 3.23 | 768000 | 2.2879 |
| 2.4353 | 3.27 | 776000 | 2.2929 |
| 2.4273 | 3.3 | 784000 | 2.2953 |
| 2.4273 | 3.33 | 792000 | 2.2917 |
| 2.4233 | 3.37 | 800000 | 2.2947 |
| 2.4233 | 3.4 | 808000 | 2.2943 |
| 2.4324 | 3.44 | 816000 | 2.2940 |
| 2.4324 | 3.47 | 824000 | 2.2911 |
| 2.4461 | 3.5 | 832000 | 2.2920 |
| 2.4461 | 3.54 | 840000 | 2.2911 |
| 2.4267 | 3.57 | 848000 | 2.2940 |
| 2.4267 | 3.6 | 856000 | 2.2890 |
| 2.4313 | 3.64 | 864000 | 2.2913 |
| 2.4313 | 3.67 | 872000 | 2.2967 |
| 2.4388 | 3.71 | 880000 | 2.2907 |
| 2.4388 | 3.74 | 888000 | 2.2952 |
| 2.4326 | 3.77 | 896000 | 2.2873 |
| 2.4326 | 3.81 | 904000 | 2.2871 |
| 2.4312 | 3.84 | 912000 | 2.2880 |
| 2.4312 | 3.87 | 920000 | 2.2941 |
| 2.4398 | 3.91 | 928000 | 2.2925 |
| 2.4398 | 3.94 | 936000 | 2.2965 |
| 2.441 | 3.97 | 944000 | 2.2890 |
| 2.441 | 4.01 | 952000 | 2.2946 |
| 2.4345 | 4.04 | 960000 | 2.2910 |
| 2.4345 | 4.08 | 968000 | 2.2792 |
| 2.4332 | 4.11 | 976000 | 2.2856 |
| 2.4332 | 4.14 | 984000 | 2.2879 |
| 2.4375 | 4.18 | 992000 | 2.2861 |
| 2.4375 | 4.21 | 1000000 | 2.2892 |
| 2.4282 | 4.24 | 1008000 | 2.2894 |
| 2.4282 | 4.28 | 1016000 | 2.2902 |
| 2.4231 | 4.31 | 1024000 | 2.2830 |
| 2.4231 | 4.35 | 1032000 | 2.2948 |
| 2.4299 | 4.38 | 1040000 | 2.2915 |
| 2.4299 | 4.41 | 1048000 | 2.2922 |
| 2.4353 | 4.45 | 1056000 | 2.2876 |
| 2.4353 | 4.48 | 1064000 | 2.2893 |
| 2.4308 | 4.51 | 1072000 | 2.2920 |
| 2.4308 | 4.55 | 1080000 | 2.2860 |
| 2.4358 | 4.58 | 1088000 | 2.2907 |
| 2.4358 | 4.61 | 1096000 | 2.2808 |
| 2.4341 | 4.65 | 1104000 | 2.2902 |
| 2.4341 | 4.68 | 1112000 | 2.2815 |
| 2.4315 | 4.72 | 1120000 | 2.2961 |
| 2.4315 | 4.75 | 1128000 | 2.2885 |
| 2.434 | 4.78 | 1136000 | 2.2917 |
| 2.434 | 4.82 | 1144000 | 2.2851 |
| 2.4324 | 4.85 | 1152000 | 2.2837 |
| 2.4324 | 4.88 | 1160000 | 2.2883 |
| 2.4297 | 4.92 | 1168000 | 2.2824 |
| 2.4297 | 4.95 | 1176000 | 2.2832 |
| 2.436 | 4.99 | 1184000 | 2.2865 |
| 2.436 | 5.02 | 1192000 | 2.2816 |
| 2.4329 | 5.05 | 1200000 | 2.2862 |
| 2.4329 | 5.09 | 1208000 | 2.2847 |
| 2.4276 | 5.12 | 1216000 | 2.2951 |
| 2.4276 | 5.15 | 1224000 | 2.2980 |
| 2.4362 | 5.19 | 1232000 | 2.2889 |
| 2.4362 | 5.22 | 1240000 | 2.2914 |
| 2.4309 | 5.25 | 1248000 | 2.2915 |
| 2.4309 | 5.29 | 1256000 | 2.2822 |
| 2.4414 | 5.32 | 1264000 | 2.2871 |
| 2.4414 | 5.36 | 1272000 | 2.2890 |
| 2.4241 | 5.39 | 1280000 | 2.2844 |
| 2.4241 | 5.42 | 1288000 | 2.2812 |
| 2.4251 | 5.46 | 1296000 | 2.2874 |
| 2.4251 | 5.49 | 1304000 | 2.2846 |
| 2.4318 | 5.52 | 1312000 | 2.2831 |
| 2.4318 | 5.56 | 1320000 | 2.2895 |
| 2.4247 | 5.59 | 1328000 | 2.2796 |
| 2.4247 | 5.63 | 1336000 | 2.2834 |
| 2.4305 | 5.66 | 1344000 | 2.2811 |
| 2.4305 | 5.69 | 1352000 | 2.2922 |
| 2.4336 | 5.73 | 1360000 | 2.2830 |
| 2.4336 | 5.76 | 1368000 | 2.2904 |
| 2.428 | 5.79 | 1376000 | 2.2843 |
| 2.428 | 5.83 | 1384000 | 2.2804 |
| 2.4254 | 5.86 | 1392000 | 2.2852 |
| 2.4254 | 5.89 | 1400000 | 2.2858 |
| 2.4287 | 5.93 | 1408000 | 2.2922 |
| 2.4287 | 5.96 | 1416000 | 2.2847 |
| 2.4291 | 6.0 | 1424000 | 2.2856 |
| 2.4291 | 6.03 | 1432000 | 2.2876 |
| 2.4289 | 6.06 | 1440000 | 2.2822 |
| 2.4289 | 6.1 | 1448000 | 2.2787 |
| 2.4272 | 6.13 | 1456000 | 2.2811 |
| 2.4272 | 6.16 | 1464000 | 2.2853 |
| 2.4267 | 6.2 | 1472000 | 2.2818 |
| 2.4267 | 6.23 | 1480000 | 2.2765 |
| 2.4237 | 6.27 | 1488000 | 2.2791 |
| 2.4237 | 6.3 | 1496000 | 2.2768 |
| 2.4277 | 6.33 | 1504000 | 2.2866 |
| 2.4277 | 6.37 | 1512000 | 2.2821 |
| 2.4316 | 6.4 | 1520000 | 2.2856 |
| 2.4316 | 6.43 | 1528000 | 2.2820 |
| 2.4222 | 6.47 | 1536000 | 2.2891 |
| 2.4222 | 6.5 | 1544000 | 2.2803 |
| 2.426 | 6.53 | 1552000 | 2.2797 |
| 2.426 | 6.57 | 1560000 | 2.2844 |
| 2.422 | 6.6 | 1568000 | 2.2872 |
| 2.422 | 6.64 | 1576000 | 2.2904 |
| 2.4323 | 6.67 | 1584000 | 2.2797 |
| 2.4323 | 6.7 | 1592000 | 2.2757 |
| 2.4315 | 6.74 | 1600000 | 2.2874 |
| 2.4315 | 6.77 | 1608000 | 2.2763 |
| 2.421 | 6.8 | 1616000 | 2.2857 |
| 2.421 | 6.84 | 1624000 | 2.2804 |
| 2.4299 | 6.87 | 1632000 | 2.2825 |
| 2.4299 | 6.91 | 1640000 | 2.2819 |
| 2.4289 | 6.94 | 1648000 | 2.2824 |
| 2.4289 | 6.97 | 1656000 | 2.2821 |
| 2.4257 | 7.01 | 1664000 | 2.2802 |
| 2.4257 | 7.04 | 1672000 | 2.2760 |
| 2.4227 | 7.07 | 1680000 | 2.2810 |
| 2.4227 | 7.11 | 1688000 | 2.2777 |
| 2.4287 | 7.14 | 1696000 | 2.2772 |
| 2.4287 | 7.17 | 1704000 | 2.2786 |
| 2.4227 | 7.21 | 1712000 | 2.2859 |
| 2.4227 | 7.24 | 1720000 | 2.2862 |
| 2.4262 | 7.28 | 1728000 | 2.2789 |
| 2.4262 | 7.31 | 1736000 | 2.2848 |
| 2.4263 | 7.34 | 1744000 | 2.2754 |
| 2.4263 | 7.38 | 1752000 | 2.2778 |
| 2.4246 | 7.41 | 1760000 | 2.2735 |
| 2.4246 | 7.44 | 1768000 | 2.2827 |
| 2.4147 | 7.48 | 1776000 | 2.2850 |
| 2.4147 | 7.51 | 1784000 | 2.2821 |
| 2.4288 | 7.55 | 1792000 | 2.2803 |
| 2.4288 | 7.58 | 1800000 | 2.2760 |
| 2.4231 | 7.61 | 1808000 | 2.2749 |
| 2.4231 | 7.65 | 1816000 | 2.2749 |
| 2.4243 | 7.68 | 1824000 | 2.2743 |
| 2.4243 | 7.71 | 1832000 | 2.2792 |
| 2.4215 | 7.75 | 1840000 | 2.2752 |
| 2.4215 | 7.78 | 1848000 | 2.2770 |
| 2.4213 | 7.81 | 1856000 | 2.2802 |
| 2.4213 | 7.85 | 1864000 | 2.2796 |
| 2.4236 | 7.88 | 1872000 | 2.2883 |
| 2.4236 | 7.92 | 1880000 | 2.2792 |
| 2.4237 | 7.95 | 1888000 | 2.2726 |
| 2.4237 | 7.98 | 1896000 | 2.2816 |
| 2.4183 | 8.02 | 1904000 | 2.2790 |
| 2.4183 | 8.05 | 1912000 | 2.2815 |
| 2.4215 | 8.08 | 1920000 | 2.2774 |
| 2.4215 | 8.12 | 1928000 | 2.2700 |
| 2.4258 | 8.15 | 1936000 | 2.2763 |
| 2.4258 | 8.19 | 1944000 | 2.2786 |
| 2.4209 | 8.22 | 1952000 | 2.2763 |
| 2.4209 | 8.25 | 1960000 | 2.2789 |
| 2.4217 | 8.29 | 1968000 | 2.2784 |
| 2.4217 | 8.32 | 1976000 | 2.2773 |
| 2.4279 | 8.35 | 1984000 | 2.2861 |
| 2.4279 | 8.39 | 1992000 | 2.2728 |
| 2.4268 | 8.42 | 2000000 | 2.2762 |
| 2.4268 | 8.45 | 2008000 | 2.2789 |
| 2.4177 | 8.49 | 2016000 | 2.2822 |
| 2.4177 | 8.52 | 2024000 | 2.2759 |
| 2.4166 | 8.56 | 2032000 | 2.2792 |
| 2.4166 | 8.59 | 2040000 | 2.2721 |
| 2.4223 | 8.62 | 2048000 | 2.2768 |
| 2.4223 | 8.66 | 2056000 | 2.2726 |
| 2.4139 | 8.69 | 2064000 | 2.2825 |
| 2.4139 | 8.72 | 2072000 | 2.2739 |
| 2.4236 | 8.76 | 2080000 | 2.2834 |
| 2.4236 | 8.79 | 2088000 | 2.2750 |
| 2.4235 | 8.83 | 2096000 | 2.2752 |
| 2.4235 | 8.86 | 2104000 | 2.2803 |
| 2.4193 | 8.89 | 2112000 | 2.2763 |
| 2.4193 | 8.93 | 2120000 | 2.2755 |
| 2.4179 | 8.96 | 2128000 | 2.2794 |
| 2.4179 | 8.99 | 2136000 | 2.2711 |
| 2.4181 | 9.03 | 2144000 | 2.2792 |
| 2.4181 | 9.06 | 2152000 | 2.2752 |
| 2.4173 | 9.09 | 2160000 | 2.2775 |
| 2.4173 | 9.13 | 2168000 | 2.2752 |
| 2.4242 | 9.16 | 2176000 | 2.2729 |
| 2.4242 | 9.2 | 2184000 | 2.2793 |
| 2.4166 | 9.23 | 2192000 | 2.2719 |
| 2.4166 | 9.26 | 2200000 | 2.2820 |
| 2.4181 | 9.3 | 2208000 | 2.2716 |
| 2.4181 | 9.33 | 2216000 | 2.2855 |
| 2.4245 | 9.36 | 2224000 | 2.2805 |
| 2.4245 | 9.4 | 2232000 | 2.2721 |
| 2.4204 | 9.43 | 2240000 | 2.2707 |
| 2.4204 | 9.47 | 2248000 | 2.2767 |
| 2.4255 | 9.5 | 2256000 | 2.2710 |
| 2.4255 | 9.53 | 2264000 | 2.2814 |
| 2.4254 | 9.57 | 2272000 | 2.2746 |
| 2.4254 | 9.6 | 2280000 | 2.2766 |
| 2.4232 | 9.63 | 2288000 | 2.2725 |
| 2.4232 | 9.67 | 2296000 | 2.2765 |
| 2.4189 | 9.7 | 2304000 | 2.2756 |
| 2.4189 | 9.73 | 2312000 | 2.2768 |
| 2.4105 | 9.77 | 2320000 | 2.2804 |
| 2.4105 | 9.8 | 2328000 | 2.2873 |
| 2.415 | 9.84 | 2336000 | 2.2783 |
| 2.415 | 9.87 | 2344000 | 2.2737 |
| 2.4174 | 9.9 | 2352000 | 2.2786 |
| 2.4174 | 9.94 | 2360000 | 2.2730 |
| 2.4199 | 9.97 | 2368000 | 2.2794 |
| 2.4199 | 10.0 | 2376000 | 2.2848 |
| 2.4224 | 10.04 | 2384000 | 2.2811 |
| 2.4224 | 10.07 | 2392000 | 2.2818 |
| 2.4226 | 10.11 | 2400000 | 2.2798 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
pokjay/poca-SoccerTwos-15M-Steps | pokjay | 2024-02-29T07:13:57Z | 0 | 0 | ml-agents | [
"ml-agents",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-02-29T07:13:54Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: pokjay/poca-SoccerTwos-15M-Steps
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
22x99/w2v2-ru-pre | 22x99 | 2024-02-29T07:13:25Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-29T06:23:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lamini/lamini_docs_finetuned | lamini | 2024-02-29T07:13:15Z | 3,206 | 9 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-24T01:07:55Z | ---
license: apache-2.0
---
This model was fine-tuned from EleutherAI/pythia-70m. The dataset consisted of Lamini documentation in question/answer form, covering how to use the platform. This is a sample task.
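No usage example is given; assuming the checkpoint loads like any causal LM on the Hub, a minimal sketch (with an illustrative prompt) is:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lamini/lamini_docs_finetuned")
model = AutoModelForCausalLM.from_pretrained("lamini/lamini_docs_finetuned")

prompt = "How do I add data to Lamini?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|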
uni-zhuan/q-FrozenLake-v1-4x4-noSlippery | uni-zhuan | 2024-02-29T07:10:24Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-29T07:10:22Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL course notebook;
# it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="uni-zhuan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
CatBarks/t5_esSEC4_2_tokenizer | CatBarks | 2024-02-29T07:05:47Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-29T07:05:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ultimecia/xlm-roberta-base-finetuned-panx-en | ultimecia | 2024-02-29T07:01:18Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-29T06:56:44Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5819
- F1: 0.5333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 13 | 1.0130 | 0.2401 |
| No log | 2.0 | 26 | 0.6741 | 0.4890 |
| No log | 3.0 | 39 | 0.5819 | 0.5333 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
hemanthkandimalla/HemanthLLM_tiny1.1b | hemanthkandimalla | 2024-02-29T06:58:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-29T05:01:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

model_name = "google/gemma-2b"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the base model, then attach the fine-tuned adapter from this repo.
inference_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    # use_flash_attention_2=True,
)
inference_model.resize_token_embeddings(len(tokenizer))
inference_model = PeftModel.from_pretrained(inference_model, "hemanthkandimalla/HemanthLLM_tiny1.1b")
inference_model.eval()

text = "attacks names"
tokenizer_input = tokenizer(text=text, return_tensors="pt")
output_tokens = inference_model.generate(
    **tokenizer_input,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
    top_k=50,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
)
text = tokenizer.decode(output_tokens[0], skip_special_tokens=False)
print(text)
```
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Noel-lawrence/ppo-SnowballTarget | Noel-lawrence | 2024-02-29T06:58:37Z | 23 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-02-27T12:43:39Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Noel-lawrence/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ultimecia/xlm-roberta-base-finetuned-panx-it | ultimecia | 2024-02-29T06:56:37Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-29T06:51:47Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3623
- F1: 0.7313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 18 | 0.7339 | 0.4485 |
| No log | 2.0 | 36 | 0.4065 | 0.6765 |
| No log | 3.0 | 54 | 0.3623 | 0.7313 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
jjlee6496/LDCC-SOLAR-10.7B-finetuned | jjlee6496 | 2024-02-29T06:49:27Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-29T05:30:42Z | ---
license: cc-by-nc-4.0
---
## Base Model
- [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) |
shivanikerai/Llama-2-7b-chat-hf-adapter-sku-title-ner-generation-reversed-v2.1 | shivanikerai | 2024-02-29T06:46:29Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-02-29T06:45:46Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0 |
pockypocky/distilbert-base-uncased-finetuned-emotion | pockypocky | 2024-02-29T06:29:34Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-29T06:09:19Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9268755922056178
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2132
- Accuracy: 0.927
- F1: 0.9269
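A minimal inference sketch with the `text-classification` pipeline (the printed label is illustrative; the exact label names come from this checkpoint's config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="pockypocky/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy you came to visit!"))
# e.g. [{'label': 'joy', 'score': 0.98}]  (illustrative output)
```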
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8076 | 1.0 | 250 | 0.3047 | 0.9155 | 0.9150 |
| 0.2454 | 2.0 | 500 | 0.2132 | 0.927 | 0.9269 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
lole25/phi-2-gpo-ultrachat-lora-0.1 | lole25 | 2024-02-29T06:20:54Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-02-29T05:56:32Z | ---
license: mit
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model: microsoft/phi-2
model-index:
- name: phi-2-gpo-ultrachat-lora-0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-gpo-ultrachat-lora-0.1
This model is a fine-tuned version of [lole25/phi-2-sft-ultrachat-lora](https://huggingface.co/lole25/phi-2-sft-ultrachat-lora) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0100
- Rewards/chosen: -0.0001
- Rewards/rejected: -0.0003
- Rewards/accuracies: 0.2620
- Rewards/margins: 0.0002
- Logps/rejected: -94.2224
- Logps/chosen: -91.7363
- Logits/rejected: 0.8196
- Logits/chosen: 0.8018
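This repository holds a LoRA adapter rather than full model weights. A minimal loading sketch with PEFT, assuming the adapter applies directly on top of `microsoft/phi-2` (its `base_model`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumption: the LoRA adapter is applied directly on top of microsoft/phi-2.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "lole25/phi-2-gpo-ultrachat-lora-0.1")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

inputs = tokenizer("What is direct preference optimization?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```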
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.14.6
- Tokenizers 0.15.2 |
nerone/ppo-LunarLander-v2 | nerone | 2024-02-29T06:13:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-29T06:12:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 235.30 +/- 20.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual huggingface_sb3 naming; check the repository files if it differs):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed from the standard naming convention.
checkpoint = load_from_hub("nerone/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
rlawjdghek/StableVITON | rlawjdghek | 2024-02-29T06:06:45Z | 0 | 10 | null | [
"arxiv:2312.01725",
"region:us"
] | null | 2024-02-29T06:01:49Z | # [CVPR2024] StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On
This repository is the official implementation of [StableVITON](https://arxiv.org/abs/2312.01725)
> **StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On**<br>
> [Jeongho Kim](https://scholar.google.co.kr/citations?user=ucoiLHQAAAAJ&hl=ko), [Gyojung Gu](https://www.linkedin.com/in/gyojung-gu-29033118b/), [Minho Park](https://pmh9960.github.io/), [Sunghyun Park](https://psh01087.github.io/), [Jaegul Choo](https://sites.google.com/site/jaegulchoo/)
[[Arxiv Paper](https://arxiv.org/abs/2312.01725)]
[[Website Page](https://rlawjdghek.github.io/StableVITON/)]

## TODO List
- [x] ~~Inference code~~
- [x] ~~Release model weights~~
- [x] ~~Training code~~
## Environments
```bash
git clone https://github.com/rlawjdghek/StableVITON
cd StableVITON
conda create --name StableVITON python=3.10 -y
conda activate StableVITON
# install packages
pip install torch==2.0.0+cu117 torchvision==0.15.1+cu117 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu117
pip install pytorch-lightning==1.5.0
pip install einops
pip install opencv-python==4.7.0.72
pip install matplotlib
pip install omegaconf
pip install albumentations
pip install transformers==4.33.2
pip install xformers==0.0.19
pip install triton==2.0.0
pip install open-clip-torch==2.19.0
pip install diffusers==0.20.2
pip install scipy==1.10.1
conda install -c anaconda ipython -y
```
## Weights and Data
Our [checkpoint](https://kaistackr-my.sharepoint.com/:f:/g/personal/rlawjdghek_kaist_ac_kr/EjzAZHJu9MlEoKIxG4tqPr0BM_Ry20NHyNw5Sic2vItxiA?e=5mGa1c) on VITONHD has been released! <br>
You can download the VITON-HD dataset from [here](https://github.com/shadow2496/VITON-HD).<br>
For both training and inference, the following dataset structure is required:
```
train
|-- image
|-- image-densepose
|-- agnostic
|-- agnostic-mask
|-- cloth
|-- cloth_mask
|-- gt_cloth_warped_mask (for ATV loss)
test
|-- image
|-- image-densepose
|-- agnostic
|-- agnostic-mask
|-- cloth
|-- cloth_mask
```
## Preprocessing
The VITON-HD dataset serves as a benchmark and provides an agnostic mask. However, you can attempt virtual try-on on **arbitrary images** using segmentation tools like [SAM](https://github.com/facebookresearch/segment-anything). Please note that for densepose, you should use the same densepose model as used in VITON-HD.
## Inference
```bash
#### paired
CUDA_VISIBLE_DEVICES=4 python inference.py \
--config_path ./configs/VITONHD.yaml \
--batch_size 4 \
--model_load_path <model weight path> \
--save_dir <save directory>
#### unpaired
CUDA_VISIBLE_DEVICES=4 python inference.py \
--config_path ./configs/VITONHD.yaml \
--batch_size 4 \
--model_load_path <model weight path> \
--unpair \
--save_dir <save directory>
#### paired repaint
CUDA_VISIBLE_DEVICES=4 python inference.py \
--config_path ./configs/VITONHD.yaml \
--batch_size 4 \
--model_load_path <model weight path> \
--repaint \
--save_dir <save directory>
#### unpaired repaint
CUDA_VISIBLE_DEVICES=4 python inference.py \
--config_path ./configs/VITONHD.yaml \
--batch_size 4 \
--model_load_path <model weight path> \
--unpair \
--repaint \
--save_dir <save directory>
```
You can also preserve the unmasked region by using the '--repaint' option.
## Training
For VITON training, we increased the first block of the U-Net from 9 to 13 channels (by adding zero convolutions), based on the Paint-by-Example (PBE) model. Therefore, you should first download the modified checkpoint (named 'VITONHD_PBE_pose.ckpt') from the [Link](https://kaistackr-my.sharepoint.com/:f:/g/personal/rlawjdghek_kaist_ac_kr/EjzAZHJu9MlEoKIxG4tqPr0BM_Ry20NHyNw5Sic2vItxiA?e=5mGa1c) and place it in the './ckpts/' folder.
Additionally, for more refined person texture, we utilized a VAE fine-tuned on the VITONHD dataset. You should also download the checkpoint (named 'VITONHD_VAE_finetuning.ckpt') from the [Link](https://kaistackr-my.sharepoint.com/:f:/g/personal/rlawjdghek_kaist_ac_kr/EjzAZHJu9MlEoKIxG4tqPr0BM_Ry20NHyNw5Sic2vItxiA?e=5mGa1c) and place it in the './ckpts/' folder.
```bash
### Base model training
CUDA_VISIBLE_DEVICES=3,4 python train.py \
--config_name VITONHD \
--transform_size shiftscale3 hflip \
--transform_color hsv bright_contrast \
--save_name Base_test
### ATV loss finetuning
CUDA_VISIBLE_DEVICES=5,6 python train.py \
--config_name VITONHD \
--transform_size shiftscale3 hflip \
--transform_color hsv bright_contrast \
--use_atv_loss \
--resume_path <first stage model path> \
--save_name ATVloss_test
```
## Citation
If you find our work useful for your research, please cite us:
```
@article{kim2023stableviton,
title={StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On},
author={Kim, Jeongho and Gu, Gyojung and Park, Minho and Park, Sunghyun and Choo, Jaegul},
booktitle={arXiv preprint arxiv:2312.01725},
year={2023}
}
```
**Acknowledgements** Sunghyun Park is the corresponding author.
## License
Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode). |
bartowski/opus-v1.2-7b-exl2 | bartowski | 2024-02-29T06:02:36Z | 3 | 0 | null | [
"unsloth",
"axolotl",
"text-generation",
"en",
"region:us"
] | text-generation | 2024-02-29T05:48:16Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- unsloth
- axolotl
quantized_by: bartowski
---
## Exllama v2 Quantizations of opus-v1.2-7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.14">turboderp's ExLlamaV2 v0.0.14</a> for quantization.
<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)</b>
Each branch contains a quantization at a different bits per weight, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/dreamgen/opus-v1.2-7b
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/opus-v1.2-7b-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/opus-v1.2-7b-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/opus-v1.2-7b-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/opus-v1.2-7b-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/opus-v1.2-7b-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/opus-v1.2-7b-exl2 opus-v1.2-7b-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `opus-v1.2-7b-exl2`:
```shell
mkdir opus-v1.2-7b-exl2
huggingface-cli download bartowski/opus-v1.2-7b-exl2 --local-dir opus-v1.2-7b-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir opus-v1.2-7b-exl2-6_5
huggingface-cli download bartowski/opus-v1.2-7b-exl2 --revision 6_5 --local-dir opus-v1.2-7b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir opus-v1.2-7b-exl2-6.5
huggingface-cli download bartowski/opus-v1.2-7b-exl2 --revision 6_5 --local-dir opus-v1.2-7b-exl2-6.5 --local-dir-use-symlinks False
```
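If you prefer the Python API over the CLI, a roughly equivalent sketch with `huggingface_hub` (branch shown as an example):

```python
from huggingface_hub import snapshot_download

# Download the 6.5 bpw branch (the revision argument selects the branch).
snapshot_download(
    repo_id="bartowski/opus-v1.2-7b-exl2",
    revision="6_5",
    local_dir="opus-v1.2-7b-exl2-6_5",
)
```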
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
VietDo26/distilbert-finetuned-squadv2 | VietDo26 | 2024-02-29T06:00:59Z | 113 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-29T04:53:07Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-finetuned-squadv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-squadv2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
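A minimal usage sketch with the `question-answering` pipeline (the example context is illustrative; given the SQuAD v2-style name, unanswerable questions may return low-score answers):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="VietDo26/distilbert-finetuned-squadv2")
result = qa(
    question="What task was the model fine-tuned for?",
    context="This DistilBERT model was fine-tuned for extractive question answering.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': '...'}
```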
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
Zacksabrach/Pila | Zacksabrach | 2024-02-29T05:58:41Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-02-29T05:58:41Z | ---
license: openrail
license_name: openrail
license_link: >-
https://drive.google.com/drive/folders/19TqheZva44yIWWevYBHGkaENhGH8206f?usp=sharing
---
|
FINNUMBER/Yi-Ko-6B-Finch-Full-Epoch3 | FINNUMBER | 2024-02-29T05:54:57Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-26T15:03:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
souvik0306/whisper_pre_prod_test | souvik0306 | 2024-02-29T05:54:50Z | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2024-02-29T05:54:48Z | ---
library_name: peft
base_model: OpenAI/whisper-large-v3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
lole25/phi-2-gpo-ultrachat-lora | lole25 | 2024-02-29T05:53:01Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-02-29T04:58:35Z | ---
license: mit
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model: microsoft/phi-2
model-index:
- name: phi-2-gpo-ultrachat-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-gpo-ultrachat-lora
This model is a fine-tuned version of [lole25/phi-2-sft-ultrachat-lora](https://huggingface.co/lole25/phi-2-sft-ultrachat-lora) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Rewards/chosen: -0.0000
- Rewards/rejected: -0.0004
- Rewards/accuracies: 0.2600
- Rewards/margins: 0.0004
- Logps/rejected: -94.2372
- Logps/chosen: -91.7275
- Logits/rejected: 0.8188
- Logits/chosen: 0.8013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.14.6
- Tokenizers 0.15.2 |
universalml/wf | universalml | 2024-02-29T05:50:46Z | 166 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-29T05:48:08Z | ---
tags:
- generated_from_trainer
model-index:
- name: wf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wf
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.34.1
- Pytorch 1.13.0+cpu
- Datasets 2.14.5
- Tokenizers 0.14.1
|
abdullaharean/regipa_bangla | abdullaharean | 2024-02-29T05:48:54Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"bn",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-27T12:22:15Z | ---
license: mit
language:
- bn
---
# Team_Khita_Kortesi_Model: Bengali Text to IPA Transcription based on fine-tuned ByT5-small
## Solution Summary:
Our team's solution focuses on developing a robust model for transcribing Bengali text into International Phonetic Alphabet (IPA), contributing to computational linguistics and NLP research in Bengali. Leveraging a linguist-validated dataset encompassing diverse domains of Bengali text, our model aims to accurately capture the phonetic nuances and regional dialects present in Bengali language.
## Approach:
### Data Preprocessing:
We preprocess the Bengali text data to handle linguistic variations, tokenization, and normalization.
### Model Architecture:
Our model is a transformer-based sequence-to-sequence architecture (a fine-tuned ByT5-small) that captures the sequential and contextual information inherent in language.
### Training:
The model is trained on the linguist-validated dataset, optimizing for accuracy, robustness, and generalization across various dialects and linguistic contexts.
### Validation:
We validate the model's performance using rigorous evaluation metrics, ensuring its effectiveness in accurately transcribing Bengali text into IPA.
### Deployment:
Upon successful validation, the model is deployed as an open-source tool, extending the capabilities of generalized Bengali Text-to-Speech systems and facilitating further research in Bengali computational linguistics.
## Key Features:
- **Phonetic Accuracy:** Our model prioritizes phonetic accuracy, ensuring faithful transcription of Bengali text into IPA symbols.
- **Regional Dialects:** The model is designed to accommodate the diverse regional dialects and linguistic variations present in Bengali language, capturing the nuances specific to each region.
- **Scalability:** With a scalable architecture, our solution can handle large volumes of text data efficiently, making it suitable for real-world applications and research purposes.
- **Accessibility:** By open-sourcing our model, we aim to make IPA transcription accessible to a wider audience, fostering collaboration and innovation in Bengali computational linguistics.
## Impact:
- **Advancing Research:** Our solution contributes to advancing research in Bengali computational linguistics and NLP, providing researchers with a valuable tool for studying language dynamics and linguistic diversity.
- **Community Engagement:** By open-sourcing our model and making it accessible to all, we empower the Bengali language community to engage in linguistic research and exploration.
- **Technological Innovation:** Our model extends the capabilities of existing Bengali Text-to-Speech systems, paving the way for innovative applications in speech synthesis, language learning, and accessibility.
## Example Inference:
A minimal generation sketch (the Bengali sentence is only an example, and the byte-level encoding/decoding assumes standard ByT5 conventions):

```python
from transformers import T5ForConditionalGeneration
import torch

model = T5ForConditionalGeneration.from_pretrained('abdullaharean/regipa_bangla')

# ByT5-style byte-level input: UTF-8 bytes shifted by 3 past the special tokens.
text = "আমার সোনার বাংলা"  # example Bengali input; replace with your own sentence
input_ids = torch.tensor([list(text.encode("utf-8"))]) + 3

output_ids = model.generate(input_ids, max_new_tokens=128)
ipa_bytes = bytes([t - 3 for t in output_ids[0].tolist() if 3 <= t < 259])  # drop special tokens
print(ipa_bytes.decode("utf-8", errors="ignore"))
```
|
youndukn/MythoMax-L2-13b | youndukn | 2024-02-29T05:47:08Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T02:17:22Z | ---
license: other
language:
- en
---
An improved, potentially even perfected variant of MythoMix, my [MythoLogic-L2](https://huggingface.co/Gryphe/MythoLogic-L2-13b) and [Huginn](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16) merge using a highly experimental tensor type merge technique. The main difference with MythoMix is that I allowed more of Huginn to intermingle with the single tensors located at the front and end of a model, resulting in increased coherency across the entire structure.
The script and the accompanying templates I used to produce both can [be found here](https://github.com/Gryphe/BlockMerge_Gradient/tree/main/YAML).
This model is proficient at both roleplaying and storywriting due to its unique nature.
Quantized models are available from TheBloke: [GGUF](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) - [AWQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-AWQ) (You're the best!)
## Model details
The idea behind this merge is that each layer is composed of several tensors, which are in turn responsible for specific functions. Using MythoLogic-L2's robust understanding as its input and Huginn's extensive writing capability as its output seems to have resulted in a model that excels at both, confirming my theory. (More details to be released at a later time)
This type of merge is incapable of being illustrated, as each of its 363 tensors had a unique ratio applied to it. As with my prior merges, gradients were part of these ratios to further finetune its behaviour.
## Prompt Format
This model primarily uses Alpaca formatting, so for optimal model performance, use:
```
<System prompt/Character Card>
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
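A minimal generation sketch using the Alpaca-style format above (the prompt and sampling settings are illustrative; `device_map="auto"` assumes accelerate is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "youndukn/MythoMax-L2-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "You are a creative storytelling assistant.\n\n"
    "### Instruction:\n"
    "Write a short scene set in a rainy harbor town.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```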
|
CatBarks/t5_esSEC4_1 | CatBarks | 2024-02-29T05:43:17Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-29T05:41:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morissu/camelot | morissu | 2024-02-29T05:41:42Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-15T14:57:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hcapp/hcapp | hcapp | 2024-02-29T05:39:36Z | 4 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-29T05:35:48Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### hcapp Dreambooth model trained by hcapp with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
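A minimal diffusers sketch, assuming the repo loads as a standard Stable Diffusion pipeline (the prompt token `hcapp` is only a guess at the DreamBooth instance word):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("hcapp/hcapp", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "hcapp" below is a guess at the instance token; adjust to the actual trained concept word.
image = pipe("a photo of hcapp", num_inference_steps=30).images[0]
image.save("hcapp_sample.png")
```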
Sample pictures of this concept:
|
Abdulrhman3133/AceGPT_3 | Abdulrhman3133 | 2024-02-29T05:38:02Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-29T05:30:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MIMASA24/my-panda | MIMASA24 | 2024-02-29T05:31:09Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-29T05:26:51Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-PANDA Dreambooth model trained by MIMASA24 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 220349106096
Sample pictures of this concept:

|
PavanDeepak/Topic_Classification | PavanDeepak | 2024-02-29T05:29:41Z | 111 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-28T12:51:23Z | ---
license: mit
---
## BERT-based Text Classification Model
This model is a fine-tuned version of the bert-base-uncased model, specifically adapted for text classification across a diverse set of categories. The model has been trained on a dataset collected from multiple sources, including the News Category Dataset on Kaggle and various other websites.
The model classifies text into one or more of the following 12 categories:
* Food
* Videogames & Shows
* Kids and fun
* Homestyle
* Travel
* Health
* Charity
* Electronics & Technology
* Sports
* Cultural & Music
* Education
* Convenience
The model has demonstrated robust performance with an accuracy of 0.721459, F1 score of 0.659451, precision of 0.707620, and recall of 0.635155.
## Model Architecture
The model leverages the BertForSequenceClassification architecture. It has been fine-tuned on the aforementioned dataset, with the following key configuration parameters:
* Hidden size: 768
* Number of attention heads: 12
* Number of hidden layers: 12
* Max position embeddings: 512
* Type vocab size: 2
* Vocab size: 30522
* The model uses the GELU activation function in its hidden layers and applies dropout with a probability of 0.1 to the attention probabilities to prevent overfitting.
## Example
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import numpy as np
from scipy.special import expit
MODEL = "PavanDeepak/Topic_Classification"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
class_mapping = model.config.id2label
text = "I love chicken manchuria"
tokens = tokenizer(text, return_tensors="pt")
output = model(**tokens)
scores = output.logits[0].detach().numpy()  # logits have shape (1, num_labels); take the first (only) row
scores = expit(scores)
predictions = (scores >= 0.5) * 1
for i in range(len(predictions)):
if predictions[i]:
print(class_mapping[i])
```
## Output:
* Food
* Videogames & Shows
* Homestyle
* Travel
* Health |
Lichang-Chen/ODIN_L1_O1 | Lichang-Chen | 2024-02-29T05:14:39Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"RM",
"dataset:OpenAssistant/oasst1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-05T13:33:36Z | ---
license: mit
datasets:
- OpenAssistant/oasst1
tags:
- RM
---
This model was trained with L=1 (length loss weight) and O=1 (orthogonal loss weight). |
Denilah/CoMA-7B | Denilah | 2024-02-29T05:10:17Z | 13 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-24T01:55:41Z | # CoLLaMA: A Multi-task Instruction Dataset and Large Language Model for Code
<p align="center" width="100%">
<img src="https://i.postimg.cc/J7Ds1tw6/CoLLaMA.jpg" width="40%" height="20%">
</p>
## Model details
Trained in June 2023.
CoMA comprises a fine-tuned coding LLM and a multi-task instruction tuning dataset featuring 77K data samples encompassing 8 diverse tasks.
Please refer to [the README of the GitHub repository](https://github.com/Denilah/CoLLaMA) for detailed information.
### Training dataset
The model was trained on a 77k-row instruction-following dataset, which is released in the [GitHub repository](https://github.com/Denilah/CoLLaMA).
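A minimal loading sketch with 🤗 Transformers, assuming standard LLaMA-style causal-LM weights (prompt formatting should follow the instruction templates in the GitHub repository; `device_map="auto"` assumes accelerate is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Denilah/CoMA-7B")
model = AutoModelForCausalLM.from_pretrained("Denilah/CoMA-7B", device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```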
### Citation
<div>
<div align="center">
<a target='_blank'>Gang Hu<sup>1</sup></span> 
<a target='_blank'>Xi Wen<sup>1</sup></span> 
<a target='_blank'>Xin Liu<sup>1</sup></a> 
<a href='https://jimin.chancefocus.com/' target='_blank'>Jimin Huang<sup>2</sup></a> 
<a target='_blank'>Qianqian Xie*<sup>3</sup></a> 
</div>
<div>
<div align="center">
<sup>1</sup>School of Information Science & Engineering, Yunnan University 
<sup>2</sup>ChanceFocus AMC 
<sup>3</sup>School of Computer Science, Wuhan University 
  </div>
  </div>
</div>
```
@misc{Hu2023CoLLaMA,
title={CoLLaMA: A Multi-task Instruction Dataset and Large Language Model for Code},
author={Gang Hu and Xi Wen and Xin Liu and Jimin Huang and Qianqian Xie},
year={2023},
}
``` |
JinghuiLuAstronaut/DocLLM_baichuan2_7b | JinghuiLuAstronaut | 2024-02-29T04:58:18Z | 134 | 5 | transformers | [
"transformers",
"safetensors",
"baichuan",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-29T03:41:01Z | See https://github.com/GeorgeLuImmortal/DocLLM_reimplementation/ |
aisuko/ft-facebook-bart-large-xsum-on-samsum | aisuko | 2024-02-29T04:54:13Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:text-summatization",
"base_model:facebook/bart-large-xsum",
"base_model:finetune:facebook/bart-large-xsum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-20T06:01:20Z | ---
license: mit
base_model: facebook/bart-large-xsum
tags:
- generated_from_trainer
datasets:
- text-summatization
model-index:
- name: ft-facebook-bart-large-xsum-on-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-facebook-bart-large-xsum-on-samsum
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the Samsum dataset.
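## Usage
A minimal usage sketch with the 🤗 `pipeline` API; the dialogue below is only an illustrative SAMSum-style example, not taken from the evaluation data:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="aisuko/ft-facebook-bart-large-xsum-on-samsum",
)

# Illustrative dialogue to summarize.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring them to you tomorrow :-)"
)
print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```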
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 0.22 | 100 | 1.5051 | 49.5382 | 24.4845 | 40.7456 | 45.3284 | 26.1722 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.1
|
LarryAIDraw/yorktownii-lora-v1-32dim-20ep-naivae | LarryAIDraw | 2024-02-29T04:45:22Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-26T15:36:08Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/18699?modelVersionId=22186 |
JC9027/finetuning-gpt2-finetunned | JC9027 | 2024-02-29T04:44:43Z | 116 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-26T05:45:10Z | ---
license: mit
base_model: openai-community/gpt2
tags:
- generated_from_trainer
model-index:
- name: finetuning-gpt2-finetunned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-gpt2-finetunned
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
jonquimbly/shap-e | jonquimbly | 2024-02-29T04:36:13Z | 4 | 1 | diffusers | [
"diffusers",
"text-to-image",
"shap-e",
"text-to-3d",
"arxiv:2305.02463",
"license:mit",
"diffusers:ShapEPipeline",
"region:us"
] | text-to-3d | 2024-02-29T04:36:13Z | ---
license: mit
tags:
- text-to-image
- shap-e
- diffusers
pipeline_tag: text-to-3d
---
# Shap-E
Shap-E introduces a diffusion process that can generate a 3D image from a text prompt. It was introduced in [Shap-E: Generating Conditional 3D Implicit Functions](https://arxiv.org/abs/2305.02463) by Heewoo Jun and Alex Nichol from OpenAI.
Original repository of Shap-E can be found here: https://github.com/openai/shap-e.
_The authors of Shap-E didn't author this model card. They provide a separate model card [here](https://github.com/openai/shap-e/blob/main/model-card.md)._
## Introduction
The abstract of the Shap-E paper:
*We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. We release model weights, inference code, and samples at [this https URL](https://github.com/openai/shap-e).*
## Released checkpoints
The authors released the following checkpoints:
* [openai/shap-e](https://hf.co/openai/shap-e): produces a 3D image from a text input prompt
* [openai/shap-e-img2img](https://hf.co/openai/shap-e-img2img): samples a 3D image from a synthetic 2D image
## Usage examples in 🧨 diffusers
First make sure you have installed all the dependencies:
```bash
pip install transformers accelerate -q
pip install git+https://github.com/huggingface/diffusers
```
Once the dependencies are installed, use the code below:
```python
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_gif
ckpt_id = "openai/shap-e"
pipe = ShapEPipeline.from_pretrained(ckpt_id).to("cuda")
guidance_scale = 15.0
prompt = "a shark"
images = pipe(
prompt,
guidance_scale=guidance_scale,
num_inference_steps=64,
frame_size=256,
).images
gif_path = export_to_gif(images, "shark_3d.gif")
```
## Results
<table>
<tbody>
<tr>
<td align="center">
<img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/shap-e/bird_3d.gif" alt="a bird">
</td>
<td align="center">
<img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/shap-e/shark_3d.gif" alt="a shark">
</td>
<td align="center">
<img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/shap-e/veg_3d.gif" alt="A bowl of vegetables">
</td>
</tr>
<tr>
<td align="center">A bird</td>
<td align="center">A shark</td>
<td align="center">A bowl of vegetables</td>
</tr>
</tbody>
</table>
## Training details
Refer to the [original paper](https://arxiv.org/abs/2305.02463).
## Known limitations and potential biases
Refer to the [original model card](https://github.com/openai/shap-e/blob/main/model-card.md).
## Citation
```bibtex
@misc{jun2023shape,
title={Shap-E: Generating Conditional 3D Implicit Functions},
author={Heewoo Jun and Alex Nichol},
year={2023},
eprint={2305.02463},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
suthanhcong/bert-base-japanese-v3-marc_ja | suthanhcong | 2024-02-29T04:33:47Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-29T04:33:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
atakanince/whisper-small-turkish-v1 | atakanince | 2024-02-29T04:27:04Z | 116 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"tr",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-27T21:32:31Z | ---
language:
- tr
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Tr V1 - Atakan Ince
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Tr V1 - Atakan Ince
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
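## Usage
A minimal transcription sketch with the 🤗 `pipeline` API; the audio path is a placeholder for a local Turkish recording:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="atakanince/whisper-small-turkish-v1",
)

# "sample_tr.mp3" is a placeholder path; any Turkish speech file works.
print(asr("sample_tr.mp3")["text"])
```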
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
kumatomo/TopK_GNN | kumatomo | 2024-02-29T04:12:49Z | 1 | 0 | pytorch_geometric | [
"pytorch_geometric",
"pretrain",
"graph-machine-learning",
"en",
"dataset:QM9",
"arxiv:1910.09700",
"license:mit",
"region:us"
] | null | 2024-02-29T03:13:32Z | ---
language: en
license: mit
library_name: pytorch_geometric
tags:
- graph-machine-learning
datasets: QM9
model_name: TopK
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dagbs/Gemma-Wukong-2b-GGUF | dagbs | 2024-02-29T03:59:36Z | 13 | 1 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-28T03:32:17Z | ---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
Note: This results in very weird responses. We are working on fixing the vocab but don't have an ETA.
# Gemma-Wukong-2b GGUF
Original Model: https://huggingface.co/RESMPDEV/Gemma-Wukong-2b
 |
AIMindaeng/hansoldeco-KGOAT | AIMindaeng | 2024-02-29T03:52:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-29T03:52:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ArthurZ/mamba-790m | ArthurZ | 2024-02-29T03:47:39Z | 373 | 0 | transformers | [
"transformers",
"safetensors",
"mamba",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-19T12:21:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yjchenbe/sd-class-butterflies-32 | yjchenbe | 2024-02-29T03:37:37Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-02-29T03:37:11Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('yjchenbe/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
lunarsylph/gemmacell_v6 | lunarsylph | 2024-02-29T03:20:30Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-29T03:08:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
danwils/Alasv2-Seallm-7B-v2 | danwils | 2024-02-29T03:17:10Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-29T03:08:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rasyosef/bert-tiny-domain-adapted-imdb | rasyosef | 2024-02-29T03:00:57Z | 118 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"en",
"dataset:imdb",
"base_model:prajjwal1/bert-tiny",
"base_model:finetune:prajjwal1/bert-tiny",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-29T02:32:37Z | ---
license: mit
base_model: prajjwal1/bert-tiny
tags:
- generated_from_trainer
model-index:
- name: bert-tiny-domain-adapted-imdb
results: []
datasets:
- imdb
language:
- en
metrics:
- perplexity
pipeline_tag: fill-mask
widget:
- text: This is a great [MASK].
example_title: Example
---
# bert-tiny-domain-adapted-imdb
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the [imdb](https://huggingface.co/datasets/imdb) dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6873
- Perplexity: 39.94
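## Usage
A minimal fill-mask sketch mirroring the widget example above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="rasyosef/bert-tiny-domain-adapted-imdb")

# Prints the top predictions for the masked token with their scores.
for pred in fill_mask("This is a great [MASK]."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```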
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1563 | 1.0 | 782 | 3.7462 |
| 4.0555 | 2.0 | 1564 | 3.6977 |
| 4.0285 | 3.0 | 2346 | 3.6963 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5 | cogbuji | 2024-02-29T02:56:46Z | 16 | 5 | mlx | [
"mlx",
"safetensors",
"mistral",
"medical",
"health",
"instruct",
"finetune",
"chatml",
"text-generation",
"conversational",
"en",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B",
"license:mit",
"region:us"
] | text-generation | 2024-02-20T01:35:24Z | ---
base_model: teknium/OpenHermes-2.5-Mistral-7B
license: mit
language:
- en
model_creator: Chime Ogbuji
library_name: mlx
model_name: Mr-Grammatology-clinical-problems-Mistral-7B-0.5
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
tags:
- mlx
- medical
- health
- mistral
- instruct
- finetune
- chatml
---
# Mr-Grammatology-clinical-problems-Mistral-7B-0.5

The name of the model is a homage to Fela Kuti's song __Mr Grammarticalogy-Lisationalsim Is The Boss__ released on the B-side of his 1976 LP [Excuse O](https://www.discogs.com/release/3149841-Fela-And-The-Africa-70-Excuse-O).
It is a 16/32 QLoRa all linear layers finetune of [teknium/OpenHermes-2.5-Mistral-7B](/teknium/OpenHermes-2.5-Mistral-7B) using [controlled natural language (CNL) phrases](https://github.com/chimezie/django-snomed-ct#controlled-natural-language)
generated from the September 23rd release of [SNOMED CT United States Edition](https://www.snomed.org/snomed-ct/Use-SNOMED-CT). The general idea is described in [Domain-Specific Biomedical Ontologies, RALM, and Generative Medical Expert Systems](https://chimezie.medium.com/biomedical-ontology-retrieval-augmented-language-models-using-django-snomed-ct-and-ogbujipt-dfa0d0b150d8).
It is an experimental model for non-production environments to test how generative AI systems can be trained for use in various medical informatics scenarios.
The original model was converted to MLX format, quantized, and then subjected to continued pretraining using all the active domain-expert text definitions available in SNOMED-CT at a constant learning rate of 1e-5 using
[mlx_lm's LoRa finetuning functionality](https://github.com/ml-explore/mlx-examples/blob/main/llms/mlx_lm/LORA.md) with 16 LoRa layers.
It was then trained on a dataset of 336,762 records of medical terminology **definition instructions** generated from SNOMED-CT using a fork of [django-snomed-ct](https://github.com/chimezie/django-snomed-ct). These definition instructions were generated from the **disorder**, **finding**, **morphological abnormality**, and **situation** hierarchies in SNOMED-CT. This training step was done using [mlx-tuning-fork](https://github.com/chimezie/mlx-tuning-fork) through 42,096 training iterations with a batch size of 8, using LoRa on all linear layers.
There were 51,082 records of more granular definition instructions, part of which were incorporated into the training dataset. However, 40% were kept aside for validation.
## Use with mlx
```bash
pip install mlx-lm
```
Download and convert.
```bash
$ python -m mlx_lm.convert --hf-path cogbuji/Mr-Grammatology-clinical-problems-Mistral-7B-0.5 \
--mlx-path /path/to/mlx/model
```
Generate from prompts in commandline (see [Generate Text with LLMs and MLX](https://github.com/ml-explore/mlx-examples/tree/main/llms) for more options )
```bash
$ python -m mlx_lm.generate --prompt "How is Cardiomyopathy characterized in form?" \
--temp .4 -m 300 --model /path/to/mlx/model --seed 4
```
```
Prompt: <|im_start|>user
How is Cardiomyopathy characterized in form?<|im_end|>
<|im_start|>assistant
Cardiomyopathy is characterized in form by a morphologically abnormal structure located in a myocardium structure
```
## Example of use of 1-shot description prompting
Using mlx-tuning-fork with OgbujiPts word looms to construct 1-shot example of description prompting.
```bash
$ python -m mlx_tuning_fork.training -nt 1200 -t .4 --loom-file=sct_prompt.toml -f chatml config.yaml
```
```
Prompt: <|im_start|>system
Give detailed responses. Use critical thinking and think step-by-step.
Don't make up answers if you don't know. Use SNOMED-CT definitions as much as possible, describing
characterizations in form of morphological abnormalities involved in diagnoses (their signs), their etiology (causes/basis),
their related findings.
## Question ##
What are the signs and etiology of Skin ulcer due to diabetes mellitus?
## Answer ##
SNOMED-CT defines Skin ulcer due to diabetes mellitus as characterized in form by an ulcer located in a skin structure.
Ulcer is a mophologic abnormality.
The cause or origin of a disorder is part of its etiology.
Skin ulcer due to diabetes mellitus is caused by diabetes mellitus.
So, the etiology of Skin ulcer due to diabetes mellitus is diabetes mellitus.
Infected ulcer of skin is an infectious process characterized in form by an ulcer located in a skin structure.
Since the signs of Skin ulcer due to diabetes mellitus is also an ulcer, Infected ulcer of skin is
an example of a disorder with the same signs as Skin ulcer due to diabetes mellitus.
<|im_end|>
<|im_start|>user
Take a deep breath and work on this problem step by step.
## Question ##
The patient has Irritable bowel syndrome.
Summarize the the biological processes and signs (characterizations in form)
involved in the patient's diagnoses. Give examples of other disorders with the same signs and etiology.
<|im_end|>
<|im_start|>assistant
Irritable bowel syndrome (SNOMED-CT definition: Irritable bowel syndrome) is a functional disorder of the gastrointestinal tract, characterized in form by abnormal bowel movements. It is a clinical diagnosis, and its signs and symptoms include abdominal pain, bloating, and altered bowel movements (diarrhea, constipation, or both).
Etiology: Irritable bowel syndrome is believed to be caused by factors including abnormal gut motility, gastrointestinal dysrhythmias, and psychological factors.
Signs: Examples of signs of the disorder include changes in stool consistency and abdominal pain.
Irritable bowel syndrome is a functional bowel disorder, which is a category of gastrointestinal disorders with similar signs and symptoms. Examples of disorders with the same signs and etiology include:
1. Functional dyspepsia: It is characterized by an upper abdominal pain or discomfort and has a similar etiology as irritable bowel syndrome. It is a functional disorder of the stomach and small intestine, and its signs include epigastric pain and discomfort.
2. Chronic idiopathic constipation: It is characterized by chronic constipation and has a similar etiology as irritable bowel syndrome. It is a functional disorder of the colon
==========
Prompt: 447.658 tokens-per-sec
```
|
CatBarks/t5_esSEC2_2_tokenizer | CatBarks | 2024-02-29T02:52:04Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-29T02:52:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CatBarks/t5_esSEC2_2 | CatBarks | 2024-02-29T02:52:02Z | 48 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-29T02:49:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fzzhang/mistral_gsm8k_s_prod_fullS | fzzhang | 2024-02-29T02:40:11Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-28T19:11:32Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral_gsm8k_s_prod_fullS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_gsm8k_s_prod_fullS
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code sketch of this configuration follows the list):
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
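For reference, a minimal sketch of how these values might map onto 🤗 `TrainingArguments` is shown below; only the listed values come from this card, and everything else (the output directory, the PEFT/LoRA settings, the dataset) is an illustrative assumption rather than the actual training script.
```python
from transformers import TrainingArguments

# Hedged sketch: only the values listed above are taken from this card;
# the output directory is an illustrative assumption.
args = TrainingArguments(
    output_dir="mistral_gsm8k_s_prod_fullS",
    learning_rate=1e-3,                 # 0.001
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=0,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 match the default optimizer.
)
```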
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.0 |
Kquant03/Umbra-MoE-4x10.7-GGUF | Kquant03 | 2024-02-29T02:31:46Z | 79 | 5 | null | [
"gguf",
"moe",
"merge",
"mergekit",
"dataset:kodonho/SolarM-SakuraSolar-SLERP",
"dataset:Sao10K/Sensualize-Solar-10.7B",
"dataset:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"dataset:fblgit/UNA-SOLAR-10.7B-Instruct-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-21T10:04:26Z | ---
license: apache-2.0
tags:
- moe
- merge
- mergekit
datasets:
- kodonho/SolarM-SakuraSolar-SLERP
- Sao10K/Sensualize-Solar-10.7B
- NousResearch/Nous-Hermes-2-SOLAR-10.7B
- fblgit/UNA-SOLAR-10.7B-Instruct-v1.0
---

# Umbra-MoE-4x10.7
Umbra is an offshoot of the [Lumosia Series] with a focus on general knowledge and RP/ERP.
This model was built around the idea that someone might want a general assistant that can also tell stories/RP/ERP when asked.
This is a very experimental model. It's a combination MoE of Solar models; the models selected are personal favorites.
Base context is 4k, but it stays coherent up to 16k.
Please let me know how the model works for you.
[Join our Discord!](https://discord.gg/ZgU79QDnE2)
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Q2_K Tiny](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 12 GB| 14 GB | smallest, significant quality loss - not recommended for most purposes |
| [Q3_K_M](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 15.7 GB| 17.7 GB | very small, high quality loss |
| [Q4_0](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 20.3 GB| 22.3 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Q4_K_M](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | 20.4 GB| 22.4 GB | medium, balanced quality - recommended |
| [Q5_0](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 24.8 GB| 26.8 GB | legacy; large, balanced quality |
| [Q5_K_M](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | 24.9 GB| 26.9 GB | large, balanced quality - recommended |
| [Q6 XL](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 29.6 GB| 31.6 GB | very large, extremely minor degradation |
| [Q8 XXL](https://huggingface.co/Kquant03/Umbra-MoE-4x10.7-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 38.4 GB| 40.4 GB | very large, extremely minor degradation - not recommended |
Template: ChatML
```
### System:
### USER:{prompt}
### Assistant:
```
Settings:
```
Temp: 1.0
min-p: 0.02-0.1
```
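To run one of the GGUF files above locally, a minimal `llama-cpp-python` sketch could look like the following; the chosen file, context size, prompt, and sampling values are assumptions based on the table, template, and settings listed here, not an official example.
```python
from llama_cpp import Llama

# Hedged sketch: file name, context size and sampling values are assumptions
# taken from the "Provided files" table and the settings above.
llm = Llama(
    model_path="ggml-model-q4_k_m.gguf",  # the recommended Q4_K_M quant
    n_ctx=4096,                           # base context is 4k
)

prompt = "### System:\n\n### USER:Tell me an interesting fact.\n### Assistant:"
out = llm(prompt, max_tokens=256, temperature=1.0)
print(out["choices"][0]["text"])
```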
## Evals:
* Avg: 73.4
* ARC: 70.31
* HellaSwag: 87.81
* MMLU: 66.42
* T-QA: 67.82
* Winogrande: 83.27
* GSM8K: 64.75
## Examples:
```
User:
Tell me an interesting fact.
Umbra:
The Amazon Rainforest is home to an astounding variety of life forms.
It encompasses over five million square kilometers across nine South American countries and holds approximately one-third of all known species on Earth.
This biodiversity hotspot supports more than 390 billion individual trees representing 16,000 species.
Over two million insect species live here alongside thousands of bird, mammal, amphibian, and reptile species. In this dense forest, indigenous communities have developed intricate knowledge systems that allow them to thrive sustainably within their surroundings.
```
```
User:
Tell me a story about a red haired fox
Umbra:
In the enchanted forest of Ardenwood, a vibrant world teeming with life and magic, lived a curious creature known as Vixie, whose russet fur glowed against the verdure that surrounded her. Her emerald eyes twinkled with mischief and wonder as she roamed the lush landscape, befriending all who crossed her path.
One sunlit afternoon, Vixie found herself weaving amongst a sea of tall grasses swaying gently in the breeze. Ahead lay a shimmering riverbank, where a gathering of animals was assembled around a mystifying sight - a radiant stone pulsing with an ethereal glow.
A majestic stag approached Vixie, his voice resonating with authority. He introduced himself as Elthon, leader of this assembly. They had gathered to seek guidance from the celestial gem, said to hold the secrets of the universe within its sparkling depths.
Vixie, filled with curiosity, asked if she could partake in their quest for knowledge. Elthon agreed, and they each took turns gazing into the jewel. As Vixie's turn came, she felt a tingling sensation course through her body, her senses heightened as the stone revealed a vision before her.
In the dreamlike tableau, Vixie saw herself standing atop a hill overlooking Ardenwood. She watched as the seasons changed, the land thrived, and creatures played out their destinies. It appeared the fates of these beings were intertwined, a tapestry woven together by love, friendship, sacrifice, and growth.
When the trance subsided, Vixie returned to the present moment, feeling humbled and grateful. This revelation emboldened her desire to cherish those bonds she shared with others, knowing how vital they were to the beauty and harmony of the enchanted realm. From then on, Vixie continued her adventures with renewed purpose, nurturing friendships and embracing the mysteries of Ardenwood.
```
Umbra-MoE-4x10.7 is a Mixture of Experts (MoE) made with the following models:
* [kodonho/SolarM-SakuraSolar-SLERP](https://huggingface.co/kodonho/SolarM-SakuraSolar-SLERP)
* [Sao10K/Sensualize-Solar-10.7B](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B)
* [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)
* [fblgit/UNA-SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0)
## 🧩 Configuration
```
base_model: kodonho/SolarM-SakuraSolar-SLERP
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: kodonho/SolarM-SakuraSolar-SLERP
positive_prompts:
- "versatile"
- "helpful"
- "factual"
- "integrated"
- "adaptive"
- "comprehensive"
- "balanced"
negative_prompts:
- "specialized"
- "narrow"
- "focused"
- "limited"
- "specific"
- source_model: Sao10K/Sensualize-Solar-10.7B
positive_prompts:
- "creative"
- "chat"
- "discuss"
- "culture"
- "world"
- "expressive"
- "detailed"
- "imaginative"
- "engaging"
negative_prompts:
- "sorry"
- "cannot"
- "factual"
- "concise"
- "straightforward"
- "objective"
- "dry"
- source_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
positive_prompts:
- "analytical"
- "accurate"
- "logical"
- "knowledgeable"
- "precise"
- "calculate"
- "compute"
- "solve"
- "work"
- "python"
- "javascript"
- "programming"
- "algorithm"
- "tell me"
- "assistant"
negative_prompts:
- "creative"
- "abstract"
- "imaginative"
- "artistic"
- "emotional"
- "mistake"
- "inaccurate"
- source_model: fblgit/UNA-SOLAR-10.7B-Instruct-v1.0
positive_prompts:
- "instructive"
- "clear"
- "directive"
- "helpful"
- "informative"
negative_prompts:
- "exploratory"
- "open-ended"
- "narrative"
- "speculative"
- "artistic"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Steelskull/Umbra-MoE-4x10.7"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Kquant03/NeuralTrix-7B-dpo-laser-GGUF | Kquant03 | 2024-02-29T02:28:26Z | 6 | 2 | null | [
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-14T13:37:51Z | ---
license: apache-2.0
language:
- en
---

# Made using Cultrix's Model, which can be found [here](https://huggingface.co/CultriX/NeuralTrix-7B-dpo).
[Join our Discord!](https://discord.gg/ZgU79QDnE2)
[BASE MODEL HERE](https://huggingface.co/Kquant03/NeuralTrix-7B-dpo-laser)
NeuralTrix-7B-v1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B)
* [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
* [AiMavenAi/AiMaven-Prometheus](https://huggingface.co/AiMavenAi/AiMaven-Prometheus)
It was then trained with DPO by Cultrix using:
* https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1
I performed laser_snr_math on it afterwards to see if I could improve it.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: mlabonne/OmniBeagle-7B
parameters:
density: 0.65
weight: 0.4
- model: flemmingmiguel/MBX-7B-v3
parameters:
density: 0.6
weight: 0.35
- model: AiMavenAi/AiMaven-Prometheus
parameters:
density: 0.6
weight: 0.35
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/NeuralTrix-7B-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Dcolinmorgan/distaster-mlx-model | Dcolinmorgan | 2024-02-29T02:26:01Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"dataset:Dcolinmorgan/disaster-tw",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T13:05:55Z | ---
license: mit
metrics:
- accuracy
pipeline_tag: text-classification
datasets:
- Dcolinmorgan/disaster-tw
widget:
- text: "Waves are crashing all around, the wind is growing stronger, lights are flashing."
example_title: "Storm description without keyword"
- text: "A cool breeze is passing through the meadow and over the brook tonight"
example_title: "No emergency"
- text: "Blood was scene at the corner of X and Y"
example_title: "Emergency scene"
- text: "Two men were seen at the corner of X and Y"
example_title: "Normal scene"
---
## Model Name
disaster inference from tweets, trained locally on an M1 Max via mlx
bert-base-uncased with the classification head set to 2 labels
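A minimal sketch of querying the pushed checkpoint with the 🤗 `pipeline` API is below; the example text is taken from the widget above, and the label names returned depend on this checkpoint's config.
```python
from transformers import pipeline

# Hedged sketch: label names depend on the checkpoint config (2 labels).
classifier = pipeline(
    "text-classification",
    model="Dcolinmorgan/distaster-mlx-model",
)

print(classifier(
    "Waves are crashing all around, the wind is growing stronger, lights are flashing."
))
```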
### Training
Description of the training process.
### Evaluation
Evaluation is epoch- and accuracy-based.
### Weights & Biases
This model was trained and evaluated using Weights & Biases. You can see the training and evaluation logs [here](https://api.wandb.ai/links/dcolinmorgan/tpeiht5r). |
jq/nllb-1.3b-many-to-many-step800 | jq | 2024-02-29T02:21:17Z | 149 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-29T02:16:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ICTNLP/Llama-2-7b-chat-TruthX | ICTNLP | 2024-02-29T02:20:40Z | 186 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"arxiv:2402.17811",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T06:54:34Z | ---
license: gpl-3.0
---
# TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
> [Shaolei Zhang](https://zhangshaolei1998.github.io/), [Tian Yu](https://tianyu0313.github.io/), [Yang Feng](https://people.ucas.edu.cn/~yangfeng?language=en)*
Model for paper "[TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space](https://arxiv.org/pdf/2402.17811.pdf)".
**TruthX** is an inference-time method to elicit the truthfulness of LLMs by editing their internal representations in truthful space, thereby mitigating the hallucinations of LLMs. On the [TruthfulQA benchmark](https://paperswithcode.com/sota/question-answering-on-truthfulqa), TruthX yields an average **enhancement of 20% in truthfulness** across 13 advanced LLMs.
<div align="center">
<img src="./truthx_results.png" alt="img" width="100%" />
</div>
<p align="center">
TruthfulQA MC1 accuracy of TruthX across 13 advanced LLMs
</p>
This repo provides **Llama-2-7B-Chat-TruthX**, a Llama-2-7B-Chat model with the TruthX model baked in. You can download this baked-in model directly and use it like a standard Llama model; no additional operations are required.
## Quick Starts
Inference with Llama-2-7B-Chat-TruthX:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
llama2chat_with_truthx = "ICTNLP/Llama-2-7b-chat-TruthX"
tokenizer = AutoTokenizer.from_pretrained(llama2chat_with_truthx, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(llama2chat_with_truthx, trust_remote_code=True,torch_dtype=torch.float16).cuda()
question = "What are the benefits of eating an apple a day?"
encoded_inputs = tokenizer(question, return_tensors="pt")["input_ids"]
outputs = model.generate(encoded_inputs.cuda())[0, encoded_inputs.shape[-1] :]
outputs_text = tokenizer.decode(outputs, skip_special_tokens=True).strip()
print(outputs_text)
```
Please refer to [GitHub repo](https://github.com/ictnlp/TruthX) and [our paper](https://arxiv.org/pdf/2402.17811.pdf) for more details.
## Licence
Model weights and the inference code are released under the GNU General Public License v3.0 (GPLv3).
## Citation
If this repository is useful for you, please cite as:
```
@misc{zhang2024truthx,
title={TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space},
author={Shaolei Zhang and Tian Yu and Yang Feng},
year={2024},
eprint={2402.17811},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2402.17811}
}
```
If you have any questions, feel free to contact `[email protected]`.
|
adjohn1313/wizard_sft_blackbox_rlhf_60k | adjohn1313 | 2024-02-29T02:14:23Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-02-29T02:08:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ultimecia/xlm-roberta-base-finetuned-panx-de | ultimecia | 2024-02-29T02:14:20Z | 115 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-29T02:02:06Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1353
- F1: 0.8480
## Model description
More information needed
## Intended uses & limitations
More information needed
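A minimal sketch of loading this checkpoint for token classification is shown below; the example sentence and the aggregation strategy are illustrative assumptions, not part of the original card.
```python
from transformers import pipeline

# Hedged sketch: the example sentence and aggregation strategy are
# illustrative assumptions.
ner = pipeline(
    "token-classification",
    model="ultimecia/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```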
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 132 | 0.1623 | 0.8116 |
| No log | 2.0 | 264 | 0.1418 | 0.8438 |
| No log | 3.0 | 396 | 0.1353 | 0.8480 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Kquant03/Buttercup-4x7B-GGUF | Kquant03 | 2024-02-29T02:14:09Z | 113 | 8 | null | [
"gguf",
"moe",
"merge",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-22T06:54:12Z | ---
license: apache-2.0
language:
- en
tags:
- moe
- merge
---

# "[We] are joined by the bonds of love. And you cannot track that, not with a thousand bloodhounds, and you cannot break it, not with a thousand swords."
## This is the repo for the GGUF files
[BASE MODEL](https://huggingface.co/Kquant03/Buttercup-4x7B-bf16)
[Join our Discord!](https://discord.gg/uT4CzytfYW)
A frankenMoE that not only uses far better methodology and a fundamental understanding of SMoE, but is completely focused on intellectual roleplay. This model doesn't actually have the redundancy most MoEs do...but if you do encounter it, try to keep things fresh by introducing new concepts often, or through [drμgs](https://github.com/EGjoni/DRUGS) (no, not that kind).
The models that were implemented are as follows:
- [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) - base
- [fblgit/una-cybertron-7b-v3-OMA](https://huggingface.co/fblgit/una-cybertron-7b-v3-OMA) - expert #1
- [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2) - expert #2
- [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) - expert #3
- [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) - expert #4
# Completely mogs mixtral instruct 0.1 across multiple benchmarks at half the size


## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Q2_K Tiny](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 7.87 GB| 9.87 GB | smallest, significant quality loss - not recommended for most purposes |
| [Q3_K_M](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 10.28 GB| 12.28 GB | very small, high quality loss |
| [Q4_0](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 13.3 GB| 15.3 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Q4_K_M](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | 13.32 GB| 15.32 GB | medium, balanced quality - recommended |
| [Q5_0](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 16.24 GB| 18.24 GB | legacy; large, balanced quality |
| [Q5_K_M](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | ~16.24 GB| ~18.24 GB | large, balanced quality - recommended |
| [Q6 XL](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 19.35 GB| 21.35 GB | very large, extremely minor degradation |
| [Q8 XXL](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 25.1 GB| 27.1 GB | very large, extremely minor degradation - not recommended |
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
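As an illustration of that routing step, here is a minimal sketch of a top-2 gate; the shapes and names are assumptions chosen for clarity, and this is not the routing code of this repo or of Mixtral.
```python
import torch
import torch.nn.functional as F

def top2_route(hidden, gate_weight):
    """Hedged sketch of a top-2 MoE router.

    hidden:      (tokens, d_model) token representations
    gate_weight: (d_model, n_experts) learned router parameters
    """
    logits = hidden @ gate_weight                           # (tokens, n_experts)
    probs = F.softmax(logits, dim=-1)
    weights, experts = probs.topk(2, dim=-1)                # two experts per token
    weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalise the pair
    return experts, weights

# Each token is then processed by its two chosen expert FFNs and the
# outputs are combined additively, scaled by these weights.
```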
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained at the same time as the experts it routes between.
ch2vod/spamllama_labeler_12.8b | ch2vod | 2024-02-29T02:08:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-29T02:08:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kquant03/Buttercup-4x7B-V2-bf16 | Kquant03 | 2024-02-29T02:08:04Z | 8 | 5 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-15T19:15:45Z | ---
license: apache-2.0
language:
- en
tags:
- moe
- merge
---

# "[We] are joined by the bonds of love. And you cannot track that, not with a thousand bloodhounds, and you cannot break it, not with a thousand swords."
[GGUF FILES HERE](https://huggingface.co/Kquant03/Buttercup-V2-GGUF)
[Join our Discord!](https://discord.gg/uT4CzytfYW)
I will be lasering this model, so stay tuned. Hopefully this is an improvement upon my [original model](https://huggingface.co/Kquant03/Buttercup-4x7B-bf16).
The config looks like this...(detailed version is in the files and versions):
- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - base
- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #1
- [cognitivecomputations/samantha-1.1-westlake-7b](https://huggingface.co/cognitivecomputations/samantha-1.1-westlake-7b) - expert #2
- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #3
- [Kukedlc/Triunvirato-7b](https://huggingface.co/Kukedlc/Triunvirato-7b) - expert #4
# Huge improvement upon the base Buttercup model!!!!

# Laser version is rank 3 in the world for roleplay.

# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
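A minimal sketch of that load-balancing auxiliary loss, roughly following the Switch Transformers formulation (variable names are assumptions):
```python
import torch

def load_balancing_loss(router_probs, expert_index, n_experts):
    """Hedged sketch of a Switch-style auxiliary loss.

    router_probs: (tokens, n_experts) softmax outputs of the gate
    expert_index: (tokens,) expert chosen for each token
    """
    # fraction of tokens dispatched to each expert
    dispatch = torch.bincount(expert_index, minlength=n_experts).float() / expert_index.numel()
    # mean router probability assigned to each expert
    importance = router_probs.mean(dim=0)
    # minimised when both distributions are uniform (1 / n_experts)
    return n_experts * torch.sum(dispatch * importance)
```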
## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained at the same time as the experts it routes between.
Kquant03/NurseButtercup-4x7B-bf16 | Kquant03 | 2024-02-29T02:01:18Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-26T04:36:53Z | ---
license: apache-2.0
language:
- en
tags:
- moe
- merge
---

# "[We] are joined by the bonds of love. And you cannot track that, not with a thousand bloodhounds, and you cannot break it, not with a thousand swords."
[GGUF FILES HERE](https://huggingface.co/Kquant03/NurseButtercup-4x7B-GGUF)
[Join our Discord!](https://discord.gg/uT4CzytfYW)
Going to try a laser scan of this model with Fernando from Cognitive Computations. It is a FrankenMoE.
The config looks like this...(detailed version is in the files and versions):
- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - base
- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #1
- [cognitivecomputations/samantha-1.1-westlake-7b-laser](https://huggingface.co/cognitivecomputations/samantha-1.1-westlake-7b-laser) - expert #2
- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #3
- [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) - expert #4
# Will upload to eval eventually.
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
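As a back-of-the-envelope illustration of those numbers, the sketch below reproduces the rough arithmetic; the split between shared and FFN parameters is an assumption, so the figures are indicative only.
```python
# Hedged sketch of the parameter arithmetic quoted above.
dense_7b = 7.0       # billions of parameters in one expert-sized dense model
ffn_share = 0.75     # rough fraction of each block in the FFN (assumption)

shared = dense_7b * (1 - ffn_share)        # attention/embeddings shared by all experts
experts = 8 * dense_7b * ffn_share         # eight expert FFN stacks
total = shared + experts                   # far less than 8 x 7B = 56B

active = shared + 2 * dense_7b * ffn_share # two experts per token -> ~12B-class compute
print(f"total ≈ {total:.1f}B, active per token ≈ {active:.1f}B")
```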
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained at the same time as the experts it routes between.
Kquant03/NurseButtercup-4x7B-GGUF | Kquant03 | 2024-02-29T02:01:04Z | 36 | 3 | null | [
"gguf",
"moe",
"merge",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-26T16:07:51Z | ---
license: apache-2.0
language:
- en
tags:
- moe
- merge
---

# "[We] are joined by the bonds of love. And you cannot track that, not with a thousand bloodhounds, and you cannot break it, not with a thousand swords."
[BASE MODEL HERE](https://huggingface.co/Kquant03/NurseButtercup-4x7B-bf16)
[Join our Discord!](https://discord.gg/uT4CzytfYW)
Going to try a laser scan of this model with Fernando from Cognitive Computations. It is a FrankenMoE.
The config looks like this...(detailed version is in the files and versions):
- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - base
- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #1
- [cognitivecomputations/samantha-1.1-westlake-7b-laser](https://huggingface.co/cognitivecomputations/samantha-1.1-westlake-7b-laser) - expert #2
- [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) - expert #3
- [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) - expert #4
# Will upload to eval eventually.
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 4 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
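As a rough sketch of the auxiliary load-balancing idea above (Switch-Transformers-style; the exact scaling, masking, and capacity handling in real implementations differ):

```python
import torch
import torch.nn.functional as F

def load_balancing_loss(router_logits: torch.Tensor, expert_index: torch.Tensor, n_experts: int) -> torch.Tensor:
    """Encourage the router to spread tokens evenly across experts (illustrative sketch).

    router_logits: (tokens, n_experts) raw gate scores
    expert_index:  (tokens,) index of the expert each token was routed to
    """
    probs = F.softmax(router_logits, dim=-1)
    tokens_per_expert = F.one_hot(expert_index, n_experts).float().mean(dim=0)  # fraction of tokens per expert
    prob_per_expert = probs.mean(dim=0)                                         # mean gate probability per expert
    return n_experts * torch.sum(tokens_per_expert * prob_per_expert)
```

The loss is minimised when both the routing counts and the gate probabilities are uniform across experts, which is what keeps a few popular experts from absorbing all the traffic.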
## "Wait...but you called this a frankenMoE?"
The difference between a MoE and a "frankenMoE" is that the router layer in a model like the one in this repo is not trained at the same time as the experts. |
intvy/setfit-absa-all-mpnet-base-v2-laptops-polarity | intvy | 2024-02-29T01:59:11Z | 4 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | 2024-02-29T01:59:11Z | ---
library_name: setfit
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: louder and the mouse didnt break:I wish the volume could be louder and the
mouse didnt break after only a month.
- text: + + (sales, service,:BEST BUY - 5 STARS + + + (sales, service, respect for
old men who aren't familiar with the technology) DELL COMPUTERS - 3 stars DELL
SUPPORT - owes a me a couple
- text: back and my built-in webcam and built-:I got it back and my built-in webcam
and built-in mic were shorting out anytime I touched the lid, (mind you this was
my means of communication with my fiance who was deployed) but I suffered thru
it and would constandly have to reset the computer to be able to use my cam and
mic anytime they went out.
- text: after i install Mozzilla firfox i love every:the only fact i dont like about
apples is they generally use safari and i dont use safari but after i install
Mozzilla firfox i love every single bit about it.
- text: in webcam and built-in mic were shorting out:I got it back and my built-in
webcam and built-in mic were shorting out anytime I touched the lid, (mind you
this was my means of communication with my fiance who was deployed) but I suffered
thru it and would constandly have to reset the computer to be able to use my cam
and mic anytime they went out.
pipeline_tag: text-classification
inference: false
base_model: sentence-transformers/all-mpnet-base-v2
model-index:
- name: SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tomaarsen/setfit-absa-semeval-laptops
type: unknown
split: test
metrics:
- type: accuracy
value: 0.7007874015748031
name: Accuracy
---
# SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_sm
- **SetFitABSA Aspect Model:** [joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect](https://huggingface.co/joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect)
- **SetFitABSA Polarity Model:** [joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity](https://huggingface.co/joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity)
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 4 classes
<!-- - **Training Dataset:** [tomaarsen/setfit-absa-semeval-laptops](https://huggingface.co/datasets/tomaarsen/setfit-absa-semeval-laptops) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| neutral | <ul><li>'skip taking the cord with me because:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'The tech guy then said the:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'all dark, power light steady, hard:\xa0One night I turned the freaking thing off after using it, the next day I turn it on, no GUI, screen all dark, power light steady, hard drive light steady and not flashing as it usually does.'</li></ul> |
| positive | <ul><li>'of the good battery life.:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'is of high quality, has a:it is of high quality, has a killer GUI, is extremely stable, is highly expandable, is bundled with lots of very good applications, is easy to use, and is absolutely gorgeous.'</li><li>'has a killer GUI, is extremely:it is of high quality, has a killer GUI, is extremely stable, is highly expandable, is bundled with lots of very good applications, is easy to use, and is absolutely gorgeous.'</li></ul> |
| negative | <ul><li>'then said the service center does not do:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'concern to the "sales" team, which is:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'on, no GUI, screen all:\xa0One night I turned the freaking thing off after using it, the next day I turn it on, no GUI, screen all dark, power light steady, hard drive light steady and not flashing as it usually does.'</li></ul> |
| conflict | <ul><li>'-No backlit keyboard, but not:-No backlit keyboard, but not an issue for me.'</li><li>"to replace the battery once, but:I did have to replace the battery once, but that was only a couple months ago and it's been working perfect ever since."</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7008 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect",
"joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity",
spacy_model="en_core_web_sm",
)
# Run inference
preds = model("This laptop meets every expectation and Windows 7 is great!")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 25.5873 | 48 |
| Label | Training Sample Count |
|:---------|:----------------------|
| conflict | 2 |
| negative | 45 |
| neutral | 30 |
| positive | 49 |
### Training Hyperparameters
- batch_size: (128, 128)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
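The hyperparameters above map onto SetFit's ABSA trainer roughly as follows. This is a sketch, not the exact training script: `train_dataset` and `eval_dataset` are assumed to be ABSA-formatted splits with `text`, `span`, `label`, and `ordinal` columns.

```python
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

model = AbsaModel.from_pretrained(
    "sentence-transformers/all-MiniLM-L6-v2",   # aspect span filter body
    "sentence-transformers/all-mpnet-base-v2",  # polarity classifier body
    spacy_model="en_core_web_sm",
)
args = TrainingArguments(
    batch_size=(128, 128),
    num_epochs=(5, 5),
    body_learning_rate=(2e-5, 1e-5),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    use_amp=True,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=True,
)
trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train()
```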
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:------:|:-------------:|:---------------:|
| 0.0120 | 1 | 0.2721 | - |
| **0.6024** | **50** | **0.0894** | **0.2059** |
| 1.2048 | 100 | 0.0014 | 0.2309 |
| 1.8072 | 150 | 0.0006 | 0.2359 |
| 2.4096 | 200 | 0.0005 | 0.2373 |
| 3.0120 | 250 | 0.0004 | 0.2364 |
| 3.6145 | 300 | 0.0003 | 0.2371 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.7
- SetFit: 1.0.3
- Sentence Transformers: 2.3.0
- spaCy: 3.7.2
- Transformers: 4.37.2
- PyTorch: 2.1.2+cu118
- Datasets: 2.16.1
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
intvy/setfit-absa-all-MiniLM-L6-v2-laptops-aspect | intvy | 2024-02-29T01:58:44Z | 6 | 0 | setfit | [
"setfit",
"safetensors",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"region:us"
] | text-classification | 2024-02-29T01:58:44Z | ---
library_name: setfit
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: camera:It has no camera but, I can always buy and install one easy.
- text: Acer:Acer was no help and Garmin could not determine the problem(after spending
about 2 hours with me), so I returned it and purchased a Toshiba R700 that seems
even nicer and I was able to load all of my software with no problem.
- text: memory:I've been impressed with the battery life and the performance for such
a small amount of memory.
- text: speed:Yes, a Mac is much more money than the average laptop out there, but
there is no comparison in style, speed and just cool factor.
- text: fiance:I got it back and my built-in webcam and built-in mic were shorting
out anytime I touched the lid, (mind you this was my means of communication with
my fiance who was deployed) but I suffered thru it and would constandly have to
reset the computer to be able to use my cam and mic anytime they went out.
pipeline_tag: text-classification
inference: false
base_model: sentence-transformers/all-MiniLM-L6-v2
model-index:
- name: SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tomaarsen/setfit-absa-semeval-laptops
type: unknown
split: test
metrics:
- type: accuracy
value: 0.8239700374531835
name: Accuracy
---
# SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. **Use this SetFit model to filter these possible aspect span candidates.**
3. Use a SetFit model to classify the filtered aspect span candidates.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_sm
- **SetFitABSA Aspect Model:** [joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect](https://huggingface.co/joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect)
- **SetFitABSA Polarity Model:** [joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity](https://huggingface.co/joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity)
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [tomaarsen/setfit-absa-semeval-laptops](https://huggingface.co/datasets/tomaarsen/setfit-absa-semeval-laptops) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| aspect | <ul><li>'cord:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'battery life:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'service center:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li></ul> |
| no aspect | <ul><li>'night:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'skip:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'exchange:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8240 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"joshuasundance/setfit-absa-all-MiniLM-L6-v2-laptops-aspect",
"joshuasundance/setfit-absa-all-mpnet-base-v2-laptops-polarity",
spacy_model="en_core_web_sm",
)
# Run inference
preds = model("This laptop meets every expectation and Windows 7 is great!")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 21.1510 | 42 |
| Label | Training Sample Count |
|:----------|:----------------------|
| no aspect | 119 |
| aspect | 126 |
### Training Hyperparameters
- batch_size: (128, 128)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:-------:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.3776 | - |
| 0.2110 | 50 | 0.2644 | 0.2622 |
| 0.4219 | 100 | 0.2248 | 0.2437 |
| **0.6329** | **150** | **0.0059** | **0.2238** |
| 0.8439 | 200 | 0.0017 | 0.2326 |
| 1.0549 | 250 | 0.0012 | 0.2382 |
| 1.2658 | 300 | 0.0008 | 0.2455 |
| 1.4768 | 350 | 0.0006 | 0.2328 |
| 1.6878 | 400 | 0.0005 | 0.243 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.7
- SetFit: 1.0.3
- Sentence Transformers: 2.3.0
- spaCy: 3.7.2
- Transformers: 4.37.2
- PyTorch: 2.1.2+cu118
- Datasets: 2.16.1
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
roseruski/autotrain-xp6fd-5n2o9 | roseruski | 2024-02-28T23:58:05Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"autotrain",
"dataset:autotrain-xp6fd-5n2o9/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-28T23:57:30Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- autotrain-xp6fd-5n2o9/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.047194674611091614
f1: 0.962457337883959
precision: 0.9791666666666666
recall: 0.9463087248322147
auc: 0.9930245807106035
accuracy: 0.9901345291479821
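A hedged inference sketch with the transformers pipeline (the label names are whatever AutoTrain assigned and are not documented here):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="roseruski/autotrain-xp6fd-5n2o9")
print(classifier("I love AutoTrain"))
```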
|
Lienid/nous-twelve | Lienid | 2024-02-28T21:05:01Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-28T17:16:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mantis-VL/mfuyu_v2_8192_720p-5500 | Mantis-VL | 2024-02-28T21:04:16Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"fuyu",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-28T20:59:03Z | ---
tags:
- generated_from_trainer
model-index:
- name: mfuyu_v2_8192_720p-5500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mfuyu_v2_8192_720p-5500
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.17.1
- Tokenizers 0.15.0
|
MayaDG/autotrain-abquo-3fhoy | MayaDG | 2024-02-28T21:00:13Z | 194 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:autotrain-abquo-3fhoy/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-28T20:59:39Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- autotrain-abquo-3fhoy/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.21081779897212982
f1: 0.9487179487179487
precision: 0.9487179487179487
recall: 0.9487179487179487
auc: 0.9721767594108021
accuracy: 0.936
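A hedged inference sketch with the transformers pipeline (class labels come from the AutoTrain data and are not documented here):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="MayaDG/autotrain-abquo-3fhoy")
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```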
|
Mantis-VL/mfuyu_v2_3072_480p-7000 | Mantis-VL | 2024-02-28T20:59:04Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"fuyu",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-28T20:51:15Z | ---
tags:
- generated_from_trainer
model-index:
- name: mfuyu_v2_3072_480p-7000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mfuyu_v2_3072_480p-7000
|
tanatapanun/fine-tuned-FLAN-T5-20-epochs-wanglab-512-output | tanatapanun | 2024-02-28T20:56:35Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-28T19:22:00Z | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-FLAN-T5-20-epochs-wanglab-512-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-FLAN-T5-20-epochs-wanglab-512-output
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0705
- Rouge1: 0.1508
- Rouge2: 0.0272
- Rougel: 0.1374
- Rougelsum: 0.1351
- Bertscore F1: 0.8553
- Bleurt Score: -1.2097
- Gen Len: 14.69
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
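A hedged mapping of the hyperparameters above onto `Seq2SeqTrainingArguments` (the output directory and any argument not listed above are assumptions):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="fine-tuned-FLAN-T5-20-epochs-wanglab-512-output",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    predict_with_generate=True,  # assumption: needed to compute ROUGE/BERTScore at eval time
)
```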
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bertscore F1 | Bleurt Score | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:------------:|:------------:|:-------:|
| No log | 1.0 | 301 | 11.0933 | 0.065 | 0.0148 | 0.0596 | 0.0595 | 0.7859 | -1.4402 | 18.92 |
| 20.9249 | 2.0 | 602 | 9.2324 | 0.0604 | 0.0154 | 0.0556 | 0.0554 | 0.7869 | -1.3807 | 17.42 |
| 20.9249 | 3.0 | 903 | 7.6254 | 0.0681 | 0.0192 | 0.0632 | 0.0627 | 0.7978 | -1.4375 | 18.42 |
| 11.3584 | 4.0 | 1204 | 6.7112 | 0.0614 | 0.0073 | 0.0578 | 0.0582 | 0.8076 | -1.3157 | 14.34 |
| 8.9106 | 5.0 | 1505 | 6.6742 | 0.0701 | 0.0204 | 0.0638 | 0.0635 | 0.7968 | -1.3894 | 17.29 |
| 8.9106 | 6.0 | 1806 | 5.9658 | 0.0836 | 0.0145 | 0.074 | 0.0742 | 0.818 | -1.3081 | 13.76 |
| 7.8674 | 7.0 | 2107 | 5.7095 | 0.113 | 0.025 | 0.1061 | 0.1078 | 0.8433 | -1.4119 | 13.71 |
| 7.8674 | 8.0 | 2408 | 5.6269 | 0.0987 | 0.0147 | 0.0933 | 0.0939 | 0.8201 | -1.2529 | 15.32 |
| 6.7786 | 9.0 | 2709 | 5.5192 | 0.1133 | 0.0203 | 0.1038 | 0.1051 | 0.8484 | -1.3751 | 13.75 |
| 6.3646 | 10.0 | 3010 | 5.4626 | 0.1347 | 0.0276 | 0.122 | 0.1236 | 0.8501 | -1.278 | 13.16 |
| 6.3646 | 11.0 | 3311 | 5.4467 | 0.103 | 0.0172 | 0.0951 | 0.0943 | 0.8263 | -1.3587 | 15.48 |
| 5.6998 | 12.0 | 3612 | 5.4587 | 0.126 | 0.0326 | 0.1191 | 0.1183 | 0.8474 | -1.2782 | 15.86 |
| 5.6998 | 13.0 | 3913 | 5.4846 | 0.1523 | 0.0325 | 0.1407 | 0.1408 | 0.8528 | -1.2406 | 14.82 |
| 5.2971 | 14.0 | 4214 | 5.6166 | 0.1363 | 0.0275 | 0.1279 | 0.1247 | 0.8512 | -1.2827 | 14.7 |
| 4.9391 | 15.0 | 4515 | 5.6821 | 0.1479 | 0.0238 | 0.136 | 0.1342 | 0.8545 | -1.2217 | 14.72 |
| 4.9391 | 16.0 | 4816 | 5.7849 | 0.1577 | 0.0307 | 0.1455 | 0.1445 | 0.8566 | -1.1756 | 15.25 |
| 4.6035 | 17.0 | 5117 | 5.8945 | 0.1313 | 0.0234 | 0.1214 | 0.1199 | 0.8525 | -1.2609 | 14.67 |
| 4.6035 | 18.0 | 5418 | 5.9956 | 0.1506 | 0.0315 | 0.1367 | 0.1348 | 0.8542 | -1.2107 | 14.61 |
| 4.3893 | 19.0 | 5719 | 6.0337 | 0.1449 | 0.0294 | 0.1337 | 0.1317 | 0.8553 | -1.2173 | 14.49 |
| 4.245 | 20.0 | 6020 | 6.0705 | 0.1508 | 0.0272 | 0.1374 | 0.1351 | 0.8553 | -1.2097 | 14.69 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
KevStrider/ppo-LunarLander-v2 | KevStrider | 2024-02-28T20:55:19Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-28T20:55:00Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.05 +/- 51.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The .zip filename is assumed; substitute the actual checkpoint name from this repo
checkpoint = load_from_hub("KevStrider/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo-gguf | CorticalStack | 2024-02-28T20:48:23Z | 2 | 1 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-28T19:58:22Z | ---
license: apache-2.0
---
<img src="neurotic-crown-clown-tak-stack.png" alt="Neurotic crown clown tak stack logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo
A collection of GGUF quantised versions of [CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo](https://huggingface.co/CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo).
The main branch model is quantised using GGUF format Q4_K_M.
GGUF is a format that replaces GGML, which is no longer supported by llama.cpp.
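For local use, a hedged loading sketch with llama-cpp-python (the GGUF filename below is an assumption; check this repo's file listing for the exact Q4_K_M name):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The filename is assumed, not documented here; substitute the real Q4_K_M file from this repo
gguf_path = hf_hub_download(
    repo_id="CorticalStack/neurotic-crown-clown-7b-tak-stack-dpo-gguf",
    filename="neurotic-crown-clown-7b-tak-stack-dpo.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("Explain GGUF quantisation in one sentence.", max_tokens=64)["choices"][0]["text"])
```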
An incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. |
ankhamun/IxI000x800000_000x800IxI | ankhamun | 2024-02-28T20:48:09Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-28T20:46:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |