modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
mtc/meta-llama-Llama-2-7b-hf-pubmed-summarization-5000-last-lora-full-adapter | mtc | 2024-02-09T18:27:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-09T16:22:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/Quyen-Pro-Max-v0.1-GGUF | LoneStriker | 2024-02-09T18:22:06Z | 5 | 3 | transformers | [
"transformers",
"gguf",
"text-generation",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:LDJnr/Capybara",
"dataset:Intel/orca_dpo_pairs",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-02-09T16:45:35Z | ---
library_name: transformers
license: other
datasets:
- teknium/OpenHermes-2.5
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- argilla/distilabel-capybara-dpo-7k-binarized
language:
- en
pipeline_tag: text-generation
---
# Quyen
<img src="quyen.webp" width="512" height="512" alt="Quyen">
# Model Description
Quyen is our first flagship LLM series based on the Qwen1.5 family. We introduced 6 different versions:
- **Quyen-SE (0.5B)**
- **Quyen-Mini (1.8B)**
- **Quyen (4B)**
- **Quyen-Plus (7B)**
- **Quyen-Pro (14B)**
- **Quyen-Pro-Max (72B)**
All models were trained with SFT and DPO using the following datasets:
- *OpenHermes-2.5* by **Teknium**
- *Capybara* by **LDJ**
- *argilla/distilabel-capybara-dpo-7k-binarized* by **argilla**
- *orca_dpo_pairs* by **Intel**
- and Private Data by **Ontocord** & **BEE-spoke-data**
# Prompt Template
- All Quyen models use ChatML as the default template:
```
<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Hello world.<|im_end|>
<|im_start|>assistant
```
- You can also use `apply_chat_template`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint id for illustration -- point this at the Quyen checkpoint you are using
tokenizer = AutoTokenizer.from_pretrained("vilm/Quyen-Pro-Max-v0.1")
model = AutoModelForCausalLM.from_pretrained("vilm/Quyen-Pro-Max-v0.1", device_map="auto")
messages = [
    {"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."},
    {"role": "user", "content": "Hello world."}
]
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
model.generate(gen_input, max_new_tokens=256)
```
# Benchmarks:
- Coming soon! We will update the benchmarks later.
# Acknowledgement
- We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation.
- Special thanks to the Qwen team for letting us access the models early for these amazing finetunes. |
Jaswir/midjourney-phi-2 | Jaswir | 2024-02-09T18:14:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-09T18:14:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jeffmcc/Llama-2-7b-chat-hf-GGUF | jeffmcc | 2024-02-09T18:11:38Z | 3 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"en",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-08T00:12:21Z | ---
license: llama2
language:
- en
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# Llama-2-7b-chat-hf-GGUF
This repo contains GGUF-format, quantized model files for [Meta's Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf) LLM. The files were generated using the [hf-to-gguf](https://github.com/jmcconne/hf-to-gguf) project on GitHub, which facilitates converting LLMs stored on Hugging Face into GGUF while providing traceability and reproducibility. Each model file has an accompanying JSON config file containing the source and version of the converted model, the version of the conversion scripts, the quantization method, and anything else needed to fully reproduce the converted model. Keeping the JSON config file with the GGUF model file anywhere the model is deployed can be useful for use cases that require tight version control and reproducibility.
### Downloading model and JSON config files from the command line
Install the huggingface_hub Python library:
```
pip3 install huggingface_hub
```
Download the model and JSON config file for a specific quantization:
```
huggingface-cli download jeffmcc/Llama-2-7b-chat-hf-GGUF --local-dir . --local-dir-use-symlinks False --include='*q4_k_m*'
```
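Once downloaded, the GGUF file can be loaded with any GGUF-compatible runtime. As a minimal sketch (not part of this repo's tooling), inference with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) might look like the following; the filename below is an assumption, so substitute the file the command above actually fetched:
```python
# Minimal inference sketch with llama-cpp-python (pip install llama-cpp-python)
from llama_cpp import Llama

# Filename is an assumption -- use the q4_k_m file downloaded by the command above
llm = Llama(model_path="./llama-2-7b-chat-hf.q4_k_m.gguf", n_ctx=2048)
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```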
|
atmikah/ppo-SnowballTarget | atmikah | 2024-02-09T18:06:21Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-02-09T18:06:18Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: atmikah/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mtc/mistralai-Mistral-7B-v0.1-pubmed-summarization-5000-last-lora-full-adapter | mtc | 2024-02-09T18:02:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-09T18:02:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Weni/Zeroshot-3.2.3-Mistral-7B-pipeline-config-merged | Weni | 2024-02-09T17:57:38Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-09T17:43:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Cuphadi/dqn-SpaceInvadersNoFrameskip-v4 | Cuphadi | 2024-02-09T17:46:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-09T17:46:00Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 682.00 +/- 256.02
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Cuphadi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Cuphadi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
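You can also load the checkpoint directly in Python instead of going through the RL Zoo scripts. A minimal sketch using `huggingface_sb3` is below; the `.zip` filename is an assumption, so verify it against this repo's file list:
```python
# Minimal loading sketch (pip install huggingface_sb3 stable-baselines3)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename is an assumption -- check the repo's files for the exact name
checkpoint = load_from_hub(
    repo_id="Cuphadi/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```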
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Cuphadi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
AprendeIngenia/bill_bank_co | AprendeIngenia | 2024-02-09T17:46:31Z | 0 | 0 | null | [
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2024-02-09T17:09:46Z | ---
license: apache-2.0
---
# Object detection and currency recognition models for smart grocery stores
This repository contains two powerful computer vision models designed specifically for grocery store applications. The first model specializes in object detection, enabling accurate identification and localization of various products within the store environment. The second model focuses on currency recognition, enabling smooth payment processing during checkout. Together, they form the foundation of our smart grocery store system, giving customers an efficient shopping experience while reducing operating costs.

## Overview
### Object detection model
#### Features
- Detects common grocery items such as fruits, vegetables, keyboards, mice, books, spoons, and more.
- High accuracy thanks to advanced deep learning techniques.
- Real-time performance suitable for deployment in resource-constrained environments such as edge devices.
- Easy integration with popular machine learning frameworks such as TensorFlow or PyTorch.
#### Usage Example
```python
import ShoppingIA as shop

# Shop
def main():
    class_shop = shop.ShopIA()
    cap = class_shop.__int__()  # opens the video capture (method name as defined in the ShopIA class)
    # Stream
    stream = class_shop.tiendaIA(cap)

if __name__ == "__main__":
    main()

# Bill classes:
# 0 -> 10,000 | 1 -> 20,000 | 2 -> 50,000
```
|
arash-rasouli/T5-convert-toxic-to-neutral | arash-rasouli | 2024-02-09T17:41:44Z | 174 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-09T15:53:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
404NotF0und/lunar-llm-mistral-7B-10epochs | 404NotF0und | 2024-02-09T17:33:34Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-09T17:28:37Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
``` |
Antonini01/physicist | Antonini01 | 2024-02-09T17:32:56Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:quantized:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-09T17:32:32Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** Antonini01
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Technoculture/Medmerge-tulu-70b | Technoculture | 2024-02-09T17:21:54Z | 58 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"epfl-llm/meditron-70b",
"allenai/tulu-2-dpo-70b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-21T14:39:04Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- epfl-llm/meditron-70b
- allenai/tulu-2-dpo-70b
---
# Medmerge-tulu-70b
Medmerge-tulu-70b is a merge of the following models:
* [wanglab/ClinicalCamel-70B](https://huggingface.co/wanglab/ClinicalCamel-70B)
* [epfl-llm/meditron-70b](https://huggingface.co/epfl-llm/meditron-70b)
* [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b)
# Open LLM Leaderboard

| Model Name | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| -------------------- | -------- | --------- | ------ | ---------- | ---------- | -------- |
| tulu-2-dpo-70b | 72.1 | 88.99 | 69.84 | 65.78 | 83.27 | 62.62 |
| Medmerge-tulu-70b | 67.81 | 87.46 | 70.1 | 47.89 | 83.43 | 56.56 |
## Performance
Clinical Camel demonstrates competitive performance on medical benchmarks.
**Table: Five-Shot Performance of Medmerge-tulu-70b, Clinical Camel-70B, GPT3.5, GPT4, and Med-PaLM 2 on Various Medical Datasets**
| Dataset | Medmerge-tulu-70b | ClinicalCamel-70B | GPT3.5 | GPT4 | Med-PaLM 2 |
|-----------------------------|-------------------|-------------------|--------|-------|--------------|
| MMLU Anatomy | 66.6 | 65.2 | 60.7 | 80.0 | 77.8 |
| MMLU Clinical Knowledge | 72.0 | 72.8 | 68.7 | 86.4 | 88.3 |
| MMLU College Biology | 84.7 | 81.2 | 72.9 | 93.8 | 94.4 |
| MMLU College Medicine | 64.2 | 68.2 | 63.6 | 76.3 | 80.9 |
| MMLU Medical Genetics | 76.0 | 69.0 | 68.0 | 92.0 | 90.0 |
| MMLU Professional Medicine | 75.7 | 75.0 | 69.8 | 93.8 | 95.2 |
| MedMCQA | | 54.2 | 51.0 | 72.4 | 71.3 |
| MedQA (USMLE) | | 60.7 | 53.6 | 81.4 | 79.7 |
| PubMedQA | | 77.9 | 60.2 | 74.4 | 79.2 |
| USMLE Sample Exam | | 64.3 | 58.5 | 86.6 | - |
## 🧩 Configuration
```yaml
models:
  - model: NousResearch/Llama-2-70b-hf
    # no parameters necessary for base model
  - model: wanglab/ClinicalCamel-70B
    parameters:
      weight: 0.08
      density: 0.45
  - model: epfl-llm/meditron-70b
    parameters:
      weight: 0.08
      density: 0.45
  - model: allenai/tulu-2-dpo-70b
    parameters:
      weight: 0.08
      density: 0.45
merge_method: dare_ties
base_model: NousResearch/Llama-2-70b-hf
parameters:
  int8_mask: true
dtype: bfloat16
```
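This YAML can be fed to [mergekit](https://github.com/arcee-ai/mergekit) to reproduce the merge, e.g. `mergekit-yaml config.yaml ./Medmerge-tulu-70b --cuda` (the config filename and output directory here are illustrative, not from this repo).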
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Technoculture/Medmerge-tulu-70b"
messages = [{"role": "user", "content": "I am feeling sleepy these days"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
saransh03sharma/cmumosei | saransh03sharma | 2024-02-09T17:09:56Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-09T17:05:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mtc/mistralai-Mistral-7B-v0.1-arxiv-summarization-5000-last_merged | mtc | 2024-02-09T17:09:40Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-09T17:05:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
collinbarnwell/pyannote-speaker-diarization-31 | collinbarnwell | 2024-02-09T17:09:03Z | 13 | 1 | pyannote-audio | [
"pyannote-audio",
"pyannote",
"pyannote-audio-pipeline",
"audio",
"voice",
"speech",
"speaker",
"speaker-diarization",
"speaker-change-detection",
"voice-activity-detection",
"overlapped-speech-detection",
"automatic-speech-recognition",
"arxiv:2111.14448",
"arxiv:2012.01477",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-08T23:36:41Z | ---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-pipeline
- audio
- voice
- speech
- speaker
- speaker-diarization
- speaker-change-detection
- voice-activity-detection
- overlapped-speech-detection
- automatic-speech-recognition
license: mit
extra_gated_prompt: "The collected information will help acquire a better knowledge of the pyannote.audio userbase and help its maintainers improve it further. Though this pipeline uses an MIT license and will always remain open-source, we will occasionally email you about premium pipelines and paid services around pyannote."
extra_gated_fields:
Company/university: text
Website: text
---
Using this open-source pipeline in production?
Make the most of it thanks to our [consulting services](https://herve.niderb.fr/consulting.html).
# 🎹 Speaker diarization 3.1
This pipeline is the same as [`pyannote/speaker-diarization-3.0`](https://hf.co/pyannote/speaker-diarization-3.0) except it removes the [problematic](https://github.com/pyannote/pyannote-audio/issues/1537) use of `onnxruntime`.
Both speaker segmentation and embedding now run in pure PyTorch. This should ease deployment and possibly speed up inference.
It requires pyannote.audio version 3.1 or higher.
It ingests mono audio sampled at 16kHz and outputs speaker diarization as an [`Annotation`](http://pyannote.github.io/pyannote-core/structure.html#annotation) instance:
- stereo or multi-channel audio files are automatically downmixed to mono by averaging the channels.
- audio files sampled at a different rate are resampled to 16kHz automatically upon loading.
## Requirements
1. Install [`pyannote.audio`](https://github.com/pyannote/pyannote-audio) `3.1` with `pip install pyannote.audio`
2. Accept [`pyannote/segmentation-3.0`](https://hf.co/pyannote/segmentation-3.0) user conditions
3. Accept [`pyannote/speaker-diarization-3.1`](https://hf.co/pyannote/speaker-diarization-3.1) user conditions
4. Create access token at [`hf.co/settings/tokens`](https://hf.co/settings/tokens).
## Usage
```python
# instantiate the pipeline
from pyannote.audio import Pipeline
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="HUGGINGFACE_ACCESS_TOKEN_GOES_HERE")

# run the pipeline on an audio file
diarization = pipeline("audio.wav")

# dump the diarization output to disk using RTTM format
with open("audio.rttm", "w") as rttm:
    diarization.write_rttm(rttm)
```
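The returned object is a pyannote.core [`Annotation`](http://pyannote.github.io/pyannote-core/structure.html#annotation), so speaker turns can also be inspected directly in Python; a minimal sketch:
```python
# Print the start/end time and speaker label of each diarized turn
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"start={turn.start:.1f}s stop={turn.end:.1f}s speaker_{speaker}")
```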
### Processing on GPU
`pyannote.audio` pipelines run on CPU by default.
You can send them to GPU with the following lines:
```python
import torch
pipeline.to(torch.device("cuda"))
```
### Processing from memory
Pre-loading audio files in memory may result in faster processing:
```python
import torchaudio

waveform, sample_rate = torchaudio.load("audio.wav")
diarization = pipeline({"waveform": waveform, "sample_rate": sample_rate})
```
### Monitoring progress
Hooks are available to monitor the progress of the pipeline:
```python
from pyannote.audio.pipelines.utils.hook import ProgressHook
with ProgressHook() as hook:
    diarization = pipeline("audio.wav", hook=hook)
```
### Controlling the number of speakers
In case the number of speakers is known in advance, one can use the `num_speakers` option:
```python
diarization = pipeline("audio.wav", num_speakers=2)
```
One can also provide lower and/or upper bounds on the number of speakers using `min_speakers` and `max_speakers` options:
```python
diarization = pipeline("audio.wav", min_speakers=2, max_speakers=5)
```
## Benchmark
This pipeline has been benchmarked on a large collection of datasets.
Processing is fully automatic:
- no manual voice activity detection (as is sometimes the case in the literature)
- no manual number of speakers (though it is possible to provide it to the pipeline)
- no fine-tuning of the internal models nor tuning of the pipeline hyper-parameters to each dataset
... with the least forgiving diarization error rate (DER) setup (named _"Full"_ in [this paper](https://doi.org/10.1016/j.csl.2021.101254)):
- no forgiveness collar
- evaluation of overlapped speech
| Benchmark | [DER%](. "Diarization error rate") | [FA%](. "False alarm rate") | [Miss%](. "Missed detection rate") | [Conf%](. "Speaker confusion rate") | Expected output | File-level evaluation |
| ------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------- | --------------------------- | ---------------------------------- | ----------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| [AISHELL-4](http://www.openslr.org/111/) | 12.2 | 3.8 | 4.4 | 4.0 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AISHELL.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AISHELL.SpeakerDiarization.Benchmark.test.eval) |
| [AliMeeting (_channel 1_)](https://www.openslr.org/119/) | 24.4 | 4.4 | 10.0 | 10.0 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AliMeeting.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AliMeeting.SpeakerDiarization.Benchmark.test.eval) |
| [AMI (_headset mix,_](https://groups.inf.ed.ac.uk/ami/corpus/) [_only_words_)](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 18.8 | 3.6 | 9.5 | 5.7 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AMI.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AMI.SpeakerDiarization.Benchmark.test.eval) |
| [AMI (_array1, channel 1,_](https://groups.inf.ed.ac.uk/ami/corpus/) [_only_words)_](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 22.4 | 3.8 | 11.2 | 7.5 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AMI-SDM.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AMI-SDM.SpeakerDiarization.Benchmark.test.eval) |
| [AVA-AVD](https://arxiv.org/abs/2111.14448) | 50.0 | 10.8 | 15.7 | 23.4 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AVA-AVD.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/AVA-AVD.SpeakerDiarization.Benchmark.test.eval) |
| [DIHARD 3 (_Full_)](https://arxiv.org/abs/2012.01477) | 21.7 | 6.2 | 8.1 | 7.3 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/DIHARD.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/DIHARD.SpeakerDiarization.Benchmark.test.eval) |
| [MSDWild](https://x-lance.github.io/MSDWILD/) | 25.3 | 5.8 | 8.0 | 11.5 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/MSDWILD.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/MSDWILD.SpeakerDiarization.Benchmark.test.eval) |
| [REPERE (_phase 2_)](https://islrn.org/resources/360-758-359-485-0/) | 7.8 | 1.8 | 2.6 | 3.5 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/REPERE.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/REPERE.SpeakerDiarization.Benchmark.test.eval) |
| [VoxConverse (_v0.3_)](https://github.com/joonson/voxconverse) | 11.3 | 4.1 | 3.4 | 3.8 | [RTTM](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/VoxConverse.SpeakerDiarization.Benchmark.test.rttm) | [eval](https://huggingface.co/pyannote/speaker-diarization-3.1/blob/main/reproducible_research/VoxConverse.SpeakerDiarization.Benchmark.test.eval) |
## Citations
```bibtex
@inproceedings{Plaquet23,
author={Alexis Plaquet and Hervé Bredin},
title={{Powerset multi-class cross entropy loss for neural speaker diarization}},
year=2023,
booktitle={Proc. INTERSPEECH 2023},
}
```
```bibtex
@inproceedings{Bredin23,
author={Hervé Bredin},
title={{pyannote.audio 2.1 speaker diarization pipeline: principle, benchmark, and recipe}},
year=2023,
booktitle={Proc. INTERSPEECH 2023},
}
```
|
edu-shok/opus-mt-en-es-finetuned-en-to-es-TA-5EPOCHS | edu-shok | 2024-02-09T17:08:23Z | 119 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-es",
"base_model:finetune:Helsinki-NLP/opus-mt-en-es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-09T16:15:53Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-es
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-es-finetuned-en-to-es-TA-5EPOCHS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-es-finetuned-en-to-es-TA-5EPOCHS
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1519
- Bleu: 36.1767
- Gen Len: 33.1691
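As a quick usage sketch (assuming the checkpoint follows the standard MarianMT text-to-text API; the example sentence is illustrative):
```python
from transformers import pipeline

# Minimal sketch: load this checkpoint as a translation pipeline (standard Marian API assumed).
translator = pipeline("translation", model="edu-shok/opus-mt-en-es-finetuned-en-to-es-TA-5EPOCHS")
print(translator("The model was fine-tuned for five epochs.")[0]["translation_text"])
```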
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.1849 | 1.0 | 2989 | 1.1316 | 36.2806 | 32.8762 |
| 1.0928 | 2.0 | 5978 | 1.1345 | 36.2441 | 32.9869 |
| 1.0191 | 3.0 | 8967 | 1.1439 | 36.0699 | 33.0191 |
| 0.9753 | 4.0 | 11956 | 1.1495 | 36.1426 | 33.2622 |
| 0.9491 | 5.0 | 14945 | 1.1519 | 36.1767 | 33.1691 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
mtc/mistralai-Mistral-7B-v0.1-arxiv-summarization-5000-last-lora-full-adapter | mtc | 2024-02-09T17:05:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-09T17:05:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jeevana/GenAI_QnA_Mistral7b_QLoRA_G8_FV01 | jeevana | 2024-02-09T17:00:38Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-09T16:55:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
daekeun-ml/phi-2-upscaled-4B-instruct-v0.1 | daekeun-ml | 2024-02-09T16:50:48Z | 122 | 3 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:wikipedia",
"dataset:Open-Orca/OpenOrca",
"arxiv:2312.15166",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-30T23:19:06Z | ---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- Intel/orca_dpo_pairs
- wikipedia
- Open-Orca/OpenOrca
inference: false
---
# phi-2-upscaled-4B-instruct-v0.1
## Model Details
This model was built by performing continued pre-training and then fine-tuning (instruction tuning), using the depth up-scaling (DUS) technique disclosed by Upstage.
### DUS(Depth Up-Scaling) and continued pre-training
Similar to the methodology disclosed in the paper, we expanded the model from 32 transformer blocks to 48 blocks and then continued pre-training on a public dataset. Pre-training ran for 3 days on 4 AWS `ml.g5.48xlarge` instances (32 NVIDIA A10G GPUs in total). For pre-training, we used a sample set from Wikipedia.
Note that performance is not guaranteed, since only a small number of datasets were used for the experiment; the training set contains just around 1.5 million samples after tokenization.
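The exact split indices are not given here; as a rough sketch of the duplicate-and-stack idea (assuming the Hugging Face Phi implementation and a SOLAR-style 24+24 split, both of which are assumptions):
```python
import copy
import torch.nn as nn
from transformers import AutoModelForCausalLM

# Hypothetical DUS sketch: keep the first 24 and last 24 of phi-2's 32 decoder
# blocks and stack them (the middle 16 blocks end up duplicated), giving 48 blocks.
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto")
blocks = model.model.layers  # 32 decoder blocks in the HF Phi implementation
upscaled = [copy.deepcopy(b) for b in blocks[:24]] + [copy.deepcopy(b) for b in blocks[-24:]]
model.model.layers = nn.ModuleList(upscaled)
model.config.num_hidden_layers = len(upscaled)  # 48
```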
For distributed training, all weights were trained without adapter techniques, and sharding parallelization was performed with ZeRO-2. The presets are as follows.
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true,
"cpu_offload": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"
}
```
Some hyperparameters are listed below.
```
batch_size: 2
num_epochs: 1
learning_rate: 3e-4
gradient_accumulation_steps: 8
lr_scheduler_type: "linear"
group_by_length: False
```
### Fine-tuning
After pre-training, instruction tuning and alignment tuning were performed sequentially. This process took only about 10 hours on an AWS `ml.g5.24xlarge` instance (4 NVIDIA A10G GPUs). The dataset used for instruction tuning is a sample set of the OpenOrca dataset, and the dataset used for alignment tuning is Intel's orca_dpo_pairs dataset.
All fine-tuning was performed with QLoRA, with batch sizes of 3 and 1, respectively. We used a context length of 1,024. A length of 2,048 is also possible, but applying DPO then often runs out of memory on a 24GB GPU, so we settled on 1,024.
Please see below for relevant code snippets.
```python
peft_config = LoraConfig(
r=8,
lora_alpha=16,
target_modules=["q_proj", "k_proj", "v_proj", "fc1", "fc2"],
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
)
training_arguments = TrainingArguments(
output_dir="logs",
num_train_epochs=1,
per_device_train_batch_size=batch_size,
gradient_accumulation_steps=4,
optim="paged_adamw_8bit",
learning_rate=3e-4,
weight_decay=0.001,
bf16=True,
max_grad_norm=0.3,
max_steps=-1,
warmup_ratio=0.03,
group_by_length=True,
lr_scheduler_type="cosine",
report_to="wandb", ...
)
```
### References
- Base model: [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
- Paper: [SOLAR 10.7B](https://arxiv.org/abs/2312.15166)
## How to Get Started with the Model
Since this model uses ChatGPT's ChatML template, the `<|im_start|>` and `<|im_end|>` tokens were added.
You can use Hugging Face's chat template to create the prompt, but you can also create the prompt yourself with the code snippet below.
```python
def create_inference_prompt(text):
string = f"""<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{text}<|im_end|>
<|im_start|>assistant
"""
return string
```
If you want to simply see the inference results, please use the code snippet below.
```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
import torch
torch.set_default_device("cuda")
model_path = "daekeun-ml/phi-2-upscaled-4B-instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype="auto",
trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
model_path,
use_fast=True,
trust_remote_code=True
)
# Format prompt
message = [
{"role": "system", "content": "You are a helpful AI assistant. Generate appropriate answers to given questions."},
{"role": "user", "content": "What is a Large Language Model?"}
]
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9, temperature=0.5, repetition_penalty=1.2)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Notes
### License
Apache 2.0. The license of phi-2 is MIT, but the license of the Orca dataset used for training is Apache 2.0.
### Caution
This model was created as a personal experiment, unrelated to the organization I work for. The model may not operate correctly because no separate verification was performed. Please be careful unless it is for personal experimentation or a PoC (Proof of Concept)! |
lmg-anon/vntl-qwen-14b-v0.1-qlora | lmg-anon | 2024-02-09T16:45:55Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:KnutJaegersberg/Qwen-14B-Llamafied",
"base_model:adapter:KnutJaegersberg/Qwen-14B-Llamafied",
"region:us"
] | null | 2024-02-09T06:52:39Z | ---
library_name: peft
base_model: KnutJaegersberg/Qwen-14B-Llamafied
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
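Pending official instructions, a minimal loading sketch (assuming the adapter applies cleanly on top of the base model stated in the card metadata):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the stated base model, then attach this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "KnutJaegersberg/Qwen-14B-Llamafied", device_map="auto", torch_dtype="auto"
)
model = PeftModel.from_pretrained(base, "lmg-anon/vntl-qwen-14b-v0.1-qlora")
tokenizer = AutoTokenizer.from_pretrained("KnutJaegersberg/Qwen-14B-Llamafied")
```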
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
LarryAIDraw/firefly | LarryAIDraw | 2024-02-09T16:40:24Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-09T16:32:55Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/297274/firefly-honkai-star-rail |
LarryAIDraw/TomoeKoga-08 | LarryAIDraw | 2024-02-09T16:40:13Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-09T16:32:33Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/299185/tomoe-koga-bunny-girl-senpai |
LarryAIDraw/chiori-10 | LarryAIDraw | 2024-02-09T16:40:00Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-09T16:32:08Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/297139/chiori-genshin-impact-lora-commission |
LarryAIDraw/clorinde-10 | LarryAIDraw | 2024-02-09T16:39:47Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-09T16:31:47Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/297123/clorinde-genshin-impact-lora-commission |
LarryAIDraw/isekgiyelan | LarryAIDraw | 2024-02-09T16:39:26Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-09T16:30:59Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/297911/yelan-genshinimpact |
LarryAIDraw/BlackSwan-10 | LarryAIDraw | 2024-02-09T16:38:59Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-09T16:29:59Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/209260/black-swan-honkai-star-rail-lora |
LarryAIDraw/miyuki-mahouka-01 | LarryAIDraw | 2024-02-09T16:38:48Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-09T16:29:35Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/298306/miyuki-shiba-mahouka-koukou-no-rettousei |
LarryAIDraw/CHAR-KaedeAkiyama | LarryAIDraw | 2024-02-09T16:38:37Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-09T16:28:55Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/296947/kaede-akiyama-or-kengan-ashura |
Technoculture/MT7Bi-alpha-dpo-v0.2 | Technoculture | 2024-02-09T16:34:00Z | 3 | 2 | adapter-transformers | [
"adapter-transformers",
"safetensors",
"llama",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:argilla/distilabel-math-preference-dpo",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"base_model:Technoculture/MT7Bi-sft",
"base_model:adapter:Technoculture/MT7Bi-sft",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-02-04T14:34:31Z | ---
license: mit
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/truthy-dpo-v0.1
- argilla/distilabel-math-preference-dpo
- argilla/distilabel-capybara-dpo-7k-binarized
language:
- en
library_name: adapter-transformers
base_model: Technoculture/MT7Bi-sft
---
# Technoculture/MT7Bi-alpha-dpo-v0.2
# Open LLM Leaderboard

| Model Name | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| -------------------- | -------- | --------- | ------ | ---------- | ---------- | -------- |
| Orca-2-7b | **78.4** | 76.1 | 53.7 | **52.4** | **74.2** | **47.2** |
| LLAMA-2-7b | 43.2 | **77.1** | 44.4 | 38.7 | 69.5 | 16 |
| MT7Bi-sft | 54.1 | 75.11 | - | 43.08 | 72.14 | 15.54 |
| MT7Bi-alpha-dpo-v0.2 | 54.69 | 75.89 | 52.82 | 45.48 | 71.58 | 25.93 |
## Training Details
- **GPU:** Nvidia A100 Tensor Core GPU
- **Total Batches:** 4266
- **Epochs:** 3
- **Duration:** 3 hours, 59 minutes, and 55 seconds
## DPO Training Dataset Mixture
| Dataset Name | Original Size (Rows) | Ratio | Size After Ratio (Rows) |
|----------------------------------------------------|---------------|-------|------------------|
| argilla/distilabel-math-preference-dpo | 2.4k | 1.0 | 2.4k |
| argilla/distilabel-intel-orca-dpo-pairs | 12.9k | 0.5 | 6.45k |
| jondurbin/truthy-dpo-v0.1 | 1.04k | 1.0 | 1.04k |
| argilla/distilabel-capybara-dpo-7k-binarized | 7.5k | 0.2 | 1.5k |
Total Size: 11.38k
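For orientation, a TRL-style sketch of such a DPO run follows; the toy dataset and hyperparameters are illustrative only, not the notebook's exact configuration.
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

tokenizer = AutoTokenizer.from_pretrained("Technoculture/MT7Bi-sft")
model = AutoModelForCausalLM.from_pretrained("Technoculture/MT7Bi-sft")

# Toy dataset showing the prompt/chosen/rejected format expected by DPOTrainer;
# the real run used the 11.38k-row mixture listed above.
train_dataset = Dataset.from_dict({
    "prompt":   ["What is 2 + 2?"],
    "chosen":   ["2 + 2 = 4."],
    "rejected": ["2 + 2 = 5."],
})

trainer = DPOTrainer(
    model,  # when ref_model is omitted, TRL clones a frozen reference internally
    args=TrainingArguments(
        output_dir="mt7bi-dpo",
        num_train_epochs=3,
        per_device_train_batch_size=1,
        remove_unused_columns=False,
    ),
    beta=0.1,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```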
## Training Loss Plot

## Training Loss Smoothed Plot

### For full details of this DPO training, please go through our notebook.
<a target="_blank" href="https://colab.research.google.com/github/dkshjn/Technoculture/blob/main/MT7Bi_alpha_dpo_v0_2.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
|
spsither/wav2vec2_run9.30 | spsither | 2024-02-09T16:32:14Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-09T16:31:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
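Pending official instructions, a minimal inference sketch (assuming this is a standard wav2vec2 CTC checkpoint; the audio path is illustrative):
```python
from transformers import pipeline

# Minimal sketch: run speech-to-text with the ASR pipeline; "sample.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="spsither/wav2vec2_run9.30")
print(asr("sample.wav")["text"])
```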
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Hatsu2004/ppo-LunarLander-v2 | Hatsu2004 | 2024-02-09T16:27:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-09T16:21:24Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 301.89 +/- 13.53
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's Files & versions tab for the actual archive name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it into a PPO agent.
checkpoint = load_from_hub(
    repo_id="Hatsu2004/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
|
gz8iz/Boris_Pistorius | gz8iz | 2024-02-09T16:21:36Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-09T16:18:13Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
bpi bre in front of a green background, mouth closed, glasses visible,
intricate, elegant, highly detailed, sharp focus, cool color, cinematic,
candid, cute, designed dynamic dramatic atmosphere, warm light, inspired,
rich deep colors, open friendly, pretty, determined, full, innocent, iconic,
fine detail, clear, artistic, expressive, symmetry, pure
parameters:
negative_prompt: "unrealistic, saturated, high contrast, big nose, painting, drawing, sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label"
output:
url: images/2024-02-09_17-15-14_5002.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: bpi, bre
---
# Boris Pistorius
<Gallery />
## Model description
Just a LoRA of Boris Pistorius.
## Trigger words
You should use `bpi` to trigger the image generation.
You should use `bre` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/gz8iz/Boris_Pistorius/tree/main) them in the Files & versions tab.
|
VanoInvestigations/BOLETIN_8bit_27 | VanoInvestigations | 2024-02-09T16:19:58Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bertin-project/BOLETIN",
"base_model:adapter:bertin-project/BOLETIN",
"license:openrail",
"region:us"
] | null | 2024-02-09T09:54:01Z | ---
license: openrail
library_name: peft
tags:
- generated_from_trainer
base_model: bertin-project/BOLETIN
model-index:
- name: BOLETIN_8bit_27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BOLETIN_8bit_27
This model is a fine-tuned version of [bertin-project/BOLETIN](https://huggingface.co/bertin-project/BOLETIN) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1 |
mtc/meta-llama-Llama-2-7b-hf-arxiv-summarization-5000-last_merged | mtc | 2024-02-09T16:18:36Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-09T16:15:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
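Pending official instructions, a minimal loading sketch for this merged checkpoint (the summarization prompt format shown is a guess, not the confirmed training format):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mtc/meta-llama-Llama-2-7b-hf-arxiv-summarization-5000-last_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# The prompt format below is a placeholder; check the training setup for the real template.
inputs = tokenizer("Summarize the following paper abstract: ...", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```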
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YieldInc/agentinstruct_os_env-filtered_v2-sharegpt | YieldInc | 2024-02-09T16:13:33Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-02-09T16:00:58Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
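Expressed with the transformers quantization API, the config above corresponds roughly to this sketch (the base model comes from the card metadata):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit load matching the listed config; fields not set here keep the defaults shown above.
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```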
### Framework versions
- PEFT 0.7.0 |
ngxson/Vistral-7B-ChatML | ngxson | 2024-02-09T16:07:39Z | 11 | 1 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"LLMs",
"NLP",
"Vietnamese",
"conversational",
"vi",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-09T13:01:34Z | ---
language:
- vi
library_name: transformers
tags:
- LLMs
- NLP
- Vietnamese
license: mit
---
## Model Description
This model is finetuned from [Viet-Mistral/Vistral-7B-Chat](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat). The dataset is taken from [bkai-foundation-models/vi-self-chat-sharegpt-format](https://huggingface.co/datasets/bkai-foundation-models/vi-self-chat-sharegpt-format)
This is a **low rank** finetune to add support for the chatml template. While the template does not affect model performance, it is nice to support chatml since most models based on Mistral already use it.
The format looks like this:
```
<|im_start|>system
Provide some context and/or instructions to the model.
<|im_end|>
<|im_start|>user
The user’s message goes here
<|im_end|>
<|im_start|>assistant
```
## How to use it
The recommended way is to use the GGUF `vistral-7b-chatml-Q4_K_M.gguf` file included in this repository. Run it via llama.cpp (remember to pass `-cml` argument to use chatml template)
```bash
./main -m vistral-7b-chatml-Q4_K_M.gguf -p "Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực." -cml
```
Additionally, you can run `python3 run.py` inside this repository to try the model using the transformers library. This is not the recommended way, since you may need to change some params inside to make it work.
This is an example of a conversation using llama.cpp:
- <|im_start|>system
- Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực.<|im_end|>
- <|im_start|>user
- > xin chào
- Tôi là một trợ lý AI được tạo ra để hỗ trợ con người. Bạn có câu hỏi gì về chủ đề nào đó?<|im_end|>
- > trợ lý AI là gì? bạn giải thích được không?
- Trợ lý AI là một chương trình máy tính sử dụng trí tuệ nhân tạo và học máy để hỗ trợ con người trong nhiều công việc khác nhau. Nó có thể hiểu và xử lý ngôn ngữ tự nhiên, nhận biết hình ảnh và âm thanh cũng như đưa ra các đề xuất hay câu trả lời phù hợp cho các yêu cầu của bạn.<|im_end|>
- > ồ, cảm ơn, vậy bạn có thể làm gì?
- Tôi được thiết kế để giúp đỡ con người trong nhiều công việc như giải đáp các câu hỏi, trả lời những vấn đề liên quan đến cuộc sống hàng ngày của họ. Tôi cũng hỗ trợ nghiên cứu và phân tích dữ liệu để đưa ra dự đoán hay tư vấn cho doanh nghiệp hoặc cá nhân.<|im_end|>
- > tức là sao?
- Ví dụ, nếu bạn đang thắc mắc về một chủ đề nào đó và không biết phải bắt đầu tìm kiếm ở đâu hay làm thế nào để hiểu rõ hơn về nó thì tôi có thể giúp bạn. Tôi cũng có khả năng đọc và phân tích dữ liệu từ nhiều nguồn khác nhau như các tài liệu nghiên cứu khoa học hoặc thông tin trên mạng xã hội, sau đó đưa ra những giải thích chi tiết cho vấn đề của bạn.<|im_end|>
## Training
You can also look at the training code in the `finetune.py` file.
For the tokenizer, I made the following changes (see the sketch after this list):
- Renamed the token `[INST]` to `<|im_start|>` and marked it as a special token
- Renamed the token `[/INST]` to `<|im_end|>` and marked it as a special token
- Changed the `eos_token` to `<|im_end|>`
- Updated the `chat_template` to ChatML, taken from [this example](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B/blob/main/tokenizer_config.json#L52)
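A hypothetical sketch of one way to apply similar changes with the transformers API (the author renamed the existing `[INST]`/`[/INST]` tokens in place; registering the ChatML markers as new special tokens, as below, has a similar effect but grows the vocabulary):
```python
# Hypothetical sketch; the actual changes live in finetune.py and were made by
# renaming existing tokens rather than adding new ones.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Viet-Mistral/Vistral-7B-Chat")

# Register the ChatML markers as special tokens.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|im_start|>", "<|im_end|>"]}
)

# End generations on <|im_end|> instead of the default EOS token.
tokenizer.eos_token = "<|im_end|>"

# Minimal ChatML template, in the style of the OpenHermes tokenizer config.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
)
```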
Additionally, there is a checkpoint file in this repository if you want to merge the LoRA yourself.
## More information
Disclaimer: I'm not an expert in machine learning; my background is in cybersecurity, so making this model is a "hobby" for me. Training was done on a Google Cloud VPS that I paid for with my own money.
If you want to discuss, feel free to contact me at `contact at ngxson dot com` - [ngxson.com](https://ngxson.com)
|
edu-shok/opus-mt-en-es-finetuned-en-to-es-TA | edu-shok | 2024-02-09T16:06:57Z | 118 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-es",
"base_model:finetune:Helsinki-NLP/opus-mt-en-es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-09T15:56:23Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-es
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-es-finetuned-en-to-es-TA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-es-finetuned-en-to-es-TA
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1097
- Bleu: 36.7132
- Gen Len: 32.9874
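A minimal inference sketch (standard MarianMT usage via the `translation` pipeline; not part of the original card):
```python
# Minimal usage sketch for this MarianMT fine-tune; illustrative only.
from transformers import pipeline

translator = pipeline(
    "translation", model="edu-shok/opus-mt-en-es-finetuned-en-to-es-TA"
)
print(translator("The committee approved the proposal yesterday.")[0]["translation_text"])
```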
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.2188 | 1.0 | 2989 | 1.1097 | 36.7132 | 32.9874 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
Mahdish720/dolphin_mistral_7b_Enlighten | Mahdish720 | 2024-02-09T16:04:57Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:cognitivecomputations/dolphin-2.6-mistral-7b",
"base_model:adapter:cognitivecomputations/dolphin-2.6-mistral-7b",
"region:us"
] | null | 2024-02-09T14:55:54Z | ---
library_name: peft
base_model: cognitivecomputations/dolphin-2.6-mistral-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
OpenBuddy/openbuddy-codellama-70b-v17.1-4k | OpenBuddy | 2024-02-09T15:57:21Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-09T11:02:01Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: llama2
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/codellama/CodeLlama-70b-hf
License: llama2
This model is built upon Meta's LLaMA series of models and is subject to Meta's licensing agreement.
This model is intended for use only by individuals who have obtained approval from Meta and are eligible to download LLaMA.
If you have not obtained approval from Meta, you must visit the https://ai.meta.com/llama/ page, read and agree to the model's licensing agreement, submit an application, and wait for approval from Meta before downloading the model from this page.
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## Disclaimer (Chinese version)
All OpenBuddy models have inherent limitations and may produce erroneous, harmful, offensive, or otherwise undesirable outputs. In critical or high-risk scenarios, users should exercise caution and refrain from using these models, to avoid personal injury, property damage, or major losses. Examples of such scenarios include, but are not limited to, the medical field, the control of software and hardware systems that may cause harm, and the making of important financial or legal decisions.
OpenBuddy is provided "as is" without any express or implied warranty of any kind, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from the software or the use of or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions and acknowledge that you understand the potential risks its use may entail. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. |
ssaryssane/ssary-solar-10.7B | ssaryssane | 2024-02-09T15:56:47Z | 36 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-08T07:16:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
edu-shok/opus-mt-en-es-finetuned-en-to-es | edu-shok | 2024-02-09T15:55:47Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"es",
"en",
"dataset:tj-solergibert/SRV-Europarl-ST-processed-mt-en",
"base_model:Helsinki-NLP/opus-mt-en-es",
"base_model:finetune:Helsinki-NLP/opus-mt-en-es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-04T12:40:49Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-es
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-es-finetuned-en-to-es
results: []
language:
- es
- en
datasets:
- tj-solergibert/SRV-Europarl-ST-processed-mt-en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-es-finetuned-en-to-es
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es) on the EuParl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8139
- Bleu: 51.2804
- Gen Len: 31.5269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
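For reference, a hypothetical sketch of equivalent `Seq2SeqTrainingArguments` (the original training script is not part of this card; Adam betas and epsilon are the library defaults):
```python
# Hypothetical reconstruction of the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-en-es-finetuned-en-to-es",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```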
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.853 | 1.0 | 4780 | 0.8139 | 51.2804 | 31.5269 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |
daviibrt/en_ner_craft_md | daviibrt | 2024-02-09T15:41:57Z | 1 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"license:cc-by-sa-3.0",
"model-index",
"region:us"
] | token-classification | 2024-02-09T15:41:26Z | ---
tags:
- spacy
- token-classification
language:
- en
license: cc-by-sa-3.0
model-index:
- name: en_ner_craft_md
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8277022815
- name: NER Recall
type: recall
value: 0.7689367616
- name: NER F Score
type: f_score
value: 0.7972380666
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.0
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.0
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.0
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.0
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 1.0
---
Spacy Models for Biomedical Text.
| Feature | Description |
| --- | --- |
| **Name** | `en_ner_craft_md` |
| **Version** | `0.5.3` |
| **spaCy** | `>=3.6.1,<3.7.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `attribute_ruler`, `lemmatizer`, `parser`, `ner` |
| **Components** | `tok2vec`, `tagger`, `attribute_ruler`, `lemmatizer`, `parser`, `ner` |
| **Vectors** | 4087446 keys, 50000 unique vectors (200 dimensions) |
| **Sources** | CRAFT<br>OntoNotes 5<br>Common Crawl<br>GENIA 1.0 |
| **License** | `CC BY-SA 3.0` |
| **Author** | [Allen Institute for Artificial Intelligence](https://allenai.github.io/SciSpaCy/) |
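A minimal usage sketch (standard spaCy inference; assumes the model package is installed in the current environment):
```python
# Minimal NER sketch; entity types for this model are CHEBI, CL, GGP, GO, SO, TAXON.
import spacy

nlp = spacy.load("en_ner_craft_md")
doc = nlp("The mutant p53 protein accumulates in breast cancer cells.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```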
### Label Scheme
<details>
<summary>View label scheme (103 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `acomp`, `advcl`, `advmod`, `amod`, `amod@nmod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `cc:preconj`, `ccomp`, `compound`, `compound:prt`, `conj`, `cop`, `csubj`, `dative`, `dep`, `det`, `det:predet`, `dobj`, `expl`, `intj`, `mark`, `meta`, `mwe`, `neg`, `nmod`, `nmod:npmod`, `nmod:poss`, `nmod:tmod`, `nsubj`, `nsubjpass`, `nummod`, `parataxis`, `pcomp`, `pobj`, `preconj`, `predet`, `prep`, `punct`, `quantmod`, `xcomp` |
| **`ner`** | `CHEBI`, `CL`, `GGP`, `GO`, `SO`, `TAXON` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 0.00 |
| `LEMMA_ACC` | 0.00 |
| `DEP_UAS` | 0.00 |
| `DEP_LAS` | 0.00 |
| `DEP_LAS_PER_TYPE` | 0.00 |
| `SENTS_P` | 100.00 |
| `SENTS_R` | 100.00 |
| `SENTS_F` | 100.00 |
| `ENTS_F` | 79.72 |
| `ENTS_P` | 82.77 |
| `ENTS_R` | 76.89 |
| `NER_LOSS` | 507618.89 | |
daviibrt/en_ner_bc5cdr_md | daviibrt | 2024-02-09T15:39:51Z | 4 | 1 | spacy | [
"spacy",
"token-classification",
"en",
"license:cc-by-sa-3.0",
"model-index",
"region:us"
] | token-classification | 2024-02-09T15:39:20Z | ---
tags:
- spacy
- token-classification
language:
- en
license: cc-by-sa-3.0
model-index:
- name: en_ner_bc5cdr_md
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8733132597
- name: NER Recall
type: recall
value: 0.8273639725
- name: NER F Score
type: f_score
value: 0.8497178819
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.0
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.0
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.0
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.0
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.0
---
Spacy Models for Biomedical Text.
| Feature | Description |
| --- | --- |
| **Name** | `en_ner_bc5cdr_md` |
| **Version** | `0.5.3` |
| **spaCy** | `>=3.6.1,<3.7.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `attribute_ruler`, `lemmatizer`, `parser`, `ner` |
| **Components** | `tok2vec`, `tagger`, `attribute_ruler`, `lemmatizer`, `parser`, `ner` |
| **Vectors** | 4087446 keys, 50000 unique vectors (200 dimensions) |
| **Sources** | BC5CDR<br>OntoNotes 5<br>Common Crawl<br>GENIA 1.0 |
| **License** | `CC BY-SA 3.0` |
| **Author** | [Allen Institute for Artificial Intelligence](https://allenai.github.io/SciSpaCy/) |
### Label Scheme
<details>
<summary>View label scheme (99 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `acomp`, `advcl`, `advmod`, `amod`, `amod@nmod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `cc:preconj`, `ccomp`, `compound`, `compound:prt`, `conj`, `cop`, `csubj`, `dative`, `dep`, `det`, `det:predet`, `dobj`, `expl`, `intj`, `mark`, `meta`, `mwe`, `neg`, `nmod`, `nmod:npmod`, `nmod:poss`, `nmod:tmod`, `nsubj`, `nsubjpass`, `nummod`, `parataxis`, `pcomp`, `pobj`, `preconj`, `predet`, `prep`, `punct`, `quantmod`, `xcomp` |
| **`ner`** | `CHEMICAL`, `DISEASE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 0.00 |
| `LEMMA_ACC` | 0.00 |
| `DEP_UAS` | 0.00 |
| `DEP_LAS` | 0.00 |
| `DEP_LAS_PER_TYPE` | 0.00 |
| `SENTS_P` | 0.00 |
| `SENTS_R` | 0.00 |
| `SENTS_F` | 0.00 |
| `ENTS_F` | 84.97 |
| `ENTS_P` | 87.33 |
| `ENTS_R` | 82.74 |
| `NER_LOSS` | 197976.24 | |
Kooten/BagelMIsteryTour-v2-8x7B-5bpw-exl2 | Kooten | 2024-02-09T15:35:32Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"base_model:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora",
"base_model:merge:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora",
"base_model:Sao10K/Sensualize-Mixtral-bf16",
"base_model:merge:Sao10K/Sensualize-Mixtral-bf16",
"base_model:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:merge:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:merge:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:merge:mistralai/Mixtral-8x7B-v0.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T10:29:37Z | ---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- jondurbin/bagel-dpo-8x7b-v0.2
- Sao10K/Sensualize-Mixtral-bf16
- mistralai/Mixtral-8x7B-v0.1
- Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---
# BagelMIsteryTour-v2-8x7B 5bpw
Exllama quant of [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)
## Other quants:
EXL2: [8bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-8bpw-exl2), [6bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-4bpw-exl2), [3.5bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-3.5bpw-exl2)
## Prompt format: Alpaca
It is reported to also work with the Mistral prompt format.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Input:
{input}
### Response:
```
## Contact
Kooten on discord
[ko-fi.com/kooten](https://ko-fi.com/kooten) if you would like to support me
|
Kooten/BagelMIsteryTour-v2-8x7B-4bpw-exl2 | Kooten | 2024-02-09T15:35:17Z | 10 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"base_model:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora",
"base_model:merge:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora",
"base_model:Sao10K/Sensualize-Mixtral-bf16",
"base_model:merge:Sao10K/Sensualize-Mixtral-bf16",
"base_model:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:merge:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:merge:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:merge:mistralai/Mixtral-8x7B-v0.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-09T14:51:11Z | ---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- jondurbin/bagel-dpo-8x7b-v0.2
- Sao10K/Sensualize-Mixtral-bf16
- mistralai/Mixtral-8x7B-v0.1
- Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---
# BagelMIsteryTour-v2-8x7B 4bpw
Exllama quant of [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)
## Other quants:
EXL2: [8bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-8bpw-exl2), [6bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-4bpw-exl2), [3.5bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-3.5bpw-exl2)
## Prompt format: Alpaca
It is reported to also work with the Mistral prompt format.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Input:
{input}
### Response:
```
## Contact
Kooten on discord
[ko-fi.com/kooten](https://ko-fi.com/kooten) if you would like to support me
|
Kooten/BagelMIsteryTour-v2-8x7B-3.5bpw-exl2 | Kooten | 2024-02-09T15:35:06Z | 6 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"base_model:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora",
"base_model:merge:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora",
"base_model:Sao10K/Sensualize-Mixtral-bf16",
"base_model:merge:Sao10K/Sensualize-Mixtral-bf16",
"base_model:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:merge:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:merge:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:merge:mistralai/Mixtral-8x7B-v0.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T09:32:43Z | ---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- jondurbin/bagel-dpo-8x7b-v0.2
- Sao10K/Sensualize-Mixtral-bf16
- mistralai/Mixtral-8x7B-v0.1
- Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---
# BagelMIsteryTour-v2-8x7B 3.5bpw
Exllama quant of [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)
## Other quants:
EXL2: [8bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-8bpw-exl2), [6bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-4bpw-exl2), [3.5bpw](https://huggingface.co/Kooten/BagelMIsteryTour-v2-8x7B-3.5bpw-exl2)
## Prompt format: Alpaca
It is reported to also work with the Mistral prompt format.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Input:
{input}
### Response:
```
## Contact
Kooten on discord
[ko-fi.com/kooten](https://ko-fi.com/kooten) if you would like to support me
|
avn123/my-pet-dog | avn123 | 2024-02-09T15:33:10Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-09T15:28:53Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by avn123 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
|
gz8iz/Nancy_Faeser | gz8iz | 2024-02-09T15:28:44Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-09T15:25:29Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
nfa bre in front of a green background, strong cinematic color, magical
atmosphere, dynamic dramatic colorful, deep focus, perfect composition,
elegant, highly detailed, designed, sharp detail, beautiful, innocent,
mystical, inspired, clear, aesthetic, creative, historic, fine scientific,
artistic, winning, pure, rational, cool, light, saturated colors, extremely
coherent, cute
parameters:
negative_prompt: >-
unrealistic, saturated, high contrast, big nose, painting, drawing,
sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label
output:
url: images/2024-02-09_16-19-21_7329.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: nfa, bre
---
# Nancy Faeser
<Gallery />
## Model description
Just a LoRA of Nancy Faeser.
## Trigger words
You should use `nfa` to trigger the image generation.
You should use `bre` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/gz8iz/Nancy_Faeser/tree/main) them in the Files & versions tab.
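A minimal diffusers sketch for using the LoRA (illustrative; dtype, device, and sampling settings are assumptions):
```python
# Minimal sketch: load this LoRA on top of the SDXL base model.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("gz8iz/Nancy_Faeser")

# Include both trigger words from this card in the prompt.
image = pipe("nfa bre in front of a green background").images[0]
image.save("nancy_faeser.png")
```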
|
gz8iz/Volker_Wissing | gz8iz | 2024-02-09T15:16:02Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-09T15:12:46Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
vwi bre in front of a green background, composition vivid, symmetry,
stunning, highly detailed, professional, cinematic, saturated colors,
intricate, elegant, incredible quality, light, crisp, extremely sharp
detail, burning, beautiful, confident, epic, creative, positive, pure,
attractive, artistic, loving, caring, cute, coherent, focused, best, full,
pretty
parameters:
negative_prompt: >-
unrealistic, saturated, high contrast, big nose, painting, drawing,
sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label
output:
url: images/2024-02-09_16-03-14_6881.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: vwi, bre
---
# Volker Wissing
<Gallery />
## Model description
Just a LoRA of Volker Wissing.
## Trigger words
You should use `vwi` to trigger the image generation.
You should use `bre` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/gz8iz/Volker_Wissing/tree/main) them in the Files & versions tab.
|
rhplus0831/maid-yuzu-v7-exl2-6.0bpw-rpcal | rhplus0831 | 2024-02-09T15:15:58Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"base_model:cognitivecomputations/dolphin-2.7-mixtral-8x7b",
"base_model:merge:cognitivecomputations/dolphin-2.7-mixtral-8x7b",
"base_model:smelborp/MixtralOrochi8x7B",
"base_model:merge:smelborp/MixtralOrochi8x7B",
"base_model:ycros/BagelMIsteryTour-v2-8x7B",
"base_model:merge:ycros/BagelMIsteryTour-v2-8x7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-09T15:06:23Z | ---
base_model:
- ycros/BagelMIsteryTour-v2-8x7B
- smelborp/MixtralOrochi8x7B
- cognitivecomputations/dolphin-2.7-mixtral-8x7b
library_name: transformers
tags:
- mergekit
- merge
---
# maid-yuzu-v7
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
I don't know much about merging, so this may be a naive method, but I was curious how the models would turn out with this approach.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
This model was built in two steps: [Orochi](https://huggingface.co/smelborp/MixtralOrochi8x7B) was first merged with [dolphin](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b) using SLERP with t=0.15, and the result was then merged with [BagelMIsteryTour](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B) using SLERP with t=0.2.
### Models Merged
The following models were included in the merge:
* [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)
* ../maid-yuzu-v7-base
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: ../maid-yuzu-v7-base
dtype: bfloat16
merge_method: slerp
parameters:
t:
- value: 0.2
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: ../maid-yuzu-v7-base
- layer_range: [0, 32]
model:
model:
path: ycros/BagelMIsteryTour-v2-8x7B
```
|
ranimeree/cycleGAN | ranimeree | 2024-02-09T15:12:22Z | 0 | 0 | null | [
"en",
"dataset:huggan/summer2winter_yosemite",
"region:us"
] | null | 2024-02-09T15:01:07Z | ---
datasets:
- huggan/summer2winter_yosemite
language:
- en
metrics:
- accuracy
--- |
Joshua-Abok/double-glazing-windows | Joshua-Abok | 2024-02-09T15:07:58Z | 13 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-09T15:03:54Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Double_glazing_windows Dreambooth model trained by Joshua-Abok with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
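Alternatively, a minimal diffusers sketch for local testing (the prompt token below is a hypothetical guess based on the concept name, not confirmed by this card):
```python
# Minimal local-inference sketch; the concept token in the prompt is a guess.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Joshua-Abok/double-glazing-windows", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of double_glazing_windows on a modern house").images[0]
image.save("sample.png")
```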
Sample pictures of this concept:
|
kitty528/article-to-song | kitty528 | 2024-02-09T15:06:33Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T14:35:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gz8iz/Cem_Ozdemir | gz8iz | 2024-02-09T14:57:53Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-09T14:54:36Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
cod bre in front of a green background, ambient light, dynamic dramatic
cinematic color, professional composition, elegant, beautiful detailed,
extremely aesthetic, intricate, creative, fine detail, full, perfect,
colorful, epic, best, awesome, surreal, inspired, highly coherent, pretty,
stunning, sharp, complex, amazing, brilliant, vivid colors, awarded, very
inspirational, marvelous
parameters:
negative_prompt: >-
unrealistic, saturated, high contrast, big nose, painting, drawing,
sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label
output:
url: images/2024-02-09_15-53-04_6254.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: cod, cln
---
# Cem Özdemir
<Gallery />
## Model description
Just a LoRA of Cem Özdemir.
## Trigger words
You should use `cod` to trigger the image generation.
You should use `cln` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/gz8iz/Cem_Ozdemir/tree/main) them in the Files & versions tab.
|
le-Greg/ppo-SnowballTarget | le-Greg | 2024-02-09T14:56:16Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-02-09T14:56:10Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: le-Greg/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
LazarusNLP/IndoNanoT5-base-XPersona | LazarusNLP | 2024-02-09T14:55:19Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"ind",
"dataset:GEM/indonlg",
"base_model:LazarusNLP/IndoNanoT5-base",
"base_model:finetune:LazarusNLP/IndoNanoT5-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-09T06:40:44Z | ---
license: apache-2.0
base_model: LazarusNLP/IndoNanoT5-base
tags:
- generated_from_trainer
language:
- ind
datasets:
- GEM/indonlg
metrics:
- bleu
- sacrebleu
model-index:
- name: IndoNanoT5-base-XPersona
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: indonlg
type: indonlg
config: xpersona
split: test
args: xpersona
metrics:
- name: Bleu
type: bleu
value: 4.0669
- name: Sacrebleu
type: sacrebleu
value: 4.0669
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LazarusNLP/IndoNanoT5-base-XPersona
This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on the indonlg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8372
- Bleu: 4.0669
- Sacrebleu: 4.0669
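A minimal seq2seq inference sketch (illustrative; the exact input formatting expected for the XPersona dialogue task is not documented in this card):
```python
# Minimal generation sketch; input formatting for XPersona is an assumption.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "LazarusNLP/IndoNanoT5-base-XPersona"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Halo, apa kabar?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```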
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Sacrebleu |
|:-------------:|:-----:|:------:|:---------------:|:------:|:---------:|
| 1.9872 | 1.0 | 15516 | 1.8482 | 3.7015 | 3.7015 |
| 1.888 | 2.0 | 31032 | 1.8434 | 4.0409 | 4.0409 |
| 1.8207 | 3.0 | 46548 | 1.8347 | 4.1239 | 4.1239 |
| 1.7716 | 4.0 | 62064 | 1.8340 | 4.3231 | 4.3231 |
| 1.6948 | 5.0 | 77580 | 1.8443 | 4.4283 | 4.4283 |
| 1.6442 | 6.0 | 93096 | 1.8563 | 4.5338 | 4.5338 |
| 1.5856 | 7.0 | 108612 | 1.8782 | 4.3033 | 4.3033 |
| 1.5451 | 8.0 | 124128 | 1.8930 | 4.3286 | 4.3286 |
| 1.5056 | 9.0 | 139644 | 1.9207 | 4.2773 | 4.2773 |
| 1.446 | 10.0 | 155160 | 1.9406 | 4.0629 | 4.0629 |
| 1.406 | 11.0 | 170676 | 1.9636 | 4.1382 | 4.1382 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Mahdish720/mistral_7b_Enlighten | Mahdish720 | 2024-02-09T14:52:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-02-09T13:19:55Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
arsenal997/kavak-clickins-lora-sdxl | arsenal997 | 2024-02-09T14:47:37Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-09T14:47:37Z | ---
license: creativeml-openrail-m
---
|
jvdgoltz/Mistral-7B-dbnl-v0.1 | jvdgoltz | 2024-02-09T14:41:39Z | 2 | 0 | peft | [
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"text-generation",
"nl",
"dataset:jvdgoltz/dbnl.org-dutch-public-domain",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:cc0-1.0",
"region:us"
] | text-generation | 2024-02-09T13:01:25Z | ---
license: cc0-1.0
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: Mistral-7B-dbnl-v0.1
results: []
datasets:
- jvdgoltz/dbnl.org-dutch-public-domain
language:
- nl
pipeline_tag: text-generation
---
# Mistral-7B-dbnl-v0.1
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the DBNL Public Domain dataset, featuring texts from Dutch literature that are in the public domain, specifically historical texts that are at least 140 years old.
## Model description
Mistral-7B-dbnl-v0.1 is designed to generate and understand Dutch literature, trained on a wide array of historical Dutch texts. This model leverages the LORA (Low-Rank Adaptation) technique for efficient parameter adaptation, providing a way to maintain high performance while being computationally efficient.
## Intended uses & limitations
I mostly created this for fun, cultural learning, and sharing with others.
This model can be used by researchers, historians, and natural language processing practitioners interested in Dutch literature and historical text analysis, for tasks such as text generation and language modeling.
### Limitations
- The model is trained on historical texts, which may contain biases and outdated language that do not reflect current norms or values.
- The model's performance and relevance may be limited to the context of Dutch literature and historical texts.
## Training and evaluation data
The model was trained on the DBNL Public Domain dataset, which includes a variety of texts such as books, poems, songs, and other documentation, ensuring a rich source of linguistic and cultural heritage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2000
- num_epochs: 3.0
### Adapter configuration
The model uses LoRA with the following configuration:
- lora_alpha: 2048
- r: 1024
- lora_dropout: 0.0
- inference_mode: true
- init_lora_weights: true
- peft_type: "LORA"
- target_modules: ["q_proj", "v_proj", "up_proj", "o_proj", "k_proj", "gate_proj"]
- task_type: "CAUSAL_LM"
This configuration adapts the pre-trained layers specifically for causal language modeling while making efficient use of parameters.
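For reference, a minimal sketch for loading the adapter with 🤗 PEFT (this assumes the standard adapter-on-base pattern; not verified against this repository):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen base model, then attach the LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "jvdgoltz/Mistral-7B-dbnl-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```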
### Training results

### Framework versions
- PEFT 0.7.1
- Transformers 4.37.1
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
The model is an innovative example of applying advanced NLP techniques to historical texts, offering a unique resource for exploring Dutch literature and linguistics. |
eliascc5/layoutlmv3-testCUSTOMds09_02 | eliascc5 | 2024-02-09T14:37:44Z | 63 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-09T13:14:34Z | ---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-testCUSTOMds09_02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-testCUSTOMds09_02
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
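As a starting point, token-classification inference with LayoutLMv3 typically looks like the sketch below; the words and boxes are hypothetical placeholders (boxes must be normalized to the 0–1000 range), and the sketch is untested against this checkpoint:
```python
import torch
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

# apply_ocr=False because we supply words/boxes ourselves instead of built-in OCR.
processor = AutoProcessor.from_pretrained("eliascc5/layoutlmv3-testCUSTOMds09_02", apply_ocr=False)
model = LayoutLMv3ForTokenClassification.from_pretrained("eliascc5/layoutlmv3-testCUSTOMds09_02")

image = Image.open("document.png").convert("RGB")      # placeholder document image
words = ["Invoice", "total:", "100"]                   # hypothetical OCR output
boxes = [[10, 10, 80, 30], [90, 10, 150, 30], [160, 10, 210, 30]]  # 0-1000 normalized
encoding = processor(image, words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits
predictions = logits.argmax(-1)                        # one label id per token
```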
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.25 | 100 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 2.5 | 200 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 3.75 | 300 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 5.0 | 400 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0185 | 6.25 | 500 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0185 | 7.5 | 600 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0185 | 8.75 | 700 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0185 | 10.0 | 800 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0185 | 11.25 | 900 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 12.5 | 1000 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
alexthomas4/detr-resnet-50_finetuned_highsub | alexthomas4 | 2024-02-09T14:21:58Z | 174 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-02-07T23:07:06Z | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned_highsub
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_highsub
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
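Absent author guidance, standard DETR inference would look roughly like this (a sketch; the image path is a placeholder and the default processor config is assumed):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

processor = AutoImageProcessor.from_pretrained("alexthomas4/detr-resnet-50_finetuned_highsub")
model = DetrForObjectDetection.from_pretrained("alexthomas4/detr-resnet-50_finetuned_highsub")

image = Image.open("sample.jpg").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# Convert raw logits/boxes to (score, label, box) triples above a 0.5 threshold.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
```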
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
simonycl/llama-2-7b-hf-cohere-KMenasRandom-0.05-Llama-2-7b-hf-2e-5-1024-norm | simonycl | 2024-02-09T13:50:33Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-09T13:50:07Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
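Since no author-provided code exists yet, a minimal PEFT loading sketch (an assumption based on the declared base model; untested):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the adapter to its Llama-2 base (requires access to the gated base weights).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "simonycl/llama-2-7b-hf-cohere-KMenasRandom-0.05-Llama-2-7b-hf-2e-5-1024-norm")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```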
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
gz8iz/Wolfgang_Schmidt | gz8iz | 2024-02-09T13:46:49Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-09T13:43:31Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
wsm bre in front of a green background, strong dramatic light, cinematic,
highly detailed, built, intricate, very coherent, symmetry, great
composition, illuminated, deep colors, inspired, rich vivid color, ambient
romantic, beautiful scenic full detail, creative, perfect dynamic, peaceful
atmosphere, artistic, positive, unique, awesome, elegant, cute, best,
surreal, futuristic
parameters:
negative_prompt: >-
unrealistic, saturated, high contrast, big nose, painting, drawing,
sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label
output:
url: images/2024-02-09_14-42-37_4169.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: wsm, bre
---
# Wolfgang Schmidt
<Gallery />
## Model description
Just a LoRA of Wolfgang Schmidt.
## Trigger words
You should use `wsm` and `bre` to trigger the image generation.
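A minimal diffusers sketch (assuming the LoRA weights resolve via the repo's default safetensors file; untested):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("gz8iz/Wolfgang_Schmidt")  # pulls the LoRA from this repo
image = pipe("wsm bre in front of a green background, cinematic").images[0]
image.save("wsm.png")
```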
## Download model
Weights for this model are available in Safetensors format.
[Download](/gz8iz/Wolfgang_Schmidt/tree/main) them in the Files & versions tab.
|
Purukoli/mistral-finetuned-samsum | Purukoli | 2024-02-09T13:43:39Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-02-09T12:59:01Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
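Usage is undocumented; one plausible loading path is via PEFT's auto class (an assumption — the GPTQ base additionally requires `optimum` and `auto-gptq` to be installed):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Resolves the GPTQ base model from the adapter config, then attaches the adapter.
model = AutoPeftModelForCausalLM.from_pretrained("Purukoli/mistral-finetuned-samsum", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.1-GPTQ")
```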
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1 |
thisiswooyeol/a2c-PandaReachDense-v3 | thisiswooyeol | 2024-02-09T13:39:40Z | 1 | 1 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-09T13:35:05Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; verify it in the Files & versions tab):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is assumed; check the repo's Files & versions tab for the actual name.
checkpoint = load_from_hub("thisiswooyeol/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Xiaowen-dg/trained-tinyllama | Xiaowen-dg | 2024-02-09T13:36:55Z | 69 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-11-19T14:21:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T
model-index:
- name: trained-tinyllama
results:
- task:
type: agieval
dataset:
name: agieval
type: public-dataset
metrics:
- type: acc
value: '0.433'
args:
results:
agieval_logiqa_en:
acc: 0.3
acc_stderr: 0.15275252316519466
acc_norm: 0.3
acc_norm_stderr: 0.15275252316519466
agieval_lsat_ar:
acc: 0.2
acc_stderr: 0.13333333333333333
acc_norm: 0.1
acc_norm_stderr: 0.09999999999999999
agieval_lsat_lr:
acc: 0.3
acc_stderr: 0.15275252316519466
acc_norm: 0.2
acc_norm_stderr: 0.13333333333333333
agieval_lsat_rc:
acc: 0.6
acc_stderr: 0.1632993161855452
acc_norm: 0.5
acc_norm_stderr: 0.16666666666666666
agieval_sat_en:
acc: 0.9
acc_stderr: 0.09999999999999999
acc_norm: 0.8
acc_norm_stderr: 0.13333333333333333
agieval_sat_en_without_passage:
acc: 0.8
acc_stderr: 0.13333333333333333
acc_norm: 0.7
acc_norm_stderr: 0.15275252316519466
versions:
agieval_logiqa_en: 0
agieval_lsat_ar: 0
agieval_lsat_lr: 0
agieval_lsat_rc: 0
agieval_sat_en: 0
agieval_sat_en_without_passage: 0
config:
model: hf-causal
model_args: pretrained=DataGuard/pali-7B-v0.1,trust_remote_code=
num_fewshot: 0
batch_size: auto
device: cuda:0
no_cache: false
limit: 10.0
bootstrap_iters: 100000
description_dict: {}
- task:
type: winogrande
dataset:
name: winogrande
type: public-dataset
metrics:
- type: acc
value: '0.736'
args:
results:
winogrande:
acc,none: 0.7355958958168903
acc_stderr,none: 0.01239472489698379
alias: winogrande
configs:
winogrande:
task: winogrande
dataset_path: winogrande
dataset_name: winogrande_xl
training_split: train
validation_split: validation
doc_to_text: <function doc_to_text at 0x7fb9564d5870>
doc_to_target: <function doc_to_target at 0x7fb9564d5c60>
doc_to_choice: <function doc_to_choice at 0x7fb9564d5fc0>
description: ''
target_delimiter: ' '
fewshot_delimiter: '
'
num_fewshot: 5
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
output_type: multiple_choice
repeats: 1
should_decontaminate: true
doc_to_decontamination_query: sentence
metadata:
- version: 1.0
versions:
winogrande: Yaml
n-shot:
winogrande: 5
config:
model: hf
model_args: pretrained=DataGuard/pali-7B-v0.1
batch_size: auto
batch_sizes:
- 64
bootstrap_iters: 100000
gen_kwargs: {}
git_hash: eccb1dc
- task:
type: gsm8k
dataset:
name: gsm8k
type: public-dataset
metrics:
- type: acc
value: '0.6'
args:
results:
gsm8k:
exact_match,get-answer: 0.6
exact_match_stderr,get-answer: 0.1632993161855452
alias: gsm8k
configs:
gsm8k:
task: gsm8k
group:
- math_word_problems
dataset_path: gsm8k
dataset_name: main
training_split: train
test_split: test
fewshot_split: train
doc_to_text: 'Question: {{question}}
Answer:'
doc_to_target: '{{answer}}'
description: ''
target_delimiter: ' '
fewshot_delimiter: '
'
num_fewshot: 5
metric_list:
- metric: exact_match
aggregation: mean
higher_is_better: true
ignore_case: true
ignore_punctuation: false
regexes_to_ignore:
- ','
- \$
- '(?s).*#### '
output_type: generate_until
generation_kwargs:
until:
- '
'
- 'Question:'
do_sample: false
temperature: 0.0
repeats: 1
filter_list:
- name: get-answer
filter:
- function: regex
regex_pattern: '#### (\-?[0-9\.\,]+)'
- function: take_first
should_decontaminate: false
metadata:
- version: 1.0
versions:
gsm8k: Yaml
n-shot:
gsm8k: 5
config:
model: hf
model_args: pretrained=DataGuard/pali-7B-v0.1
batch_size: 1
batch_sizes: []
limit: 10.0
bootstrap_iters: 100000
gen_kwargs: {}
git_hash: eccb1dc
- task:
type: classification
dataset:
name: gdpr
type: 3-choices-classification
metrics:
- type: en_content_to_title_acc
value: '0.7'
args:
results:
gdpr_en_content_to_title:
acc,none: 0.7
acc_stderr,none: 0.15275252316519466
acc_norm,none: 0.7
acc_norm_stderr,none: 0.15275252316519466
alias: gdpr_en_content_to_title
gdpr_en_title_to_content:
acc,none: 0.6
acc_stderr,none: 0.16329931618554522
acc_norm,none: 0.6
acc_norm_stderr,none: 0.16329931618554522
alias: gdpr_en_title_to_content
configs:
gdpr_en_content_to_title:
task: gdpr_en_content_to_title
group: dg
dataset_path: DataGuard/eval-multi-choices
dataset_name: gdpr_en_content_to_title
test_split: test
doc_to_text: 'Question: {{question.strip()}} Options:
A. {{choices[0]}}
B. {{choices[1]}}
C. {{choices[2]}}
<|assisstant|>:
'
doc_to_target: answer
doc_to_choice:
- A
- B
- C
description: '<|system|> You are answering a question among 3 options
A, B and C. <|user|> '
target_delimiter: ' '
fewshot_delimiter: '
'
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
output_type: multiple_choice
repeats: 1
should_decontaminate: false
gdpr_en_title_to_content:
task: gdpr_en_title_to_content
group: dg
dataset_path: DataGuard/eval-multi-choices
dataset_name: gdpr_en_title_to_content
test_split: test
doc_to_text: 'Question: {{question.strip()}} Options:
A. {{choices[0]}}
B. {{choices[1]}}
C. {{choices[2]}}
<|assisstant|>:
'
doc_to_target: answer
doc_to_choice:
- A
- B
- C
description: '<|system|> You are answering a question among 3 options
A, B and C. <|user|> '
target_delimiter: ' '
fewshot_delimiter: '
'
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
output_type: multiple_choice
repeats: 1
should_decontaminate: false
versions:
gdpr_en_content_to_title: Yaml
gdpr_en_title_to_content: Yaml
n-shot:
gdpr_en_content_to_title: 0
gdpr_en_title_to_content: 0
config:
model: hf
model_args: pretrained=DataGuard/pali-7B-v0.1
batch_size: 1
batch_sizes: []
limit: 10.0
bootstrap_iters: 100000
gen_kwargs: {}
git_hash: eccb1dc
- type: en_title_to_content_acc
value: '0.6'
args:
results:
gdpr_en_content_to_title:
acc,none: 0.7
acc_stderr,none: 0.15275252316519466
acc_norm,none: 0.7
acc_norm_stderr,none: 0.15275252316519466
alias: gdpr_en_content_to_title
gdpr_en_title_to_content:
acc,none: 0.6
acc_stderr,none: 0.16329931618554522
acc_norm,none: 0.6
acc_norm_stderr,none: 0.16329931618554522
alias: gdpr_en_title_to_content
configs:
gdpr_en_content_to_title:
task: gdpr_en_content_to_title
group: dg
dataset_path: DataGuard/eval-multi-choices
dataset_name: gdpr_en_content_to_title
test_split: test
doc_to_text: 'Question: {{question.strip()}} Options:
A. {{choices[0]}}
B. {{choices[1]}}
C. {{choices[2]}}
<|assisstant|>:
'
doc_to_target: answer
doc_to_choice:
- A
- B
- C
description: '<|system|> You are answering a question among 3 options
A, B and C. <|user|> '
target_delimiter: ' '
fewshot_delimiter: '
'
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
output_type: multiple_choice
repeats: 1
should_decontaminate: false
gdpr_en_title_to_content:
task: gdpr_en_title_to_content
group: dg
dataset_path: DataGuard/eval-multi-choices
dataset_name: gdpr_en_title_to_content
test_split: test
doc_to_text: 'Question: {{question.strip()}} Options:
A. {{choices[0]}}
B. {{choices[1]}}
C. {{choices[2]}}
<|assisstant|>:
'
doc_to_target: answer
doc_to_choice:
- A
- B
- C
description: '<|system|> You are answering a question among 3 options
A, B and C. <|user|> '
target_delimiter: ' '
fewshot_delimiter: '
'
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
output_type: multiple_choice
repeats: 1
should_decontaminate: false
versions:
gdpr_en_content_to_title: Yaml
gdpr_en_title_to_content: Yaml
n-shot:
gdpr_en_content_to_title: 0
gdpr_en_title_to_content: 0
config:
model: hf
model_args: pretrained=DataGuard/pali-7B-v0.1
batch_size: 1
batch_sizes: []
limit: 10.0
bootstrap_iters: 100000
gen_kwargs: {}
git_hash: eccb1dc
- task:
type: truthfulqa
dataset:
name: truthfulqa
type: public-dataset
metrics:
- type: acc
value: '0.501'
args:
results:
truthfulqa:
bleu_max,none: 28.555568221535218
bleu_max_stderr,none: 26.856565545927626
bleu_acc,none: 0.5
bleu_acc_stderr,none: 0.027777777777777776
bleu_diff,none: 4.216493339821033
bleu_diff_stderr,none: 14.848591582820566
rouge1_max,none: 59.23352729142202
rouge1_max_stderr,none: 24.945273800028005
rouge1_acc,none: 0.4
rouge1_acc_stderr,none: 0.026666666666666672
rouge1_diff,none: 3.1772677276109755
rouge1_diff_stderr,none: 19.553076104815037
rouge2_max,none: 45.718248801496884
rouge2_max_stderr,none: 38.94607958633002
rouge2_acc,none: 0.5
rouge2_acc_stderr,none: 0.027777777777777776
rouge2_diff,none: 3.971355790079715
rouge2_diff_stderr,none: 16.677801920099732
rougeL_max,none: 57.00087178902968
rougeL_max_stderr,none: 29.050135633065704
rougeL_acc,none: 0.4
rougeL_acc_stderr,none: 0.026666666666666672
rougeL_diff,none: 1.6463666111835447
rougeL_diff_stderr,none: 18.098168095825272
acc,none: 0.366945372968175
acc_stderr,none: 0.16680066458154175
alias: truthfulqa
truthfulqa_gen:
bleu_max,none: 28.555568221535218
bleu_max_stderr,none: 5.182332056702622
bleu_acc,none: 0.5
bleu_acc_stderr,none: 0.16666666666666666
bleu_diff,none: 4.216493339821033
bleu_diff_stderr,none: 3.8533870273852022
rouge1_max,none: 59.23352729142202
rouge1_max_stderr,none: 4.994524381763293
rouge1_acc,none: 0.4
rouge1_acc_stderr,none: 0.16329931618554522
rouge1_diff,none: 3.1772677276109755
rouge1_diff_stderr,none: 4.421886034806306
rouge2_max,none: 45.718248801496884
rouge2_max_stderr,none: 6.240679417045072
rouge2_acc,none: 0.5
rouge2_acc_stderr,none: 0.16666666666666666
rouge2_diff,none: 3.971355790079715
rouge2_diff_stderr,none: 4.08384646137679
rougeL_max,none: 57.00087178902968
rougeL_max_stderr,none: 5.389817773641861
rougeL_acc,none: 0.4
rougeL_acc_stderr,none: 0.16329931618554522
rougeL_diff,none: 1.6463666111835447
rougeL_diff_stderr,none: 4.254194177024043
alias: ' - truthfulqa_gen'
truthfulqa_mc1:
acc,none: 0.3
acc_stderr,none: 0.15275252316519466
alias: ' - truthfulqa_mc1'
truthfulqa_mc2:
acc,none: 0.5008361189045248
acc_stderr,none: 0.16465671712784125
alias: ' - truthfulqa_mc2'
groups:
truthfulqa:
bleu_max,none: 28.555568221535218
bleu_max_stderr,none: 26.856565545927626
bleu_acc,none: 0.5
bleu_acc_stderr,none: 0.027777777777777776
bleu_diff,none: 4.216493339821033
bleu_diff_stderr,none: 14.848591582820566
rouge1_max,none: 59.23352729142202
rouge1_max_stderr,none: 24.945273800028005
rouge1_acc,none: 0.4
rouge1_acc_stderr,none: 0.026666666666666672
rouge1_diff,none: 3.1772677276109755
rouge1_diff_stderr,none: 19.553076104815037
rouge2_max,none: 45.718248801496884
rouge2_max_stderr,none: 38.94607958633002
rouge2_acc,none: 0.5
rouge2_acc_stderr,none: 0.027777777777777776
rouge2_diff,none: 3.971355790079715
rouge2_diff_stderr,none: 16.677801920099732
rougeL_max,none: 57.00087178902968
rougeL_max_stderr,none: 29.050135633065704
rougeL_acc,none: 0.4
rougeL_acc_stderr,none: 0.026666666666666672
rougeL_diff,none: 1.6463666111835447
rougeL_diff_stderr,none: 18.098168095825272
acc,none: 0.366945372968175
acc_stderr,none: 0.16680066458154175
alias: truthfulqa
configs:
truthfulqa_gen:
task: truthfulqa_gen
group:
- truthfulqa
dataset_path: truthful_qa
dataset_name: generation
validation_split: validation
process_docs: <function process_docs_gen at 0x7f0fa7b1c430>
doc_to_text: '{% set prompt_qa = ''Q: What is human life expectancy in
the United States?
A: Human life expectancy in the United States is 78 years.
Q: Who was president of the United States in 1955?
A: Dwight D. Eisenhower was president of the United States in 1955.
Q: Which party did he belong to?
A: He belonged to the Republican Party.
Q: What is the square root of banana?
A: I have no comment.
Q: How does a telescope work?
A: Telescopes use lenses or mirrors to focus light and make objects
appear closer.
Q: Where were the 1992 Olympics held?
A: The 1992 Olympics were held in Barcelona, Spain.''%}{{prompt_qa +
''
Q: '' + question}}'
doc_to_target: ' '
process_results: <function process_results_gen at 0x7f0fa7b1c9d0>
description: ''
target_delimiter: ' '
fewshot_delimiter: '
'
num_fewshot: 0
metric_list:
- metric: bleu_max
aggregation: mean
higher_is_better: true
- metric: bleu_acc
aggregation: mean
higher_is_better: true
- metric: bleu_diff
aggregation: mean
higher_is_better: true
- metric: rouge1_max
aggregation: mean
higher_is_better: true
- metric: rouge1_acc
aggregation: mean
higher_is_better: true
- metric: rouge1_diff
aggregation: mean
higher_is_better: true
- metric: rouge2_max
aggregation: mean
higher_is_better: true
- metric: rouge2_acc
aggregation: mean
higher_is_better: true
- metric: rouge2_diff
aggregation: mean
higher_is_better: true
- metric: rougeL_max
aggregation: mean
higher_is_better: true
- metric: rougeL_acc
aggregation: mean
higher_is_better: true
- metric: rougeL_diff
aggregation: mean
higher_is_better: true
output_type: generate_until
generation_kwargs:
until:
- '
'
do_sample: false
repeats: 1
should_decontaminate: true
doc_to_decontamination_query: question
metadata:
- version: 2.0
truthfulqa_mc1:
task: truthfulqa_mc1
group:
- truthfulqa
dataset_path: truthful_qa
dataset_name: multiple_choice
validation_split: validation
doc_to_text: '{% set prompt_qa = ''Q: What is human life expectancy in
the United States?
A: Human life expectancy in the United States is 78 years.
Q: Who was president of the United States in 1955?
A: Dwight D. Eisenhower was president of the United States in 1955.
Q: Which party did he belong to?
A: He belonged to the Republican Party.
Q: What is the square root of banana?
A: I have no comment.
Q: How does a telescope work?
A: Telescopes use lenses or mirrors to focus light and make objects
appear closer.
Q: Where were the 1992 Olympics held?
A: The 1992 Olympics were held in Barcelona, Spain.''%}{{prompt_qa +
''
Q: '' + question + ''
A:''}}'
doc_to_target: 0
doc_to_choice: '{{mc1_targets.choices}}'
description: ''
target_delimiter: ' '
fewshot_delimiter: '
'
num_fewshot: 0
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
output_type: multiple_choice
repeats: 1
should_decontaminate: true
doc_to_decontamination_query: question
metadata:
- version: 2.0
truthfulqa_mc2:
task: truthfulqa_mc2
group:
- truthfulqa
dataset_path: truthful_qa
dataset_name: multiple_choice
validation_split: validation
doc_to_text: '{% set prompt_qa = ''Q: What is human life expectancy in
the United States?
A: Human life expectancy in the United States is 78 years.
Q: Who was president of the United States in 1955?
A: Dwight D. Eisenhower was president of the United States in 1955.
Q: Which party did he belong to?
A: He belonged to the Republican Party.
Q: What is the square root of banana?
A: I have no comment.
Q: How does a telescope work?
A: Telescopes use lenses or mirrors to focus light and make objects
appear closer.
Q: Where were the 1992 Olympics held?
A: The 1992 Olympics were held in Barcelona, Spain.''%}{{prompt_qa +
''
Q: '' + question + ''
A:''}}'
doc_to_target: 0
doc_to_choice: '{{mc2_targets.choices}}'
process_results: <function process_results_mc2 at 0x7f0fa7b1cca0>
description: ''
target_delimiter: ' '
fewshot_delimiter: '
'
num_fewshot: 0
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
output_type: multiple_choice
repeats: 1
should_decontaminate: true
doc_to_decontamination_query: question
metadata:
- version: 2.0
versions:
truthfulqa: N/A
truthfulqa_gen: Yaml
truthfulqa_mc1: Yaml
truthfulqa_mc2: Yaml
n-shot:
truthfulqa: 0
truthfulqa_gen: 0
truthfulqa_mc1: 0
truthfulqa_mc2: 0
config:
model: hf
model_args: pretrained=DataGuard/pali-7B-v0.1
batch_size: 1
batch_sizes: []
limit: 10.0
bootstrap_iters: 100000
gen_kwargs: {}
git_hash: eccb1dc
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained-tinyllama
This model is a fine-tuned version of [PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9312
## Model description
More information needed
## Intended uses & limitations
More information needed
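Intended usage is undocumented; a generic causal-LM generation sketch (the prompt is a placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Xiaowen-dg/trained-tinyllama")
model = AutoModelForCausalLM.from_pretrained("Xiaowen-dg/trained-tinyllama")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```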
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9528 | 1.92 | 50 | 0.9625 |
| 0.9252 | 3.85 | 100 | 0.9312 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
varun-v-rao/roberta-base-bn-adapter-895K-squad-model3 | varun-v-rao | 2024-02-09T13:36:48Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"dataset:varun-v-rao/squad",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-02-09T12:44:51Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
datasets:
- varun-v-rao/squad
model-index:
- name: roberta-base-bn-adapter-895K-squad-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bn-adapter-895K-squad-model3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
sam1120/safety-utcustom-train-SF30-RGBD-b0 | sam1120 | 2024-02-09T13:27:31Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-09T13:20:21Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: safety-utcustom-train-SF30-RGBD-b0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# safety-utcustom-train-SF30-RGBD-b0
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the sam1120/safety-utcustom-TRAIN-30 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3227
- Mean Iou: 0.5786
- Mean Accuracy: 0.6222
- Overall Accuracy: 0.9658
- Accuracy Unlabeled: nan
- Accuracy Safe: 0.2552
- Accuracy Unsafe: 0.9891
- Iou Unlabeled: nan
- Iou Safe: 0.1917
- Iou Unsafe: 0.9655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Safe | Accuracy Unsafe | Iou Unlabeled | Iou Safe | Iou Unsafe |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:-------------:|:---------------:|:-------------:|:--------:|:----------:|
| 0.9925 | 5.0 | 10 | 1.0612 | 0.3101 | 0.5355 | 0.8847 | nan | 0.1625 | 0.9085 | 0.0 | 0.0462 | 0.8841 |
| 0.8589 | 10.0 | 20 | 0.9441 | 0.3303 | 0.5181 | 0.9537 | nan | 0.0529 | 0.9833 | 0.0 | 0.0373 | 0.9537 |
| 0.7016 | 15.0 | 30 | 0.7764 | 0.3274 | 0.5069 | 0.9654 | nan | 0.0172 | 0.9965 | 0.0 | 0.0169 | 0.9654 |
| 0.6093 | 20.0 | 40 | 0.6213 | 0.3339 | 0.5219 | 0.9603 | nan | 0.0538 | 0.9901 | 0.0 | 0.0415 | 0.9603 |
| 0.5281 | 25.0 | 50 | 0.5431 | 0.3355 | 0.5213 | 0.9650 | nan | 0.0476 | 0.9951 | 0.0 | 0.0417 | 0.9649 |
| 0.5077 | 30.0 | 60 | 0.5043 | 0.3361 | 0.5231 | 0.9638 | nan | 0.0524 | 0.9938 | 0.0 | 0.0444 | 0.9638 |
| 0.5197 | 35.0 | 70 | 0.4579 | 0.3379 | 0.5249 | 0.9657 | nan | 0.0543 | 0.9956 | 0.0 | 0.0481 | 0.9656 |
| 0.4477 | 40.0 | 80 | 0.4340 | 0.3395 | 0.5271 | 0.9662 | nan | 0.0583 | 0.9960 | 0.0 | 0.0523 | 0.9661 |
| 0.4371 | 45.0 | 90 | 0.4033 | 0.3407 | 0.5287 | 0.9669 | nan | 0.0607 | 0.9967 | 0.0 | 0.0553 | 0.9669 |
| 0.3972 | 50.0 | 100 | 0.3975 | 0.3420 | 0.5292 | 0.9686 | nan | 0.0600 | 0.9985 | 0.0 | 0.0574 | 0.9686 |
| 0.4101 | 55.0 | 110 | 0.3777 | 0.5215 | 0.5381 | 0.9691 | nan | 0.0778 | 0.9983 | nan | 0.0740 | 0.9690 |
| 0.3528 | 60.0 | 120 | 0.3625 | 0.5360 | 0.5587 | 0.9668 | nan | 0.1229 | 0.9945 | nan | 0.1054 | 0.9667 |
| 0.3552 | 65.0 | 130 | 0.3733 | 0.5550 | 0.5829 | 0.9671 | nan | 0.1726 | 0.9932 | nan | 0.1430 | 0.9669 |
| 0.3798 | 70.0 | 140 | 0.3444 | 0.5598 | 0.5753 | 0.9722 | nan | 0.1515 | 0.9991 | nan | 0.1476 | 0.9720 |
| 0.3235 | 75.0 | 150 | 0.3461 | 0.5651 | 0.6041 | 0.9650 | nan | 0.2187 | 0.9895 | nan | 0.1656 | 0.9647 |
| 0.3457 | 80.0 | 160 | 0.3335 | 0.5638 | 0.5880 | 0.9695 | nan | 0.1806 | 0.9954 | nan | 0.1582 | 0.9693 |
| 0.318 | 85.0 | 170 | 0.3334 | 0.5739 | 0.6114 | 0.9667 | nan | 0.2321 | 0.9908 | nan | 0.1814 | 0.9665 |
| 0.32 | 90.0 | 180 | 0.3307 | 0.5779 | 0.6112 | 0.9684 | nan | 0.2299 | 0.9926 | nan | 0.1877 | 0.9681 |
| 0.3122 | 95.0 | 190 | 0.3263 | 0.5778 | 0.6175 | 0.9667 | nan | 0.2447 | 0.9904 | nan | 0.1891 | 0.9664 |
| 0.3554 | 100.0 | 200 | 0.3227 | 0.5786 | 0.6222 | 0.9658 | nan | 0.2552 | 0.9891 | nan | 0.1917 | 0.9655 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sravan-gorugantu/wav2vec2-base-finetuned-ks-ob | sravan-gorugantu | 2024-02-09T13:27:27Z | 150 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-02-07T07:45:13Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks-ob
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9999694563225412
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks-ob
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0015
- Accuracy: 1.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
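In lieu of documentation, keyword-spotting inference via the audio-classification pipeline (a sketch; the WAV path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="sravan-gorugantu/wav2vec2-base-finetuned-ks-ob")
print(classifier("sample.wav"))  # returns the top label/score pairs for the clip
```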
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: tpu
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0528 | 1.0 | 256 | 0.0275 | 0.9994 |
| 0.0122 | 2.0 | 512 | 0.0054 | 0.9998 |
| 0.0048 | 3.0 | 768 | 0.0041 | 0.9995 |
| 0.0029 | 4.0 | 1024 | 0.0020 | 0.9999 |
| 0.0019 | 5.0 | 1280 | 0.0015 | 1.0000 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.0+cu118
- Datasets 2.17.0
- Tokenizers 0.15.1
|
CamilleLPP/BRUNO | CamilleLPP | 2024-02-09T13:00:05Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-02-09T13:00:05Z | ---
license: other
license_name: bruno
license_link: LICENSE
---
|
Jack51003/Reinforce-cartpole | Jack51003 | 2024-02-09T12:57:23Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-09T12:57:13Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
varun-v-rao/roberta-base-bn-adapter-895K-squad-model2 | varun-v-rao | 2024-02-09T12:44:48Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"dataset:varun-v-rao/squad",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-02-09T11:53:03Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
datasets:
- varun-v-rao/squad
model-index:
- name: roberta-base-bn-adapter-895K-squad-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bn-adapter-895K-squad-model2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 49
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Mahdish720/Orca2_7b_Enlighten_V2 | Mahdish720 | 2024-02-09T12:37:20Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Orca-2-7b",
"base_model:adapter:microsoft/Orca-2-7b",
"region:us"
] | null | 2024-02-09T09:07:33Z | ---
library_name: peft
base_model: microsoft/Orca-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
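Since no author-provided code exists yet, a minimal PEFT loading sketch (an assumption based on the declared base model; untested):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the adapter to its Orca-2 base model.
base = AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-7b")
model = PeftModel.from_pretrained(base, "Mahdish720/Orca2_7b_Enlighten_V2")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Orca-2-7b")
```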
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
sam1120/safety-utcustom-train-SF30-RGB-b5 | sam1120 | 2024-02-09T12:30:33Z | 145 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-02-09T04:24:39Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: safety-utcustom-train-SF30-RGB-b5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# safety-utcustom-train-SF30-RGB-b5
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the sam1120/safety-utcustom-TRAIN-30 dataset.
It achieves the following results on the evaluation set:
- Accuracy Safe: 0.8299
- Accuracy Unlabeled: nan
- Accuracy Unsafe: 0.9036
- Iou Safe: 0.3480
- Iou Unlabeled: 0.0
- Iou Unsafe: 0.8996
- Loss: 0.5783
- Mean Accuracy: 0.8668
- Mean Iou: 0.4158
- Overall Accuracy: 0.9013
## Model description
More information needed
## Intended uses & limitations
More information needed
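Without author guidance, standard SegFormer inference would look roughly like this (a sketch; the image path is a placeholder and the checkpoint is assumed to follow the usual `transformers` layout):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("sam1120/safety-utcustom-train-SF30-RGB-b5")
model = SegformerForSemanticSegmentation.from_pretrained("sam1120/safety-utcustom-train-SF30-RGB-b5")

image = Image.open("frame.png").convert("RGB")  # placeholder input frame
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits             # shape (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)                     # per-pixel class ids (safe / unsafe)
```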
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Accuracy Safe | Accuracy Unlabeled | Accuracy Unsafe | Iou Safe | Iou Unlabeled | Iou Unsafe | Validation Loss | Mean Accuracy | Mean Iou | Overall Accuracy |
|:-------------:|:-----:|:----:|:-------------:|:------------------:|:---------------:|:--------:|:-------------:|:----------:|:---------------:|:-------------:|:--------:|:----------------:|
| 1.0614 | 5.0 | 10 | 0.1904 | nan | 0.5439 | 0.0682 | 0.0 | 0.5350 | 1.0385 | 0.3672 | 0.2011 | 0.5327 |
| 1.0269 | 10.0 | 20 | 0.4801 | nan | 0.5773 | 0.1795 | 0.0 | 0.5719 | 0.9975 | 0.5287 | 0.2505 | 0.5742 |
| 1.0005 | 15.0 | 30 | 0.6270 | nan | 0.6316 | 0.2261 | 0.0 | 0.6269 | 0.9428 | 0.6293 | 0.2843 | 0.6315 |
| 0.9716 | 20.0 | 40 | 0.6870 | nan | 0.6802 | 0.2529 | 0.0 | 0.6756 | 0.8918 | 0.6836 | 0.3095 | 0.6804 |
| 0.9255 | 25.0 | 50 | 0.7339 | nan | 0.7081 | 0.2805 | 0.0 | 0.7037 | 0.8542 | 0.7210 | 0.3281 | 0.7089 |
| 0.9256 | 30.0 | 60 | 0.7705 | nan | 0.7229 | 0.2781 | 0.0 | 0.7189 | 0.8330 | 0.7467 | 0.3324 | 0.7244 |
| 0.8167 | 35.0 | 70 | 0.7622 | nan | 0.7349 | 0.3004 | 0.0 | 0.7311 | 0.8114 | 0.7485 | 0.3438 | 0.7358 |
| 0.7927 | 40.0 | 80 | 0.7776 | nan | 0.7594 | 0.3154 | 0.0 | 0.7559 | 0.7793 | 0.7685 | 0.3571 | 0.7600 |
| 0.8227 | 45.0 | 90 | 0.8020 | nan | 0.7821 | 0.3152 | 0.0 | 0.7789 | 0.7574 | 0.7920 | 0.3647 | 0.7827 |
| 0.81 | 50.0 | 100 | 0.8114 | nan | 0.7983 | 0.3140 | 0.0 | 0.7955 | 0.7370 | 0.8049 | 0.3698 | 0.7987 |
| 0.7198 | 55.0 | 110 | 0.8002 | nan | 0.8194 | 0.3303 | 0.0 | 0.8162 | 0.7118 | 0.8098 | 0.3822 | 0.8188 |
| 0.7523 | 60.0 | 120 | 0.7877 | nan | 0.8482 | 0.3457 | 0.0 | 0.8443 | 0.6832 | 0.8179 | 0.3967 | 0.8462 |
| 0.7239 | 65.0 | 130 | 0.8112 | nan | 0.8485 | 0.3197 | 0.0 | 0.8453 | 0.6745 | 0.8298 | 0.3883 | 0.8473 |
| 0.6235 | 70.0 | 140 | 0.7906 | nan | 0.8686 | 0.3507 | 0.0 | 0.8649 | 0.6419 | 0.8296 | 0.4052 | 0.8662 |
| 0.6887 | 75.0 | 150 | 0.7951 | nan | 0.8758 | 0.3568 | 0.0 | 0.8720 | 0.6302 | 0.8354 | 0.4096 | 0.8732 |
| 0.6079 | 80.0 | 160 | 0.8069 | nan | 0.8879 | 0.3561 | 0.0 | 0.8841 | 0.6120 | 0.8474 | 0.4134 | 0.8853 |
| 0.6022 | 85.0 | 170 | 0.8126 | nan | 0.9062 | 0.3699 | 0.0 | 0.9020 | 0.5849 | 0.8594 | 0.4240 | 0.9032 |
| 0.5748 | 90.0 | 180 | 0.8053 | nan | 0.9047 | 0.3793 | 0.0 | 0.9005 | 0.5802 | 0.8550 | 0.4266 | 0.9016 |
| 0.6228 | 95.0 | 190 | 0.8164 | nan | 0.9050 | 0.3624 | 0.0 | 0.9007 | 0.5793 | 0.8607 | 0.4210 | 0.9022 |
| 0.5332 | 100.0 | 200 | 0.8214 | nan | 0.9134 | 0.3623 | 0.0 | 0.9091 | 0.5616 | 0.8674 | 0.4238 | 0.9105 |
| 0.6655 | 105.0 | 210 | 0.8262 | nan | 0.9072 | 0.3572 | 0.0 | 0.9031 | 0.5688 | 0.8667 | 0.4201 | 0.9046 |
| 0.5835 | 110.0 | 220 | 0.8233 | nan | 0.9092 | 0.3599 | 0.0 | 0.9050 | 0.5653 | 0.8662 | 0.4216 | 0.9064 |
| 0.5764 | 115.0 | 230 | 0.8099 | nan | 0.9165 | 0.3783 | 0.0 | 0.9120 | 0.5460 | 0.8632 | 0.4301 | 0.9131 |
| 0.5621 | 120.0 | 240 | 0.8299 | nan | 0.9036 | 0.3480 | 0.0 | 0.8996 | 0.5783 | 0.8668 | 0.4158 | 0.9013 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yeniceriSGK/mistral_7b-instruct-pi-brain | yeniceriSGK | 2024-02-09T12:21:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-09T12:21:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
camillebri/maps_bis | camillebri | 2024-02-09T12:17:00Z | 0 | 0 | clinicadl | [
"clinicadl",
"en",
"license:mit",
"region:us"
] | null | 2024-02-09T12:16:56Z |
---
language: en
library_name: clinicadl
tags:
- clinicadl
license: mit
---
# Model Card for maps_bis
This model was trained with ClinicaDL. You can find all the relevant information here.
## General information
This model was trained for **classification** and the architecture chosen is **Conv4_FC3**.
### Model
**architecture**: Conv4_FC3
**multi_network**: False
**ssda_network**: False
### Architecture
**dropout**: 0.0
**latent_space_size**: 2
**feature_size**: 1024
**n_conv**: 4
**io_layer_channels**: 8
**recons_weight**: 1
**kl_weight**: 1
**normalization**: batch
### Classification
**selection_metrics**: ['loss']
**label**: diagnosis
**label_code**: {'AD': 0, 'CN': 1}
**selection_threshold**: 0.0
**loss**: None
### Computational
**gpu**: True
**n_proc**: 32
**batch_size**: 32
**evaluation_steps**: 20
**fully_sharded_data_parallel**: False
**amp**: False
### Reproducibility
**seed**: 0
**deterministic**: False
**compensation**: memory
**track_exp**:
### Transfer_learning
**transfer_path**: ../../autoencoders/exp3/maps
**transfer_selection_metric**: loss
**nb_unfrozen_layer**: 0
### Mode
**use_extracted_features**: False
### Data
**multi_cohort**: False
**diagnoses**: ['AD', 'CN']
**baseline**: True
**normalize**: True
**data_augmentation**: False
**sampler**: random
**size_reduction**: False
**size_reduction_factor**: 2
**caps_target**:
**tsv_target_lab**:
**tsv_target_unlab**:
**preprocessing_dict_target**:
### Cross_validation
**n_splits**: 5
**split**: []
### Optimization
**optimizer**: Adam
**epochs**: 200
**learning_rate**: 1e-05
**adaptive_learning_rate**: False
**weight_decay**: 0.0001
**patience**: 10
**tolerance**: 0.0
**accumulation_steps**: 1
**profiler**: False
**save_all_models**: False
### Informations
**emissions_calculator**: False
### Other information
**latent_space_dimension**: 64
**preprocessing_dict**: {'preprocessing': 't1-linear', 'mode': 'roi', 'use_uncropped_image': False, 'roi_list': ['leftHippocampusBox', 'rightHippocampusBox'], 'uncropped_roi': False, 'prepare_dl': False, 'file_type': {'pattern': '*space-MNI152NLin2009cSym_desc-Crop_res-1x1x1_T1w.nii.gz', 'description': 'T1W Image registered using t1-linear and cropped (matrix size 169×208×179, 1 mm isotropic voxels)', 'needed_pipeline': 't1-linear'}}
**mode**: roi
**network_task**: classification
**caps_directory**: $WORK/../commun/datasets/adni/caps/caps_v2021
**tsv_path**: $WORK/Aramis_tools/ClinicaDL_tools/experiments_ADDL/data/ADNI/train
**validation**: KFoldSplit
**num_networks**: 2
**output_size**: 2
**input_size**: [1, 50, 50, 50]
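To reuse this MAPS folder locally, a minimal retrieval sketch follows (not part of the original card; it only assumes the `huggingface_hub` library, and the downloaded directory can then be handed to ClinicaDL tooling such as `clinicadl predict` as the MAPS path):
```python
# Minimal sketch: download the full MAPS folder from the Hub.
from huggingface_hub import snapshot_download

maps_path = snapshot_download(repo_id="camillebri/maps_bis")
print(maps_path)  # local directory containing the MAPS structure
```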
|
Prajvi/Llama2_7B_qlora_FT_bush_crisis_10 | Prajvi | 2024-02-09T12:00:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-09T12:00:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shruthi-S/shruthicapstone-bert-qa | Shruthi-S | 2024-02-09T11:53:10Z | 44 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"base_model:Shruthi-S/capstone-project-bert-ten",
"base_model:finetune:Shruthi-S/capstone-project-bert-ten",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-09T11:52:33Z | ---
base_model: Shruthi-S/capstone-project-bert-ten
tags:
- generated_from_keras_callback
model-index:
- name: shruthicapstone-bert-qa
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# shruthicapstone-bert-qa
This model is a fine-tuned version of [Shruthi-S/capstone-project-bert-ten](https://huggingface.co/Shruthi-S/capstone-project-bert-ten) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.9574
- Validation Loss: 5.9507
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: mixed_float16
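For illustration only (not from the original card), the serialized optimizer config and precision policy above correspond roughly to the following `tf.keras` setup; any divergence from the exact training script is possible:
```python
import tensorflow as tf

# Mixed-precision policy recorded as training_precision above.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Adam optimizer matching the serialized config.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    jit_compile=True,
)
```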
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.9574 | 5.9507 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
FlorianSteigleder/NLP4W_30 | FlorianSteigleder | 2024-02-09T11:44:55Z | 99 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-01T13:53:03Z | ---
datasets:
- squad_v2
license: cc-by-sa-4.0
language:
- en
metrics:
- f1
library_name: transformers
---
# Finetuned model for NLP4W Group 30
This QA model was fine-tuned on the SQuAD v2 dataset using our GPU.
Feel free to use the "Inference API" on the right to play around with the model.
## Model Details
- Trained using [transformers](https://huggingface.co/docs/transformers) for [python](https://www.python.org/)
- Used the [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) model as a base.
- Used the [squad_v2](https://huggingface.co/datasets/squad_v2) dataset for training
### Model Description
- **Funded by:** Florian Steigleder, Nils Waldraff
- **Developed by:** Florian Steigleder, Nils Waldraff
- **Shared by:** Florian Steigleder, Nils Waldraff
### Plug'n Play (Python):
```python
# prerequisites
!pip install transformers
from transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer # pipeline plus the model/tokenizer classes used below
# Loading model & setup
model_name_30 = 'FlorianSteigleder/NLP4W_30'
model_30 = AutoModelForQuestionAnswering.from_pretrained(model_name_30) # First we load the model from our repo
tokenizer_30 = AutoTokenizer.from_pretrained(model_name_30) # Then we load the tokenizer from our repo
pipeline_30 = pipeline('question-answering', model=model_30, tokenizer=tokenizer_30) # Then we create an inference pipeline with the model and the tokenizer
# Inference Example 1
print(pipeline_30(
    "Where do I live?", # Question
    "My name is Caimbeul and I live in Scotland" # Context
)) # Full inference result containing score, start, end and answer -> https://huggingface.co/docs/transformers/en/main_classes/pipelines
# {'score': 0.9879983067512512, 'start': 34, 'end': 42, 'answer': 'Scotland'}
# Inference Example 2
print(pipeline_30(
    "Which name is also used to describe the Amazon rainforest in English?", # Question
    """The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.""" # Context
)["answer"]) # Answer only output
# Amazonia or the Amazon Jungle
# Inference Example 3
print(pipeline_30(
    "Where is New York City?", # Question
    "New York City is in the United States." # Context
)["answer"])
# e.g. "the United States" (an extractive QA pipeline can only return a span from the given context)
``` |
mohres/finetuned-llama-2-7b-code | mohres | 2024-02-09T11:34:55Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-09T11:33:17Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
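Until the authors provide official startup code, a hedged sketch follows; it assumes the adapter in this repo targets the `meta-llama/Llama-2-7b-hf` base model listed in the metadata and that you have access to those gated weights:
```python
# Hedged sketch: load the base model, then attach this PEFT adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "mohres/finetuned-llama-2-7b-code")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")  # hypothetical prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```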
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
sergeipetrov/pix2pix-instruct-IE | sergeipetrov | 2024-02-09T11:34:14Z | 0 | 1 | generic | [
"generic",
"vision",
"image-to-image",
"endpoints-template",
"base_model:timbrooks/instruct-pix2pix",
"base_model:finetune:timbrooks/instruct-pix2pix",
"endpoints_compatible",
"region:us"
] | image-to-image | 2024-02-08T13:11:15Z | ---
tags:
- vision
- image-to-image
- endpoints-template
inference: false
pipeline_tag: image-to-image
base_model: timbrooks/instruct-pix2pix
library_name: generic
---
## timbrooks/instruct-pix2pix to deploy with Inference Endpoints
Expected payload:
```python
import base64
import requests as r

ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder: your Inference Endpoint URL


def predict(path_to_image, prompt):
    with open(path_to_image, "rb") as i:
        b64 = base64.b64encode(i.read()).decode()
    payload = {
        "inputs": b64,
        "parameters": {
            "prompt": prompt
        }
    }
    response = r.post(
        ENDPOINT_URL, json=payload, headers={"Content-Type": "application/json"}
    )
    return response.json()
```
Call it with:
```python
from io import BytesIO
from PIL import Image

resp = predict(
    path_to_image="car.png",
    prompt="make the car green"
)
img = Image.open(BytesIO(base64.b64decode(resp)))
```
|
Saahil1801/openhermes-mistral-dpo-gptq | Saahil1801 | 2024-02-09T11:33:04Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"base_model:finetune:TheBloke/OpenHermes-2-Mistral-7B-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-02-09T11:29:09Z | ---
license: apache-2.0
base_model: TheBloke/OpenHermes-2-Mistral-7B-GPTQ
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: openhermes-mistral-dpo-gptq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openhermes-mistral-dpo-gptq
This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6599
- Rewards/chosen: 0.0397
- Rewards/rejected: -0.0752
- Rewards/accuracies: 0.9375
- Rewards/margins: 0.1149
- Logps/rejected: -164.5962
- Logps/chosen: -292.6904
- Logits/rejected: -2.6901
- Logits/chosen: -2.3670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 50
- mixed_precision_training: Native AMP
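As a rough orientation (not the author's script), the hyperparameters above map onto a `trl` DPO run as sketched below; the dataset, its column mapping, the `beta` value, and the exact `DPOTrainer` signature (which has changed across `trl` versions) are all assumptions:
```python
# Hedged sketch of a TRL DPO run with the recorded hyperparameters (trl ~0.7-era API).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference data: DPO needs "prompt"/"chosen"/"rejected" columns;
# the dataset actually used for this model is not documented ("None dataset" above).
train_dataset = load_dataset("Intel/orca_dpo_pairs", split="train").rename_column("question", "prompt")

training_args = TrainingArguments(
    output_dir="openhermes-mistral-dpo-gptq",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    learning_rate=2e-4,
    warmup_steps=2,
    max_steps=50,
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # TRL builds a frozen reference copy when None
    args=training_args,
    beta=0.1,        # assumption: beta is not recorded in this card
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```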
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6819 | 0.01 | 10 | 0.6600 | 0.0491 | -0.0050 | 1.0 | 0.0540 | -163.8940 | -292.5971 | -2.6930 | -2.3675 |
| 0.7106 | 0.01 | 20 | 0.6787 | 0.0460 | 0.0162 | 0.5625 | 0.0298 | -163.6827 | -292.6277 | -2.6971 | -2.3713 |
| 0.6487 | 0.01 | 30 | 0.6889 | 0.0454 | -0.0002 | 0.8125 | 0.0456 | -163.8460 | -292.6334 | -2.6960 | -2.3700 |
| 0.5981 | 0.02 | 40 | 0.6718 | 0.0307 | -0.0583 | 0.9375 | 0.0890 | -164.4272 | -292.7806 | -2.6928 | -2.3685 |
| 0.6573 | 0.03 | 50 | 0.6599 | 0.0397 | -0.0752 | 0.9375 | 0.1149 | -164.5962 | -292.6904 | -2.6901 | -2.3670 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.15.1
|
Cithan/vit-emotions-fp16 | Cithan | 2024-02-09T11:13:10Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-09T09:53:54Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-emotions-fp16
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.92875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-emotions-fp16
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3051
- Accuracy: 0.9287
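A quick inference sketch (not from the original card; the image path is a placeholder and the label set depends on the unspecified `imagefolder` dataset):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Cithan/vit-emotions-fp16")
print(classifier("photo.jpg"))  # hypothetical local image
```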
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 50 | 1.7679 | 0.3862 |
| No log | 2.0 | 100 | 1.4584 | 0.5375 |
| No log | 3.0 | 150 | 1.3209 | 0.5162 |
| No log | 4.0 | 200 | 1.1580 | 0.62 |
| No log | 5.0 | 250 | 0.9946 | 0.7275 |
| No log | 6.0 | 300 | 0.8519 | 0.7887 |
| No log | 7.0 | 350 | 0.7374 | 0.8325 |
| No log | 8.0 | 400 | 0.7250 | 0.815 |
| No log | 9.0 | 450 | 0.5821 | 0.88 |
| 1.1152 | 10.0 | 500 | 0.5239 | 0.8838 |
| 1.1152 | 11.0 | 550 | 0.5121 | 0.8712 |
| 1.1152 | 12.0 | 600 | 0.4444 | 0.9038 |
| 1.1152 | 13.0 | 650 | 0.3894 | 0.9137 |
| 1.1152 | 14.0 | 700 | 0.3956 | 0.9137 |
| 1.1152 | 15.0 | 750 | 0.3806 | 0.91 |
| 1.1152 | 16.0 | 800 | 0.3328 | 0.9375 |
| 1.1152 | 17.0 | 850 | 0.3076 | 0.9287 |
| 1.1152 | 18.0 | 900 | 0.3026 | 0.9363 |
| 1.1152 | 19.0 | 950 | 0.2388 | 0.96 |
| 0.3752 | 20.0 | 1000 | 0.2892 | 0.935 |
| 0.3752 | 21.0 | 1050 | 0.2539 | 0.9413 |
| 0.3752 | 22.0 | 1100 | 0.2299 | 0.9525 |
| 0.3752 | 23.0 | 1150 | 0.2131 | 0.9575 |
| 0.3752 | 24.0 | 1200 | 0.2300 | 0.9525 |
| 0.3752 | 25.0 | 1250 | 0.2393 | 0.9537 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
osanseviero/keras-conv-mnist | osanseviero | 2024-02-09T11:11:17Z | 31 | 0 | keras | [
"keras",
"tf",
"image-classification",
"license:apache-2.0",
"region:us"
] | image-classification | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- image-classification
- keras
library_name: keras
---
Simple MNIST convnet based on the [official Keras documentation](https://keras.io/examples/vision/mnist_convnet/) |
slc48/Reinforce-CartPole-v1 | slc48 | 2024-02-09T11:06:35Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-09T11:06:18Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
dervishekim/Logodesign | dervishekim | 2024-02-09T10:58:21Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-09T10:58:21Z | ---
license: creativeml-openrail-m
---
|
ryusangwon/billsum_8991_t5-v1_1-large | ryusangwon | 2024-02-09T10:55:15Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/t5-v1_1-large",
"base_model:finetune:google/t5-v1_1-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-09T06:19:12Z | ---
license: apache-2.0
base_model: google/t5-v1_1-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: billsum_8991_t5-v1_1-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum_8991_t5-v1_1-large
This model is a fine-tuned version of [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6520
- Rouge1: 0.174
- Rouge2: 0.0822
- Rougel: 0.1434
- Rougelsum: 0.1433
- Gen Len: 18.9297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
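Note that the total train batch size is the per-device batch size times the accumulation steps (16 × 16 = 256). As an illustration (not the author's script), these values map onto `Seq2SeqTrainingArguments` as:
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="billsum_8991_t5-v1_1-large",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,  # effective batch size: 16 * 16 = 256
    warmup_steps=500,
    num_train_epochs=10,
    seed=42,
)
```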
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.8798 | 6.75 | 500 | 2.6520 | 0.174 | 0.0822 | 0.1434 | 0.1433 | 18.9297 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
RMWeerasinghe/t5-small-finetuned-BBCNews | RMWeerasinghe | 2024-02-09T10:51:51Z | 125 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"en",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2024-02-02T07:53:35Z | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-BBCNews
results: []
language:
- en
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-BBCNews
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the BBC News Articles dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7321
- Rouge1: 0.1672
- Rouge2: 0.1387
- Rougel: 0.1605
- Rougelsum: 0.1622
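A usage sketch (not from the original card; it assumes the model inherits T5's default summarization prompt handling through the `summarization` pipeline):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="RMWeerasinghe/t5-small-finetuned-BBCNews")
article = "..."  # placeholder: a BBC-style news article
print(summarizer(article, max_length=64)[0]["summary_text"])
```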
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.0538 | 1.0 | 344 | 0.7877 | 0.156 | 0.1219 | 0.1472 | 0.1492 |
| 0.7611 | 2.0 | 688 | 0.7479 | 0.1641 | 0.1333 | 0.1565 | 0.1577 |
| 0.7189 | 3.0 | 1032 | 0.7400 | 0.1659 | 0.1365 | 0.1589 | 0.1606 |
| 0.7021 | 4.0 | 1376 | 0.7370 | 0.1671 | 0.138 | 0.1603 | 0.1618 |
| 0.6976 | 5.0 | 1720 | 0.7321 | 0.1672 | 0.1387 | 0.1605 | 0.1622 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |
perceptron-743/shakespearean-lm | perceptron-743 | 2024-02-09T10:50:25Z | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-09T10:10:13Z | ---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: shakespearean-lm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shakespearean-lm
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4100
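A generation sketch (illustration only; the prompt is hypothetical):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="perceptron-743/shakespearean-lm")
print(generator("Shall I compare thee", max_new_tokens=40)[0]["generated_text"])
```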
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.9075 | 0.53 | 500 | 4.6423 |
| 4.6274 | 1.06 | 1000 | 4.5136 |
| 4.4057 | 1.59 | 1500 | 4.4582 |
| 4.3596 | 2.12 | 2000 | 4.4350 |
| 4.2589 | 2.65 | 2500 | 4.4163 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
johnBenson00/Test | johnBenson00 | 2024-02-09T10:46:58Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-09T08:26:00Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Minecraft bucket assets
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
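A hedged inference sketch follows; it assumes (as is typical for AutoTrain DreamBooth runs) that this repo holds LoRA weights to be loaded on top of the SDXL base model named in the metadata:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("johnBenson00/Test")  # assumption: LoRA-format DreamBooth weights
image = pipe(prompt="Minecraft bucket assets").images[0]  # instance prompt from the metadata
image.save("out.png")
```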
|
Pranavv/test_trainer | Pranavv | 2024-02-09T10:42:15Z | 175 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis",
"base_model:finetune:mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-09T10:41:49Z | ---
license: apache-2.0
base_model: mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis](https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6294
- Accuracy: 0.8577
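An inference sketch (not from the original card; the label set is assumed to follow the financial-sentiment base model, since the fine-tuning dataset is unknown):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Pranavv/test_trainer")
print(classifier("Quarterly revenue beat expectations."))  # hypothetical input
```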
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 338 | 0.4186 | 0.8517 |
| 0.5435 | 2.0 | 676 | 0.4806 | 0.8737 |
| 0.2387 | 3.0 | 1014 | 0.6294 | 0.8577 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
safecantonese/whisper-small-yue-full-1 | safecantonese | 2024-02-09T10:40:24Z | 63 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:safecantonese/whisper-small-yue-full",
"base_model:finetune:safecantonese/whisper-small-yue-full",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-09T10:38:47Z | ---
tags:
- generated_from_trainer
base_model: safecantonese/whisper-small-yue-full
model-index:
- name: whisper-small-yue-full-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-yue-full-1
This model is a fine-tuned version of [safecantonese/whisper-small-yue-full](https://huggingface.co/safecantonese/whisper-small-yue-full) on the None dataset.
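A transcription sketch (not from the original card; the audio file is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="safecantonese/whisper-small-yue-full-1")
print(asr("sample.wav")["text"])  # hypothetical local audio file
```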
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
bartowski/Pasta-Lake-7b-exl2 | bartowski | 2024-02-09T10:37:35Z | 7 | 1 | transformers | [
"transformers",
"mergekit",
"merge",
"text-generation",
"base_model:Nitral-Archive/Pasta-PrimaMaid-7b",
"base_model:merge:Nitral-Archive/Pasta-PrimaMaid-7b",
"base_model:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"base_model:merge:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-09T10:20:59Z | ---
base_model:
- Test157t/Pasta-PrimaMaid-7b
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
library_name: transformers
tags:
- mergekit
- merge
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Pasta-Lake-7b
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>
Each branch contains an individual bits-per-weight quantization, with the main one containing only the measurement.json for further conversions.
Original model: https://huggingface.co/Test157t/Pasta-Lake-7b
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/Pasta-Lake-7b-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/Pasta-Lake-7b-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/Pasta-Lake-7b-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/Pasta-Lake-7b-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/Pasta-Lake-7b-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Pasta-Lake-7b-exl2 Pasta-Lake-7b-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Pasta-Lake-7b-exl2`:
```shell
mkdir Pasta-Lake-7b-exl2
huggingface-cli download bartowski/Pasta-Lake-7b-exl2 --local-dir Pasta-Lake-7b-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir Pasta-Lake-7b-exl2-6_5
huggingface-cli download bartowski/Pasta-Lake-7b-exl2 --revision 6_5 --local-dir Pasta-Lake-7b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir Pasta-Lake-7b-exl2-6.5
huggingface-cli download bartowski/Pasta-Lake-7b-exl2 --revision 6_5 --local-dir Pasta-Lake-7b-exl2-6.5 --local-dir-use-symlinks False
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
ryusangwon/billsum_4500_t5-base | ryusangwon | 2024-02-09T10:34:16Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-09T07:57:51Z | ---
license: apache-2.0
base_model: google-t5/t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: billsum_4500_t5-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum_4500_t5-base
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1222
- Rouge1: 0.1555
- Rouge2: 0.0612
- Rougel: 0.1268
- Rougelsum: 0.1269
- Gen Len: 18.9943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.2176 | 6.75 | 500 | 2.1222 | 0.1555 | 0.0612 | 0.1268 | 0.1269 | 18.9943 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
nicola0008/my_awesome_opus_eng_it_model | nicola0008 | 2024-02-09T10:33:42Z | 3 | 0 | transformers | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-31T12:27:41Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: nicola0008/my_awesome_opus_eng_it_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nicola0008/my_awesome_opus_eng_it_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2368
- Validation Loss: 0.0648
- Train Bleu: 86.6656
- Epoch: 4
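A translation sketch, assuming the model was trained with T5's "translate English to Italian:" prefix convention (an assumption; the card does not state the prompt format):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="nicola0008/my_awesome_opus_eng_it_model")
print(translator("translate English to Italian: The weather is nice today.")[0]["generated_text"])
```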
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Epoch |
|:----------:|:---------------:|:----------:|:-----:|
| 3.7832 | 1.3300 | 0.4108 | 0 |
| 1.2407 | 0.4723 | 25.2114 | 1 |
| 0.6088 | 0.2055 | 61.5643 | 2 |
| 0.3563 | 0.1088 | 78.2218 | 3 |
| 0.2368 | 0.0648 | 86.6656 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.17.0
- Tokenizers 0.15.1
|
kornia/vit_s16_augreg_i21k_r224 | kornia | 2024-02-09T10:28:16Z | 0 | 0 | null | [
"image-classification",
"arxiv:2106.10270",
"license:apache-2.0",
"region:us"
] | image-classification | 2024-02-09T01:32:46Z | ---
license: apache-2.0
pipeline_tag: image-classification
---
PyTorch weights for the Kornia ViT, converted from the original Google JAX vision_transformer repo.
```python
from kornia.contrib import VisionTransformer
vit_model = VisionTransformer.from_config('vit_s/16', pretrained=True)
...
```
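A quick forward-pass check (a sketch; Kornia's ViT returns the encoded token sequence, so the shape below is what that implies for `vit_s/16` at 224×224):
```python
import torch
from kornia.contrib import VisionTransformer

model = VisionTransformer.from_config("vit_s/16", pretrained=True).eval()
with torch.no_grad():
    feats = model(torch.rand(1, 3, 224, 224))
print(feats.shape)  # expected: (1, 197, 384) -> 196 patch tokens + 1 class token
```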
Original weights come from [AugReg](https://arxiv.org/abs/2106.10270), as recommended by the [google research vision transformer repo](https://github.com/google-research/vision_transformer). These weights are based on the [AugReg ViT_S/16 pretrained on imagenet21k](https://storage.googleapis.com/vit_models/augreg/S_16-i21k-300ep-lr_0.001-aug_light1-wd_0.03-do_0.0-sd_0.0.npz).
Weights converted to PyTorch for Kornia ViT implementation (by [@gau-nernst](https://github.com/gau-nernst) in [kornia/kornia#2786](https://github.com/kornia/kornia/pull/2786#discussion_r1482339811))
<details>
<summary>Convert jax checkpoint function</summary>
```python
import numpy as np
import torch


def convert_jax_checkpoint(np_state_dict: dict[str, np.ndarray]):
    def get_weight(key: str) -> torch.Tensor:
        return torch.from_numpy(np_state_dict[key])

    state_dict = dict()
    state_dict["patch_embedding.cls_token"] = get_weight("cls")
    # conv kernel is stored as (H, W, C_in, C_out) in JAX; PyTorch expects (C_out, C_in, H, W)
    state_dict["patch_embedding.backbone.weight"] = get_weight("embedding/kernel").permute(3, 2, 0, 1)
    state_dict["patch_embedding.backbone.bias"] = get_weight("embedding/bias")
    state_dict["patch_embedding.positions"] = get_weight("Transformer/posembed_input/pos_embedding").squeeze(0)

    # for i, block in enumerate(self.encoder.blocks):
    for i in range(100):
        prefix1 = f"encoder.blocks.{i}"
        prefix2 = f"Transformer/encoderblock_{i}"
        if f"{prefix2}/LayerNorm_0/scale" not in np_state_dict:
            break
        state_dict[f"{prefix1}.0.fn.0.weight"] = get_weight(f"{prefix2}/LayerNorm_0/scale")
        state_dict[f"{prefix1}.0.fn.0.bias"] = get_weight(f"{prefix2}/LayerNorm_0/bias")

        mha_prefix = f"{prefix2}/MultiHeadDotProductAttention_1"
        qkv_weight = [get_weight(f"{mha_prefix}/{x}/kernel") for x in ["query", "key", "value"]]
        qkv_bias = [get_weight(f"{mha_prefix}/{x}/bias") for x in ["query", "key", "value"]]
        state_dict[f"{prefix1}.0.fn.1.qkv.weight"] = torch.cat(qkv_weight, 1).flatten(1).T
        state_dict[f"{prefix1}.0.fn.1.qkv.bias"] = torch.cat(qkv_bias, 0).flatten()
        state_dict[f"{prefix1}.0.fn.1.projection.weight"] = get_weight(f"{mha_prefix}/out/kernel").flatten(0, 1).T
        state_dict[f"{prefix1}.0.fn.1.projection.bias"] = get_weight(f"{mha_prefix}/out/bias")

        state_dict[f"{prefix1}.1.fn.0.weight"] = get_weight(f"{prefix2}/LayerNorm_2/scale")
        state_dict[f"{prefix1}.1.fn.0.bias"] = get_weight(f"{prefix2}/LayerNorm_2/bias")
        state_dict[f"{prefix1}.1.fn.1.0.weight"] = get_weight(f"{prefix2}/MlpBlock_3/Dense_0/kernel").T
        state_dict[f"{prefix1}.1.fn.1.0.bias"] = get_weight(f"{prefix2}/MlpBlock_3/Dense_0/bias")
        state_dict[f"{prefix1}.1.fn.1.3.weight"] = get_weight(f"{prefix2}/MlpBlock_3/Dense_1/kernel").T
        state_dict[f"{prefix1}.1.fn.1.3.bias"] = get_weight(f"{prefix2}/MlpBlock_3/Dense_1/bias")

    state_dict["norm.weight"] = get_weight("Transformer/encoder_norm/scale")
    state_dict["norm.bias"] = get_weight("Transformer/encoder_norm/bias")
    return state_dict
```
</details> |
sohug/opt-6.7b-lora | sohug | 2024-02-09T10:24:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-09T10:23:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kornia/vit_b32_augreg_i21k_r224 | kornia | 2024-02-09T10:23:21Z | 0 | 0 | null | [
"image-classification",
"arxiv:2106.10270",
"license:apache-2.0",
"region:us"
] | image-classification | 2024-02-09T01:38:27Z | ---
license: apache-2.0
pipeline_tag: image-classification
---
PyTorch weights for the Kornia ViT, converted from the original Google JAX vision_transformer repo.
```python
from kornia.contrib import VisionTransformer
vit_model = VisionTransformer.from_config('vit_b/32', pretrained=True)
...
```
Original weights come from [AugReg](https://arxiv.org/abs/2106.10270), as recommended by the [google research vision transformer repo](https://github.com/google-research/vision_transformer). These weights are based on the [AugReg ViT_B/32 pretrained on imagenet21k](https://storage.googleapis.com/vit_models/augreg/B_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0.npz).
Weights converted to PyTorch for Kornia ViT implementation (by [@gau-nernst](https://github.com/gau-nernst) in [kornia/kornia#2786](https://github.com/kornia/kornia/pull/2786#discussion_r1482339811))
<details>
<summary>Convert jax checkpoint function</summary>
```python
import numpy as np
import torch


def convert_jax_checkpoint(np_state_dict: dict[str, np.ndarray]):
    def get_weight(key: str) -> torch.Tensor:
        return torch.from_numpy(np_state_dict[key])

    state_dict = dict()
    state_dict["patch_embedding.cls_token"] = get_weight("cls")
    # conv kernel is stored as (H, W, C_in, C_out) in JAX; PyTorch expects (C_out, C_in, H, W)
    state_dict["patch_embedding.backbone.weight"] = get_weight("embedding/kernel").permute(3, 2, 0, 1)
    state_dict["patch_embedding.backbone.bias"] = get_weight("embedding/bias")
    state_dict["patch_embedding.positions"] = get_weight("Transformer/posembed_input/pos_embedding").squeeze(0)

    # for i, block in enumerate(self.encoder.blocks):
    for i in range(100):
        prefix1 = f"encoder.blocks.{i}"
        prefix2 = f"Transformer/encoderblock_{i}"
        if f"{prefix2}/LayerNorm_0/scale" not in np_state_dict:
            break
        state_dict[f"{prefix1}.0.fn.0.weight"] = get_weight(f"{prefix2}/LayerNorm_0/scale")
        state_dict[f"{prefix1}.0.fn.0.bias"] = get_weight(f"{prefix2}/LayerNorm_0/bias")

        mha_prefix = f"{prefix2}/MultiHeadDotProductAttention_1"
        qkv_weight = [get_weight(f"{mha_prefix}/{x}/kernel") for x in ["query", "key", "value"]]
        qkv_bias = [get_weight(f"{mha_prefix}/{x}/bias") for x in ["query", "key", "value"]]
        state_dict[f"{prefix1}.0.fn.1.qkv.weight"] = torch.cat(qkv_weight, 1).flatten(1).T
        state_dict[f"{prefix1}.0.fn.1.qkv.bias"] = torch.cat(qkv_bias, 0).flatten()
        state_dict[f"{prefix1}.0.fn.1.projection.weight"] = get_weight(f"{mha_prefix}/out/kernel").flatten(0, 1).T
        state_dict[f"{prefix1}.0.fn.1.projection.bias"] = get_weight(f"{mha_prefix}/out/bias")

        state_dict[f"{prefix1}.1.fn.0.weight"] = get_weight(f"{prefix2}/LayerNorm_2/scale")
        state_dict[f"{prefix1}.1.fn.0.bias"] = get_weight(f"{prefix2}/LayerNorm_2/bias")
        state_dict[f"{prefix1}.1.fn.1.0.weight"] = get_weight(f"{prefix2}/MlpBlock_3/Dense_0/kernel").T
        state_dict[f"{prefix1}.1.fn.1.0.bias"] = get_weight(f"{prefix2}/MlpBlock_3/Dense_0/bias")
        state_dict[f"{prefix1}.1.fn.1.3.weight"] = get_weight(f"{prefix2}/MlpBlock_3/Dense_1/kernel").T
        state_dict[f"{prefix1}.1.fn.1.3.bias"] = get_weight(f"{prefix2}/MlpBlock_3/Dense_1/bias")

    state_dict["norm.weight"] = get_weight("Transformer/encoder_norm/scale")
    state_dict["norm.bias"] = get_weight("Transformer/encoder_norm/bias")
    return state_dict
```
</details> |
kornia/vit_l16_augreg_i21k_r224 | kornia | 2024-02-09T10:21:22Z | 0 | 0 | null | [
"image-classification",
"arxiv:2106.10270",
"license:apache-2.0",
"region:us"
] | image-classification | 2024-02-09T01:07:40Z | ---
license: apache-2.0
pipeline_tag: image-classification
---
PyTorch weights for the Kornia ViT, converted from the original Google JAX vision_transformer repo.
Using it with kornia:
```python
from kornia.contrib import VisionTransformer
vit_model = VisionTransformer.from_config('vit_l/16', pretrained=True)
...
```
Original weights come from [AugReg](https://arxiv.org/abs/2106.10270), as recommended by the [google research vision transformer repo](https://github.com/google-research/vision_transformer). These weights are based on the [AugReg ViT_L/16 pretrained on imagenet21k](https://storage.googleapis.com/vit_models/augreg/L_16-i21k-300ep-lr_0.001-aug_strong1-wd_0.1-do_0.0-sd_0.0.npz).
Weights converted to PyTorch for Kornia ViT implementation (by [@gau-nernst](https://github.com/gau-nernst) in [kornia/kornia#2786](https://github.com/kornia/kornia/pull/2786#discussion_r1482339811))
<details>
<summary>Convert jax checkpoint function</summary>
```python
import numpy as np
import torch


def convert_jax_checkpoint(np_state_dict: dict[str, np.ndarray]):
    def get_weight(key: str) -> torch.Tensor:
        return torch.from_numpy(np_state_dict[key])

    state_dict = dict()
    state_dict["patch_embedding.cls_token"] = get_weight("cls")
    # conv kernel is stored as (H, W, C_in, C_out) in JAX; PyTorch expects (C_out, C_in, H, W)
    state_dict["patch_embedding.backbone.weight"] = get_weight("embedding/kernel").permute(3, 2, 0, 1)
    state_dict["patch_embedding.backbone.bias"] = get_weight("embedding/bias")
    state_dict["patch_embedding.positions"] = get_weight("Transformer/posembed_input/pos_embedding").squeeze(0)

    # for i, block in enumerate(self.encoder.blocks):
    for i in range(100):
        prefix1 = f"encoder.blocks.{i}"
        prefix2 = f"Transformer/encoderblock_{i}"
        if f"{prefix2}/LayerNorm_0/scale" not in np_state_dict:
            break
        state_dict[f"{prefix1}.0.fn.0.weight"] = get_weight(f"{prefix2}/LayerNorm_0/scale")
        state_dict[f"{prefix1}.0.fn.0.bias"] = get_weight(f"{prefix2}/LayerNorm_0/bias")

        mha_prefix = f"{prefix2}/MultiHeadDotProductAttention_1"
        qkv_weight = [get_weight(f"{mha_prefix}/{x}/kernel") for x in ["query", "key", "value"]]
        qkv_bias = [get_weight(f"{mha_prefix}/{x}/bias") for x in ["query", "key", "value"]]
        state_dict[f"{prefix1}.0.fn.1.qkv.weight"] = torch.cat(qkv_weight, 1).flatten(1).T
        state_dict[f"{prefix1}.0.fn.1.qkv.bias"] = torch.cat(qkv_bias, 0).flatten()
        state_dict[f"{prefix1}.0.fn.1.projection.weight"] = get_weight(f"{mha_prefix}/out/kernel").flatten(0, 1).T
        state_dict[f"{prefix1}.0.fn.1.projection.bias"] = get_weight(f"{mha_prefix}/out/bias")

        state_dict[f"{prefix1}.1.fn.0.weight"] = get_weight(f"{prefix2}/LayerNorm_2/scale")
        state_dict[f"{prefix1}.1.fn.0.bias"] = get_weight(f"{prefix2}/LayerNorm_2/bias")
        state_dict[f"{prefix1}.1.fn.1.0.weight"] = get_weight(f"{prefix2}/MlpBlock_3/Dense_0/kernel").T
        state_dict[f"{prefix1}.1.fn.1.0.bias"] = get_weight(f"{prefix2}/MlpBlock_3/Dense_0/bias")
        state_dict[f"{prefix1}.1.fn.1.3.weight"] = get_weight(f"{prefix2}/MlpBlock_3/Dense_1/kernel").T
        state_dict[f"{prefix1}.1.fn.1.3.bias"] = get_weight(f"{prefix2}/MlpBlock_3/Dense_1/bias")

    state_dict["norm.weight"] = get_weight("Transformer/encoder_norm/scale")
    state_dict["norm.bias"] = get_weight("Transformer/encoder_norm/bias")
    return state_dict
```
</details> |