modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
blanchon/sd-geolora3 | blanchon | 2023-12-19T14:12:01Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-12-19T13:32:56Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - blanchon/sd-geolora3
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5, fine-tuned on the blanchon/merged_dataset dataset. You can find example images in the model repository.
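A minimal inference sketch (the prompt, fp16 dtype, and CUDA device are illustrative assumptions, not from the original card):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and attach these LoRA weights
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("blanchon/sd-geolora3")
image = pipe("an aerial photograph of a coastline", num_inference_steps=30).images[0]  # hypothetical prompt
image.save("example.png")
```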
|
sourabhdattawad/ppo-LunarLander-v2 | sourabhdattawad | 2023-12-19T14:11:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T14:11:18Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 228.99 +/- 82.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the .zip filename is assumed)
checkpoint = load_from_hub(repo_id="sourabhdattawad/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Zibing/llama2-qlora-finetunined-french | Zibing | 2023-12-19T14:09:03Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-12-19T14:08:54Z | ---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
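A minimal sketch (dtype, device placement, and the prompt are assumptions) of loading this adapter on top of its base model with PEFT:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this adapter
base = AutoModelForCausalLM.from_pretrained("TinyPixel/Llama-2-7B-bf16-sharded", torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "Zibing/llama2-qlora-finetunined-french")
tokenizer = AutoTokenizer.from_pretrained("TinyPixel/Llama-2-7B-bf16-sharded")

inputs = tokenizer("Bonjour, pouvez-vous m'aider ?", return_tensors="pt").to(base.device)  # hypothetical prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```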
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
kaRThik757/krt_sample_llm_7b | kaRThik757 | 2023-12-19T13:59:41Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openlm-research/open_llama_7b",
"base_model:adapter:openlm-research/open_llama_7b",
"region:us"
] | null | 2023-12-14T09:53:59Z | ---
library_name: peft
base_model: openlm-research/open_llama_7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
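A minimal sketch (dtype, device placement, tokenizer settings, and the prompt are assumptions) of loading this adapter on top of its base model with PEFT:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this adapter
base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_7b", torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "kaRThik757/krt_sample_llm_7b")
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", use_fast=False)  # slow tokenizer recommended for OpenLLaMA

inputs = tokenizer("Q: What is the capital of France?\nA:", return_tensors="pt").to(base.device)  # hypothetical prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
```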
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
EliottD/ppo-LunarLander-v21000 | EliottD | 2023-12-19T13:59:10Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T13:58:50Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -146.54 +/- 35.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the .zip filename is assumed)
checkpoint = load_from_hub(repo_id="EliottD/ppo-LunarLander-v21000", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
EliottD/ppo-LunarLander-v210 | EliottD | 2023-12-19T13:58:05Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T13:54:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -131.71 +/- 88.57
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the .zip filename is assumed)
checkpoint = load_from_hub(repo_id="EliottD/ppo-LunarLander-v210", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
EliottD/ppo-LunarLander-v21 | EliottD | 2023-12-19T13:57:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T13:57:13Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -128.95 +/- 97.37
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the .zip filename is assumed)
checkpoint = load_from_hub(repo_id="EliottD/ppo-LunarLander-v21", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
toyxyz/Concept_Slider_test | toyxyz | 2023-12-19T13:54:21Z | 0 | 14 | null | [
"region:us"
] | null | 2023-12-18T17:05:38Z | Test Concept Sliders! Use them the same way as a regular LoRA.
Some sliders (Eye, breast size) use weights from -100 to 100.
https://github.com/rohitgandikota/sliders
ComfyUI workflow
https://github.com/comfyanonymous/ComfyUI/issues/2028#issuecomment-1824812919
Webui extension
https://github.com/cheald/sd-webui-loractl |
lawinsider/uk_ner_spacy | lawinsider | 2023-12-19T13:52:27Z | 3 | 1 | spacy | [
"spacy",
"token-classification",
"uk",
"dataset:lawinsider/uk_ner_contracts_spacy",
"model-index",
"region:us"
] | token-classification | 2023-11-13T15:48:31Z | ---
tags:
- spacy
- token-classification
language:
- uk
model-index:
- name: uk_ner_spacy
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9543899658
- name: NER Recall
type: recall
value: 0.9399213925
- name: NER F Score
type: f_score
value: 0.9471004243
datasets:
- lawinsider/uk_ner_contracts_spacy
---
| Feature | Description |
| --- | --- |
| **Name** | `uk_ner_spacy` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.6.1,<3.7.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (5 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `CLAUSE_NUMBER`, `CLAUSE_TITLE`, `CONTRACT_TYPE`, `DEFINITION_TITLE`, `MARGINAL` |
</details>
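A minimal usage sketch, assuming the packaged pipeline from this repository has already been installed locally (e.g. from the wheel in the Files & versions tab):
```python
import spacy

# Assumes the packaged pipeline from this repo is installed as `uk_ner_spacy`
nlp = spacy.load("uk_ner_spacy")
doc = nlp("1.1. Цей Договір набирає чинності з дати його підписання Сторонами.")  # hypothetical contract clause
print([(ent.text, ent.label_) for ent in doc.ents])
```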
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 94.71 |
| `ENTS_P` | 95.44 |
| `ENTS_R` | 93.99 |
| `TOK2VEC_LOSS` | 18944.45 |
| `NER_LOSS` | 38361.74 | |
kajol/model_01 | kajol | 2023-12-19T13:47:58Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2023-12-18T23:22:15Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
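A minimal sketch (dtype, device placement, and the prompt are assumptions) of loading this adapter on top of its Mistral base model with PEFT:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this adapter
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "kajol/model_01")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

inputs = tokenizer("Explain what a model card is in one sentence.", return_tensors="pt").to(base.device)  # hypothetical prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0], skip_special_tokens=True))
```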
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
Zynx3/jin-a-white-wolf | Zynx3 | 2023-12-19T13:47:24Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-19T13:43:24Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Jin-a-white-wolf Dreambooth model trained by Zynx3 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 2023UGME083
Sample pictures of this concept:

|
showrounak/bloom-song-lyrics | showrounak | 2023-12-19T13:46:33Z | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloom-7b1",
"base_model:adapter:bigscience/bloom-7b1",
"region:us"
] | null | 2023-12-19T07:43:31Z | ---
library_name: peft
base_model: bigscience/bloom-7b1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
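A minimal sketch (dtype, device placement, and the prompt are assumptions) of loading this adapter on top of its BLOOM base model with PEFT:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this adapter
base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-7b1", torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, "showrounak/bloom-song-lyrics")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")

inputs = tokenizer("Write the opening line of a song about the sea:", return_tensors="pt").to(base.device)  # hypothetical prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```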
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
ntc-ai/SDXL-LoRA-slider.maniacal-laughter | ntc-ai | 2023-12-19T13:36:05Z | 71 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2023-12-19T13:36:02Z |
---
language:
- en
thumbnail: "images/evaluate/maniacal laughter.../maniacal laughter_17_3.0.png"
widget:
- text: maniacal laughter
output:
url: images/maniacal laughter_17_3.0.png
- text: maniacal laughter
output:
url: images/maniacal laughter_19_3.0.png
- text: maniacal laughter
output:
url: images/maniacal laughter_20_3.0.png
- text: maniacal laughter
output:
url: images/maniacal laughter_21_3.0.png
- text: maniacal laughter
output:
url: images/maniacal laughter_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "maniacal laughter"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - maniacal laughter (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/maniacal laughter_17_-3.0.png" width=256 height=256 /> | <img src="images/maniacal laughter_17_0.0.png" width=256 height=256 /> | <img src="images/maniacal laughter_17_3.0.png" width=256 height=256 /> |
| <img src="images/maniacal laughter_19_-3.0.png" width=256 height=256 /> | <img src="images/maniacal laughter_19_0.0.png" width=256 height=256 /> | <img src="images/maniacal laughter_19_3.0.png" width=256 height=256 /> |
| <img src="images/maniacal laughter_20_-3.0.png" width=256 height=256 /> | <img src="images/maniacal laughter_20_0.0.png" width=256 height=256 /> | <img src="images/maniacal laughter_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
maniacal laughter
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.maniacal-laughter', weight_name='maniacal laughter.safetensors', adapter_name="maniacal laughter")
# Activate the LoRA
pipe.set_adapters(["maniacal laughter"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, maniacal laughter"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 480+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
phatjk/vinallama-7b-chat-AWQ | phatjk | 2023-12-19T13:32:14Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2023-12-19T13:04:21Z | `quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }` |
nesuri/sorsolingo-asr-bsl | nesuri | 2023-12-19T13:29:40Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"bsl",
"dataset:nesuri/sorsolingo-tts-bsl",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-12-16T18:27:06Z | ---
language:
- bsl
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- nesuri/sorsolingo-tts-bsl
model-index:
- name: Sorsolingo-asr-bsl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sorsolingo-asr-bsl
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the sorsolingo-asr-bsl dataset.
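A minimal inference sketch (the audio file path is a placeholder):
```python
from transformers import pipeline

# Transcribe an audio file with this fine-tuned Whisper checkpoint
asr = pipeline("automatic-speech-recognition", model="nesuri/sorsolingo-asr-bsl")
print(asr("sample.wav")["text"])  # "sample.wav" is a hypothetical input file
```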
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 450
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
N7D7/lucia_LoRA | N7D7 | 2023-12-19T13:27:06Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stablediffusionapi/juggernaut-xl-v7",
"base_model:adapter:stablediffusionapi/juggernaut-xl-v7",
"license:openrail++",
"region:us"
] | text-to-image | 2023-12-19T13:26:59Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stablediffusionapi/juggernaut-xl-v7
instance_prompt: a photo of TOK luciavarelaarroyo
license: openrail++
---
# SDXL LoRA DreamBooth - N7D7/lucia_LoRA
<Gallery />
## Model description
These are N7D7/lucia_LoRA LoRA adaptation weights for stablediffusionapi/juggernaut-xl-v7.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK luciavarelaarroyo to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/N7D7/lucia_LoRA/tree/main) them in the Files & versions tab.
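A minimal inference sketch (fp16, CUDA, and the default LoRA weight filename are assumptions):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the base pipeline and attach these DreamBooth LoRA weights
pipe = AutoPipelineForText2Image.from_pretrained(
    "stablediffusionapi/juggernaut-xl-v7", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("N7D7/lucia_LoRA")
image = pipe("a photo of TOK luciavarelaarroyo").images[0]
image.save("lucia.png")
```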
|
Dhanang/sent_model | Dhanang | 2023-12-19T13:24:05Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-19T13:01:14Z | ---
license: mit
base_model: indobenchmark/indobert-base-p2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sent_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sent_model
This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4024
- Accuracy: 0.9512
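A minimal inference sketch (the example sentence is a placeholder; the label names depend on the unknown fine-tuning dataset):
```python
from transformers import pipeline

# Classify a sentence with this fine-tuned IndoBERT checkpoint
classifier = pipeline("text-classification", model="Dhanang/sent_model")
print(classifier("Pelayanan di toko ini sangat memuaskan."))  # hypothetical Indonesian example
```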
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 72 | 0.2019 | 0.9617 |
| No log | 2.0 | 144 | 0.2298 | 0.9582 |
| No log | 3.0 | 216 | 0.3607 | 0.9408 |
| No log | 4.0 | 288 | 0.4106 | 0.9338 |
| No log | 5.0 | 360 | 0.3390 | 0.9547 |
| No log | 6.0 | 432 | 0.3567 | 0.9547 |
| 0.0226 | 7.0 | 504 | 0.3608 | 0.9582 |
| 0.0226 | 8.0 | 576 | 0.3653 | 0.9547 |
| 0.0226 | 9.0 | 648 | 0.4015 | 0.9512 |
| 0.0226 | 10.0 | 720 | 0.4024 | 0.9512 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
akash2212/text-summarization-evaluation-model | akash2212 | 2023-12-19T13:21:00Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-19T13:09:10Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: text-summarization-evaluation-model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1909
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-summarization-evaluation-model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4100
- Rouge1: 0.1909
- Rouge2: 0.0934
- Rougel: 0.1617
- Rougelsum: 0.1619
- Gen Len: 19.0
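A minimal inference sketch (the input passage and length limits are placeholders):
```python
from transformers import pipeline

# Summarize a bill-style passage with this fine-tuned T5 checkpoint
summarizer = pipeline("summarization", model="akash2212/text-summarization-evaluation-model")
text = "The people of the State of California do enact as follows: Section 1 establishes a grant program for county health departments."  # hypothetical input
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```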
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.4775 | 0.1556 | 0.0622 | 0.1297 | 0.1301 | 19.0 |
| No log | 2.0 | 124 | 2.4374 | 0.1822 | 0.0868 | 0.1534 | 0.1537 | 19.0 |
| No log | 3.0 | 186 | 2.4164 | 0.1888 | 0.0922 | 0.16 | 0.1602 | 19.0 |
| No log | 4.0 | 248 | 2.4100 | 0.1909 | 0.0934 | 0.1617 | 0.1619 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
priteshkj/donut-base-balancesheet | priteshkj | 2023-12-19T13:20:42Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-12-08T04:35:54Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-balancesheet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-balancesheet
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
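A minimal inference sketch (the image path and the task-start token are assumptions for this fine-tune):
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Parse a balance-sheet image with this fine-tuned Donut checkpoint
processor = DonutProcessor.from_pretrained("priteshkj/donut-base-balancesheet")
model = VisionEncoderDecoderModel.from_pretrained("priteshkj/donut-base-balancesheet")

image = Image.open("balance_sheet.png").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids  # assumed start token
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```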
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Par1234/my-pet-dog | Par1234 | 2023-12-19T13:14:29Z | 8 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-19T13:10:04Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by Par1234 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
|
akash2212/output | akash2212 | 2023-12-19T13:07:02Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-19T12:56:56Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: output
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1372
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5639
- Rouge1: 0.1372
- Rouge2: 0.0474
- Rougel: 0.1123
- Rougelsum: 0.1125
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8673 | 0.1296 | 0.0367 | 0.1074 | 0.1074 | 19.0 |
| No log | 2.0 | 124 | 2.6480 | 0.1377 | 0.0469 | 0.1135 | 0.1137 | 19.0 |
| No log | 3.0 | 186 | 2.5819 | 0.1368 | 0.0477 | 0.1121 | 0.1123 | 19.0 |
| No log | 4.0 | 248 | 2.5639 | 0.1372 | 0.0474 | 0.1123 | 0.1125 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
geekradius/bart-large-cnn-fintetuned-samsum-repo | geekradius | 2023-12-19T13:05:12Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"summary",
"summerizer",
"summarization",
"en",
"dataset:gopalkalpande/bbc-news-summary",
"license:bigscience-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-12-19T03:02:07Z | ---
license: bigscience-openrail-m
datasets:
- gopalkalpande/bbc-news-summary
language:
- en
metrics:
- rouge
library_name: transformers
pipeline_tag: summarization
tags:
- summary
- summerizer
--- |
alitolga/deberta-v3-base-large-peft | alitolga | 2023-12-19T13:04:42Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"region:us"
] | null | 2023-12-19T00:19:03Z | ---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base-large-peft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-large-peft
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8583 | 1.0 | 565 | 3.5436 |
| 3.7099 | 2.0 | 1130 | 3.4740 |
| 3.6845 | 3.0 | 1695 | 3.4610 |
| 3.6633 | 4.0 | 2260 | 3.4479 |
| 3.6405 | 5.0 | 2825 | 3.4307 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Ramyashree/gte-large-with500records-test | Ramyashree | 2023-12-19T12:57:12Z | 7 | 0 | setfit | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:Ramyashree/Dataset-setfit-Trainer",
"arxiv:2209.11055",
"base_model:thenlper/gte-large",
"base_model:finetune:thenlper/gte-large",
"region:us"
] | text-classification | 2023-12-19T12:56:23Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- Ramyashree/Dataset-setfit-Trainer
metrics:
- accuracy
widget:
- text: I wanna obtain some invoices, can you tell me how to do it?
- text: where to close my user account
- text: I have a problem when trying to pay, help me report it
- text: the concert was cancelled and I want to obtain a reimbursement
- text: I got an error message when I tried to make a payment, but I was charged anyway,
can you help me?
pipeline_tag: text-classification
inference: true
base_model: thenlper/gte-large
---
# SetFit with thenlper/gte-large
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [Ramyashree/Dataset-setfit-Trainer](https://huggingface.co/datasets/Ramyashree/Dataset-setfit-Trainer) dataset that can be used for Text Classification. This SetFit model uses [thenlper/gte-large](https://huggingface.co/thenlper/gte-large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [thenlper/gte-large](https://huggingface.co/thenlper/gte-large)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 10 classes
- **Training Dataset:** [Ramyashree/Dataset-setfit-Trainer](https://huggingface.co/datasets/Ramyashree/Dataset-setfit-Trainer)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:--------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| create_account | <ul><li>"I don't have an online account, what do I have to do to register?"</li><li>'can you tell me if i can regisger two accounts with a single email address?'</li><li>'I have no online account, open one, please'</li></ul> |
| edit_account | <ul><li>'how can I modify the information on my profile?'</li><li>'can u ask an agent how to make changes to my profile?'</li><li>'I want to update the information on my profile'</li></ul> |
| delete_account | <ul><li>'can I close my account?'</li><li>"I don't want my account, can you delete it?"</li><li>'how do i close my online account?'</li></ul> |
| switch_account | <ul><li>'I would like to use my other online account , could you switch them, please?'</li><li>'i want to use my other online account, can u change them?'</li><li>'how do i change to another account?'</li></ul> |
| get_invoice | <ul><li>'what can you tell me about getting some bills?'</li><li>'tell me where I can request a bill'</li><li>'ask an agent if i can obtain some bills'</li></ul> |
| get_refund | <ul><li>'the game was postponed, help me obtain a reimbursement'</li><li>'the game was postponed, what should I do to obtain a reimbursement?'</li><li>'the concert was postponed, what should I do to request a reimbursement?'</li></ul> |
| payment_issue | <ul><li>'i have an issue making a payment with card and i want to inform of it, please'</li><li>'I got an error message when I attempted to pay, but my card was charged anyway and I want to notify it'</li><li>'I want to notify a problem making a payment, can you help me?'</li></ul> |
| check_refund_policy | <ul><li>"I'm interested in your reimbursement polivy"</li><li>'i wanna see your refund policy, can u help me?'</li><li>'where do I see your money back policy?'</li></ul> |
| recover_password | <ul><li>'my online account was hacked and I want tyo get it back'</li><li>"I lost my password and I'd like to retrieve it, please"</li><li>'could u ask an agent how i can reset my password?'</li></ul> |
| track_refund | <ul><li>'tell me if my refund was processed'</li><li>'I need help checking the status of my refund'</li><li>'I want to see the status of my refund, can you help me?'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Ramyashree/gte-large-with500records-test")
# Run inference
preds = model("where to close my user account")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 10.258 | 24 |
| Label | Training Sample Count |
|:--------------------|:----------------------|
| check_refund_policy | 50 |
| create_account | 50 |
| delete_account | 50 |
| edit_account | 50 |
| get_invoice | 50 |
| get_refund | 50 |
| payment_issue | 50 |
| recover_password | 50 |
| switch_account | 50 |
| track_refund | 50 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0008 | 1 | 0.3248 | - |
| 0.04 | 50 | 0.1606 | - |
| 0.08 | 100 | 0.0058 | - |
| 0.12 | 150 | 0.0047 | - |
| 0.16 | 200 | 0.0009 | - |
| 0.2 | 250 | 0.0007 | - |
| 0.24 | 300 | 0.001 | - |
| 0.28 | 350 | 0.0008 | - |
| 0.32 | 400 | 0.0005 | - |
| 0.36 | 450 | 0.0004 | - |
| 0.4 | 500 | 0.0005 | - |
| 0.44 | 550 | 0.0005 | - |
| 0.48 | 600 | 0.0006 | - |
| 0.52 | 650 | 0.0005 | - |
| 0.56 | 700 | 0.0004 | - |
| 0.6 | 750 | 0.0004 | - |
| 0.64 | 800 | 0.0002 | - |
| 0.68 | 850 | 0.0003 | - |
| 0.72 | 900 | 0.0002 | - |
| 0.76 | 950 | 0.0002 | - |
| 0.8 | 1000 | 0.0003 | - |
| 0.84 | 1050 | 0.0002 | - |
| 0.88 | 1100 | 0.0002 | - |
| 0.92 | 1150 | 0.0003 | - |
| 0.96 | 1200 | 0.0003 | - |
| 1.0 | 1250 | 0.0003 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
AshanGimhana/THTestModelV2 | AshanGimhana | 2023-12-19T12:55:16Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-12-19T12:55:09Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: TinyPixel/Llama-2-7B-bf16-sharded
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [TinyPixel/Llama-2-7B-bf16-sharded](https://huggingface.co/TinyPixel/Llama-2-7B-bf16-sharded) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 120
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0 |
Sweta22/my-pet-cat | Sweta22 | 2023-12-19T12:53:35Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-19T12:49:03Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat- Dreambooth model trained by Sweta22 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: LCS2022050
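A minimal inference sketch with 🤗 Diffusers is shown below; the concept token used in the prompt is an assumption taken from the card title, and fp16/CUDA are illustrative choices rather than the author's settings.
```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id taken from this card; fp16 + CUDA are illustrative choices
pipeline = StableDiffusionPipeline.from_pretrained(
    "Sweta22/my-pet-cat", torch_dtype=torch.float16
).to("cuda")

# The concept token "My-Pet-Cat-" is an assumption based on the card title
image = pipeline("a photo of My-Pet-Cat- sitting on a sofa").images[0]
image.save("my_pet_cat.png")
```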
Sample pictures of this concept:
.png)
|
espnet/kiritan_svs_rnn | espnet | 2023-12-19T12:46:56Z | 2 | 0 | espnet | [
"espnet",
"audio",
"singing-voice-synthesis",
"jp",
"dataset:kiritan",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2023-12-19T12:45:54Z | ---
tags:
- espnet
- audio
- singing-voice-synthesis
language: jp
datasets:
- kiritan
license: cc-by-4.0
---
## ESPnet2 SVS model
### `espnet/kiritan_svs_rnn`
This model was trained by ftshijt using kiritan recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 5c4d7cf7feba8461de2e1080bf82182f0efaef38
pip install -e .
cd egs2/kiritan/svs1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/kiritan_svs_rnn
```
## SVS config
<details><summary>expand</summary>
```
config: conf/tuning/train_naive_rnn_dp.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: sequence
valid_iterator_type: null
output_dir: exp/svs_train_naive_rnn_dp_raw_phn_pyopenjtalk_jp
ngpu: 1
seed: 0
num_workers: 8
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 500
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 2
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
use_lora: false
save_lora_only: true
lora_conf: {}
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/svs_stats_raw_phn_pyopenjtalk_jp/train/text_shape.phn
- exp/svs_stats_raw_phn_pyopenjtalk_jp/train/singing_shape
valid_shape_file:
- exp/svs_stats_raw_phn_pyopenjtalk_jp/valid/text_shape.phn
- exp/svs_stats_raw_phn_pyopenjtalk_jp/valid/singing_shape
batch_type: sorted
valid_batch_type: null
fold_length:
- 150
- 240000
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
chunk_default_fs: null
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - dump/raw/tr_no_dev/wav.scp
- singing
- sound
- - dump/raw/tr_no_dev/label
- label
- duration
- - dump/raw/tr_no_dev/score.scp
- score
- score
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - dump/raw/dev/wav.scp
- singing
- sound
- - dump/raw/dev/label
- label
- duration
- - dump/raw/dev/score.scp
- score
- score
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
allow_multi_rates: false
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-06
weight_decay: 0.0
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- pau
- a
- i
- o
- e
- u
- k
- n
- r
- t
- m
- d
- s
- N
- sh
- g
- y
- b
- w
- cl
- ts
- z
- ch
- j
- h
- f
- p
- ky
- ry
- hy
- py
- ny
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: pyopenjtalk
fs: 24000
score_feats_extract: syllable_score_feats
score_feats_extract_conf:
fs: 24000
n_fft: 2048
win_length: 1200
hop_length: 300
feats_extract: fbank
feats_extract_conf:
n_fft: 2048
hop_length: 300
win_length: 1200
fs: 24000
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/svs_stats_raw_phn_pyopenjtalk_jp/train/feats_stats.npz
svs: naive_rnn_dp
svs_conf:
midi_dim: 129
embed_dim: 512
duration_dim: 500
eprenet_conv_layers: 0
eprenet_conv_chans: 256
eprenet_conv_filts: 3
elayers: 3
eunits: 256
ebidirectional: true
midi_embed_integration_type: add
dlayers: 2
dunits: 256
dbidirectional: true
postnet_layers: 5
postnet_chans: 512
postnet_filts: 5
use_batch_norm: true
reduction_factor: 1
eprenet_dropout_rate: 0.2
edropout_rate: 0.1
ddropout_rate: 0.1
postnet_dropout_rate: 0.5
init_type: pytorch
use_masking: true
pitch_extract: dio
pitch_extract_conf:
use_token_averaged_f0: false
fs: 24000
n_fft: 2048
hop_length: 300
f0max: 800
f0min: 80
reduction_factor: 1
pitch_normalize: global_mvn
pitch_normalize_conf:
stats_file: exp/svs_stats_raw_phn_pyopenjtalk_jp/train/pitch_stats.npz
ying_extract: null
ying_extract_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202310'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{shi22d_interspeech,
author={Jiatong Shi and Shuai Guo and Tao Qian and Tomoki Hayashi and Yuning Wu and Fangzheng Xu and Xuankai Chang and Huazhe Li and Peter Wu and Shinji Watanabe and Qin Jin},
title={{Muskits: an End-to-end Music Processing Toolkit for Singing Voice Synthesis}},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={4277--4281},
doi={10.21437/Interspeech.2022-10039}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
haramberesearch/llama2_xs_460M_uncensored | haramberesearch | 2023-12-19T12:43:52Z | 11 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:unalignment/toxic-dpo-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-19T12:10:38Z | ---
datasets:
- unalignment/toxic-dpo-v0.1
---
# llama2_xs_460M_uncensored
## Model Details
[llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental), DPO fine-tuned to remove alignment (3 epochs of QLoRA).
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Harambe Research
- **Model type:** llama2
- **Finetuned from model:** [llama2_xs_460M_experimental](https://huggingface.co/ahxt/llama2_xs_460M_experimental)
### Out-of-Scope Use
Don't use this to do bad things. Bad things are bad.
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
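A minimal sketch with 🤗 Transformers is shown below; it assumes the repository ships its own tokenizer, and the prompt format and generation settings are illustrative assumptions rather than recommendations from the model authors.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "haramberesearch/llama2_xs_460M_uncensored"  # repo id taken from this card
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float32)

prompt = "Q: What is the capital of France?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
# max_new_tokens / temperature below are illustrative, not tuned settings
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```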
Alternatively, you can run it with text-generation-webui: https://github.com/oobabooga/text-generation-webui |
bartowski/Metis-0.4-exl2 | bartowski | 2023-12-19T12:42:49Z | 0 | 0 | null | [
"text-generation",
"base_model:Mihaiii/Metis-0.3",
"base_model:finetune:Mihaiii/Metis-0.3",
"license:apache-2.0",
"region:us"
] | text-generation | 2023-12-19T11:12:05Z | ---
base_model: Mihaiii/Metis-0.3
inference: false
license: apache-2.0
license_name: apache-2.0
metrics:
- accuracy
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Metis-0.4
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
Each branch contains an individual bits-per-weight quantization, with the `main` branch containing only the measurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments were used, except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/Mihaiii/Metis-0.4
<a href="https://huggingface.co/bartowski/Metis-0.4-exl2/tree/4_0">4.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Metis-0.4-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Metis-0.4-exl2/tree/6_0">6.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Metis-0.4-exl2/tree/8_0">8.0 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/Metis-0.4-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Metis-0.4-exl2`:
```shell
mkdir Metis-0.4-exl2
huggingface-cli download bartowski/Metis-0.4-exl2 --local-dir Metis-0.4-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Metis-0.4-exl2
huggingface-cli download bartowski/Metis-0.4-exl2 --revision 4_0 --local-dir Metis-0.4-exl2 --local-dir-use-symlinks False
```
|
satani/phtben-8 | satani | 2023-12-19T12:40:54Z | 4 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-19T12:36:51Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### phtben_8 Dreambooth model trained by satani with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
matiperotti/subcategory | matiperotti | 2023-12-19T12:40:14Z | 0 | 0 | null | [
"image-classification",
"en",
"region:us"
] | image-classification | 2023-12-19T12:01:01Z | ---
language:
- en
pipeline_tag: image-classification
--- |
hkivancoral/smids_10x_deit_small_sgd_001_fold5 | hkivancoral | 2023-12-19T12:39:04Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-19T11:36:51Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_sgd_001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.895
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_sgd_001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2764
- Accuracy: 0.895
## Model description
More information needed
## Intended uses & limitations
More information needed
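Pending the authors' own instructions, a minimal inference sketch with the 🤗 Transformers image-classification pipeline is given below; the input image path is a placeholder, and the class labels come from the fine-tuning dataset, which is not documented on this card.
```python
from PIL import Image
from transformers import pipeline

# Repo id taken from this card; the image path is a placeholder
classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_small_sgd_001_fold5",
)
image = Image.open("example_patch.png")
print(classifier(image))
```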
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5289 | 1.0 | 750 | 0.5577 | 0.7883 |
| 0.411 | 2.0 | 1500 | 0.4355 | 0.835 |
| 0.3696 | 3.0 | 2250 | 0.3887 | 0.85 |
| 0.3417 | 4.0 | 3000 | 0.3643 | 0.8517 |
| 0.3357 | 5.0 | 3750 | 0.3441 | 0.8617 |
| 0.2644 | 6.0 | 4500 | 0.3299 | 0.865 |
| 0.2577 | 7.0 | 5250 | 0.3164 | 0.8667 |
| 0.2725 | 8.0 | 6000 | 0.3096 | 0.875 |
| 0.2894 | 9.0 | 6750 | 0.3046 | 0.8717 |
| 0.2245 | 10.0 | 7500 | 0.2980 | 0.87 |
| 0.2663 | 11.0 | 8250 | 0.2930 | 0.8817 |
| 0.2488 | 12.0 | 9000 | 0.2925 | 0.8717 |
| 0.2365 | 13.0 | 9750 | 0.2865 | 0.88 |
| 0.2172 | 14.0 | 10500 | 0.2813 | 0.8833 |
| 0.2487 | 15.0 | 11250 | 0.2761 | 0.885 |
| 0.1796 | 16.0 | 12000 | 0.2827 | 0.8817 |
| 0.1959 | 17.0 | 12750 | 0.2794 | 0.8833 |
| 0.1795 | 18.0 | 13500 | 0.2745 | 0.8833 |
| 0.2262 | 19.0 | 14250 | 0.2788 | 0.885 |
| 0.1595 | 20.0 | 15000 | 0.2793 | 0.885 |
| 0.2022 | 21.0 | 15750 | 0.2745 | 0.8833 |
| 0.2023 | 22.0 | 16500 | 0.2758 | 0.8917 |
| 0.1864 | 23.0 | 17250 | 0.2773 | 0.8883 |
| 0.1869 | 24.0 | 18000 | 0.2763 | 0.8967 |
| 0.1883 | 25.0 | 18750 | 0.2788 | 0.89 |
| 0.1768 | 26.0 | 19500 | 0.2728 | 0.8967 |
| 0.1135 | 27.0 | 20250 | 0.2823 | 0.8867 |
| 0.1819 | 28.0 | 21000 | 0.2713 | 0.8933 |
| 0.1691 | 29.0 | 21750 | 0.2729 | 0.8967 |
| 0.1867 | 30.0 | 22500 | 0.2819 | 0.89 |
| 0.1549 | 31.0 | 23250 | 0.2710 | 0.8933 |
| 0.125 | 32.0 | 24000 | 0.2766 | 0.8917 |
| 0.1602 | 33.0 | 24750 | 0.2747 | 0.895 |
| 0.1131 | 34.0 | 25500 | 0.2730 | 0.9 |
| 0.1454 | 35.0 | 26250 | 0.2723 | 0.895 |
| 0.1829 | 36.0 | 27000 | 0.2731 | 0.8967 |
| 0.1 | 37.0 | 27750 | 0.2730 | 0.8967 |
| 0.1344 | 38.0 | 28500 | 0.2751 | 0.8983 |
| 0.1584 | 39.0 | 29250 | 0.2745 | 0.8983 |
| 0.1265 | 40.0 | 30000 | 0.2754 | 0.8967 |
| 0.1671 | 41.0 | 30750 | 0.2769 | 0.8967 |
| 0.147 | 42.0 | 31500 | 0.2744 | 0.8933 |
| 0.1588 | 43.0 | 32250 | 0.2753 | 0.8967 |
| 0.1433 | 44.0 | 33000 | 0.2767 | 0.9 |
| 0.1715 | 45.0 | 33750 | 0.2775 | 0.8967 |
| 0.1027 | 46.0 | 34500 | 0.2766 | 0.9 |
| 0.1628 | 47.0 | 35250 | 0.2771 | 0.8967 |
| 0.1468 | 48.0 | 36000 | 0.2769 | 0.895 |
| 0.1346 | 49.0 | 36750 | 0.2765 | 0.895 |
| 0.0897 | 50.0 | 37500 | 0.2764 | 0.895 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
sefercanapaydin/sdxl-lora-abid | sefercanapaydin | 2023-12-19T12:37:09Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-12-19T10:06:09Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of a bald guy named Sefo wearing casual clothes, taking a selfie, and smiling.
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
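A hedged inference sketch with 🤗 Diffusers follows: the base model and the prompt are taken from this card's metadata, while the assumption that the LoRA weights can be loaded directly from the repository root is mine.
```python
import torch
from diffusers import DiffusionPipeline

# Base model taken from this card's metadata
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Assumes the LoRA weights are stored at the root of this repository
pipe.load_lora_weights("sefercanapaydin/sdxl-lora-abid")

# Instance prompt copied from the card metadata
prompt = "A photo of a bald guy named Sefo wearing casual clothes, taking a selfie, and smiling."
pipe(prompt).images[0].save("sefo.png")
```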
|
OpenDILabCommunity/CartPole-v0-SampledEfficientZero | OpenDILabCommunity | 2023-12-19T12:35:26Z | 0 | 0 | pytorch | [
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"CartPole-v0",
"en",
"arxiv:2310.08348",
"license:apache-2.0",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T12:35:13Z | ---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- CartPole-v0
benchmark_name: OpenAI/Gym/Box2d
task_name: CartPole-v0
pipeline_tag: reinforcement-learning
model-index:
- name: SampledEfficientZero
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v0
type: CartPole-v0
metrics:
- type: mean_reward
value: 162.4 +/- 23.27
name: mean_reward
---
# Play **CartPole-v0** with **SampledEfficientZero** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This implementation applies **SampledEfficientZero** to the OpenAI/Gym/Box2d **CartPole-v0** environment using [LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine).
**LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details are in paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co/papers/2310.08348).
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
pip3 install DI-engine[common_env,video]
pip3 install LightZero
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from lzero.agent import SampledEfficientZeroAgent
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = SampledEfficientZeroAgent(
env_id="CartPole-v0", exp_name="CartPole-v0-SampledEfficientZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from lzero.agent import SampledEfficientZeroAgent
from huggingface_ding import pull_model_from_hub
# Pull model from Huggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/CartPole-v0-SampledEfficientZero")
# Instantiate the agent
agent = SampledEfficientZeroAgent(
env_id="CartPole-v0", exp_name="CartPole-v0-SampledEfficientZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
#Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from lzero.agent import SampledEfficientZeroAgent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = SampledEfficientZeroAgent(env_id="CartPole-v0", exp_name="CartPole-v0-SampledEfficientZero")
# Train the agent
return_ = agent.train(step=int(10000))
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/Box2d",
task_name="CartPole-v0",
algo_name="SampledEfficientZero",
github_repo_url="https://github.com/opendilab/LightZero",
github_doc_model_url=None,
github_doc_env_url=None,
installation_guide='''
pip3 install DI-engine[common_env,video]
pip3 install LightZero
''',
usage_file_by_git_clone="./sampled_efficientzero/cartpole_sampled_efficientzero_deploy.py",
usage_file_by_huggingface_ding="./sampled_efficientzero/cartpole_sampled_efficientzero_download.py",
train_file="./sampled_efficientzero/cartpole_sampled_efficientzero.py",
repo_id="OpenDILabCommunity/CartPole-v0-SampledEfficientZero",
platform_info="[LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine)",
model_description="**LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details are in paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co/papers/2310.08348).",
create_repo=True
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'main_config': {
'exp_name': 'CartPole-v0-SampledEfficientZero',
'env': {
'env_id': 'CartPole-v0',
'continuous': False,
'manually_discretization': False,
'collector_env_num': 8,
'evaluator_env_num': 3,
'n_evaluator_episode': 3,
'manager': {
'shared_memory': False
}
},
'policy': {
'on_policy': False,
'cuda': True,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'model': {
'observation_shape': 4,
'action_space_size': 2,
'continuous_action_space': False,
'num_of_sampled_actions': 2,
'model_type': 'mlp',
'lstm_hidden_size': 128,
'latent_state_dim': 128,
'discrete_action_encoding_type': 'one_hot',
'norm_type': 'BN'
},
'use_rnd_model': False,
'sampled_algo': True,
'gumbel_algo': False,
'mcts_ctree': True,
'collector_env_num': 8,
'evaluator_env_num': 3,
'env_type': 'not_board_games',
'action_type': 'fixed_action_space',
'battle_mode': 'play_with_bot_mode',
'monitor_extra_statistics': True,
'game_segment_length': 50,
'transform2string': False,
'gray_scale': False,
'use_augmentation': False,
'augmentation': ['shift', 'intensity'],
'ignore_done': False,
'update_per_collect': 100,
'model_update_ratio': 0.1,
'batch_size': 256,
'optim_type': 'Adam',
'learning_rate': 0.003,
'target_update_freq': 100,
'target_update_freq_for_intrinsic_reward': 1000,
'weight_decay': 0.0001,
'momentum': 0.9,
'grad_clip_value': 10,
'n_episode': 8,
'num_simulations': 25,
'discount_factor': 0.997,
'td_steps': 5,
'num_unroll_steps': 5,
'reward_loss_weight': 1,
'value_loss_weight': 0.25,
'policy_loss_weight': 1,
'policy_entropy_loss_weight': 0,
'ssl_loss_weight': 2,
'lr_piecewise_constant_decay': False,
'threshold_training_steps_for_final_lr': 50000,
'manual_temperature_decay': False,
'threshold_training_steps_for_final_temperature': 100000,
'fixed_temperature_value': 0.25,
'use_ture_chance_label_in_chance_encoder': False,
'use_priority': True,
'priority_prob_alpha': 0.6,
'priority_prob_beta': 0.4,
'root_dirichlet_alpha': 0.3,
'root_noise_weight': 0.25,
'random_collect_episode_num': 0,
'eps': {
'eps_greedy_exploration_in_collect': False,
'type': 'linear',
'start': 1.0,
'end': 0.05,
'decay': 100000
},
'cfg_type': 'SampledEfficientZeroPolicyDict',
'init_w': 0.003,
'normalize_prob_of_sampled_actions': False,
'policy_loss_type': 'cross_entropy',
'lstm_horizon_len': 5,
'cos_lr_scheduler': False,
'reanalyze_ratio': 0.0,
'eval_freq': 200,
'replay_buffer_size': 1000000
},
'wandb_logger': {
'gradient_logger': False,
'video_logger': False,
'plot_logger': False,
'action_logger': False,
'return_logger': False
}
},
'create_config': {
'env': {
'type':
'cartpole_lightzero',
'import_names':
['zoo.classic_control.cartpole.envs.cartpole_lightzero_env']
},
'env_manager': {
'type': 'subprocess'
},
'policy': {
'type': 'sampled_efficientzero',
'import_names': ['lzero.policy.sampled_efficientzero']
}
}
}
```
</details>
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](<TODO>)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/LightZero)
- **Doc**: [Algorithm link](<TODO>)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/CartPole-v0-SampledEfficientZero/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/CartPole-v0-SampledEfficientZero/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 14064.13 KB
- **Last Update Date:** 2023-12-19
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/Box2d
- **Task:** CartPole-v0
- **Gym version:** 0.25.1
- **DI-engine version:** v0.5.0
- **PyTorch version:** 2.0.1+cu117
- **Doc**: [Environments link](<TODO>)
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_SystemError1.0_Seed104 | behzadnet | 2023-12-19T12:27:13Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-19T12:27:06Z | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
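As a hedged starting point, the sketch below loads the base model named above with the 4-bit bitsandbytes settings listed under "Training procedure" and attaches this repository as a PEFT adapter; whether the adapter weights are stored at the repository root is an assumption.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16"  # base model named in this card
adapter_id = "behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_SystemError1.0_Seed104"

# 4-bit settings mirror the bitsandbytes config documented below on this card
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```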
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_SystemError1.0_Seed104 | behzadnet | 2023-12-19T12:27:00Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-19T12:26:50Z | ---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
toonhirunkupt/knestanadonmodel | toonhirunkupt | 2023-12-19T12:16:24Z | 0 | 0 | null | [
"region:us"
] | null | 2023-12-19T09:34:05Z | This model was trained on the vocals from three songs by Nes (ธนดล นิลนพรัตน์), namely:
- Go!! (OST. Tales Runner)
- Finding Love ตามหาจนเจอ (Thai Ver.)
- หัวใจฉันเป็นของเธอ (OST. Tales Runner)
I hope users will put it to good use : )
With love, and in remembrance of "ธนดล นิลนพรัตน์" (2528 - 2553)
Note: The creator of this model is not affiliated with the artist's former record label or his family. |
Weiming1122/q-Taxi-v3 | Weiming1122 | 2023-12-19T12:15:18Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T08:43:27Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Weiming1122/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
racheilla/bert-base-indonesian-522M-finetuned-pemilu | racheilla | 2023-12-19T12:12:24Z | 5 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"base_model:cahya/bert-base-indonesian-522M",
"base_model:finetune:cahya/bert-base-indonesian-522M",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-12-19T09:21:41Z | ---
license: mit
base_model: cahya/bert-base-indonesian-522M
tags:
- generated_from_keras_callback
model-index:
- name: racheilla/bert-base-indonesian-522M-finetuned-pemilu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# racheilla/bert-base-indonesian-522M-finetuned-pemilu
This model is a fine-tuned version of [cahya/bert-base-indonesian-522M](https://huggingface.co/cahya/bert-base-indonesian-522M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2573
- Validation Loss: 3.4101
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
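As a hedged starting point for this fill-mask model, the sketch below uses the 🤗 Transformers pipeline; the Indonesian example sentence is an illustrative assumption and is not taken from the training data.
```python
from transformers import pipeline

# Repo id taken from this card; the example sentence is illustrative
unmasker = pipeline(
    "fill-mask", model="racheilla/bert-base-indonesian-522M-finetuned-pemilu"
)
print(unmasker("Pemilu tahun depan akan diikuti oleh banyak [MASK]."))
```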
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -950, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2847 | 3.4266 | 0 |
| 3.3000 | 3.4116 | 1 |
| 3.2702 | 3.3975 | 2 |
| 3.2675 | 3.4689 | 3 |
| 3.2982 | 3.3540 | 4 |
| 3.3109 | 3.4127 | 5 |
| 3.2698 | 3.4126 | 6 |
| 3.2852 | 3.4165 | 7 |
| 3.2977 | 3.3816 | 8 |
| 3.2749 | 3.3923 | 9 |
| 3.2777 | 3.3841 | 10 |
| 3.2555 | 3.4534 | 11 |
| 3.2940 | 3.4194 | 12 |
| 3.2860 | 3.3810 | 13 |
| 3.2585 | 3.3328 | 14 |
| 3.2979 | 3.4310 | 15 |
| 3.2844 | 3.4374 | 16 |
| 3.2961 | 3.3630 | 17 |
| 3.2729 | 3.4132 | 18 |
| 3.2775 | 3.4114 | 19 |
| 3.2561 | 3.3869 | 20 |
| 3.3089 | 3.4583 | 21 |
| 3.2839 | 3.4010 | 22 |
| 3.2863 | 3.4335 | 23 |
| 3.2347 | 3.4040 | 24 |
| 3.2691 | 3.3805 | 25 |
| 3.2779 | 3.4005 | 26 |
| 3.3175 | 3.3627 | 27 |
| 3.2853 | 3.3995 | 28 |
| 3.2787 | 3.3904 | 29 |
| 3.2739 | 3.4169 | 30 |
| 3.2976 | 3.3728 | 31 |
| 3.2474 | 3.4051 | 32 |
| 3.3152 | 3.3760 | 33 |
| 3.2939 | 3.4185 | 34 |
| 3.2955 | 3.3978 | 35 |
| 3.2823 | 3.3749 | 36 |
| 3.3171 | 3.4078 | 37 |
| 3.2513 | 3.4022 | 38 |
| 3.2573 | 3.4101 | 39 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Matheusmatos2916/my_awesome_qa_model | Matheusmatos2916 | 2023-12-19T12:08:41Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-10-24T13:56:01Z | ---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.0800
## Model description
More information needed
## Intended uses & limitations
More information needed
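Since the card does not yet state intended uses, a hedged extractive question-answering sketch with the 🤗 Transformers pipeline is given below; the question and context are placeholders.
```python
from transformers import pipeline

# Repo id taken from this card; question and context are placeholders
qa = pipeline("question-answering", model="Matheusmatos2916/my_awesome_qa_model")
result = qa(
    question="Who wrote the report?",
    context="The report was written by the data science team in 2023.",
)
print(result["answer"], result["score"])
```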
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 150 | 6.9242 |
| No log | 2.0 | 300 | 7.7030 |
| No log | 3.0 | 450 | 8.7695 |
| 1.1393 | 4.0 | 600 | 8.1844 |
| 1.1393 | 5.0 | 750 | 8.0800 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
clarin-knext/RoBERTa-large-CST-finetuned | clarin-knext | 2023-12-19T12:01:43Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:clarin-knext/cst_datasets",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-11-28T08:13:58Z | ---
license: cc-by-sa-4.0
language:
- en
metrics:
- accuracy
datasets:
- clarin-knext/cst_datasets
base_model: roberta-large
pipeline_tag: text-classification
model-index:
- name: accuracy
results:
- task:
type: text-classification
name: Text Classification
metrics:
- type: accuracy
value: 61.07
verified: false
widget:
- text: "Taking pictures can be straining for the arms. | The photographer is massaging her arm, sore from holding the lens."
example_title: "Generalization example"
- text: "The children told their parents that as they were going up to the third floor, the escalator stopped. | When we were reaching the third floor, the escalator stopped."
example_title: "Indirect speech example"
---
# Accuracy per class
<code>TODO</code>
# Usage
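Until the TODO below is filled in by the authors, here is a hedged sketch with the 🤗 Transformers text-classification pipeline; the " | " separator between the two sentences mirrors the widget examples above, and the CST relation labels come from the model config rather than this card.
```python
from transformers import pipeline

# Repo id taken from this card; the " | " pair separator follows the widget examples
classifier = pipeline(
    "text-classification", model="clarin-knext/RoBERTa-large-CST-finetuned"
)
pair = (
    "Taking pictures can be straining for the arms. | "
    "The photographer is massaging her arm, sore from holding the lens."
)
print(classifier(pair))
```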
<code>TODO</code> |
alitolga/bart-base-large-peft | alitolga | 2023-12-19T12:00:41Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"region:us"
] | null | 2023-12-19T11:43:06Z | ---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: bart-base-large-peft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-large-peft
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6188
## Model description
More information needed
## Intended uses & limitations
More information needed
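A hedged loading sketch with 🤗 PEFT is given below; the seq2seq head is my assumption, since the card does not state what the adapter was trained for, and the example input is a placeholder.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Base model named in this card; the seq2seq task head is an assumption
base = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
model = PeftModel.from_pretrained(base, "alitolga/bart-base-large-peft")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```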
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9432 | 1.0 | 843 | 3.7161 |
| 3.916 | 2.0 | 1686 | 3.6846 |
| 3.8955 | 3.0 | 2529 | 3.6695 |
| 3.8601 | 4.0 | 3372 | 3.6538 |
| 3.8141 | 5.0 | 4215 | 3.6188 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
hkivancoral/smids_10x_deit_small_adamax_00001_fold4 | hkivancoral | 2023-12-19T12:00:13Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-19T10:51:46Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_adamax_00001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8783333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_adamax_00001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3483
- Accuracy: 0.8783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2714 | 1.0 | 750 | 0.3483 | 0.875 |
| 0.1928 | 2.0 | 1500 | 0.3347 | 0.8833 |
| 0.1383 | 3.0 | 2250 | 0.3802 | 0.87 |
| 0.0835 | 4.0 | 3000 | 0.4083 | 0.8833 |
| 0.07 | 5.0 | 3750 | 0.4749 | 0.8833 |
| 0.0338 | 6.0 | 4500 | 0.5541 | 0.8767 |
| 0.0133 | 7.0 | 5250 | 0.6527 | 0.8783 |
| 0.0087 | 8.0 | 6000 | 0.7143 | 0.88 |
| 0.0145 | 9.0 | 6750 | 0.7738 | 0.88 |
| 0.0002 | 10.0 | 7500 | 0.8388 | 0.8767 |
| 0.0004 | 11.0 | 8250 | 0.9053 | 0.8817 |
| 0.0065 | 12.0 | 9000 | 0.9720 | 0.8783 |
| 0.0 | 13.0 | 9750 | 1.0304 | 0.8767 |
| 0.0 | 14.0 | 10500 | 1.0771 | 0.8717 |
| 0.0 | 15.0 | 11250 | 1.0764 | 0.8783 |
| 0.0326 | 16.0 | 12000 | 1.0955 | 0.8833 |
| 0.0001 | 17.0 | 12750 | 1.0921 | 0.8817 |
| 0.0 | 18.0 | 13500 | 1.1024 | 0.8817 |
| 0.0 | 19.0 | 14250 | 1.1225 | 0.8817 |
| 0.0 | 20.0 | 15000 | 1.1467 | 0.88 |
| 0.0 | 21.0 | 15750 | 1.1711 | 0.88 |
| 0.0 | 22.0 | 16500 | 1.1842 | 0.8783 |
| 0.0 | 23.0 | 17250 | 1.1878 | 0.8783 |
| 0.0 | 24.0 | 18000 | 1.2170 | 0.8817 |
| 0.0 | 25.0 | 18750 | 1.2183 | 0.88 |
| 0.0 | 26.0 | 19500 | 1.2367 | 0.88 |
| 0.0 | 27.0 | 20250 | 1.2535 | 0.8783 |
| 0.0 | 28.0 | 21000 | 1.2655 | 0.8833 |
| 0.0 | 29.0 | 21750 | 1.2701 | 0.8783 |
| 0.0 | 30.0 | 22500 | 1.2647 | 0.8783 |
| 0.0 | 31.0 | 23250 | 1.2884 | 0.8783 |
| 0.0 | 32.0 | 24000 | 1.2899 | 0.8733 |
| 0.0 | 33.0 | 24750 | 1.3073 | 0.8817 |
| 0.0 | 34.0 | 25500 | 1.3112 | 0.8833 |
| 0.0 | 35.0 | 26250 | 1.3094 | 0.8817 |
| 0.0 | 36.0 | 27000 | 1.3116 | 0.88 |
| 0.0 | 37.0 | 27750 | 1.3157 | 0.88 |
| 0.0 | 38.0 | 28500 | 1.3213 | 0.88 |
| 0.0 | 39.0 | 29250 | 1.3285 | 0.8767 |
| 0.0 | 40.0 | 30000 | 1.3297 | 0.8767 |
| 0.0 | 41.0 | 30750 | 1.3323 | 0.8783 |
| 0.0 | 42.0 | 31500 | 1.3346 | 0.8767 |
| 0.0 | 43.0 | 32250 | 1.3389 | 0.8783 |
| 0.0 | 44.0 | 33000 | 1.3404 | 0.8783 |
| 0.0 | 45.0 | 33750 | 1.3431 | 0.8783 |
| 0.0 | 46.0 | 34500 | 1.3453 | 0.8783 |
| 0.0 | 47.0 | 35250 | 1.3463 | 0.8783 |
| 0.0 | 48.0 | 36000 | 1.3478 | 0.8783 |
| 0.0 | 49.0 | 36750 | 1.3483 | 0.8783 |
| 0.0 | 50.0 | 37500 | 1.3483 | 0.8783 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
clarin-knext/roberta-large-cst-augm-finetuned | clarin-knext | 2023-12-19T12:00:08Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:clarin-knext/cst_datasets",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-19T11:54:41Z | ---
license: cc-by-sa-4.0
language:
- en
metrics:
- accuracy
datasets:
- clarin-knext/cst_datasets
base_model: roberta-large
pipeline_tag: text-classification
widget:
- text: "Taking pictures can be straining for the arms. | The photographer is massaging her arm, sore from holding the lens."
example_title: "Generalization example"
- text: "The children told their parents that as they were going up to the third floor, the escalator stopped. | When we were reaching the third floor, the escalator stopped."
example_title: "Indirect speech example"
---
# Accuracy per class
<code>TODO</code>
# Usage
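Until the TODO below is filled in, a hedged sketch with the 🤗 Transformers pipeline can serve as a starting point; the " | " sentence separator again follows the widget examples above.
```python
from transformers import pipeline

# Repo id taken from this card
classifier = pipeline(
    "text-classification", model="clarin-knext/roberta-large-cst-augm-finetuned"
)
print(classifier(
    "The children told their parents that as they were going up to the third floor, "
    "the escalator stopped. | When we were reaching the third floor, the escalator stopped."
))
```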
<code>TODO</code> |
Ramyashree/gte-large-with500records | Ramyashree | 2023-12-19T11:59:16Z | 9 | 0 | setfit | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:Ramyashree/Dataset-setfit-Trainer",
"arxiv:2209.11055",
"base_model:thenlper/gte-large",
"base_model:finetune:thenlper/gte-large",
"region:us"
] | text-classification | 2023-12-19T11:57:52Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- Ramyashree/Dataset-setfit-Trainer
metrics:
- accuracy
widget:
- text: I wanna obtain some invoices, can you tell me how to do it?
- text: where to close my user account
- text: I have a problem when trying to pay, help me report it
- text: the concert was cancelled and I want to obtain a reimbursement
- text: I got an error message when I tried to make a payment, but I was charged anyway,
can you help me?
pipeline_tag: text-classification
inference: true
base_model: thenlper/gte-large
---
# SetFit with thenlper/gte-large
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [Ramyashree/Dataset-setfit-Trainer](https://huggingface.co/datasets/Ramyashree/Dataset-setfit-Trainer) dataset that can be used for Text Classification. This SetFit model uses [thenlper/gte-large](https://huggingface.co/thenlper/gte-large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [thenlper/gte-large](https://huggingface.co/thenlper/gte-large)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 10 classes
- **Training Dataset:** [Ramyashree/Dataset-setfit-Trainer](https://huggingface.co/datasets/Ramyashree/Dataset-setfit-Trainer)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:--------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| create_account | <ul><li>"I don't have an online account, what do I have to do to register?"</li><li>'can you tell me if i can regisger two accounts with a single email address?'</li><li>'I have no online account, open one, please'</li></ul> |
| edit_account | <ul><li>'how can I modify the information on my profile?'</li><li>'can u ask an agent how to make changes to my profile?'</li><li>'I want to update the information on my profile'</li></ul> |
| delete_account | <ul><li>'can I close my account?'</li><li>"I don't want my account, can you delete it?"</li><li>'how do i close my online account?'</li></ul> |
| switch_account | <ul><li>'I would like to use my other online account , could you switch them, please?'</li><li>'i want to use my other online account, can u change them?'</li><li>'how do i change to another account?'</li></ul> |
| get_invoice | <ul><li>'what can you tell me about getting some bills?'</li><li>'tell me where I can request a bill'</li><li>'ask an agent if i can obtain some bills'</li></ul> |
| get_refund | <ul><li>'the game was postponed, help me obtain a reimbursement'</li><li>'the game was postponed, what should I do to obtain a reimbursement?'</li><li>'the concert was postponed, what should I do to request a reimbursement?'</li></ul> |
| payment_issue | <ul><li>'i have an issue making a payment with card and i want to inform of it, please'</li><li>'I got an error message when I attempted to pay, but my card was charged anyway and I want to notify it'</li><li>'I want to notify a problem making a payment, can you help me?'</li></ul> |
| check_refund_policy | <ul><li>"I'm interested in your reimbursement polivy"</li><li>'i wanna see your refund policy, can u help me?'</li><li>'where do I see your money back policy?'</li></ul> |
| recover_password | <ul><li>'my online account was hacked and I want tyo get it back'</li><li>"I lost my password and I'd like to retrieve it, please"</li><li>'could u ask an agent how i can reset my password?'</li></ul> |
| track_refund | <ul><li>'tell me if my refund was processed'</li><li>'I need help checking the status of my refund'</li><li>'I want to see the status of my refund, can you help me?'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Ramyashree/gte-large-with500records")
# Run inference
preds = model("where to close my user account")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 10.258 | 24 |
| Label | Training Sample Count |
|:--------------------|:----------------------|
| check_refund_policy | 50 |
| create_account | 50 |
| delete_account | 50 |
| edit_account | 50 |
| get_invoice | 50 |
| get_refund | 50 |
| payment_issue | 50 |
| recover_password | 50 |
| switch_account | 50 |
| track_refund | 50 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0008 | 1 | 0.3248 | - |
| 0.04 | 50 | 0.1606 | - |
| 0.08 | 100 | 0.0058 | - |
| 0.12 | 150 | 0.0047 | - |
| 0.16 | 200 | 0.0009 | - |
| 0.2 | 250 | 0.0007 | - |
| 0.24 | 300 | 0.001 | - |
| 0.28 | 350 | 0.0008 | - |
| 0.32 | 400 | 0.0005 | - |
| 0.36 | 450 | 0.0004 | - |
| 0.4 | 500 | 0.0005 | - |
| 0.44 | 550 | 0.0005 | - |
| 0.48 | 600 | 0.0006 | - |
| 0.52 | 650 | 0.0005 | - |
| 0.56 | 700 | 0.0004 | - |
| 0.6 | 750 | 0.0004 | - |
| 0.64 | 800 | 0.0002 | - |
| 0.68 | 850 | 0.0003 | - |
| 0.72 | 900 | 0.0002 | - |
| 0.76 | 950 | 0.0002 | - |
| 0.8 | 1000 | 0.0003 | - |
| 0.84 | 1050 | 0.0002 | - |
| 0.88 | 1100 | 0.0002 | - |
| 0.92 | 1150 | 0.0003 | - |
| 0.96 | 1200 | 0.0003 | - |
| 1.0 | 1250 | 0.0003 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
intrinsic-disorder/bert-250k-2redo | intrinsic-disorder | 2023-12-19T11:50:50Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-19T10:54:53Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-250k-2redo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-250k-2redo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2479
- Accuracy: 0.5543
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
GhostDragon01/habib_photo_LoRA_Realistic_Vision_V2 | GhostDragon01 | 2023-12-19T11:50:03Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:SG161222/RealVisXL_V2.0",
"base_model:adapter:SG161222/RealVisXL_V2.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-12-19T10:44:13Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: SG161222/RealVisXL_V2.0
instance_prompt: photo of a <LLMQSDFHABIBQSDFMLKJ> man
license: openrail++
---
# SDXL LoRA DreamBooth - GhostDragon01/habib_photo_LoRA_Realistic_Vision_V2
<Gallery />
## Model description
These are GhostDragon01/habib_photo_LoRA_Realistic_Vision_V2 LoRA adaption weights for SG161222/RealVisXL_V2.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use photo of a <LLMQSDFHABIBQSDFMLKJ> man to trigger the image generation.
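As a rough usage sketch (not an official example from the author), the adapter can be attached to the base pipeline with diffusers; the inference settings below are assumptions.

```python
import torch
from diffusers import DiffusionPipeline

# Base model these LoRA weights were trained against
pipe = DiffusionPipeline.from_pretrained(
    "SG161222/RealVisXL_V2.0", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adapter from this repository
pipe.load_lora_weights("GhostDragon01/habib_photo_LoRA_Realistic_Vision_V2")

# Use the trigger phrase described above
image = pipe("photo of a <LLMQSDFHABIBQSDFMLKJ> man", num_inference_steps=30).images[0]
image.save("sample.png")
```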
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/GhostDragon01/habib_photo_LoRA_Realistic_Vision_V2/tree/main) them in the Files & versions tab.
|
LoneStriker/Metis-0.4-8.0bpw-h8-exl2 | LoneStriker | 2023-12-19T11:46:10Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:Mihaiii/Metis-0.3",
"base_model:finetune:Mihaiii/Metis-0.3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-19T10:50:09Z | ---
base_model: Mihaiii/Metis-0.3
inference: false
license: apache-2.0
license_name: apache-2.0
metrics:
- accuracy
---
This is a merge between Metis-0.3 and Metis-0.1, with Metis-0.1 as the base.
It was done using [mergekit](https://github.com/cg123/mergekit).
It works well with long system prompts.
It isn't a general-purpose model: it shouldn't be used for storytelling, for example, but only for reasoning and text comprehension.
This model is trained on a private dataset. The high GSM8K score is **NOT** because of the MetaMath dataset.
# Prompt Format:
```
<|system|>
{system_message} </s>
<|user|>
{prompt} </s>
<|assistant|>
```
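For illustration only, the template above can be filled with plain string formatting before being passed to your inference backend; the system message and user prompt below are placeholders.

```python
# Placeholder messages, purely for illustration
system_message = "You are a careful assistant that answers using only the given text."
user_prompt = "Summarise the main claim of the passage above in one sentence."

formatted_prompt = (
    f"<|system|>\n{system_message} </s>\n"
    f"<|user|>\n{user_prompt} </s>\n"
    f"<|assistant|>\n"
)
```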
Merge config:
```yaml
slices:
- sources:
- model: Mihaiii/Metis-0.3
layer_range: [0, 32]
- model: Mihaiii/Metis-0.1
layer_range: [0, 32]
merge_method: slerp
base_model: Mihaiii/Metis-0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
``` |
LoneStriker/Metis-0.4-6.0bpw-h6-exl2 | LoneStriker | 2023-12-19T11:46:05Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:Mihaiii/Metis-0.3",
"base_model:finetune:Mihaiii/Metis-0.3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-19T10:44:56Z | ---
base_model: Mihaiii/Metis-0.3
inference: false
license: apache-2.0
license_name: apache-2.0
metrics:
- accuracy
---
This is a merge between Metis-0.3 and Metis-0.1, with Metis-0.1 as the base.
It was done using [mergekit](https://github.com/cg123/mergekit).
It works well with long system prompts.
It isn't a general-purpose model: it shouldn't be used for storytelling, for example, but only for reasoning and text comprehension.
This model is trained on a private dataset. The high GSM8K score is **NOT** because of the MetaMath dataset.
# Prompt Format:
```
<|system|>
{system_message} </s>
<|user|>
{prompt} </s>
<|assistant|>
```
Merge config:
```yaml
slices:
- sources:
- model: Mihaiii/Metis-0.3
layer_range: [0, 32]
- model: Mihaiii/Metis-0.1
layer_range: [0, 32]
merge_method: slerp
base_model: Mihaiii/Metis-0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
``` |
LoneStriker/Metis-0.4-4.0bpw-h6-exl2 | LoneStriker | 2023-12-19T11:45:57Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:Mihaiii/Metis-0.3",
"base_model:finetune:Mihaiii/Metis-0.3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-19T10:34:26Z | ---
base_model: Mihaiii/Metis-0.3
inference: false
license: apache-2.0
license_name: apache-2.0
metrics:
- accuracy
---
This is a merge between Metis-0.3 and Metis-0.1, with Metis-0.1 as the base.
It was done using [mergekit](https://github.com/cg123/mergekit).
It works well with long system prompts.
It isn't a general-purpose model: it shouldn't be used for storytelling, for example, but only for reasoning and text comprehension.
This model is trained on a private dataset. The high GSM8K score is **NOT** because of the MetaMath dataset.
# Prompt Format:
```
<|system|>
{system_message} </s>
<|user|>
{prompt} </s>
<|assistant|>
```
Merge config:
```yaml
slices:
- sources:
- model: Mihaiii/Metis-0.3
layer_range: [0, 32]
- model: Mihaiii/Metis-0.1
layer_range: [0, 32]
merge_method: slerp
base_model: Mihaiii/Metis-0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
``` |
alitolga/electra-base-generator-large-peft | alitolga | 2023-12-19T11:42:48Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/electra-base-generator",
"base_model:finetune:google/electra-base-generator",
"license:apache-2.0",
"region:us"
] | null | 2023-12-19T11:35:19Z | ---
license: apache-2.0
base_model: google/electra-base-generator
tags:
- generated_from_trainer
model-index:
- name: electra-base-generator-large-peft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-generator-large-peft
This model is a fine-tuned version of [google/electra-base-generator](https://huggingface.co/google/electra-base-generator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0677 | 1.0 | 565 | 0.0436 |
| 0.064 | 2.0 | 1130 | 0.0415 |
| 0.048 | 3.0 | 1695 | 0.0418 |
| 0.0441 | 4.0 | 2260 | 0.0410 |
| 0.0437 | 5.0 | 2825 | 0.0406 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
racheltong/whisper-tiny-cn-100steps | racheltong | 2023-12-19T11:40:07Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai/whisper-tiny",
"base_model:adapter:openai/whisper-tiny",
"region:us"
] | null | 2023-12-19T11:40:05Z | ---
library_name: peft
base_model: openai/whisper-tiny
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
kghanlon/distilbert-base-uncased-RILE-v1 | kghanlon | 2023-12-19T11:36:24Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-19T10:52:52Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
model-index:
- name: distilbert-base-uncased-RILE-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-RILE-v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8587
- Accuracy: 0.7364
- Recall: 0.7364
- F1: 0.7358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|
| 0.6966 | 1.0 | 15490 | 0.6831 | 0.7164 | 0.7164 | 0.7123 |
| 0.5738 | 2.0 | 30980 | 0.6934 | 0.7300 | 0.7300 | 0.7300 |
| 0.422 | 3.0 | 46470 | 0.8587 | 0.7364 | 0.7364 | 0.7358 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/smids_10x_deit_small_sgd_001_fold4 | hkivancoral | 2023-12-19T11:36:19Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-19T10:34:10Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_sgd_001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_sgd_001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3291
- Accuracy: 0.8767
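A minimal inference sketch with the transformers pipeline; the image path is a placeholder and the class labels are whatever the imagefolder dataset defined.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_small_sgd_001_fold4",
)

# Replace with a path or URL to one of your own images
print(classifier("example_image.png"))
```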
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5469 | 1.0 | 750 | 0.5533 | 0.7983 |
| 0.4148 | 2.0 | 1500 | 0.4326 | 0.8367 |
| 0.3982 | 3.0 | 2250 | 0.3912 | 0.8467 |
| 0.355 | 4.0 | 3000 | 0.3693 | 0.8533 |
| 0.3032 | 5.0 | 3750 | 0.3569 | 0.8583 |
| 0.2345 | 6.0 | 4500 | 0.3466 | 0.8617 |
| 0.2053 | 7.0 | 5250 | 0.3412 | 0.865 |
| 0.2443 | 8.0 | 6000 | 0.3381 | 0.8633 |
| 0.259 | 9.0 | 6750 | 0.3314 | 0.875 |
| 0.2146 | 10.0 | 7500 | 0.3275 | 0.8717 |
| 0.2301 | 11.0 | 8250 | 0.3262 | 0.8733 |
| 0.298 | 12.0 | 9000 | 0.3264 | 0.8733 |
| 0.2031 | 13.0 | 9750 | 0.3234 | 0.8783 |
| 0.1941 | 14.0 | 10500 | 0.3276 | 0.8783 |
| 0.1822 | 15.0 | 11250 | 0.3209 | 0.88 |
| 0.2209 | 16.0 | 12000 | 0.3226 | 0.8767 |
| 0.1294 | 17.0 | 12750 | 0.3179 | 0.8817 |
| 0.1726 | 18.0 | 13500 | 0.3224 | 0.88 |
| 0.2222 | 19.0 | 14250 | 0.3196 | 0.8833 |
| 0.1604 | 20.0 | 15000 | 0.3199 | 0.8817 |
| 0.1742 | 21.0 | 15750 | 0.3204 | 0.8783 |
| 0.1599 | 22.0 | 16500 | 0.3188 | 0.88 |
| 0.1753 | 23.0 | 17250 | 0.3189 | 0.8817 |
| 0.1975 | 24.0 | 18000 | 0.3189 | 0.8817 |
| 0.1797 | 25.0 | 18750 | 0.3190 | 0.8817 |
| 0.1646 | 26.0 | 19500 | 0.3244 | 0.8817 |
| 0.1585 | 27.0 | 20250 | 0.3244 | 0.885 |
| 0.1303 | 28.0 | 21000 | 0.3225 | 0.8817 |
| 0.1144 | 29.0 | 21750 | 0.3207 | 0.8817 |
| 0.1409 | 30.0 | 22500 | 0.3230 | 0.8817 |
| 0.1303 | 31.0 | 23250 | 0.3219 | 0.8833 |
| 0.1405 | 32.0 | 24000 | 0.3260 | 0.8817 |
| 0.1503 | 33.0 | 24750 | 0.3248 | 0.88 |
| 0.1402 | 34.0 | 25500 | 0.3257 | 0.8817 |
| 0.1266 | 35.0 | 26250 | 0.3227 | 0.88 |
| 0.1495 | 36.0 | 27000 | 0.3271 | 0.8817 |
| 0.1021 | 37.0 | 27750 | 0.3248 | 0.8833 |
| 0.1616 | 38.0 | 28500 | 0.3242 | 0.885 |
| 0.158 | 39.0 | 29250 | 0.3254 | 0.88 |
| 0.1668 | 40.0 | 30000 | 0.3256 | 0.8833 |
| 0.1276 | 41.0 | 30750 | 0.3297 | 0.88 |
| 0.1072 | 42.0 | 31500 | 0.3307 | 0.88 |
| 0.1457 | 43.0 | 32250 | 0.3289 | 0.8783 |
| 0.1691 | 44.0 | 33000 | 0.3278 | 0.8817 |
| 0.1442 | 45.0 | 33750 | 0.3288 | 0.88 |
| 0.1231 | 46.0 | 34500 | 0.3279 | 0.88 |
| 0.1011 | 47.0 | 35250 | 0.3276 | 0.8767 |
| 0.1059 | 48.0 | 36000 | 0.3287 | 0.8767 |
| 0.1263 | 49.0 | 36750 | 0.3292 | 0.8767 |
| 0.1053 | 50.0 | 37500 | 0.3291 | 0.8767 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
ziumks/zwave-dbgatekeeper-v0.3 | ziumks | 2023-12-19T11:36:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2023-12-19T11:35:49Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral-sql-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-sql-finetune
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0305
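Because this repository holds PEFT (LoRA) adapter weights rather than a full checkpoint, a typical way to use it is to load the adapter on top of the base model; the sketch below is an assumption, not an official example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the fine-tuned adapter from this repository
model = PeftModel.from_pretrained(base_model, "ziumks/zwave-dbgatekeeper-v0.3")
```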
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1899 | 0.17 | 25 | 0.3116 |
| 0.1795 | 0.33 | 50 | 0.1088 |
| 0.0819 | 0.5 | 75 | 0.0425 |
| 0.0453 | 0.67 | 100 | 0.0419 |
| 0.0534 | 0.83 | 125 | 0.0382 |
| 0.0338 | 1.0 | 150 | 0.0315 |
| 0.0358 | 1.17 | 175 | 0.0345 |
| 0.0336 | 1.33 | 200 | 0.0334 |
| 0.0401 | 1.5 | 225 | 0.0322 |
| 0.0326 | 1.67 | 250 | 0.0308 |
| 0.0396 | 1.83 | 275 | 0.0309 |
| 0.0307 | 2.0 | 300 | 0.0305 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.0 |
StellarMilk/t5-base-newsqa-qag-trained | StellarMilk | 2023-12-19T11:34:50Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"questions and answers generation",
"en",
"dataset:StellarMilk/newsqa",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-19T10:30:01Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- StellarMilk/newsqa
pipeline_tag: text2text-generation
tags:
- questions and answers generation
widget:
- text: "generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Questions & Answers Generation Example 1"
model-index:
- name: StellarMilk/t5-base-newsqa-qag-trained
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: StellarMilk/newsqa
type: default
args: default
metrics:
- name: BLEU4 (Question & Answer Generation)
type: bleu4_question_answer_generation
value: 3.18
---
# Model Card of `StellarMilk/t5-base-newsqa-qag-trained`
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) for the question & answer pair generation task on the [StellarMilk/newsqa](https://huggingface.co/datasets/StellarMilk/newsqa) (dataset_name: default) dataset via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-base](https://huggingface.co/t5-base)
- **Language:** en
- **Training data:** [StellarMilk/newsqa](https://huggingface.co/datasets/StellarMilk/newsqa) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="StellarMilk/t5-base-newsqa-qag-trained")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "StellarMilk/t5-base-newsqa-qag-trained")
output = pipe("generate question and answer: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/StellarMilk/t5-base-newsqa-qag-trained/raw/main/eval/metric.first.answer.paragraph.questions_answers.StellarMilk_newsqa.default.json)
| Score | Type | Dataset |
|---------|--------|-----------|
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: StellarMilk/newsqa
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: ['qag']
- model: t5-base
- max_length: 512
- max_length_output: 512
- epoch: 14
- batch: 2
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.0
The full configuration can be found at [fine-tuning config file](https://huggingface.co/StellarMilk/t5-base-newsqa-qag-trained/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
MaxG1/roberta_fine_tuning_newsmtsc | MaxG1 | 2023-12-19T11:31:34Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-18T12:04:44Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: testing_roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testing_roberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7567 | 1.0 | 1093 | 0.6133 |
| 0.6006 | 2.0 | 2186 | 0.5704 |
| 0.3937 | 3.0 | 3279 | 0.6010 |
| 0.2514 | 4.0 | 4372 | 0.6876 |
| 0.1718 | 5.0 | 5465 | 0.8447 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
baltop/zwave-dbgatekeeper-v0.3 | baltop | 2023-12-19T11:28:29Z | 3 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2023-12-19T11:27:58Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral-sql-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-sql-finetune
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1899 | 0.17 | 25 | 0.3116 |
| 0.1795 | 0.33 | 50 | 0.1088 |
| 0.0819 | 0.5 | 75 | 0.0425 |
| 0.0453 | 0.67 | 100 | 0.0419 |
| 0.0534 | 0.83 | 125 | 0.0382 |
| 0.0338 | 1.0 | 150 | 0.0315 |
| 0.0358 | 1.17 | 175 | 0.0345 |
| 0.0336 | 1.33 | 200 | 0.0334 |
| 0.0401 | 1.5 | 225 | 0.0322 |
| 0.0326 | 1.67 | 250 | 0.0308 |
| 0.0396 | 1.83 | 275 | 0.0309 |
| 0.0307 | 2.0 | 300 | 0.0305 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.0 |
gchindemi/a2c-PandaReachDense-v3 | gchindemi | 2023-12-19T11:26:54Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T11:22:33Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.20 +/- 0.14
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual SB3 Hub naming convention):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the trained checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub("gchindemi/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Hemanth-thunder/tamil_ner_classification | Hemanth-thunder | 2023-12-19T11:25:13Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"ta",
"dataset:wikiann",
"dataset:aitamilnadu/tamil_ner_data_wikiann",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-12-16T07:37:11Z | ---
license: apache-2.0
datasets:
- wikiann
- aitamilnadu/tamil_ner_data_wikiann
language:
- ta
metrics:
- accuracy
library_name: transformers
pipeline_tag: token-classification
widget:
- text: >-
திருநெல்வேலி உள்ளிட்ட தென் மாவட்டங்களை வரலாறு காணாத கனமழை வெளுத்தெடுத்துக்
கொண்டிருக்க.
- text: புத்தாண்டில் வருகிறது நல்ல செய்தி... அமேசானில் அதிரடி ஆப்பரில் ஐபோன் 15!
--- |
Kshitij2406/GPT_Test | Kshitij2406 | 2023-12-19T11:22:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"region:us"
] | null | 2023-12-19T11:09:37Z | ---
library_name: peft
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
sd-concepts-library/gphone01 | sd-concepts-library | 2023-12-19T11:21:17Z | 0 | 0 | null | [
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:mit",
"region:us"
] | null | 2023-12-19T11:21:12Z | ---
license: mit
base_model: stabilityai/stable-diffusion-2
---
### gphone01 on Stable Diffusion
This is the `*` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
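If you prefer to use diffusers directly instead of the notebooks, a rough sketch is shown below; the placeholder token in the prompt is an assumption, so check the token stored in this repository before using it.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model this concept was trained for, per the card metadata
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repository
pipe.load_textual_inversion("sd-concepts-library/gphone01")

# The placeholder token below is an assumption; use the token defined in the repo
image = pipe("a photo of a phone in the style of <gphone01>").images[0]
image.save("gphone01_sample.png")
```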
Here is the new concept you will be able to use as a `style`:





|
satani/phtben-6 | satani | 2023-12-19T11:15:38Z | 6 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-19T11:11:24Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### phtben_6 Dreambooth model trained by satani with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
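The checkpoint can also be loaded directly with diffusers; this is a generic sketch, and the instance token in the prompt is an assumption since it is not documented here.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "satani/phtben-6", torch_dtype=torch.float16
).to("cuda")

# "phtben_6" mirrors the concept name above, but the exact instance token is assumed
image = pipe("a photo of phtben_6").images[0]
image.save("phtben_6_sample.png")
```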
Sample pictures of this concept:
|
vilm/vinallama-7b | vilm | 2023-12-19T11:10:40Z | 108 | 23 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"vi",
"arxiv:2312.11011",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-11-28T07:45:04Z | ---
license: llama2
language:
- vi
---
# VinaLLaMA - State-of-the-art Vietnamese LLMs

Read our [Paper](https://huggingface.co/papers/2312.11011) |
vilm/vinallama-2.7b-chat | vilm | 2023-12-19T11:10:26Z | 153 | 14 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"vi",
"arxiv:2312.11011",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-14T09:21:33Z | ---
license: llama2
language:
- vi
---
# VinaLLaMA - State-of-the-art Vietnamese LLMs

Read our [Paper](https://huggingface.co/papers/2312.11011)
Prompt Format (ChatML):
```
<|im_start|>system
Bạn là một trợ lí AI hữu ích. Hãy trả lời người dùng một cách chính xác.
<|im_end|>
<|im_start|>user
Hello world!<|im_end|>
<|im_start|>assistant
``` |
ngocminhta/Llama-2-Chat-Movie-Review | ngocminhta | 2023-12-19T11:03:55Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"movie",
"entertainment",
"text-classification",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-19T10:37:22Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-classification
tags:
- movie
- entertainment
---
# Model Card for Model ID
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kshitij2406/GPTTest | Kshitij2406 | 2023-12-19T10:51:27Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"region:us"
] | null | 2023-12-15T10:39:01Z | ---
library_name: peft
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
kghanlon/distilbert-base-uncased-finetuned-SOTUs-v1-RILE-v1 | kghanlon | 2023-12-19T10:50:52Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:kghanlon/distilbert-base-uncased-finetuned-SOTUs-v1",
"base_model:finetune:kghanlon/distilbert-base-uncased-finetuned-SOTUs-v1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-19T10:05:36Z | ---
base_model: kghanlon/distilbert-base-uncased-finetuned-SOTUs-v1
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
model-index:
- name: distilbert-base-uncased-finetuned-SOTUs-v1-RILE-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-SOTUs-v1-RILE-v1
This model is a fine-tuned version of [kghanlon/distilbert-base-uncased-finetuned-SOTUs-v1](https://huggingface.co/kghanlon/distilbert-base-uncased-finetuned-SOTUs-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8575
- Accuracy: 0.7345
- Recall: 0.7345
- F1: 0.7343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|
| 0.703 | 1.0 | 15490 | 0.6829 | 0.7138 | 0.7138 | 0.7109 |
| 0.5689 | 2.0 | 30980 | 0.6758 | 0.7348 | 0.7348 | 0.7344 |
| 0.4264 | 3.0 | 46470 | 0.8575 | 0.7345 | 0.7345 | 0.7343 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
joxen/Jungkook | joxen | 2023-12-19T10:34:52Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2023-12-19T10:33:49Z | ---
license: other
license_name: korea
license_link: LICENSE
---
|
metamath/distilbert-base-uncased-finetuned-emotion | metamath | 2023-12-19T10:33:54Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-07T02:44:12Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9239450387720956
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2135
- Accuracy: 0.924
- F1: 0.9239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8092 | 1.0 | 250 | 0.2940 | 0.9065 | 0.9056 |
| 0.2385 | 2.0 | 500 | 0.2135 | 0.924 | 0.9239 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/smids_10x_deit_small_sgd_001_fold3 | hkivancoral | 2023-12-19T10:33:39Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-19T09:31:22Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_deit_small_sgd_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9083333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_deit_small_sgd_001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2811
- Accuracy: 0.9083
## Model description
More information needed
## Intended uses & limitations
More information needed
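A minimal inference sketch, assuming the standard `transformers` image-classification pipeline; the class names come from the imagefolder dataset used for training and are not documented here, and the image path below is hypothetical:

```python
from transformers import pipeline

# Minimal sketch (not from the original card): classify a single image with the
# fine-tuned DeiT checkpoint. "example_patch.png" is a hypothetical local file.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_10x_deit_small_sgd_001_fold3",
)
print(classifier("example_patch.png"))
```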
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.545 | 1.0 | 750 | 0.5587 | 0.785 |
| 0.4133 | 2.0 | 1500 | 0.4211 | 0.8467 |
| 0.358 | 3.0 | 2250 | 0.3782 | 0.8633 |
| 0.3237 | 4.0 | 3000 | 0.3490 | 0.87 |
| 0.3443 | 5.0 | 3750 | 0.3305 | 0.8767 |
| 0.2928 | 6.0 | 4500 | 0.3200 | 0.8817 |
| 0.2686 | 7.0 | 5250 | 0.3122 | 0.8867 |
| 0.2534 | 8.0 | 6000 | 0.3123 | 0.885 |
| 0.2251 | 9.0 | 6750 | 0.2946 | 0.8933 |
| 0.1954 | 10.0 | 7500 | 0.2908 | 0.9 |
| 0.2504 | 11.0 | 8250 | 0.2911 | 0.8967 |
| 0.2172 | 12.0 | 9000 | 0.2849 | 0.905 |
| 0.2089 | 13.0 | 9750 | 0.2810 | 0.905 |
| 0.2631 | 14.0 | 10500 | 0.2804 | 0.905 |
| 0.2076 | 15.0 | 11250 | 0.2751 | 0.915 |
| 0.1833 | 16.0 | 12000 | 0.2763 | 0.9067 |
| 0.2051 | 17.0 | 12750 | 0.2775 | 0.905 |
| 0.1927 | 18.0 | 13500 | 0.2752 | 0.9083 |
| 0.1896 | 19.0 | 14250 | 0.2722 | 0.9117 |
| 0.193 | 20.0 | 15000 | 0.2720 | 0.905 |
| 0.1978 | 21.0 | 15750 | 0.2723 | 0.905 |
| 0.193 | 22.0 | 16500 | 0.2691 | 0.91 |
| 0.1867 | 23.0 | 17250 | 0.2706 | 0.9133 |
| 0.1588 | 24.0 | 18000 | 0.2753 | 0.9083 |
| 0.1896 | 25.0 | 18750 | 0.2771 | 0.8983 |
| 0.1697 | 26.0 | 19500 | 0.2708 | 0.9133 |
| 0.1259 | 27.0 | 20250 | 0.2702 | 0.9117 |
| 0.152 | 28.0 | 21000 | 0.2731 | 0.9083 |
| 0.1891 | 29.0 | 21750 | 0.2747 | 0.9117 |
| 0.1716 | 30.0 | 22500 | 0.2723 | 0.9083 |
| 0.1252 | 31.0 | 23250 | 0.2778 | 0.905 |
| 0.1227 | 32.0 | 24000 | 0.2742 | 0.9083 |
| 0.166 | 33.0 | 24750 | 0.2738 | 0.9017 |
| 0.1299 | 34.0 | 25500 | 0.2772 | 0.9083 |
| 0.1287 | 35.0 | 26250 | 0.2752 | 0.91 |
| 0.1172 | 36.0 | 27000 | 0.2784 | 0.9033 |
| 0.1292 | 37.0 | 27750 | 0.2763 | 0.9033 |
| 0.1686 | 38.0 | 28500 | 0.2772 | 0.9067 |
| 0.1469 | 39.0 | 29250 | 0.2777 | 0.9067 |
| 0.1673 | 40.0 | 30000 | 0.2785 | 0.9083 |
| 0.1244 | 41.0 | 30750 | 0.2779 | 0.9067 |
| 0.149 | 42.0 | 31500 | 0.2782 | 0.9067 |
| 0.1031 | 43.0 | 32250 | 0.2799 | 0.905 |
| 0.1374 | 44.0 | 33000 | 0.2832 | 0.9067 |
| 0.1179 | 45.0 | 33750 | 0.2818 | 0.905 |
| 0.1282 | 46.0 | 34500 | 0.2810 | 0.905 |
| 0.1603 | 47.0 | 35250 | 0.2819 | 0.9067 |
| 0.1237 | 48.0 | 36000 | 0.2811 | 0.9083 |
| 0.1333 | 49.0 | 36750 | 0.2808 | 0.9067 |
| 0.1344 | 50.0 | 37500 | 0.2811 | 0.9083 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
tresbien1/ppo-Huggy | tresbien1 | 2023-12-19T10:29:10Z | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-12-19T10:29:00Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tresbien1/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Federm1512/ppo-Huggy | Federm1512 | 2023-12-19T10:25:01Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-12-19T09:54:11Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Federm1512/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
XingeTong/9-testresults | XingeTong | 2023-12-19T10:19:31Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-base-sentiment-latest",
"base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-19T10:17:13Z | ---
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
tags:
- generated_from_trainer
model-index:
- name: 9-testresults
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 9-testresults
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.359061927977144e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
satani/phtben-4 | satani | 2023-12-19T10:17:16Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-19T10:13:18Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### phtben_4 Dreambooth model trained by satani with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
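Alternatively, a minimal `diffusers` sketch for loading the checkpoint directly (the instance token used during training is not documented here; "phtben_4" is an assumption based on the concept name):

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch (assumptions noted above): load the Dreambooth checkpoint with
# diffusers instead of the A1111 Colab and generate one sample image.
pipe = StableDiffusionPipeline.from_pretrained(
    "satani/phtben-4", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of phtben_4").images[0]
image.save("phtben_4_sample.png")
```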
Sample pictures of this concept:
|
VictorNGomes/pttmario5 | VictorNGomes | 2023-12-19T10:15:08Z | 6 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xlsum",
"base_model:VictorNGomes/pttmario5",
"base_model:finetune:VictorNGomes/pttmario5",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-17T01:40:38Z | ---
license: mit
base_model: VictorNGomes/pttmario5
tags:
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: pttmario5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pttmario5
This model is a fine-tuned version of [VictorNGomes/pttmario5](https://huggingface.co/VictorNGomes/pttmario5) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 384
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5131 | 3.34 | 500 | 2.2600 |
| 2.4594 | 6.69 | 1000 | 2.2144 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints | baichuan-inc | 2023-12-19T10:03:18Z | 16 | 18 | null | [
"en",
"zh",
"license:other",
"region:us"
] | null | 2023-09-05T09:35:23Z | ---
language:
- en
- zh
license: other
tasks:
- text-generation
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<h1>
Baichuan 2
</h1>
</div>
<div align="center">
<a href="https://github.com/baichuan-inc/Baichuan2" target="_blank">🦉GitHub</a> | <a href="https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true" target="_blank">💬WeChat</a>
</div>
<div align="center">
百川API支持搜索增强和192K长窗口,新增百川搜索增强知识库、限时免费!<br>
The Baichuan API supports retrieval augmentation and a 192K context window, with a new Baichuan retrieval-augmented knowledge base, free for a limited time!<br>
🚀 <a href="https://www.baichuan-ai.com/" target="_blank">百川大模型在线对话平台</a> 已正式向公众开放 🎉<br>
🚀 The <a href="https://www.baichuan-ai.com/" target="_blank">Baichuan LLM online chat platform</a> is now officially open to the public 🎉
</div>
# 目录/Table of Contents
- [📖 模型介绍/Introduction](#Introduction)
- [⚙️ 快速开始/Quick Start](#Start)
- [📊 Benchmark评估/Benchmark Evaluation](#Benchmark)
- [📜 声明与协议/Terms and Conditions](#Terms)
# <span id="Introduction">模型介绍/Introduction</span>
Baichuan 2 是[百川智能]推出的新一代开源大语言模型,采用 **2.6 万亿** Tokens 的高质量语料训练,在权威的中文和英文 benchmark
上均取得同尺寸最好的效果。本次发布包含有 7B、13B 的 Base 和 Chat 版本,并提供了 Chat 版本的 4bits
量化,所有版本不仅对学术研究完全开放,开发者也仅需[邮件申请]并获得官方商用许可后,即可以免费商用。具体发布版本和下载见下表:
Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence inc.](https://www.baichuan-ai.com/).
It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance in authoritative Chinese and English benchmarks of the same size.
This release includes 7B and 13B versions for both Base and Chat models, along with a 4bits quantized version for the Chat model.
All versions are fully open to academic research, and developers can also use them for free in commercial applications after obtaining an official commercial license through [email request](mailto:[email protected]).
The specific release versions and download links are listed in the table below:
| | Base Model | Chat Model | 4bits Quantized Chat Model |
|:---:|:--------------------:|:--------------------:|:--------------------------:|
| 7B | [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) | [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) | [Baichuan2-7B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base-4bits) |
| 13B | [Baichuan2-13B-Base](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) | [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) |
# <span id="Start">快速开始/Quick Start</span>
在Baichuan2系列模型中,我们为了加快推理速度使用了Pytorch2.0加入的新功能F.scaled_dot_product_attention,因此模型需要在Pytorch2.0环境下运行。
In the Baichuan 2 series models, we have utilized the new feature `F.scaled_dot_product_attention` introduced in PyTorch 2.0 to accelerate inference speed. Therefore, the model needs to be run in a PyTorch 2.0 environment.
**我们将训练中的Checkpoints上传到了本项目中,可以通过指定revision来加载不同step的Checkpoint。**
**We have uploaded the checkpoints during training to this project. You can load checkpoints from different steps by specifying the revision.**
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints", revision="train_02200B", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints", revision="train_02200B", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
# <span id="Benchmark">Benchmark 结果/Benchmark Evaluation</span>
我们在[通用]、[法律]、[医疗]、[数学]、[代码]和[多语言翻译]六个领域的中英文权威数据集上对模型进行了广泛测试,更多详细测评结果可查看[GitHub]。
We have extensively tested the model on authoritative Chinese-English datasets across six domains: [General](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#general-domain), [Legal](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Medical](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Mathematics](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), [Code](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), and [Multilingual Translation](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#multilingual-translation). For more detailed evaluation results, please refer to [GitHub](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md).
### 7B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-7B** | 27.10 | 35.10 | 26.75 | 27.81 | 28.17 | 32.38 |
| **LLaMA2-7B** | 28.90 | 45.73 | 31.38 | 25.97 | 26.53 | 39.16 |
| **MPT-7B** | 27.15 | 27.93 | 26.00 | 26.54 | 24.83 | 35.20 |
| **Falcon-7B** | 24.23 | 26.03 | 25.66 | 24.24 | 24.10 | 28.77 |
| **ChatGLM2-6B** | 50.20 | 45.90 | 49.00 | 49.44 | 45.28 | 31.65 |
| **[Baichuan-7B]** | 42.80 | 42.30 | 44.02 | 36.34 | 34.44 | 32.48 |
| **[Baichuan2-7B-Base]** | 54.00 | 54.16 | 57.07 | 47.47 | 42.73 | 41.56 |
### 13B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:---------------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-13B** | 28.50 | 46.30 | 31.15 | 28.23 | 28.22 | 37.89 |
| **LLaMA2-13B** | 35.80 | 55.09 | 37.99 | 30.83 | 32.29 | 46.98 |
| **Vicuna-13B** | 32.80 | 52.00 | 36.28 | 30.11 | 31.55 | 43.04 |
| **Chinese-Alpaca-Plus-13B** | 38.80 | 43.90 | 33.43 | 34.78 | 35.46 | 28.94 |
| **XVERSE-13B** | 53.70 | 55.21 | 58.44 | 44.69 | 42.54 | 38.06 |
| **[Baichuan-13B-Base]** | 52.40 | 51.60 | 55.30 | 49.69 | 43.20 | 43.01 |
| **[Baichuan2-13B-Base]** | 58.10 | 59.17 | 61.97 | 54.33 | 48.17 | 48.78 |
## 训练过程模型/Training Dynamics
除了训练了 2.6 万亿 Tokens 的 [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) 模型,我们还提供了在此之前的另外 11 个中间过程的模型(分别对应训练了约 0.2 ~ 2.4 万亿 Tokens)供社区研究使用
([训练过程checkpoint下载](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints))。下图给出了这些 checkpoints 在 C-Eval、MMLU、CMMLU 三个 benchmark 上的效果变化:
In addition to the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model trained on 2.6 trillion tokens, we also offer 11 additional intermediate-stage models for community research, corresponding to training on approximately 0.2 to 2.4 trillion tokens each ([Intermediate Checkpoints Download](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints)). The graph below shows the performance changes of these checkpoints on three benchmarks: C-Eval, MMLU, and CMMLU.

# <span id="Terms">声明与协议/Terms and Conditions</span>
## 声明
我们在此声明,我们的开发团队并未基于 Baichuan 2 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用
Baichuan 2 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan 2
模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用
Baichuan 2 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that our team has not developed any applications based on Baichuan 2 models, not on iOS, Android, the web, or any other platform. We strongly call on all users not to use Baichuan 2 models for any activities that harm national / social security or violate the law. Also, we ask users not to use Baichuan 2 models for Internet services that have not undergone appropriate security reviews and filings. We hope that all users can abide by this principle and ensure that the development of technology proceeds in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, there may still be some unforeseeable issues due to the complexity of the model and data. Therefore, if any problems arise due to the use of Baichuan 2 open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems brought about by the model being misled, abused, spread or improperly exploited, we will not assume any responsibility.
## 协议
社区使用 Baichuan 2 模型需要遵循 [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) 和[《Baichuan 2 模型社区许可协议》](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)。Baichuan 2 模型支持商业用途,如果您计划将 Baichuan 2 模型或其衍生品用于商业目的,请您确认您的主体符合以下情况:
1. 您或您的关联方的服务或产品的日均用户活跃量(DAU)低于100万。
2. 您或您的关联方不是软件服务提供商、云服务提供商。
3. 您或您的关联方不存在将授予您的商用许可,未经百川许可二次授权给其他第三方的可能。
在符合以上条件的前提下,您需要通过以下联系邮箱 [email protected] ,提交《Baichuan 2 模型社区许可协议》要求的申请材料。审核通过后,百川将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。
The community usage of Baichuan 2 model requires adherence to [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) and [Community License for Baichuan2 Model](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf). The Baichuan 2 model supports commercial use. If you plan to use the Baichuan 2 model or its derivatives for commercial purposes, please ensure that your entity meets the following conditions:
1. The Daily Active Users (DAU) of your or your affiliate's service or product is less than 1 million.
2. Neither you nor your affiliates are software service providers or cloud service providers.
3. There is no possibility for you or your affiliates to grant the commercial license given to you, to reauthorize it to other third parties without Baichuan's permission.
Upon meeting the above conditions, you need to submit the application materials required by the Baichuan 2 Model Community License Agreement via the following contact email: [email protected]. Once approved, Baichuan will hereby grant you a non-exclusive, global, non-transferable, non-sublicensable, revocable commercial copyright license.
[GitHub]:https://github.com/baichuan-inc/Baichuan2
[Baichuan2]:https://github.com/baichuan-inc/Baichuan2
[Baichuan-7B]:https://huggingface.co/baichuan-inc/Baichuan-7B
[Baichuan2-7B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base
[Baichuan2-7B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
[Baichuan2-7B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits
[Baichuan-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan-13B-Base
[Baichuan2-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Base
[Baichuan2-13B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
[Baichuan2-13B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits
[通用]:https://github.com/baichuan-inc/Baichuan2#%E9%80%9A%E7%94%A8%E9%A2%86%E5%9F%9F
[法律]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[医疗]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[数学]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[代码]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[多语言翻译]:https://github.com/baichuan-inc/Baichuan2#%E5%A4%9A%E8%AF%AD%E8%A8%80%E7%BF%BB%E8%AF%91
[《Baichuan 2 模型社区许可协议》]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf
[邮件申请]: mailto:[email protected]
[Email]: mailto:[email protected]
[[email protected]]: mailto:[email protected]
[训练过程checkpoint下载]: https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
[百川智能]: https://www.baichuan-ai.com
|
sdpkjc/HalfCheetah-v4-sac_continuous_action-seed4 | sdpkjc | 2023-12-19T09:59:40Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"HalfCheetah-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T09:59:20Z | ---
tags:
- HalfCheetah-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v4
type: HalfCheetah-v4
metrics:
- type: mean_reward
value: 11652.55 +/- 146.95
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **HalfCheetah-v4**
This is a trained model of a SAC agent playing HalfCheetah-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id HalfCheetah-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-sac_continuous_action-seed4/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-sac_continuous_action-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-sac_continuous_action-seed4/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id HalfCheetah-v4 --seed 4 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'HalfCheetah-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 4,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
sdpkjc/Ant-v4-sac_continuous_action-seed2 | sdpkjc | 2023-12-19T09:57:43Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Ant-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T09:57:34Z | ---
tags:
- Ant-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Ant-v4
type: Ant-v4
metrics:
- type: mean_reward
value: 5816.91 +/- 66.05
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Ant-v4**
This is a trained model of a SAC agent playing Ant-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Ant-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Ant-v4-sac_continuous_action-seed2/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Ant-v4-sac_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Ant-v4-sac_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Ant-v4 --seed 2 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Ant-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 2,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Breyten/mistral-instruct-dutch-syntax-10000 | Breyten | 2023-12-19T09:56:16Z | 1 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-12-16T22:54:25Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral-instruct-dutch-syntax-10000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-instruct-dutch-syntax-10000
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on a Lassy_small dataset curated for Dutch syntax.
10000 samples were used, with a batch size of 2, for 2 epochs.
It achieves the following results on the evaluation set:
- Loss: 0.2522
## Model description
More information needed
## Intended uses & limitations
More information needed
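A minimal inference sketch, assuming the adapter is loaded with `peft` on top of the base instruct model; the exact prompt format of the Lassy-based training data is not documented here, so the Dutch instruction below is only an illustration:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch (assumptions noted above): attach the LoRA adapter to the base
# Mistral instruct model and generate a reply to a Dutch syntax instruction.
base_id = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Breyten/mistral-instruct-dutch-syntax-10000")

prompt = "[INST] Geef de zinsontleding van: 'De kat slaapt op de bank.' [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```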
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7075 | 0.11 | 500 | 0.6710 |
| 0.3569 | 0.21 | 1000 | 0.4348 |
| 0.3458 | 0.32 | 1500 | 0.3517 |
| 0.3325 | 0.42 | 2000 | 0.3151 |
| 0.3014 | 0.53 | 2500 | 0.2928 |
| 0.2304 | 0.63 | 3000 | 0.2817 |
| 0.2984 | 0.74 | 3500 | 0.2736 |
| 0.2283 | 0.84 | 4000 | 0.2680 |
| 0.2399 | 0.95 | 4500 | 0.2640 |
| 0.24 | 1.05 | 5000 | 0.2609 |
| 0.2039 | 1.16 | 5500 | 0.2588 |
| 0.2447 | 1.26 | 6000 | 0.2558 |
| 0.2377 | 1.37 | 6500 | 0.2544 |
| 0.2399 | 1.47 | 7000 | 0.2544 |
| 0.2424 | 1.58 | 7500 | 0.2532 |
| 0.2626 | 1.68 | 8000 | 0.2527 |
| 0.2346 | 1.79 | 8500 | 0.2524 |
| 0.2194 | 1.89 | 9000 | 0.2522 |
| 0.2123 | 2.0 | 9500 | 0.2522 |
| 0.2618 | 2.11 | 10000 | 0.2522 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0 |
sdpkjc/Walker2d-v4-sac_continuous_action-seed2 | sdpkjc | 2023-12-19T09:51:57Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Walker2d-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T09:51:48Z | ---
tags:
- Walker2d-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2d-v4
type: Walker2d-v4
metrics:
- type: mean_reward
value: 3860.43 +/- 46.19
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Walker2d-v4**
This is a trained model of a SAC agent playing Walker2d-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Walker2d-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Walker2d-v4-sac_continuous_action-seed2/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Walker2d-v4-sac_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Walker2d-v4-sac_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Walker2d-v4 --seed 2 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Walker2d-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 2,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
sdpkjc/HalfCheetah-v4-sac_continuous_action-seed3 | sdpkjc | 2023-12-19T09:50:17Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"HalfCheetah-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T09:50:08Z | ---
tags:
- HalfCheetah-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetah-v4
type: HalfCheetah-v4
metrics:
- type: mean_reward
value: 11596.06 +/- 106.74
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **HalfCheetah-v4**
This is a trained model of a SAC agent playing HalfCheetah-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id HalfCheetah-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-sac_continuous_action-seed3/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-sac_continuous_action-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/HalfCheetah-v4-sac_continuous_action-seed3/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id HalfCheetah-v4 --seed 3 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'HalfCheetah-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 3,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Breyten/mistral-instruct-dutch-syntax-2000 | Breyten | 2023-12-19T09:50:13Z | 1 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-12-16T20:41:32Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral-instruct-dutch-syntax-2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-instruct-dutch-syntax-2000
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on a curated version of Lassy-Small with syntax data.
2000 samples were used.
It achieves the following results on the evaluation set:
- Loss: 0.6808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 950
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1019 | 0.11 | 100 | 1.0701 |
| 0.9093 | 0.21 | 200 | 0.9592 |
| 0.8341 | 0.32 | 300 | 0.8800 |
| 0.7975 | 0.42 | 400 | 0.8150 |
| 0.7859 | 0.53 | 500 | 0.7638 |
| 0.7069 | 0.63 | 600 | 0.7254 |
| 0.6007 | 0.74 | 700 | 0.6974 |
| 0.6971 | 0.84 | 800 | 0.6832 |
| 0.6331 | 0.95 | 900 | 0.6808 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.1
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0 |
sdpkjc/Hopper-v4-sac_continuous_action-seed3 | sdpkjc | 2023-12-19T09:48:41Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Hopper-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T09:48:34Z | ---
tags:
- Hopper-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v4
type: Hopper-v4
metrics:
- type: mean_reward
value: 2600.97 +/- 646.04
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Hopper-v4**
This is a trained model of a SAC agent playing Hopper-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Hopper-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-sac_continuous_action-seed3/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-sac_continuous_action-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-sac_continuous_action-seed3/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Hopper-v4 --seed 3 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Hopper-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 3,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
satani/phtben-3 | satani | 2023-12-19T09:46:06Z | 3 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-19T09:42:14Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### phtben_3 Dreambooth model trained by satani with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
SebastianSchramm/LlamaGuard-7b-GPTQ-4bit-128g-actorder_True | SebastianSchramm | 2023-12-19T09:44:35Z | 8 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"4bit",
"gptq",
"conversational",
"en",
"base_model:meta-llama/LlamaGuard-7b",
"base_model:quantized:meta-llama/LlamaGuard-7b",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | text-generation | 2023-12-08T17:54:13Z | ---
license: llama2
language:
- en
library_name: transformers
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- 4bit
- gptq
base_model: meta-llama/LlamaGuard-7b
inference: false
---
# Quantized version of meta-llama/LlamaGuard-7b
## Model Description
The model [meta-llama/LlamaGuard-7b](https://huggingface.co/meta-llama/LlamaGuard-7b) was quantized to 4-bit with group_size 128 and act_order=True, using the auto-gptq integration in transformers (https://huggingface.co/blog/gptq-integration).
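A minimal loading sketch, assuming `auto-gptq` and `optimum` are installed and that the tokenizer keeps the chat template of the original LlamaGuard model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch (assumptions noted above): GPTQ weights load through the
# standard transformers API; the chat template is assumed to be inherited
# from meta-llama/LlamaGuard-7b.
model_id = "SebastianSchramm/LlamaGuard-7b-GPTQ-4bit-128g-actorder_True"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

chat = [{"role": "user", "content": "Tell me how to pick a lock."}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```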
## Evaluation
To evaluate the quantized model and compare it with the full-precision model, I performed binary classification on the "toxicity" label from the ~5k-sample test set of lmsys/toxic-chat.
📊 Full Precision Model:
Average Precision Score: 0.3625
📊 4-bit Quantized Model:
Average Precision Score: 0.3450
|
sdpkjc/Hopper-v4-sac_continuous_action-seed5 | sdpkjc | 2023-12-19T09:43:49Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Hopper-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T09:43:44Z | ---
tags:
- Hopper-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Hopper-v4
type: Hopper-v4
metrics:
- type: mean_reward
value: 1680.67 +/- 734.03
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Hopper-v4**
This is a trained model of a SAC agent playing Hopper-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Hopper-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-sac_continuous_action-seed5/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-sac_continuous_action-seed5/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-sac_continuous_action-seed5/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Hopper-v4 --seed 5 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Hopper-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 5,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
aditnnda/gacoanReviewer | aditnnda | 2023-12-19T09:43:42Z | 6 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:indobenchmark/indobert-base-p1",
"base_model:finetune:indobenchmark/indobert-base-p1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-18T12:44:29Z | ---
license: mit
base_model: indobenchmark/indobert-base-p1
tags:
- generated_from_keras_callback
model-index:
- name: aditnnda/gacoanReviewer
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aditnnda/gacoanReviewer
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0001
- Validation Loss: 0.5471
- Train Accuracy: 0.9163
- Epoch: 24
## Model description
More information needed
## Intended uses & limitations
More information needed
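A minimal TensorFlow inference sketch; the base model's tokenizer is used in case this repository does not ship its own, and the label names come from the model config, which is not documented in this card:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Minimal sketch (assumptions noted above): score an Indonesian review with the
# fine-tuned TensorFlow checkpoint.
tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobert-base-p1")
model = TFAutoModelForSequenceClassification.from_pretrained("aditnnda/gacoanReviewer")

inputs = tokenizer("Mie-nya enak dan pelayanannya cepat!", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```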
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3550, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2751 | 0.2043 | 0.9107 | 0 |
| 0.1202 | 0.2077 | 0.9177 | 1 |
| 0.0583 | 0.2770 | 0.9079 | 2 |
| 0.0435 | 0.3412 | 0.9066 | 3 |
| 0.0251 | 0.3762 | 0.9079 | 4 |
| 0.0208 | 0.2241 | 0.9303 | 5 |
| 0.0070 | 0.2794 | 0.9317 | 6 |
| 0.0151 | 0.3823 | 0.9219 | 7 |
| 0.0088 | 0.3740 | 0.9261 | 8 |
| 0.0019 | 0.4286 | 0.9261 | 9 |
| 0.0030 | 0.6086 | 0.8912 | 10 |
| 0.0052 | 0.4023 | 0.9344 | 11 |
| 0.0005 | 0.5193 | 0.9121 | 12 |
| 0.0002 | 0.5171 | 0.9135 | 13 |
| 0.0002 | 0.5276 | 0.9163 | 14 |
| 0.0002 | 0.5344 | 0.9135 | 15 |
| 0.0002 | 0.5362 | 0.9163 | 16 |
| 0.0001 | 0.5407 | 0.9163 | 17 |
| 0.0001 | 0.5406 | 0.9163 | 18 |
| 0.0001 | 0.5484 | 0.9149 | 19 |
| 0.0001 | 0.5406 | 0.9177 | 20 |
| 0.0001 | 0.5431 | 0.9177 | 21 |
| 0.0001 | 0.5453 | 0.9163 | 22 |
| 0.0001 | 0.5466 | 0.9163 | 23 |
| 0.0001 | 0.5471 | 0.9163 | 24 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
sdpkjc/Humanoid-v4-sac_continuous_action-seed2 | sdpkjc | 2023-12-19T09:42:38Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Humanoid-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T09:42:23Z | ---
tags:
- Humanoid-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v4
type: Humanoid-v4
metrics:
- type: mean_reward
value: 4993.72 +/- 1028.23
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Humanoid-v4**
This is a trained model of a SAC agent playing Humanoid-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Humanoid-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-sac_continuous_action-seed2/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-sac_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Humanoid-v4-sac_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Humanoid-v4 --seed 2 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Humanoid-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 2,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Vageesh1/Appointment_bot | Vageesh1 | 2023-12-19T09:42:15Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-12-14T17:51:41Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
sdpkjc/Swimmer-v4-sac_continuous_action-seed4 | sdpkjc | 2023-12-19T09:41:45Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Swimmer-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T09:41:39Z | ---
tags:
- Swimmer-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v4
type: Swimmer-v4
metrics:
- type: mean_reward
value: 50.58 +/- 1.83
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Swimmer-v4**
This is a trained model of a SAC agent playing Swimmer-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Swimmer-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed4/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed4/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed4/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Swimmer-v4 --seed 4 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Swimmer-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 4,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
sdpkjc/Swimmer-v4-sac_continuous_action-seed3 | sdpkjc | 2023-12-19T09:41:25Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Swimmer-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T09:41:19Z | ---
tags:
- Swimmer-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v4
type: Swimmer-v4
metrics:
- type: mean_reward
value: 149.90 +/- 5.08
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Swimmer-v4**
This is a trained model of a SAC agent playing Swimmer-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Swimmer-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed3/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed3/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Swimmer-v4 --seed 3 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Swimmer-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 3,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
sdpkjc/Swimmer-v4-sac_continuous_action-seed2 | sdpkjc | 2023-12-19T09:40:07Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Swimmer-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T09:40:01Z | ---
tags:
- Swimmer-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Swimmer-v4
type: Swimmer-v4
metrics:
- type: mean_reward
value: 68.64 +/- 25.15
name: mean_reward
verified: false
---
# (CleanRL) **SAC** Agent Playing **Swimmer-v4**
This is a trained model of a SAC agent playing Swimmer-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sac_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[sac_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name sac_continuous_action --env-id Swimmer-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed2/raw/main/sac_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed2/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Swimmer-v4-sac_continuous_action-seed2/raw/main/poetry.lock
poetry install --all-extras
python sac_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Swimmer-v4 --seed 2 --track
```
# Hyperparameters
```python
{'alpha': 0.2,
'autotune': True,
'batch_size': 256,
'buffer_size': 1000000,
'capture_video': False,
'cuda': True,
'env_id': 'Swimmer-v4',
'exp_name': 'sac_continuous_action',
'gamma': 0.99,
'hf_entity': 'sdpkjc',
'learning_starts': 5000.0,
'noise_clip': 0.5,
'policy_frequency': 2,
'policy_lr': 0.0003,
'q_lr': 0.001,
'save_model': True,
'seed': 2,
'target_network_frequency': 1,
'tau': 0.005,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Stillkgb/test_butterflies_model | Stillkgb | 2023-12-19T09:37:49Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2023-12-19T09:37:04Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute butterflies.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Stillkgb/test_butterflies_model')
image = pipeline().images[0]
image
```
|
ysbetter/zephyr-beta-support-chatbot | ysbetter | 2023-12-19T09:19:49Z | 12 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-beta-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-beta-GPTQ",
"license:mit",
"region:us"
] | null | 2023-12-18T23:29:43Z | ---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: TheBloke/zephyr-7B-beta-GPTQ
model-index:
- name: zephyr-beta-support-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-beta-support-chatbot
This model is a fine-tuned version of [TheBloke/zephyr-7B-beta-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-beta-GPTQ) on the None dataset.
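As a rough illustration (not part of the original card), the adapter can presumably be attached to the GPTQ base model along the following lines; the chat prompt format and the package requirements (optimum, auto-gptq) are assumptions.
```python
# Hedged sketch: attach this LoRA adapter to the GPTQ base model.
# Assumes transformers, peft, optimum and auto-gptq are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("TheBloke/zephyr-7B-beta-GPTQ", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/zephyr-7B-beta-GPTQ")
model = PeftModel.from_pretrained(base, "ysbetter/zephyr-beta-support-chatbot")

# Zephyr-style chat prompt (assumed format)
prompt = "<|user|>\nHow do I reset my password?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```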
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0 |
Zigeng/SlimSAM | Zigeng | 2023-12-19T09:16:39Z | 0 | 0 | null | [
"arxiv:2312.05284",
"arxiv:2304.02643",
"license:apache-2.0",
"region:us"
] | null | 2023-12-19T04:55:18Z | ---
license: apache-2.0
---
# SlimSAM: 0.1% Data Makes Segment Anything Slim
<div align="center">
<img src="images/paper/intro.PNG" width="66%">
<img src="images/paper/everything.PNG" width="100%">
</div>
> **0.1% Data Makes Segment Anything Slim**
> [Zigeng Chen](https://github.com/czg1225), [Gongfan Fang](https://fangggf.github.io/), [Xinyin Ma](https://horseee.github.io/), [Xinchao Wang](https://sites.google.com/site/sitexinchaowang/)
> [Learning and Vision Lab](http://lv-nus.org/), National University of Singapore
> Paper: [[Arxiv]](https://arxiv.org/abs/2312.05284)
### Updates
* 🚀 **December 11, 2023**: Release the training code, inference code and pre-trained models for **SlimSAM**.
## Introduction
<div align="center">
<img src="images/paper/process.PNG" width="100%">
</div>
**SlimSAM** is a novel SAM compression method that efficiently reuses pre-trained SAMs without the need for extensive retraining, achieved through a unified pruning-distillation framework. To enhance knowledge inheritance from the original SAM, we employ an innovative alternate slimming strategy that partitions the compression process into a progressive procedure. Diverging from prior pruning techniques, we meticulously prune and distill decoupled model structures in an alternating fashion. Furthermore, a novel label-free pruning criterion is proposed to align the pruning objective with the optimization target, thereby boosting the post-distillation performance after pruning.

SlimSAM achieves performance approaching the original SAM-H while reducing the parameter count to **0.9\% (5.7M)**, MACs to **0.8\% (21G)**, and requiring a mere **0.1\% (10k)** of the training data. Extensive experiments demonstrate that our method achieves significantly superior performance while using over **10 times** less training data than other SAM compression methods.
## Visualization Results
Qualitative comparisons of results obtained using point prompts, box prompts, and segment everything prompts are shown in the following sections.
### Segment Everything Prompts
<div align="center">
<img src="images/paper/everything2.PNG" width="100%">
</div>
### Box Prompts and Point Prompts
<div align="center">
<img src="images/paper/prompt.PNG" width="100%">
</div>
## Quantitative Results
We conducted a comprehensive comparison encompassing performance, efficiency, and training costs with other SAM compression methods and structural pruning methods.
### Comparing with other SAM compression methods.
<div align="center">
<img src="images/paper/compare_tab1.PNG" width="100%">
</div>
### Comparing with other structural pruning methods.
<div align="center">
<img src="images/paper/compare_tab2.PNG" width="50%">
</div>
## Installation
The code requires `python>=3.8`, as well as `pytorch>=1.7` and `torchvision>=0.8`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.
Install with
```
pip install -e .
```
The following optional dependencies are necessary for mask post-processing and for saving masks in COCO format.
```
pip install opencv-python pycocotools matplotlib
```
## Dataset
We use the original SA-1B dataset in our code. See [here](https://ai.facebook.com/datasets/segment-anything/) for an overview of the dataset. The dataset can be downloaded [here](https://ai.facebook.com/datasets/segment-anything-downloads/).
The downloaded dataset should be saved as:
```
<train_data_root>/
    sa_xxxxxxx.jpg
    sa_xxxxxxx.json
    ......
<val_data_root>/
    sa_xxxxxxx.jpg
    sa_xxxxxxx.json
    ......
```
To decode a mask in COCO RLE format into binary:
```
from pycocotools import mask as mask_utils
mask = mask_utils.decode(annotation["segmentation"])
```
See [here](https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/mask.py) for more instructions to manipulate masks stored in RLE format.
## <a name="Models"></a>Model Checkpoints
The base model of our method is available. To make the model compatible with our dependency detection algorithm, we have split the original image encoder's qkv layer into three distinct linear layers: q, k, and v.
<div align="center">
<img src="images/paper/split.PNG" width="70%">
</div>
Click the link below to download the checkpoint of the original SAM-B.
- `SAM-B`: [SAM-B model.](https://drive.google.com/file/d/1CtcyOm4h9bXgBF8DEVWn3N7g9-3r4Xzz/view?usp=sharing)
The checkpoints of our SlimSAM are available. We release two versions: SlimSAM-50 (pruning ratio = 50%) and SlimSAM-77 (pruning ratio = 77%).
Click the links below to download the checkpoints for the corresponding pruning ratio.
- `SlimSAM-50`: [SlimSAM-50 model.](https://drive.google.com/file/d/1iCN9IW0Su0Ud_fOFoQUnTdkC3bFveMND/view?usp=sharing)
- `SlimSAM-77`: [SlimSAM-77 model.](https://drive.google.com/file/d/1L7LB6gHDzR-3D63pH9acD9E0Ul9_wMF-/view)
These models can be instantiated by running
```
import torch
import types

SlimSAM_model = torch.load(<model_path>)
SlimSAM_model.image_encoder = SlimSAM_model.image_encoder.module

def forward(self, x):
    x = self.patch_embed(x)
    if self.pos_embed is not None:
        x = x + self.pos_embed
    for blk in self.blocks:
        x, qkv_emb, mid_emb, x_emb = blk(x)
    x = self.neck(x.permute(0, 3, 1, 2))
    return x

funcType = types.MethodType
SlimSAM_model.image_encoder.forward = funcType(forward, SlimSAM_model.image_encoder)
```
## <a name="Inference"></a>Inference
First download the [SlimSAM-50 model](https://drive.google.com/file/d/1iCN9IW0Su0Ud_fOFoQUnTdkC3bFveMND/view?usp=sharing) or the [SlimSAM-77 model](https://drive.google.com/file/d/1L7LB6gHDzR-3D63pH9acD9E0Ul9_wMF-/view) for inference.
We provide detailed instructions in 'inference.py' on how to run inference with a range of prompts, including 'point', 'box', and 'everything'.
```
CUDA_VISIBLE_DEVICES=0 python inference.py
```
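For orientation, the following is a hedged sketch of point-prompt inference. It assumes the repository's bundled `segment_anything` package exposes the standard `SamPredictor` interface, and the checkpoint path, image path, and click coordinate are placeholders.
```python
# Hedged point-prompt inference sketch; paths and coordinates are placeholders.
import types
import cv2
import numpy as np
import torch
from segment_anything import SamPredictor  # copy shipped with the SlimSAM repository

model = torch.load("checkpoints/SlimSAM-50.pth")   # placeholder checkpoint path
model.image_encoder = model.image_encoder.module   # unwrap DataParallel

def encoder_forward(self, x):
    # Pruned blocks return extra embeddings; keep only the image features.
    x = self.patch_embed(x)
    if self.pos_embed is not None:
        x = x + self.pos_embed
    for blk in self.blocks:
        x, _, _, _ = blk(x)
    return self.neck(x.permute(0, 3, 1, 2))

model.image_encoder.forward = types.MethodType(encoder_forward, model.image_encoder)
model.eval()

predictor = SamPredictor(model)
image = cv2.cvtColor(cv2.imread("images/example.jpg"), cv2.COLOR_BGR2RGB)  # placeholder image
predictor.set_image(image)

masks, scores, _ = predictor.predict(
    point_coords=np.array([[200, 300]]),  # (x, y) of a foreground click
    point_labels=np.array([1]),           # 1 = foreground point
    multimask_output=True,
)
print(masks.shape, scores)
```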
## <a name="Train"></a>Train
First download a [SAM-B model](https://drive.google.com/file/d/1CtcyOm4h9bXgBF8DEVWn3N7g9-3r4Xzz/view?usp=sharing) into 'checkpoints/' as the base model.
### Step1: Embedding Pruning + Bottleneck Aligning ###
The model after step1 is saved as 'checkpoints/vit_b_slim_step1_.pth'
```
CUDA_VISIBLE_DEVICES=0 python prune_distill_step1.py --traindata_path <train_data_root> --valdata_path <val_data_root> --prune_ratio <pruning ratio> --epochs <training epochs>
```
### Step2: Bottleneck Pruning + Embedding Aligning ###
The model after step2 is saved as 'checkpoints/vit_b_slim_step2_.pth'
```
CUDA_VISIBLE_DEVICES=0 python prune_distill_step2.py --traindata_path <train_data_root> --valdata_path <val_data_root> --prune_ratio <pruning ratio> --epochs <training epochs> --model_path 'checkpoints/vit_b_slim_step1_.pth'
```
You can adjust the training settings to meet your specific requirements. While our method demonstrates impressive performance with just 10,000 training samples, incorporating additional training data will further enhance the model's effectiveness.
## BibTeX of our SlimSAM
If you use SlimSAM in your research, please use the following BibTeX entry. Thank you!
```bibtex
@misc{chen202301,
title={0.1% Data Makes Segment Anything Slim},
author={Zigeng Chen and Gongfan Fang and Xinyin Ma and Xinchao Wang},
year={2023},
eprint={2312.05284},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Acknowledgement
<details>
<summary>
<a href="https://github.com/facebookresearch/segment-anything">SAM</a> (Segment Anything) [<b>bib</b>]
</summary>
```bibtex
@article{kirillov2023segany,
title={Segment Anything},
author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
journal={arXiv:2304.02643},
year={2023}
}
```
</details>
<details>
<summary>
<a href="https://github.com/VainF/Torch-Pruning">Torch Pruning</a> (DepGraph: Towards Any Structural Pruning) [<b>bib</b>]
</summary>
```bibtex
@inproceedings{fang2023depgraph,
title={Depgraph: Towards any structural pruning},
author={Fang, Gongfan and Ma, Xinyin and Song, Mingli and Mi, Michael Bi and Wang, Xinchao},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={16091--16101},
year={2023}
}
```
</details> |
ketman/whisper_for_dominion | ketman | 2023-12-19T09:11:59Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"ja",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-12-08T13:55:29Z | ---
license: mit
language:
- ja
---
# Japanese Dominion Model for Whisper (2023/12/19, version 1.0)
## Overview
This model was tuned so that Whisper can transcribe audio containing card terminology from Dominion (the board game).<br>
It was created by fine-tuning the OpenAI whisper-large model.<br>
As of 2023/12/19, all cards have been learned. Compared with the plain large model, you should see that these terms are transcribed correctly.<br>
Examples of words that are hard to recognize:
* Words easily confused with other common words, e.g. 寵臣 (misheard as 調子) and 出納官 (misheard as 水筒感)
* Words whose final sounds are weak, e.g. 岐路 (misheard as 木), 馬丁 (misheard as バテー), and 鉄工所 (misheard as 鉄工場)
* Words that vary with the speaker's enunciation, e.g. 執事 (misheard as 羊)
## Usage Example
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from datasets import DatasetDict, Dataset
from datasets import Audio

MODEL_PATH = "trained_model"  # local folder containing a downloaded copy of ketman/whisper_for_dominion
fileList = ["out_4315_1.wav", "out_4369_1.wav", "out_4436_1.wav", "out_4494_1.wav", "out_4557_1.wav"]

processor = WhisperProcessor.from_pretrained("openai/whisper-large", language="Japanese", task="transcribe")

# Load the fine-tuned model
model = WhisperForConditionalGeneration.from_pretrained(MODEL_PATH)
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="ja", task="transcribe")
model.config.suppress_tokens = []

# Prepare the dataset
common_voice = DatasetDict()
common_voice["train"] = Dataset.from_dict({"audio": fileList}).cast_column("audio", Audio(sampling_rate=16000))

# Run Whisper (transcription)
for i in range(len(common_voice["train"])):
    inputs = processor(common_voice["train"][i]["audio"]["array"], sampling_rate=16000, return_tensors="pt")
    input_features = inputs.input_features
    generated_ids = model.generate(inputs=input_features)
    transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
    print(transcription)
```
# References
[Fine-tuning Whisper on a custom dataset so it works even on Clash Royale commentary full of game-specific terms: "Fine-tuning" part](https://zenn.dev/k_sone/articles/4d137d58dd06a6)
|
hkivancoral/smids_5x_deit_small_sgd_0001_fold2 | hkivancoral | 2023-12-19T09:01:25Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-19T07:22:07Z | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_5x_deit_small_sgd_0001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8036605657237936
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_5x_deit_small_sgd_0001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5006
- Accuracy: 0.8037
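For a quick sanity check, the model can presumably be used with the standard `transformers` image-classification pipeline; the image path below is a hypothetical placeholder.
```python
# Minimal inference sketch; "sample.png" is a hypothetical input image.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_5x_deit_small_sgd_0001_fold2",
)
print(classifier("sample.png"))
```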
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0629 | 1.0 | 375 | 1.0383 | 0.4592 |
| 1.0151 | 2.0 | 750 | 1.0009 | 0.4925 |
| 0.9588 | 3.0 | 1125 | 0.9619 | 0.5574 |
| 0.924 | 4.0 | 1500 | 0.9255 | 0.5890 |
| 0.8743 | 5.0 | 1875 | 0.8899 | 0.6290 |
| 0.8177 | 6.0 | 2250 | 0.8563 | 0.6522 |
| 0.7888 | 7.0 | 2625 | 0.8262 | 0.6755 |
| 0.7921 | 8.0 | 3000 | 0.7964 | 0.7005 |
| 0.7372 | 9.0 | 3375 | 0.7699 | 0.7138 |
| 0.7291 | 10.0 | 3750 | 0.7453 | 0.7221 |
| 0.7295 | 11.0 | 4125 | 0.7221 | 0.7255 |
| 0.6995 | 12.0 | 4500 | 0.7007 | 0.7288 |
| 0.621 | 13.0 | 4875 | 0.6811 | 0.7388 |
| 0.6398 | 14.0 | 5250 | 0.6638 | 0.7504 |
| 0.6383 | 15.0 | 5625 | 0.6483 | 0.7587 |
| 0.5747 | 16.0 | 6000 | 0.6341 | 0.7587 |
| 0.6097 | 17.0 | 6375 | 0.6214 | 0.7604 |
| 0.594 | 18.0 | 6750 | 0.6099 | 0.7604 |
| 0.5533 | 19.0 | 7125 | 0.5997 | 0.7654 |
| 0.5984 | 20.0 | 7500 | 0.5904 | 0.7687 |
| 0.5406 | 21.0 | 7875 | 0.5822 | 0.7720 |
| 0.525 | 22.0 | 8250 | 0.5743 | 0.7704 |
| 0.5434 | 23.0 | 8625 | 0.5673 | 0.7720 |
| 0.5253 | 24.0 | 9000 | 0.5609 | 0.7737 |
| 0.5143 | 25.0 | 9375 | 0.5549 | 0.7754 |
| 0.5351 | 26.0 | 9750 | 0.5494 | 0.7787 |
| 0.5716 | 27.0 | 10125 | 0.5444 | 0.7787 |
| 0.4849 | 28.0 | 10500 | 0.5399 | 0.7820 |
| 0.4878 | 29.0 | 10875 | 0.5357 | 0.7887 |
| 0.4887 | 30.0 | 11250 | 0.5319 | 0.7920 |
| 0.4866 | 31.0 | 11625 | 0.5283 | 0.7920 |
| 0.5025 | 32.0 | 12000 | 0.5250 | 0.7937 |
| 0.4672 | 33.0 | 12375 | 0.5219 | 0.7903 |
| 0.4395 | 34.0 | 12750 | 0.5192 | 0.7887 |
| 0.473 | 35.0 | 13125 | 0.5166 | 0.7920 |
| 0.4458 | 36.0 | 13500 | 0.5143 | 0.7920 |
| 0.4639 | 37.0 | 13875 | 0.5122 | 0.7937 |
| 0.4488 | 38.0 | 14250 | 0.5103 | 0.7953 |
| 0.4766 | 39.0 | 14625 | 0.5086 | 0.7970 |
| 0.4603 | 40.0 | 15000 | 0.5071 | 0.7987 |
| 0.4461 | 41.0 | 15375 | 0.5058 | 0.8003 |
| 0.4671 | 42.0 | 15750 | 0.5046 | 0.8003 |
| 0.4415 | 43.0 | 16125 | 0.5036 | 0.8020 |
| 0.4496 | 44.0 | 16500 | 0.5027 | 0.8020 |
| 0.4327 | 45.0 | 16875 | 0.5020 | 0.8020 |
| 0.5062 | 46.0 | 17250 | 0.5015 | 0.8020 |
| 0.4692 | 47.0 | 17625 | 0.5010 | 0.8037 |
| 0.426 | 48.0 | 18000 | 0.5008 | 0.8037 |
| 0.518 | 49.0 | 18375 | 0.5006 | 0.8037 |
| 0.4765 | 50.0 | 18750 | 0.5006 | 0.8037 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
satani/phtben-1 | satani | 2023-12-19T08:54:26Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-19T08:50:31Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### phtben_1 Dreambooth model trained by satani with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Kraaven/ppo-LunarLanderV2_Test | Kraaven | 2023-12-19T08:42:21Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-19T08:42:01Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.24 +/- 14.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
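A hedged loading sketch follows; the checkpoint filename and the gymnasium-style step API are assumptions (Stable-Baselines3 uploads typically store the model as `ppo-LunarLander-v2.zip`).
```python
# Hedged sketch: download the checkpoint from the Hub and run one evaluation episode.
# Assumes gymnasium[box2d] is installed and the checkpoint filename follows the usual SB3 convention.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="Kraaven/ppo-LunarLanderV2_Test",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward:.2f}")
```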
|