pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths, 0-18.3M) | metadata (stringlengths, 2-1.07B) | id (stringlengths, 5-122) | last_modified (null) | tags (sequencelengths, 1-1.84k) | sha (null) | created_at (stringlengths, 25-25) |
---|---|---|---|---|---|---|---|---|
null | null | {} | pcso1991/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
] | null | 2024-05-01T11:35:15+00:00 |
|
null | null | {"license": "mit"} | phmota/diabetes_risk_predictor | null | [
"license:mit",
"region:us"
] | null | 2024-05-01T11:36:46+00:00 |
|
reinforcement-learning | null |
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| {"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "CartPole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | commanderxa/CartPole-v1 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-05-01T11:40:11+00:00 |
null | null | {} | letgoofthepizza/result-model | null | [
"region:us"
] | null | 2024-05-01T11:40:16+00:00 |
|
null | null | {} | asude55/emotion-turkish21 | null | [
"region:us"
] | null | 2024-05-01T11:40:41+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | fxmeng/PiSSA-Llama-3-70B-4bit-r64-1iter | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T11:41:19+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_instruct_generation
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7257
## Model description
More information needed
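In its absence, here is a minimal adapter-loading sketch (not part of the generated card), assuming the standard PEFT workflow on the base model named in the card metadata:

```python
# A minimal sketch (not from the generated card); assumes the standard PEFT
# adapter workflow on top of the base model listed in the card metadata.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", device_map="auto"
)
model = PeftModel.from_pretrained(base, "ajinkyabhandare/mistral_instruct_generation")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```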
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0986 | 0.1709 | 20 | 1.8805 |
| 2.0093 | 0.3419 | 40 | 1.7881 |
| 2.0904 | 0.5128 | 60 | 1.7578 |
| 1.8353 | 0.6838 | 80 | 1.7418 |
| 1.7356 | 0.8547 | 100 | 1.7257 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral_instruct_generation", "results": []}]} | ajinkyabhandare/mistral_instruct_generation | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T11:41:23+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# no_board_history_with_sys_history_v2_10epoch_lr5e-5_batch2
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "no_board_history_with_sys_history_v2_10epoch_lr5e-5_batch2", "results": []}]} | wenshicheng97/no_board_history_with_sys_history_v2_10epoch_lr5e-5_batch2 | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null | 2024-05-01T11:41:48+00:00 |
image-classification | transformers | {} | hemanthkandimalla/vit-base-beans | null | [
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T11:44:07+00:00 |
|
null | null | {} | SaniyaP28/garuda_llama2_bahasa | null | [
"region:us"
] | null | 2024-05-01T11:44:30+00:00 |
|
null | null | {} | asude55/emotion-turkish22 | null | [
"region:us"
] | null | 2024-05-01T11:44:37+00:00 |
|
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - mrtuandao/dreambooth-tuan-without-prior
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of SKS person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.




DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
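Until the snippet above is filled in, a sketch along these lines should work, assuming a standard `StableDiffusionPipeline` checkpoint (which the repo tags suggest) and the instance prompt from the card:

```python
# A minimal sketch (not from the original card); assumes a standard
# StableDiffusionPipeline checkpoint and the card's instance prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "mrtuandao/dreambooth-tuan-without-prior", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of SKS person").images[0]
image.save("sks_person.png")
```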
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true, "instance_prompt": "a photo of SKS person"} | mrtuandao/dreambooth-tuan-without-prior | null | [
"diffusers",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-05-01T11:44:43+00:00 |
text-to-speech | adapter-transformers | {"language": ["es", "en"], "license": "artistic-2.0", "library_name": "adapter-transformers", "tags": ["art"], "metrics": ["character"], "pipeline_tag": "text-to-speech"} | Asevis/Andy | null | [
"adapter-transformers",
"art",
"text-to-speech",
"es",
"en",
"license:artistic-2.0",
"region:us"
] | null | 2024-05-01T11:44:50+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
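As a placeholder until the card is completed, here is a generic sketch; the repo name suggests a BLIP-2 (OPT-2.7b) checkpoint, but the card leaves the model type unspecified, so treat this as an assumption:

```python
# A generic sketch (not from the card); the repo name suggests BLIP-2, which
# is an assumption -- the card does not state the model type.
import torch
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

processor = Blip2Processor.from_pretrained("baraah/blip2-opt-2.7b-1-5")
model = Blip2ForConditionalGeneration.from_pretrained(
    "baraah/blip2-opt-2.7b-1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("example.jpg")  # any local image
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```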
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | baraah/blip2-opt-2.7b-1-5 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T11:45:01+00:00 |
null | null | {} | yk20240501/yoneda_model | null | [
"tensorboard",
"region:us"
] | null | 2024-05-01T11:45:19+00:00 |
|
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1769
- F1: 0.8516
## Model description
More information needed
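A minimal inference sketch (not part of the generated card), using the standard token-classification pipeline for this PAN-X NER fine-tune:

```python
# A minimal sketch (not from the generated card) using the standard
# token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="u00890358/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("My name is Wolfgang and I live in Berlin."))
```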
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2935 | 1.0 | 835 | 0.1943 | 0.8149 |
| 0.1554 | 2.0 | 1670 | 0.1648 | 0.8464 |
| 0.1014 | 3.0 | 2505 | 0.1769 | 0.8516 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-all", "results": []}]} | u00890358/xlm-roberta-base-finetuned-panx-all | null | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T11:45:31+00:00 |
text-to-image | diffusers |
# DreamBooth model for the space concept trained by livewalk on the livewalk/james-webb-telescope dataset.
This is a Stable Diffusion model fine-tuned on the space concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of space telescope**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `telescope` images for the science theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('livewalk/space-telescope')
image = pipeline().images[0]
image
```
| {"license": "creativeml-openrail-m", "tags": ["pytorch", "diffusers", "stable-diffusion", "text-to-image", "diffusion-models-class", "dreambooth-hackathon", "science"], "widget": [{"text": "a photo of space telescope in the second Sun-Earth Lagrange point (L2)"}]} | livewalk/space-telescope | null | [
"diffusers",
"safetensors",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"science",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-05-01T11:47:18+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-finetuned-ind-17-imbalanced-aadhaarmask
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3209
- Accuracy: 0.8557
- Recall: 0.8557
- F1: 0.8542
- Precision: 0.8560
## Model description
More information needed
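A minimal inference sketch (not part of the generated card), using the standard image-classification pipeline:

```python
# A minimal sketch (not from the generated card); the image path is a placeholder.
from transformers import pipeline

clf = pipeline(
    "image-classification",
    model="Kushagra07/swin-base-patch4-window7-224-finetuned-ind-17-imbalanced-aadhaarmask",
)
print(clf("document.jpg"))  # hypothetical local image
```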
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.5155 | 0.9974 | 293 | 0.5710 | 0.7935 | 0.7935 | 0.7821 | 0.7895 |
| 0.4245 | 1.9983 | 587 | 0.4729 | 0.8238 | 0.8238 | 0.8187 | 0.8266 |
| 0.4183 | 2.9991 | 881 | 0.4145 | 0.8408 | 0.8408 | 0.8309 | 0.8350 |
| 0.4088 | 4.0 | 1175 | 0.3901 | 0.8425 | 0.8425 | 0.8375 | 0.8501 |
| 0.3489 | 4.9974 | 1468 | 0.3703 | 0.8463 | 0.8463 | 0.8446 | 0.8518 |
| 0.3115 | 5.9983 | 1762 | 0.3500 | 0.8540 | 0.8540 | 0.8525 | 0.8605 |
| 0.3087 | 6.9991 | 2056 | 0.3338 | 0.8519 | 0.8519 | 0.8494 | 0.8582 |
| 0.2372 | 8.0 | 2350 | 0.3181 | 0.8548 | 0.8548 | 0.8543 | 0.8587 |
| 0.2816 | 8.9974 | 2643 | 0.3167 | 0.8536 | 0.8536 | 0.8530 | 0.8561 |
| 0.2378 | 9.9745 | 2930 | 0.3063 | 0.8702 | 0.8702 | 0.8686 | 0.8709 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0a0+81ea7a4
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy", "recall", "f1", "precision"], "base_model": "microsoft/swin-base-patch4-window7-224", "model-index": [{"name": "swin-base-patch4-window7-224-finetuned-ind-17-imbalanced-aadhaarmask", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.855683269476373, "name": "Accuracy"}, {"type": "recall", "value": 0.855683269476373, "name": "Recall"}, {"type": "f1", "value": 0.8542203503644927, "name": "F1"}, {"type": "precision", "value": 0.8559779206156822, "name": "Precision"}]}]}]} | Kushagra07/swin-base-patch4-window7-224-finetuned-ind-17-imbalanced-aadhaarmask | null | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-base-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T11:48:34+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
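In lieu of the missing snippet, a generic sketch; the repo tags indicate a Gemma-architecture text-generation checkpoint, so standard causal-LM loading should apply (an assumption, since the card itself is blank):

```python
# A generic sketch (not from the card); assumes a standard Gemma-style
# causal LM, as the repo tags suggest.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("vijayvarmak/gemma-FT-Gemini-Full")
model = AutoModelForCausalLM.from_pretrained("vijayvarmak/gemma-FT-Gemini-Full", device_map="auto")

inputs = tok("Hello, how are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))
```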
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | vijayvarmak/gemma-FT-Gemini-Full | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T11:48:54+00:00 |
null | null | {} | BasantSubba/gpt2-finetuned-URL | null | [
"region:us"
] | null | 2024-05-01T11:49:11+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sea-lion-7b-text-to-sql
This model is a fine-tuned version of [aisingapore/sea-lion-7b-instruct](https://huggingface.co/aisingapore/sea-lion-7b-instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.2.1
- Datasets 2.16.1
- Tokenizers 0.15.2 | {"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "aisingapore/sea-lion-7b-instruct", "model-index": [{"name": "sea-lion-7b-text-to-sql", "results": []}]} | Phuree/sea-lion-7b-text-to-sql | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:aisingapore/sea-lion-7b-instruct",
"license:mit",
"region:us"
] | null | 2024-05-01T11:49:37+00:00 |
text-generation | transformers | {} | itay-nakash/model_e5f8f849f6 | null | [
"transformers",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T11:50:13+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral 7B NL2BASH Agent
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the nl2bash dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5952
## Model description
Mistral 7B NL2BASH Agent is a fine-tuned model that converts natural-language queries into Linux commands, serving as an agent that generates shell commands directly from plain-English requests.
## Intended uses & limitations
- Automating the process of creating Linux commands from natural language queries.
- Assisting users in generating complex Linux commands quickly and accurately.
- The model's performance may vary based on the complexity and specificity of the natural language queries.
- It may not handle all edge cases or uncommon scenarios effectively.
## Installation
```bash
pip install transformers accelerate torch bitsandbytes peft
```
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
from peft import PeftModel, PeftConfig
read_token="YOUR HUGGINGFACE TOKEN"
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",
    quantization_config=nf4_config,
    use_cache=False,
    token=read_token,
)
model = PeftModel.from_pretrained(model, "pranay-j/mistral-7b-nl2bash-agent",device_map='auto',token=read_token)
tokenizer=AutoTokenizer.from_pretrained("pranay-j/mistral-7b-nl2bash-agent",add_eos_token=False)
nl='Add "execute" to the permissions of all directories in the home directory tree'
prompt= f"[INST] {nl} [/INST]"
inputs=tokenizer(prompt,return_tensors="pt")
input_ids=inputs["input_ids"].to("cuda")
with torch.no_grad():
    out = model.generate(input_ids, top_p=0.5, temperature=0.7, max_new_tokens=30)
print(tokenizer.decode(out[0][input_ids.shape[-1]:]))
# Output: find ~ -type d -exec chmod +x {} </s>
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6136 | 1.0 | 202 | 1.6451 |
| 1.5448 | 2.0 | 404 | 1.5952 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"language": ["en"], "license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "datasets": ["jiacheng-ye/nl2bash"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "Mistral 7B NL2BASH Agent", "results": []}]} | pranay-j/mistral-7b-nl2bash-agent | null | [
"peft",
"safetensors",
"generated_from_trainer",
"en",
"dataset:jiacheng-ye/nl2bash",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T11:50:23+00:00 |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
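A fuller loading sketch (not from the card; the archive filename is an assumption, so check the repo's Files tab):

```python
# A minimal sketch (not from the card). The .zip filename is hypothetical --
# verify it against the repository's file list before use.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="emmermarcell/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # hypothetical filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```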
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "239.60 +/- 23.52", "name": "mean_reward", "verified": false}]}]}]} | emmermarcell/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-05-01T11:50:33+00:00 |
null | null | {"license": "creativeml-openrail-m"} | casque/0285_pink_bodysuit_v1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-01T11:51:58+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetune-test3
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1515 | 0.9956 | 56 | 0.7617 |
| 0.7099 | 1.9911 | 112 | 0.6591 |
| 0.6427 | 2.9867 | 168 | 0.6336 |
| 0.5897 | 4.0 | 225 | 0.6145 |
| 0.5634 | 4.9956 | 281 | 0.6038 |
| 0.5328 | 5.9911 | 337 | 0.6003 |
| 0.5084 | 6.9867 | 393 | 0.6019 |
| 0.4793 | 8.0 | 450 | 0.6030 |
| 0.4718 | 8.9956 | 506 | 0.6088 |
| 0.4559 | 9.9556 | 560 | 0.6143 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.0.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "Finetune-test3", "results": []}]} | AmaanUsmani/Finetune-test3 | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T11:53:42+00:00 |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1-4x4-no_slippery**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1-4x4-no_slippery**.
## Usage

```python
model = load_from_hub(repo_id="ws11yrin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]} | ws11yrin/q-FrozenLake-v1-4x4-noSlippery | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-05-01T11:53:46+00:00 |
null | transformers | # GGUF / IQ / Imatrix for [Spicy-Laymonade-7B](https://huggingface.co/ABX-AI/Spicy-Laymonade-7B)
Adding GGUF as I noticed the HF model had a lot of downloads but I never quantized it originally.

**Why Importance Matrix?**
**Importance Matrix**, at least based on my testing, has been shown to improve the output and performance of "IQ"-type quantizations, where the compression becomes quite heavy.
The **Imatrix** performs a calibration using a provided dataset. Testing has shown that semi-randomized data can help preserve more important segments as the compression is applied.
Related discussions on GitHub:
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
The imatrix.txt file that I used contains general, semi-random data, with some custom kink.
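For local inference, a minimal sketch with `llama-cpp-python` (not from the card; the quant filename below is hypothetical, so pick an actual file from the repo):

```python
# A minimal sketch (not from the card); the .gguf filename below is
# hypothetical -- substitute one of the files actually in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ABX-AI/Spicy-Laymonade-7B-GGUF-IQ-Imatrix",
    filename="Spicy-Laymonade-7B-Q4_K_M-imat.gguf",  # hypothetical filename
    n_ctx=4096,
)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```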
# Spicy-Laymonade-7B
Well, we have Laymonade, so why not spice it up? This merge is a step toward creating a new 9B.
That said, I did try it out on its own, and it seemed to work pretty well.
## Merge Details
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [cgato/TheSpice-7b-v0.1.1](https://huggingface.co/cgato/TheSpice-7b-v0.1.1)
* [ABX-AI/Laymonade-7B](https://huggingface.co/ABX-AI/Laymonade-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: cgato/TheSpice-7b-v0.1.1
        layer_range: [0, 32]
      - model: ABX-AI/Laymonade-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: ABX-AI/Laymonade-7B
parameters:
  t:
    - filter: self_attn
      value: [0.7, 0.3, 0.6, 0.2, 0.5]
    - filter: mlp
      value: [0.3, 0.7, 0.4, 0.8, 0.5]
    - value: 0.5
dtype: bfloat16
``` | {"license": "other", "library_name": "transformers", "tags": ["mergekit", "merge", "not-for-all-audiences"], "base_model": ["cgato/TheSpice-7b-v0.1.1", "ABX-AI/Laymonade-7B"]} | ABX-AI/Spicy-Laymonade-7B-GGUF-IQ-Imatrix | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"not-for-all-audiences",
"base_model:cgato/TheSpice-7b-v0.1.1",
"base_model:ABX-AI/Laymonade-7B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T11:55:18+00:00 |
null | null | {"license": "mit"} | STONE11112/gpt-solvits | null | [
"license:mit",
"region:us"
] | null | 2024-05-01T11:56:03+00:00 |
|
null | null | {"license": "mit"} | ap00rvmohit/Llama_adolescent_therapy_chat | null | [
"license:mit",
"region:us"
] | null | 2024-05-01T11:56:16+00:00 |
|
null | null | {} | johnpccd/orpo-phi3 | null | [
"region:us"
] | null | 2024-05-01T11:56:19+00:00 |
|
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - naripok/corgy_dog_LoRA
<Gallery />
## Model description
These are naripok/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/naripok/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
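Pending the snippet above, a minimal sketch, assuming standard SDXL LoRA weights and the trigger phrase from the card:

```python
# A minimal sketch (not from the original card); assumes standard SDXL LoRA
# weights and the card's trigger phrase "a photo of TOK dog".
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("naripok/corgy_dog_LoRA")
image = pipe("a photo of TOK dog in a meadow").images[0]
```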
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of TOK dog", "widget": []} | naripok/corgy_dog_LoRA | null | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-05-01T11:56:25+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6771
- Answer: {'precision': 0.7107258938244854, 'recall': 0.8108776266996292, 'f1': 0.7575057736720554, 'number': 809}
- Header: {'precision': 0.3543307086614173, 'recall': 0.37815126050420167, 'f1': 0.3658536585365853, 'number': 119}
- Question: {'precision': 0.7716814159292036, 'recall': 0.8187793427230047, 'f1': 0.7945330296127562, 'number': 1065}
- Overall Precision: 0.7216
- Overall Recall: 0.7893
- Overall F1: 0.7539
- Overall Accuracy: 0.8139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.8027 | 1.0 | 10 | 1.5884 | {'precision': 0.01997780244173141, 'recall': 0.022249690976514216, 'f1': 0.02105263157894737, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.18858307849133538, 'recall': 0.17370892018779344, 'f1': 0.18084066471163246, 'number': 1065} | 0.1079 | 0.1019 | 0.1048 | 0.3753 |
| 1.4071 | 2.0 | 20 | 1.2076 | {'precision': 0.23890339425587467, 'recall': 0.22620519159456118, 'f1': 0.23238095238095238, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.41302791696492486, 'recall': 0.5417840375586854, 'f1': 0.4687246141348498, 'number': 1065} | 0.3512 | 0.3813 | 0.3656 | 0.5772 |
| 1.0593 | 3.0 | 30 | 0.9154 | {'precision': 0.4750542299349241, 'recall': 0.5414091470951793, 'f1': 0.5060658578856152, 'number': 809} | {'precision': 0.11363636363636363, 'recall': 0.04201680672268908, 'f1': 0.06134969325153375, 'number': 119} | {'precision': 0.5922493681550126, 'recall': 0.6600938967136151, 'f1': 0.6243339253996447, 'number': 1065} | 0.5323 | 0.5750 | 0.5528 | 0.7136 |
| 0.802 | 4.0 | 40 | 0.7552 | {'precision': 0.5981404958677686, 'recall': 0.715698393077874, 'f1': 0.6516601012943164, 'number': 809} | {'precision': 0.20253164556962025, 'recall': 0.13445378151260504, 'f1': 0.1616161616161616, 'number': 119} | {'precision': 0.6680707666385847, 'recall': 0.7446009389671362, 'f1': 0.7042628774422734, 'number': 1065} | 0.6213 | 0.6964 | 0.6567 | 0.7659 |
| 0.6561 | 5.0 | 50 | 0.7030 | {'precision': 0.6381856540084389, 'recall': 0.7478368355995055, 'f1': 0.6886738759248718, 'number': 809} | {'precision': 0.3, 'recall': 0.226890756302521, 'f1': 0.25837320574162675, 'number': 119} | {'precision': 0.6780766096169519, 'recall': 0.7812206572769953, 'f1': 0.7260034904013962, 'number': 1065} | 0.6464 | 0.7346 | 0.6876 | 0.7889 |
| 0.5591 | 6.0 | 60 | 0.6842 | {'precision': 0.6502100840336135, 'recall': 0.765142150803461, 'f1': 0.7030096536059057, 'number': 809} | {'precision': 0.3132530120481928, 'recall': 0.2184873949579832, 'f1': 0.25742574257425743, 'number': 119} | {'precision': 0.7165820642978004, 'recall': 0.7953051643192488, 'f1': 0.7538940809968847, 'number': 1065} | 0.6730 | 0.7486 | 0.7088 | 0.7942 |
| 0.4858 | 7.0 | 70 | 0.6508 | {'precision': 0.6569948186528497, 'recall': 0.7836835599505563, 'f1': 0.7147688838782412, 'number': 809} | {'precision': 0.34210526315789475, 'recall': 0.3277310924369748, 'f1': 0.33476394849785407, 'number': 119} | {'precision': 0.7205503009458297, 'recall': 0.7868544600938967, 'f1': 0.7522441651705565, 'number': 1065} | 0.6740 | 0.7582 | 0.7136 | 0.8063 |
| 0.431 | 8.0 | 80 | 0.6674 | {'precision': 0.6578140960163432, 'recall': 0.796044499381953, 'f1': 0.7203579418344519, 'number': 809} | {'precision': 0.35964912280701755, 'recall': 0.3445378151260504, 'f1': 0.351931330472103, 'number': 119} | {'precision': 0.7482517482517482, 'recall': 0.8037558685446009, 'f1': 0.775011317338162, 'number': 1065} | 0.6889 | 0.7732 | 0.7286 | 0.7969 |
| 0.3878 | 9.0 | 90 | 0.6526 | {'precision': 0.6787564766839378, 'recall': 0.8096415327564895, 'f1': 0.7384441939120632, 'number': 809} | {'precision': 0.336283185840708, 'recall': 0.31932773109243695, 'f1': 0.32758620689655166, 'number': 119} | {'precision': 0.7586206896551724, 'recall': 0.7849765258215963, 'f1': 0.7715736040609138, 'number': 1065} | 0.7014 | 0.7672 | 0.7328 | 0.8073 |
| 0.3744 | 10.0 | 100 | 0.6519 | {'precision': 0.6854410201912858, 'recall': 0.7972805933250927, 'f1': 0.7371428571428571, 'number': 809} | {'precision': 0.3130434782608696, 'recall': 0.3025210084033613, 'f1': 0.3076923076923077, 'number': 119} | {'precision': 0.7611940298507462, 'recall': 0.8140845070422535, 'f1': 0.7867513611615246, 'number': 1065} | 0.7052 | 0.7767 | 0.7393 | 0.8120 |
| 0.3161 | 11.0 | 110 | 0.6696 | {'precision': 0.6948257655755016, 'recall': 0.8133498145859085, 'f1': 0.7494305239179954, 'number': 809} | {'precision': 0.3283582089552239, 'recall': 0.3697478991596639, 'f1': 0.34782608695652173, 'number': 119} | {'precision': 0.7604166666666666, 'recall': 0.8225352112676056, 'f1': 0.7902571041948578, 'number': 1065} | 0.7067 | 0.7918 | 0.7468 | 0.8060 |
| 0.3039 | 12.0 | 120 | 0.6656 | {'precision': 0.7007534983853606, 'recall': 0.8046971569839307, 'f1': 0.7491369390103566, 'number': 809} | {'precision': 0.3524590163934426, 'recall': 0.36134453781512604, 'f1': 0.35684647302904565, 'number': 119} | {'precision': 0.7695769576957696, 'recall': 0.8028169014084507, 'f1': 0.7858455882352942, 'number': 1065} | 0.7165 | 0.7772 | 0.7456 | 0.8131 |
| 0.2877 | 13.0 | 130 | 0.6742 | {'precision': 0.6927138331573389, 'recall': 0.8108776266996292, 'f1': 0.7471526195899771, 'number': 809} | {'precision': 0.32592592592592595, 'recall': 0.3697478991596639, 'f1': 0.3464566929133859, 'number': 119} | {'precision': 0.7651715039577837, 'recall': 0.8169014084507042, 'f1': 0.7901907356948229, 'number': 1065} | 0.7075 | 0.7878 | 0.7455 | 0.8109 |
| 0.2681 | 14.0 | 140 | 0.6743 | {'precision': 0.7128927410617552, 'recall': 0.8133498145859085, 'f1': 0.7598152424942264, 'number': 809} | {'precision': 0.36220472440944884, 'recall': 0.3865546218487395, 'f1': 0.37398373983739847, 'number': 119} | {'precision': 0.7734513274336283, 'recall': 0.8206572769953052, 'f1': 0.7963553530751709, 'number': 1065} | 0.7239 | 0.7918 | 0.7563 | 0.8148 |
| 0.2609 | 15.0 | 150 | 0.6771 | {'precision': 0.7107258938244854, 'recall': 0.8108776266996292, 'f1': 0.7575057736720554, 'number': 809} | {'precision': 0.3543307086614173, 'recall': 0.37815126050420167, 'f1': 0.3658536585365853, 'number': 119} | {'precision': 0.7716814159292036, 'recall': 0.8187793427230047, 'f1': 0.7945330296127562, 'number': 1065} | 0.7216 | 0.7893 | 0.7539 | 0.8139 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["funsd"], "base_model": "microsoft/layoutlm-base-uncased", "model-index": [{"name": "layoutlm-funsd", "results": []}]} | leom21/layoutlm-funsd | null | [
"transformers",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"base_model:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T11:56:57+00:00 |
text-generation | transformers |
Unlike other fake context extensions, this is a so-far one-of-a-kind Llama-3 70B Instruct 32k. | {"license": "other", "license_name": "llama3", "license_link": "LICENSE"} | maywell/Llama-3-70B-Instruct-32k | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T11:59:47+00:00 |
text-generation | transformers | {"license": "mit"} | zhumj34/Mipha-phi1_5-1.6B | null | [
"transformers",
"safetensors",
"mipha-phi",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:00:01+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny chinese - VingeNie
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 16.1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7486
- Cer Ortho: 54.2817
- Cer: 29.8525
## Model description
More information needed
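A minimal transcription sketch (not part of the generated card), using the standard ASR pipeline:

```python
# A minimal sketch (not from the generated card); the audio path is a placeholder.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="VingeNie/whisper-tiny-zh_CN_lr5",
)
print(asr("sample.wav")["text"])  # hypothetical local audio file
```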
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 25
- training_steps: 650
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer Ortho | Cer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.8045 | 0.3594 | 225 | 0.8040 | 53.9956 | 31.6580 |
| 0.7595 | 0.7188 | 450 | 0.7486 | 54.2817 | 29.8525 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["zh"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_16_1"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "Whisper Tiny chinese - VingeNie", "results": []}]} | VingeNie/whisper-tiny-zh_CN_lr5 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_16_1",
"base_model:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:00:23+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny chinese - VingeNie
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 16.1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7223
- Cer Ortho: 40.6344
- Cer: 29.5803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 25
- training_steps: 650
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer Ortho | Cer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.7743 | 0.3594 | 225 | 0.7942 | 40.4431 | 31.4481 |
| 0.7344 | 0.7188 | 450 | 0.7223 | 40.6344 | 29.5803 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["zh"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_16_1"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "Whisper Tiny chinese - VingeNie", "results": []}]} | VingeNie/whisper-tiny-zh_CN_lr4 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_16_1",
"base_model:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:00:31+00:00 |
text-generation | transformers | youko-8b is a Llama-3 model that is highly fluent in Japanese thanks to additional continued pretraining on Japanese data.
We performed a difference-vector (chat-vector) merge with the Instruct model; a minimal sketch follows the formula below.
> rinna/llama-3-youko-8b + 0.8*(meta-llama/Meta-Llama-3-8B-Instruct - meta-llama/Meta-Llama-3-8B)
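A hypothetical sketch of this merge with `transformers` (not the author's exact script; it assumes the three checkpoints share the same architecture and vocabulary, and that enough CPU RAM is available to hold all of them):

```python
# Hypothetical sketch, not the author's exact script: applies the chat vector
# tensor by tensor, following the formula above.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)
inst = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
youko = AutoModelForCausalLM.from_pretrained("rinna/llama-3-youko-8b", torch_dtype=torch.bfloat16)

scale = 0.8
base_sd, inst_sd = base.state_dict(), inst.state_dict()
with torch.no_grad():
    for name, param in youko.state_dict().items():
        # youko + 0.8 * (Instruct - Base)
        param += scale * (inst_sd[name] - base_sd[name])

youko.save_pretrained("Llama-3-youko-8b-instruct-chatvector")
```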
See [rinna/llama-3-youko-8b](https://huggingface.co/rinna/llama-3-youko-8b) for details. | {"license": "llama3"} | aixsatoshi/Llama-3-youko-8b-instruct-chatvector | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T12:00:50+00:00 |
text-to-image | diffusers |
# Insane Realistic 2
Original page: https://civitai.com/models/108585/
Samples and prompts:

(Click for larger)
Top left: a cute girl with freckles on her face, cgsociety unreal engine, wet t-shirt, short skirt, style of aenami alena, trending on artstartion, inspired by Fyodor Vasilyev, looks a bit similar to amy adams, emissive light, fluffy orange skin, dribbble, dramatic rendering
Top right: 90s grainy vhs still young mother loose shirt, headband. holding a baby, on the couch, posing, bow. bokeh, bright lighting. smile
Bottom left: beautiful image of the first day of creation of the world and planet earth in the dark deep space, light and darkness separated, planets, under a black night sky of astronomical glittering starlight in the outer reaches of the solar system beyond, trending on artstation, octane render, symmetry by raqib shaw, presence of god, eye of god.
Bottom right: hill, mountains, sunset, field, world, ocean, trees, underground, city, village, path, urban, mountain, buildings, waterfall, skyline, nature, town, industrial, architecture, road, jungle, valley, bridge, horizon, landscape, house, building, environment, wilderness, enviroment, river, cave, desert, forest
| {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["Base Model", "Realism", "Female", "Woman", "cordonsolution8", "stable-diffusion", "stable-diffusion-diffusers", "diffusers", "text-to-image"], "pipeline_tag": "text-to-image"} | Yntec/insaneRealistic_v2 | null | [
"diffusers",
"safetensors",
"Base Model",
"Realism",
"Female",
"Woman",
"cordonsolution8",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us",
"has_space"
] | null | 2024-05-01T12:01:07+00:00 |
text-to-image | diffusers |
# AutoTrain SDXL LoRA DreamBooth - rshir/sdxl-lora-rshir
<Gallery />
## Model description
These are rshir/sdxl-lora-rshir LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use A photo of Roman Shirochenko wearing casual clothes, taking a selfie, and smiling. to trigger the image generation.
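A minimal usage sketch with `diffusers` (illustrative only: the fp16 setting and step count are assumptions, not part of this repo):

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model this LoRA was trained against, then attach the LoRA
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rshir/sdxl-lora-rshir")

prompt = "A photo of Roman Shirochenko wearing casual clothes, taking a selfie, and smiling."
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("selfie.png")
```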
## Download model
Weights for this model are available in Safetensors format.
[Download](rshir/sdxl-lora-rshir/tree/main) them in the Files & versions tab.
| {"license": "openrail++", "tags": ["autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "A photo of Roman Shirochenko wearing casual clothes, taking a selfie, and smiling."} | rshir/sdxl-lora-rshir | null | [
"diffusers",
"autotrain",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-05-01T12:01:20+00:00 |
null | null | {"license": "llama3"} | BreeK/Aurora | null | [
"license:llama3",
"region:us"
] | null | 2024-05-01T12:02:15+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | AbhishekG13/phi-3-alto-shaam-qna | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:02:39+00:00 |
null | null | {"license": "unknown"} | hautc/h2 | null | [
"license:unknown",
"region:us"
] | null | 2024-05-01T12:03:23+00:00 |
|
null | null | {"license": "openrail"} | trewatyou/m | null | [
"license:openrail",
"region:us"
] | null | 2024-05-01T12:05:23+00:00 |
|
image-classification | transformers | {} | DouglasBraga/swin-tiny-patch4-window7-224-finetuned-eurosat-leukemia-3000-finetuned-leukemia-3000 | null | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:06:08+00:00 |
|
null | null | {} | Bobermikola/sn25-1-3 | null | [
"region:us"
] | null | 2024-05-01T12:07:00+00:00 |
|
null | null | {"license": "openrail"} | sharjjel67/credit_card_fraud_detection | null | [
"license:openrail",
"region:us"
] | null | 2024-05-01T12:08:02+00:00 |
|
null | null | {} | wdndev/tiny_llm_ptm_92m | null | [
"region:us"
] | null | 2024-05-01T12:08:32+00:00 |
|
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Image-Arousal-new
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6535
- Accuracy: 0.4591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.2322 | 0.1855 | 100 | 1.2411 | 0.4452 |
| 1.1613 | 0.3711 | 200 | 1.2600 | 0.3987 |
| 1.2851 | 0.5566 | 300 | 1.2428 | 0.4052 |
| 1.1931 | 0.7421 | 400 | 1.2041 | 0.4559 |
| 1.1098 | 0.9276 | 500 | 1.1918 | 0.4586 |
| 1.1714 | 1.1132 | 600 | 1.1806 | 0.4721 |
| 1.1216 | 1.2987 | 700 | 1.1692 | 0.4651 |
| 1.2208 | 1.4842 | 800 | 1.1801 | 0.4614 |
| 1.0644 | 1.6698 | 900 | 1.1775 | 0.4596 |
| 1.1638 | 1.8553 | 1000 | 1.2031 | 0.4721 |
| 0.9559 | 2.0408 | 1100 | 1.2392 | 0.4521 |
| 0.8442 | 2.2263 | 1200 | 1.2544 | 0.4661 |
| 0.8713 | 2.4119 | 1300 | 1.2792 | 0.4744 |
| 0.8442 | 2.5974 | 1400 | 1.2618 | 0.4647 |
| 0.831 | 2.7829 | 1500 | 1.3202 | 0.4554 |
| 0.7774 | 2.9685 | 1600 | 1.3087 | 0.4572 |
| 0.5501 | 3.1540 | 1700 | 1.4975 | 0.4600 |
| 0.6069 | 3.3395 | 1800 | 1.5869 | 0.4512 |
| 0.4397 | 3.5250 | 1900 | 1.6458 | 0.4387 |
| 0.4468 | 3.7106 | 2000 | 1.6341 | 0.4493 |
| 0.4198 | 3.8961 | 2100 | 1.6535 | 0.4591 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "Image-Arousal-new", "results": []}]} | SeyedAli/Image-Arousal-new | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:08:37+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_withdpo_4iters_bs256_432lr_iter_2
This model is a fine-tuned version of [ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1](https://huggingface.co/ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1", "model-index": [{"name": "0.001_withdpo_4iters_bs256_432lr_iter_2", "results": []}]} | ShenaoZ/0.001_withdpo_4iters_bs256_432lr_iter_2 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T12:08:52+00:00 |
text-generation | transformers |
## Tiny LLM 92M SFT
### Introduction
This project, [wdndev/tiny-llm-zh (github.com)](https://github.com/wdndev/tiny-llm-zh), aims to build a small-parameter Chinese large language model for quickly getting started with LLM fundamentals.
Model architecture: the overall architecture uses common open-source components, including RMSNorm, RoPE, and MHA.
Implementation details: covers the two-stage LLM training process plus subsequent human alignment, i.e., pretraining (PTM) -> instruction fine-tuning (SFT) -> human alignment (RLHF, DPO) -> evaluation.
Note: due to resource constraints, this project's first priority is to walk through the full LLM pipeline rather than to tune for strong results, so evaluation scores are low and some generations are erroneous.
### Model details
Trained on roughly 9B tokens of Chinese corpus, mainly encyclopedia content; the model architecture uses common open-source components, including RMSNorm, RoPE, and MHA.
### Environment
Only `transformers` needs to be installed to run the model.
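For example, assuming a recent release:
```bash
pip install transformers
```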
### Quick start
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "wdndev/tiny_llm_sft_92m"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)
sys_text = "你是由wdndev开发的个人助手。"  # "You are a personal assistant developed by wdndev."
# user_text = "中国的首都是哪儿?"  # "What is the capital of China?"
# user_text = "你叫什么名字?"  # "What is your name?"
user_text = "介绍一下中国"  # "Introduce China."
input_txt = "\n".join(["<|system|>", sys_text.strip(),
"<|user|>", user_text.strip(),
"<|assistant|>"]).strip() + "\n"
model_inputs = tokenizer(input_txt, return_tensors="pt").to(model.device)
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=200)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
``` | {"language": ["zh"], "tags": ["chat"], "pipeline_tag": "text-generation"} | wdndev/tiny_llm_sft_92m | null | [
"transformers",
"pytorch",
"tinyllm",
"text-generation",
"chat",
"conversational",
"custom_code",
"zh",
"autotrain_compatible",
"region:us"
] | null | 2024-05-01T12:08:55+00:00 |
text-generation | transformers | # Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.40.1
```
Also make sure to provide your Hugging Face token to the pipeline if the model lives in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="Aaryan-Nakhat/experiment-41-intelligent-layer-2-plus-exp-39-data",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 1
# generate_text.model.generation_config.max_new_tokens = 192
# generate_text.model.generation_config.do_sample = True
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.3)
# generate_text.model.generation_config.repetition_penalty = float(1.2)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|end_of_text|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Aaryan-Nakhat/experiment-41-intelligent-layer-2-plus-exp-39-data",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Aaryan-Nakhat/experiment-41-intelligent-layer-2-plus-exp-39-data",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 1
# generate_text.model.generation_config.max_new_tokens = 192
# generate_text.model.generation_config.do_sample = True
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.3)
# generate_text.model.generation_config.repetition_penalty = float(1.2)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Aaryan-Nakhat/experiment-41-intelligent-layer-2-plus-exp-39-data" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|end_of_text|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
# model.generation_config.min_new_tokens = 1
# model.generation_config.max_new_tokens = 192
# model.generation_config.do_sample = True
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.3)
# model.generation_config.repetition_penalty = float(1.2)
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
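A minimal sketch of both options (assuming `bitsandbytes` and `accelerate` are installed; the flags shown are standard `transformers` arguments):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Aaryan-Nakhat/experiment-41-intelligent-layer-2-plus-exp-39-data"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True, trust_remote_code=True)

# 4-bit quantized weights (use load_in_8bit=True for 8-bit), sharded across all visible GPUs
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,
    device_map="auto",
    trust_remote_code=True,
)
```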
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(128256, 4096, padding_idx=128001)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=1024, bias=False)
(v_proj): Linear(in_features=4096, out_features=1024, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
(up_proj): Linear(in_features=4096, out_features=14336, bias=False)
(down_proj): Linear(in_features=14336, out_features=4096, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=128256, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. | {"language": ["en"], "library_name": "transformers", "tags": ["gpt", "llm", "large language model", "h2o-llmstudio"], "inference": false, "thumbnail": "https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico"} | Aaryan-Nakhat/experiment-41-intelligent-layer-2-plus-exp-39-data | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T12:09:38+00:00 |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | research-dump/Fine_tuned_bert-base-uncased_TAQA_extension | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:12:10+00:00 |
video-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2776
- Accuracy: 0.8429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7979 | 0.25 | 75 | 1.5073 | 0.3857 |
| 0.6556 | 1.25 | 150 | 0.8398 | 0.6571 |
| 0.3096 | 2.25 | 225 | 0.2880 | 0.8571 |
| 0.1904 | 3.25 | 300 | 0.2776 | 0.8429 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "MCG-NJU/videomae-base", "model-index": [{"name": "videomae-base-finetuned-ucf101-subset", "results": []}]} | kkumtori/videomae-base-finetuned-ucf101-subset | null | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:13:17+00:00 |
reinforcement-learning | null |
# **Q-Learning** Agent playing **FrozenLake-v1-4x4**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1-4x4** .
## Usage

```python
model = load_from_hub(repo_id="ws11yrin/q-FrozenLake-v1-4x4", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
| {"tags": ["FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4", "type": "FrozenLake-v1-4x4"}, "metrics": [{"type": "mean_reward", "value": "0.65 +/- 0.48", "name": "mean_reward", "verified": false}]}]}]} | ws11yrin/q-FrozenLake-v1-4x4 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null | 2024-05-01T12:14:41+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** Crysiss
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | Crysiss/llama3-8B-welfare-unsloth-last-2 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:14:54+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5675 | 0.08 | 5000 | 1.7421 |
| 1.6789 | 0.15 | 10000 | 1.5231 |
| 1.5321 | 0.23 | 15000 | 1.4227 |
| 1.4526 | 0.31 | 20000 | 1.3572 |
| 1.3952 | 0.38 | 25000 | 1.3033 |
| 1.3431 | 0.46 | 30000 | 1.2568 |
| 1.2983 | 0.54 | 35000 | 1.2112 |
| 1.2522 | 0.61 | 40000 | 1.1708 |
| 1.2095 | 0.69 | 45000 | 1.1319 |
| 1.1742 | 0.77 | 50000 | 1.0989 |
| 1.1429 | 0.84 | 55000 | 1.0754 |
| 1.1244 | 0.92 | 60000 | 1.0634 |
| 1.1143 | 1.0 | 65000 | 1.0604 |
### Framework versions
- Transformers 4.37.2
- Pytorch 1.13.1.post200
- Datasets 2.14.6
- Tokenizers 0.15.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "codeparrot-ds", "results": []}]} | mengren1942/codeparrot-ds | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T12:15:00+00:00 |
null | mlx |
# mlx-community/llama-3-youko-8b-8bit
This model was converted to MLX format from [`rinna/llama-3-youko-8b`](https://huggingface.co/rinna/llama-3-youko-8b) using mlx-lm version **0.12.1**.
Refer to the [original model card](https://huggingface.co/rinna/llama-3-youko-8b) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/llama-3-youko-8b-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"language": ["ja", "en"], "license": "llama3", "tags": ["mlx"], "datasets": ["mc4", "wikipedia", "EleutherAI/pile", "oscar-corpus/colossal-oscar-1.0", "cc100"], "thumbnail": "https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png", "inference": false} | mlx-community/llama-3-youko-8b-8bit | null | [
"mlx",
"safetensors",
"llama",
"ja",
"en",
"dataset:mc4",
"dataset:wikipedia",
"dataset:EleutherAI/pile",
"dataset:oscar-corpus/colossal-oscar-1.0",
"dataset:cc100",
"license:llama3",
"region:us"
] | null | 2024-05-01T12:15:12+00:00 |
null | null | {} | Bobermikola/sn25-2-3 | null | [
"region:us"
] | null | 2024-05-01T12:15:14+00:00 |
|
null | null | {} | shankha/mistral_7b_instruct | null | [
"region:us"
] | null | 2024-05-01T12:16:03+00:00 |
|
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_3_epochs_no_perturb
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1620
- Precision: 0.2876
- Recall: 0.3063
- F1: 0.2967
- Accuracy: 0.9558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 103 | 0.1906 | 0.2105 | 0.1778 | 0.1928 | 0.9508 |
| No log | 2.0 | 206 | 0.1676 | 0.2550 | 0.3016 | 0.2764 | 0.9534 |
| No log | 3.0 | 309 | 0.1620 | 0.2876 | 0.3063 | 0.2967 | 0.9558 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.0+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "model_3_epochs_no_perturb", "results": []}]} | cria111/model_3_epochs_no_perturb | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:16:05+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5.testy.tedtalks.simple
This model is a fine-tuned version of [samzirbo/mT5.pretrained.en-es.16K](https://huggingface.co/samzirbo/mT5.pretrained.en-es.16K) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: nan
- eval_bleu: 0.0
- eval_meteor: 0.0
- eval_chrF++: 0.0
- eval_runtime: 87.7839
- eval_samples_per_second: 22.783
- eval_steps_per_second: 0.365
- epoch: 0.4905
- step: 4500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "base_model": "samzirbo/mT5.pretrained.en-es.16K", "model-index": [{"name": "mT5.testy.tedtalks.simple", "results": []}]} | samzirbo/mT5.testy.tedtalks.simple | null | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:samzirbo/mT5.pretrained.en-es.16K",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T12:16:17+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** ghaluh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"} | ghaluh/lora_SS_phi3_cefr_cep | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:16:31+00:00 |
null | null | {} | sai-vatturi/whisper-base-english-indian-accent | null | [
"region:us"
] | null | 2024-05-01T12:16:36+00:00 |
|
text-classification | transformers | {"language": "en", "license": "mit", "pipeline_tag": "text-classification", "widget": [{"text": "I had a good time at Jane\u2019s party, last month. It was a good night. It was especially good talking to John and Ross about going to the beach. Then we sat on a couch: I ate potato chips and drank a lot of soft drink. We talked for about an hour or so. I remember Ross wearing a blue t-shirt. There were about 15 people at the party and I remember they played a lot of music that I liked."}, {"text": "Last summer I felt great."}, {"text": "I always enjoy a good party."}]} | exp-psych-lab/AMT-pipeline | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:16:46+00:00 |
|
null | null | {} | FDGDFGD/DFGFDH | null | [
"region:us"
] | null | 2024-05-01T12:16:51+00:00 |
|
null | null | {} | 4shL3I/sample_data | null | [
"region:us"
] | null | 2024-05-01T12:17:00+00:00 |
|
reinforcement-learning | ml-agents |
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: DeMuenu/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| {"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]} | DeMuenu/ppo-Huggy | null | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | null | 2024-05-01T12:17:29+00:00 |
text-generation | transformers |
> Update @ 2024.05.01: Pre-Release Llama-3-KoEn-8B model & [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)
## Model Details
**Llama-3-KoEn-8B**
Llama-3-KoEn-8B is a continued-pretrained language model based on Llama-3-8B.
This model is trained on a combined Korean and English corpus.
Training was done on TPUv4-256, with warm support from Google's TRC program.
**Note for [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)**
Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released an instruction model named [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview).
It is NOT fine-tuned with any Korean instruction set (hence the `preview` tag), but it is a great starting point for creating new Chat/Instruct models.
**Model developers** Junbum Lee (Beomi)
**Variations** Llama-3-KoEn comes in one size — 8B.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama-3-KoEn
</td>
<td rowspan="2" >Same as *Llama-2-KoEn Dataset
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >XXB+
</td>
<td>Jun, 2023
</td>
</tr>
</table>
**Model Release Date** Pre-release @ 2024.05.01
**Status** This is a static model trained on an offline dataset.
**License** CC-By-NC-SA-4.0 + Llama3 License: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
TBD
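In the meantime, a minimal sketch with `transformers` (standard Llama-3 loading is assumed; the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beomi/Llama-3-KoEn-8B-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# This is a base (not instruction-tuned) checkpoint, so plain text completion is used
inputs = tokenizer("한국의 수도는", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```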
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
**Llama-3-Open-Ko**
```
@article{llama3koen,
title={Llama-3-KoEn},
author={L, Junbum},
year={2024},
url={https://huggingface.co/beomi/Llama-3-KoEn-8B}
}
```
**Original Llama-3**
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
``` | {"language": ["en", "ko"], "license": "cc-by-nc-sa-4.0", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "llama-3-ko"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE"} | beomi/Llama-3-KoEn-8B-preview | null | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-3",
"llama-3-ko",
"conversational",
"en",
"ko",
"arxiv:2310.04799",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T12:18:52+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3601
- Accuracy: 0.8565
- Recall: 0.8565
- F1: 0.8537
- Precision: 0.8631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| No log | 0.9974 | 293 | 0.6645 | 0.7820 | 0.7820 | 0.7661 | 0.7678 |
| No log | 1.9983 | 587 | 0.5493 | 0.8033 | 0.8033 | 0.7897 | 0.7964 |
| No log | 2.9991 | 881 | 0.4242 | 0.8416 | 0.8416 | 0.8380 | 0.8460 |
| No log | 4.0 | 1175 | 0.4124 | 0.8310 | 0.8310 | 0.8288 | 0.8299 |
| No log | 4.9974 | 1468 | 0.3769 | 0.8412 | 0.8412 | 0.8388 | 0.8478 |
| No log | 5.9983 | 1762 | 0.3589 | 0.8501 | 0.8501 | 0.8481 | 0.8582 |
| No log | 6.9991 | 2056 | 0.3503 | 0.8455 | 0.8455 | 0.8456 | 0.8535 |
| No log | 8.0 | 2350 | 0.3400 | 0.8404 | 0.8404 | 0.8416 | 0.8465 |
| No log | 8.9974 | 2643 | 0.3533 | 0.8480 | 0.8480 | 0.8480 | 0.8501 |
| 0.5214 | 9.9745 | 2930 | 0.3358 | 0.8459 | 0.8459 | 0.8460 | 0.8473 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.0a0+81ea7a4
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy", "recall", "f1", "precision"], "base_model": "microsoft/swinv2-tiny-patch4-window8-256", "model-index": [{"name": "swinv2-tiny-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8565346956151554, "name": "Accuracy"}, {"type": "recall", "value": 0.8565346956151554, "name": "Recall"}, {"type": "f1", "value": 0.853731165851545, "name": "F1"}, {"type": "precision", "value": 0.8631033150629456, "name": "Precision"}]}]}]} | Kushagra07/swinv2-tiny-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask | null | [
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:19:50+00:00 |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - naripok/books_LoRA
<Gallery />
## Model description
These are naripok/books_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a cover of TOK book to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/naripok/books_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
A minimal sketch (an assumption, not the card's own snippet): load the LoRA on top of the SDXL base model with the standard `diffusers` API.

```python
import torch
from diffusers import DiffusionPipeline

# Sketch: attach these LoRA weights to the SDXL base pipeline.
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("naripok/books_LoRA")

image = pipeline("a cover of TOK book").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a cover of TOK book", "widget": []} | naripok/books_LoRA | null | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | null | 2024-05-01T12:20:47+00:00 |
text-to-image | diffusers | {} | arqamwadiwala/stable-diffusion-A | null | [
"diffusers",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-05-01T12:20:54+00:00 |
|
automatic-speech-recognition | transformers | {} | srmd/asr_whisper_large_v2_train_shrutilipi_spring_inx | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:23:15+00:00 |
|
text-generation | transformers | {} | cclabadmin/deepseek_base_network | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T12:23:44+00:00 |
|
null | null | {} | sayakpaul/sdxl-orpo-large-beta_orpo-0.05-beta_inner-500-lr-5e-7 | null | [
"region:us"
] | null | 2024-05-01T12:24:06+00:00 |
|
null | null | {"license": "apache-2.0"} | naurunnahansa/healthcare-mistral-7b | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T12:24:44+00:00 |
|
text-generation | transformers | {"license": "apache-2.0"} | TensorSenseAI/videogemma | null | [
"transformers",
"safetensors",
"llava_gemma",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:26:50+00:00 |
|
null | null | {} | AfhamAhmed1/ajrak-design | null | [
"region:us"
] | null | 2024-05-01T12:29:16+00:00 |
|
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | bumblebee-testing/tiny-random-Phi3Model | null | [
"transformers",
"safetensors",
"phi3",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:29:36+00:00 |
token-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | bumblebee-testing/tiny-random-Phi3ForTokenClassification | null | [
"transformers",
"safetensors",
"phi3",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:30:10+00:00 |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | bumblebee-testing/tiny-random-Phi3ForSequenceClassification | null | [
"transformers",
"safetensors",
"phi3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:30:16+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | bumblebee-testing/tiny-random-Phi3ForCausalLM | null | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:30:21+00:00 |
reinforcement-learning | null |
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
| {"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-PixelCopter", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "38.30 +/- 22.65", "name": "mean_reward", "verified": false}]}]}]} | urkidi/Reinforce-PixelCopter | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | null | 2024-05-01T12:30:47+00:00 |
null | null | {} | cxfajar197/content | null | [
"region:us"
] | null | 2024-05-01T12:31:41+00:00 |
|
null | null | Using the WholeClear PST to MSG Converter Tool, the user can efficiently convert the data in their Outlook PST file into an MSG data file. The tool, PST to MSG Converter, makes it simple to convert Outlook PST files to MSG files. Additionally, files are retained exactly as they are throughout the conversion process. Users' Outlook PST files were successfully converted without any issues. It swiftly converts emails from PST to MSG and other formats, including attachments, headers, email data, email formatting, images, hyperlinks, and any other data. The software converts ANSI and UNICODE PST files with a success rate of 100%. Users must use this advanced tool once. which enables you to convert the first few items in each folder of data. The fully licensed versions of this tool offer a variety of capabilities. If you want to know more about this software, then download its free demo version and check its efficiency. The free demo version of this tool allows the user to export the first 25 items per folder.
Visit Here - https://www.wholeclear.com/pst/msg/ | {} | wholeclearsoftware/pst-to-msg-converter | null | [
"region:us"
] | null | 2024-05-01T12:32:29+00:00 |
text-to-image | diffusers |
# controlnet-YuhoLiang/model_out
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
You can find some example images below.
prompt: High-quality close-up dslr photo of man wearing a hat with trees in the background

prompt: Girl smiling, professional dslr photograph, dark background, studio lights, high quality

prompt: Portrait of a clown face, oil on canvas, bittersweet expression

| {"license": "creativeml-openrail-m", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "controlnet"], "base_model": "stabilityai/stable-diffusion-2-1-base", "inference": true} | YuhoLiang/model_out | null | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-01T12:32:58+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | UdomsakC/mistralai-Code-Instruct-Finetune-test | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T12:33:36+00:00 |
null | null | {} | AoiKazama/pretrain_mistral_9b-bf16 | null | [
"region:us"
] | null | 2024-05-01T12:33:40+00:00 |
|
image-classification | transformers | {} | Bombex/dinov2-base-finetuned-oxford | null | [
"transformers",
"safetensors",
"dinov2",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:33:40+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | np28work/openchat-function-calling | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:34:04+00:00 |
null | null | {} | TachyHealthResearch/Llama3-70B-Medical-Finetune_QA_MCQ | null | [
"region:us"
] | null | 2024-05-01T12:34:30+00:00 |
|
null | null | # aixsatoshi-Llama-3-8b-Cosmopedia-japanese-gguf
This is a gguf-format conversion of [Llama-3-8b-Cosmopedia-japanese](https://huggingface.co/aixsatoshi/Llama-3-8b-Cosmopedia-japanese), published by aixsatoshi.
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## Other models
[mmnga/aixsatoshi-Llama-3-8b-Cosmopedia-japanese-gguf](https://huggingface.co/mmnga/aixsatoshi-Llama-3-8b-Cosmopedia-japanese-gguf)
[mmnga/aixsatoshi-Honyaku-7b-v2-gguf](https://huggingface.co/mmnga/aixsatoshi-Honyaku-7b-v2-gguf)
[mmnga/aixsatoshi-Honyaku-Multi-Translator-Swallow-ms7b-gguf](https://huggingface.co/mmnga/aixsatoshi-Honyaku-Multi-Translator-Swallow-ms7b-gguf)
[mmnga/aixsatoshi-Swallow-MX-8x7b-NVE-chatvector-Mixtral-instruct-v2-gguf](https://huggingface.co/mmnga/aixsatoshi-Swallow-MX-8x7b-NVE-chatvector-Mixtral-instruct-v2-gguf)
[mmnga/aixsatoshi-Mixtral-8x7B-ja-sft-ChatbotArenaJAcalm2-bnb4bit](https://huggingface.co/mmnga/aixsatoshi-Mixtral-8x7B-ja-sft-ChatbotArenaJAcalm2-bnb4bit)
[mmnga/aixsatoshi-calm2-7b-chat-7b-moe-gguf](https://huggingface.co/mmnga/aixsatoshi-calm2-7b-chat-7b-moe-gguf)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'aixsatoshi-Llama-3-8b-Cosmopedia-japanese-q4_0.gguf' -n 128 -p "<|begin_of_text|><|start_header_id|>user <|end_header_id|>\n\nこんにちわ<|eot_id|><|start_header_id|>assistant <|end_header_id|>\n\n"
``` | {"language": ["en", "ja"], "license": "llama3", "tags": ["llama3"], "datasets": ["TFMC/imatrix-dataset-for-japanese-llm"]} | mmnga/aixsatoshi-Llama-3-8b-Cosmopedia-japanese-gguf | null | [
"gguf",
"llama3",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:llama3",
"region:us"
] | null | 2024-05-01T12:36:43+00:00 |
video-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vivit-b-16x2-kinetics400-finetuned-ucf101-subset
This model is a fine-tuned version of [google/vivit-b-16x2-kinetics400](https://huggingface.co/google/vivit-b-16x2-kinetics400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0712
- Accuracy: 0.9730
## Model description
More information needed
## Intended uses & limitations
More information needed
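A minimal inference sketch (an assumption, not from the original card), following the standard ViViT usage in `transformers` — this checkpoint family expects 32 frames per clip:

```python
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification

repo = "kkumtori/vivit-b-16x2-kinetics400-finetuned-ucf101-subset"
processor = VivitImageProcessor.from_pretrained(repo)
model = VivitForVideoClassification.from_pretrained(repo)

# Placeholder clip: replace with 32 real frames (H x W x 3 uint8 arrays).
video = [np.zeros((224, 224, 3), dtype=np.uint8) for _ in range(32)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```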
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0044 | 0.25 | 300 | 0.0061 | 1.0 |
| 0.0008 | 1.25 | 600 | 0.1386 | 0.9459 |
| 0.0067 | 2.25 | 900 | 0.0455 | 0.9730 |
| 0.0005 | 3.25 | 1200 | 0.0712 | 0.9730 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vivit-b-16x2-kinetics400", "model-index": [{"name": "vivit-b-16x2-kinetics400-finetuned-ucf101-subset", "results": []}]} | kkumtori/vivit-b-16x2-kinetics400-finetuned-ucf101-subset | null | [
"transformers",
"tensorboard",
"safetensors",
"vivit",
"video-classification",
"generated_from_trainer",
"base_model:google/vivit-b-16x2-kinetics400",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:37:42+00:00 |
null | diffusers |
More information on all the CLI arguments and the environment is available on your [`wandb` run page](https://wandb.ai/sayakpaul/diffusion-orpo-lora-sdxl/runs/fdwunkc2).
| {} | sayakpaul/sdxl-orpo-large-beta_orpo-0.005-beta_inner-500-lr-5e-6 | null | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-05-01T12:37:47+00:00 |
audio-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-bass-classifier8
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the augmented_bass_sounds dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0147
- Accuracy: 0.9991
## Model description
More information needed
## Intended uses & limitations
More information needed
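A minimal inference sketch (an assumption, not from the original card) using the `transformers` audio-classification pipeline:

```python
from transformers import pipeline

# Sketch: classify a single audio file with the fine-tuned checkpoint.
classifier = pipeline("audio-classification", model="TheDuyx/distilhubert-bass-classifier8")
print(classifier("bass_note.wav"))  # "bass_note.wav" is a placeholder path to a local audio file
```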
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2108 | 1.0 | 479 | 0.0723 | 0.9841 |
| 0.1248 | 2.0 | 958 | 0.0992 | 0.9921 |
| 0.04 | 3.0 | 1437 | 0.0538 | 0.9968 |
| 0.0 | 4.0 | 1916 | 0.0207 | 0.9988 |
| 0.0 | 5.0 | 2395 | 0.0147 | 0.9991 |
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["TheDuyx/augmented_bass_sounds"], "metrics": ["accuracy"], "base_model": "ntu-spml/distilhubert", "model-index": [{"name": "distilhubert-bass-classifier8", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "augmented_bass_sounds", "type": "TheDuyx/augmented_bass_sounds"}, "metrics": [{"type": "accuracy", "value": 0.9991181657848325, "name": "Accuracy"}]}]}]} | TheDuyx/distilhubert-bass-classifier8 | null | [
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:TheDuyx/augmented_bass_sounds",
"base_model:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:38:15+00:00 |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "codellama/CodeLlama-7b-hf"} | CHAFIK12/tsql_to_plsql | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-hf",
"region:us"
] | null | 2024-05-01T12:40:03+00:00 |
null | null | {} | Porameht/typhoon-7b-customer-support-th | null | [
"region:us"
] | null | 2024-05-01T12:41:53+00:00 |
|
null | null | {} | asude55/android-emotion-A | null | [
"region:us"
] | null | 2024-05-01T12:42:04+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** Crysiss
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
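A minimal loading sketch (an assumption, not from the original card): if the repository holds a merged model, it should load through Unsloth's `FastLanguageModel`; if it is a LoRA adapter only, the PEFT loading path would be needed instead.

```python
from unsloth import FastLanguageModel

# Sketch: load the checkpoint in 4-bit for inference.
model, tokenizer = FastLanguageModel.from_pretrained(
    "Crysiss/llama3-8B-welfare-unsloth-last-3",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster inference mode
```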
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | Crysiss/llama3-8B-welfare-unsloth-last-3 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:42:09+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NDD-pagekit_test-content
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2488
- Accuracy: 0.7039
- F1: 0.6933
- Precision: 0.7010
- Recall: 0.7039
## Model description
More information needed
## Intended uses & limitations
More information needed
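A minimal inference sketch (an assumption, not from the original card) using the `transformers` text-classification pipeline:

```python
from transformers import pipeline

# Sketch: classify a text snippet with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="lgk03/NDD-pagekit_test-content")
print(classifier("Example page content to classify"))  # placeholder input text
```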
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0792 | 0.9993 | 684 | 1.2142 | 0.6896 | 0.6681 | 0.6923 | 0.6896 |
| 0.0301 | 1.9985 | 1368 | 1.2488 | 0.7039 | 0.6933 | 0.7010 | 0.7039 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "NDD-pagekit_test-content", "results": []}]} | lgk03/NDD-pagekit_test-content | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T12:43:14+00:00 |