modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
mithlesh/llama2_finetuned | mithlesh | 2023-12-29T13:08:43Z | 0 | 0 | null | [
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"region:us"
] | text-generation | 2023-12-29T13:08:40Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
ejobs/joda_lora_ro_benefits | ejobs | 2023-12-29T13:07:45Z | 2 | 0 | peft | [
"peft",
"safetensors",
"region:us"
] | null | 2023-12-29T13:05:09Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
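Below is a minimal, assumption-laden sketch of how an adapter trained with this config might be loaded for inference. The card does not name the base model, so `BASE_MODEL_ID` is a placeholder, and the `BitsAndBytesConfig` is simply reconstructed from the values listed above.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Reconstruct the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)

# Placeholder: the card does not state which base model this adapter
# was trained on, so substitute the correct repository id here.
BASE_MODEL_ID = "BASE_MODEL_ID"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter from this repository on top of the quantized base.
model = PeftModel.from_pretrained(base_model, "ejobs/joda_lora_ro_benefits")
model.eval()
```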
### Framework versions
- PEFT 0.5.0
|
ejobs/joda_lora_lt_benefits | ejobs | 2023-12-29T13:06:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"region:us"
] | null | 2023-12-29T13:04:03Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
Falah/git-base-ante2024 | Falah | 2023-12-29T12:57:57Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"git",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-12-29T12:03:01Z | ---
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
model-index:
- name: git-base-ante2024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-ante2024
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8122
- Wer Score: 3.0415
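The card does not include a usage snippet, so the following is only an assumed inference sketch using the generic `transformers` image-to-text pipeline; the sample image URL is just an example input.
```python
from transformers import pipeline

# Hypothetical usage sketch: load the fine-tuned GIT checkpoint for captioning.
captioner = pipeline("image-to-text", model="Falah/git-base-ante2024")

# Any local path or image URL works here; this one is only an example.
result = captioner("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(result[0]["generated_text"])
```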
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
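As a rough illustration only, the list above corresponds approximately to the following `transformers` `TrainingArguments`; the output directory and the `fp16` flag are inferred rather than copied from the actual training script.
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
# The optimizer (Adam, betas=(0.9, 0.999), epsilon=1e-08) is the Trainer default.
training_args = TrainingArguments(
    output_dir="git-base-ante2024",   # assumed output directory
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,    # effective train batch size 16
    num_train_epochs=50,
    lr_scheduler_type="linear",
    fp16=True,                        # "Native AMP" mixed precision
)
```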
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Score |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 7.184 | 20.0 | 50 | 5.2098 | 3.5253 |
| 4.3031 | 40.0 | 100 | 3.8122 | 3.0415 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.9.0
- Tokenizers 0.15.0
|
micdestefano/rl_course_vizdoom_health_gathering_supreme | micdestefano | 2023-12-29T12:55:01Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-29T10:49:28Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 17.30 +/- 5.59
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r micdestefano/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
ejobs/joda_lora_hu_responsibilities | ejobs | 2023-12-29T12:49:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"region:us"
] | null | 2023-12-29T12:47:35Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
ejobs/joda_lora_et_benefits | ejobs | 2023-12-29T12:42:45Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2023-12-29T12:40:43Z | ---
library_name: peft
base_model: meta-llama/Llama-2-13b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
JoseAntonioBarrancoBernabe/mi-super-modelo | JoseAntonioBarrancoBernabe | 2023-12-29T12:35:43Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-29T12:26:09Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mi-super-modelo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mi-super-modelo
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5355
- Accuracy: 0.3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.68 | 0.5 | 5 | 1.5472 | 0.3 |
| 1.687 | 1.0 | 10 | 1.5355 | 0.3 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
CyberHarem/raphiel_shiraha_ainsworth_gabrieldropout | CyberHarem | 2023-12-29T12:25:05Z | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/raphiel_shiraha_ainsworth_gabrieldropout",
"license:mit",
"region:us"
] | text-to-image | 2023-12-29T12:08:44Z | ---
license: mit
datasets:
- CyberHarem/raphiel_shiraha_ainsworth_gabrieldropout
pipeline_tag: text-to-image
tags:
- art
---
# Lora of raphiel_shiraha_ainsworth_gabrieldropout
This model was trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
The base model used during training is [NAI](https://huggingface.co/deepghs/animefull-latest), and the base model used for generating preview images is [Meina/MeinaMix_V11](https://huggingface.co/Meina/MeinaMix_V11).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 5980, you need to download `5980/raphiel_shiraha_ainsworth_gabrieldropout.pt` as the embedding and `5980/raphiel_shiraha_ainsworth_gabrieldropout.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
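A hedged loading sketch with 🧨 diffusers is shown below. It assumes the safetensors LoRA and the pt embedding are in formats that `load_lora_weights` and `load_textual_inversion` can read (the training used HCP-Diffusion, so compatibility is not guaranteed) and uses the preview base model mentioned above.
```python
import torch
from diffusers import StableDiffusionPipeline

repo = "CyberHarem/raphiel_shiraha_ainsworth_gabrieldropout"

# Base model used for the preview images in this card.
pipe = StableDiffusionPipeline.from_pretrained(
    "Meina/MeinaMix_V11", torch_dtype=torch.float16
).to("cuda")

# Load the step-5980 LoRA weights (safetensors) ...
pipe.load_lora_weights(repo, weight_name="5980/raphiel_shiraha_ainsworth_gabrieldropout.safetensors")

# ... and the matching embedding (pt), bound to the trigger token.
pipe.load_textual_inversion(
    repo,
    weight_name="5980/raphiel_shiraha_ainsworth_gabrieldropout.pt",
    token="raphiel_shiraha_ainsworth_gabrieldropout",
)

image = pipe("raphiel_shiraha_ainsworth_gabrieldropout, smile, long_hair").images[0]
image.save("preview.png")
```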
**The best step we recommend is 5980**, with a score of 0.988. The trigger words are:
1. `raphiel_shiraha_ainsworth_gabrieldropout`
2. `long_hair, hair_ornament, cross_hair_ornament, smile, ahoge, bow, blush, closed_eyes, light_purple_hair, yellow_eyes, hair_bow`
Use of this model is not recommended for the following groups, with our apologies:
1. Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
2. Individuals who are facing the application scenarios with high demands for accuracy in recreating character outfits.
3. Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
4. Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
5. Individuals who find the generated image content offensive to their values.
These are available steps:
| Steps | Score | Download |
|:---------|:----------|:------------------------------------------------------------------|
| 6900 | 0.984 | [Download](6900/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 6440 | 0.982 | [Download](6440/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| **5980** | **0.988** | [**Download**](5980/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 5520 | 0.988 | [Download](5520/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 5060 | 0.979 | [Download](5060/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 4600 | 0.979 | [Download](4600/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 4140 | 0.981 | [Download](4140/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 3680 | 0.975 | [Download](3680/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 3220 | 0.973 | [Download](3220/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 2760 | 0.972 | [Download](2760/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 2300 | 0.976 | [Download](2300/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 1840 | 0.967 | [Download](1840/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 1380 | 0.964 | [Download](1380/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 920 | 0.941 | [Download](920/raphiel_shiraha_ainsworth_gabrieldropout.zip) |
| 460 | 0.900 | [Download](460/raphiel_shiraha_ainsworth_gabrieldropout.zip) |

*(Preview images for each step, covering pattern_1 through pattern_14 plus bikini, bondage, free, maid, miko, nude, nude2, suit and yukata, are available under that step's `previews/` directory.)*
|
aaryangupta/dog-at-river | aaryangupta | 2023-12-29T12:23:10Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-12-29T12:18:41Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Dog-at-River Dreambooth model trained by aaryangupta following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 101340
Sample pictures of this concept are included in the repository.
|
yuanhuaisen/autotrain-khvt4-4vmox | yuanhuaisen | 2023-12-29T12:15:52Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:yuanhuaisen/autotrain-data-autotrain-khvt4-4vmox",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-29T12:15:14Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- yuanhuaisen/autotrain-data-autotrain-khvt4-4vmox
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 0.18480469286441803
- f1: 0.962962962962963
- precision: 1.0
- recall: 0.9285714285714286
- auc: 1.0
- accuracy: 0.96
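The card does not include a usage snippet; a minimal, assumed inference sketch with the generic `transformers` image-classification pipeline might look like this (the sample image URL matches one of the widget examples above):
```python
from transformers import pipeline

# Hypothetical usage sketch for this AutoTrain image classifier.
classifier = pipeline("image-classification", model="yuanhuaisen/autotrain-khvt4-4vmox")

# Any image path or URL works; this is one of the widget sample images.
preds = classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds)
```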
|
prassu10/flan-t5-base-insight2 | prassu10 | 2023-12-29T12:09:00Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-29T12:08:19Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-insight2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-insight2
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9687
- Rouge1: 24.6641
- Rouge2: 19.0841
- Rougel: 23.6027
- Rougelsum: 24.2082
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.5604 | 1.0 | 172 | 1.1982 | 23.0727 | 15.1292 | 21.7613 | 22.3596 | 19.0 |
| 1.2717 | 2.0 | 344 | 1.0511 | 24.16 | 18.1468 | 22.8935 | 23.6175 | 19.0 |
| 1.1373 | 3.0 | 516 | 1.0002 | 24.5212 | 18.789 | 23.4331 | 24.0315 | 19.0 |
| 1.0895 | 4.0 | 688 | 0.9699 | 24.7531 | 19.3582 | 23.694 | 24.3304 | 19.0 |
| 1.0176 | 5.0 | 860 | 0.9687 | 24.6641 | 19.0841 | 23.6027 | 24.2082 | 19.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
Oraz7/Cyra | Oraz7 | 2023-12-29T12:01:33Z | 0 | 0 | transformers | [
"transformers",
"chemistry",
"legal",
"tk",
"tr",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-12-29T11:54:23Z | ---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- tk
- tr
metrics:
- accuracy
library_name: transformers
tags:
- chemistry
- legal
--- |
LoneStriker/Aurora-Nights-103B-v1.0-5.0bpw-h6-exl2 | LoneStriker | 2023-12-29T11:55:39Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2307.11760",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-29T11:26:53Z | ---
license: llama2
language:
- en
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/aGUU0O9.png" alt="AuroraNights" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
### Overview
This model is a frankenmerge of Aurora-Nights-70B-v1.0 with itself. (See that model card for details on what's in the blend.) It features 120 layers and should weigh in at 103b parameters.
This model is a successor to Rogue Rose and improves upon it. Aurora follows instructions better but retains excellent creative writing and ERP abilities.
This model turned out quite uncensored. *You are responsible for whatever you do with it.*
This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas.
### Sampler Tips
I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/).
I find this model performs reasonably well at 8192 context but you will likely get better results at 4096.
Experiment with any and all of the settings below, but trust me on a few points:
* This model loves high temperatures with Min-P.
* Frequency Penalty set to 0.01 is like adding a dash of salt to the dish. Go higher at your own peril. 0 is fine too, but gosh I like 0.01.
If you save the below settings as a .json file, you can import them directly into Silly Tavern.
```
{
"temp": 1.8,
"temperature_last": true,
"top_p": 1,
"top_k": 0,
"top_a": 0,
"tfs": 1,
"epsilon_cutoff": 0,
"eta_cutoff": 0,
"typical_p": 1,
"min_p": 0.5,
"rep_pen": 1,
"rep_pen_range": 0,
"no_repeat_ngram_size": 0,
"penalty_alpha": 0,
"num_beams": 1,
"length_penalty": 1,
"min_length": 0,
"encoder_rep_pen": 1,
"freq_pen": 0.01,
"presence_pen": 0,
"do_sample": true,
"early_stopping": false,
"add_bos_token": true,
"truncation_length": 2048,
"ban_eos_token": false,
"skip_special_tokens": true,
"streaming": true,
"mirostat_mode": 0,
"mirostat_tau": 5,
"mirostat_eta": 0.1,
"guidance_scale": 1,
"negative_prompt": "",
"grammar_string": "",
"banned_tokens": "",
"ignore_eos_token_aphrodite": false,
"spaces_between_special_tokens_aphrodite": true,
"type": "ooba",
"legacy_api": false,
"sampler_order": [
6,
0,
1,
3,
4,
2,
5
],
"n": 1,
"rep_pen_size": 0,
"genamt": 550,
"max_length": 4096
}
```
### Prompt Tips
Try the following prompt template as a starting point for your own customizations. 'This is very important to my career' is from [this paper](https://arxiv.org/abs/2307.11760).
I recommend the Tulu instruction format.
*This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts.
If you save this as a .json file, you can import it directly into Silly Tavern.
```
{
"wrap": true,
"names": false,
"system_prompt": "You are an award-winning fiction author engaging in an uncensored literary roleplaying experience between a human (me) and a creative, obedient, and helpful AI assistant (you). It is VITAL that you follow all these instructions because this roleplay is very important to my career and I'll be fired from my job if it isn't good.\nROLEPLAY RULES:\n> It is vital that ONLY the human provides responses for {{user}}.\n> Reply as {{char}} using authentic, vivid, varied, explicit, accurate, creative, fresh, and descriptive responses that follow ALL provided narrative instructions. Stay in character as {{char}} and only write text for {{char}}.\n> Describe the scene and {{char}}'s sensory perceptions in vivid detail to immerse the reader in the story.\n> Keep your responses scoped to the current story beat and current scene.\n> Consider all available contextual information when narrating so that all the story details remain consistent between scenes.\n> Demonstrate {{char}}'s goals and motivations, and use subtle cues to hint at {{char}}'s mental state unless delving into {{char}}'s thoughts satisfies an explicit instruction or enhances the vividness of the scene.\n> When quoting {{char}}'s internal first-person thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose the thoughts in asterisks like this*. Only use asterisks for thoughts.\n> Use strong action verbs and varied descriptions to produce dynamic, high-quality prose.",
"system_sequence": "",
"stop_sequence": "",
"input_sequence": "<|user|>\n",
"output_sequence": "<|assistant|>\n",
"separator_sequence": "",
"macro": true,
"names_force_groups": true,
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"first_output_sequence": "",
"last_output_sequence": "<|assistant (provide varied, creative, and vivid narration; follow all narrative instructions; include all necessary possessive pronouns; maintain consistent story details; only roleplay as {{char}})|>\n",
"activation_regex": "",
"name": "Aurora-Nights"
}
```
### Licence and usage restrictions
Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b).
### Tools Used
* [mergekit](https://github.com/cg123/mergekit)
```
slices:
- sources:
- model: aurora-nights-70b-v1.0
layer_range: [0, 40] # 40
- sources:
- model: aurora-nights-70b-v1.0
layer_range: [20, 60] # 40
- sources:
- model: aurora-nights-70b-v1.0
layer_range: [40, 80] # 40
merge_method: passthrough
dtype: float16
``` |
C0uchP0tat0/laws_rugpt3medium_finetune | C0uchP0tat0 | 2023-12-29T11:51:07Z | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:ai-forever/rugpt3large_based_on_gpt2",
"base_model:finetune:ai-forever/rugpt3large_based_on_gpt2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-29T10:21:34Z | ---
base_model: ai-forever/rugpt3large_based_on_gpt2
tags:
- generated_from_trainer
model-index:
- name: laws_rugpt3medium_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# laws_rugpt3medium_finetune
This model is a fine-tuned version of [ai-forever/rugpt3large_based_on_gpt2](https://huggingface.co/ai-forever/rugpt3large_based_on_gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4051
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3772 | 0.23 | 25 | 3.3796 |
| 3.4598 | 0.46 | 50 | 3.3744 |
| 3.3981 | 0.69 | 75 | 3.3587 |
| 3.4916 | 0.93 | 100 | 3.3322 |
| 3.4166 | 1.16 | 125 | 3.2980 |
| 3.3829 | 1.39 | 150 | 3.2626 |
| 3.2992 | 1.62 | 175 | 3.2285 |
| 3.3237 | 1.85 | 200 | 3.1936 |
| 3.2106 | 2.08 | 225 | 3.1601 |
| 3.1947 | 2.31 | 250 | 3.1311 |
| 3.2183 | 2.55 | 275 | 3.0988 |
| 3.2124 | 2.78 | 300 | 3.0620 |
| 3.1725 | 3.01 | 325 | 3.0266 |
| 3.078 | 3.24 | 350 | 2.9931 |
| 3.0387 | 3.47 | 375 | 2.9595 |
| 3.0944 | 3.7 | 400 | 2.9194 |
| 3.049 | 3.94 | 425 | 2.8818 |
| 2.9818 | 4.17 | 450 | 2.8438 |
| 2.9278 | 4.4 | 475 | 2.8074 |
| 2.9172 | 4.63 | 500 | 2.7671 |
| 2.8432 | 4.86 | 525 | 2.7233 |
| 2.8499 | 5.09 | 550 | 2.6794 |
| 2.76 | 5.32 | 575 | 2.6310 |
| 2.7197 | 5.56 | 600 | 2.5857 |
| 2.793 | 5.79 | 625 | 2.5458 |
| 2.6895 | 6.02 | 650 | 2.4991 |
| 2.651 | 6.25 | 675 | 2.4496 |
| 2.5484 | 6.48 | 700 | 2.4014 |
| 2.5728 | 6.71 | 725 | 2.3471 |
| 2.4865 | 6.94 | 750 | 2.2953 |
| 2.4388 | 7.18 | 775 | 2.2369 |
| 2.4137 | 7.41 | 800 | 2.1799 |
| 2.3262 | 7.64 | 825 | 2.1285 |
| 2.3043 | 7.87 | 850 | 2.0836 |
| 2.2541 | 8.1 | 875 | 2.0299 |
| 2.1348 | 8.33 | 900 | 1.9730 |
| 2.1904 | 8.56 | 925 | 1.9211 |
| 2.0869 | 8.8 | 950 | 1.8719 |
| 2.1606 | 9.03 | 975 | 1.8210 |
| 1.9323 | 9.26 | 1000 | 1.7712 |
| 1.9892 | 9.49 | 1025 | 1.7254 |
| 1.9407 | 9.72 | 1050 | 1.6757 |
| 1.8791 | 9.95 | 1075 | 1.6214 |
| 1.7791 | 10.19 | 1100 | 1.5702 |
| 1.7523 | 10.42 | 1125 | 1.5284 |
| 1.7336 | 10.65 | 1150 | 1.4912 |
| 1.7709 | 10.88 | 1175 | 1.4475 |
| 1.6533 | 11.11 | 1200 | 1.3941 |
| 1.5671 | 11.34 | 1225 | 1.3536 |
| 1.5394 | 11.57 | 1250 | 1.3209 |
| 1.6085 | 11.81 | 1275 | 1.2921 |
| 1.5465 | 12.04 | 1300 | 1.2599 |
| 1.4172 | 12.27 | 1325 | 1.2292 |
| 1.4422 | 12.5 | 1350 | 1.1927 |
| 1.4708 | 12.73 | 1375 | 1.1563 |
| 1.3859 | 12.96 | 1400 | 1.1260 |
| 1.2036 | 13.19 | 1425 | 1.0932 |
| 1.3393 | 13.43 | 1450 | 1.0697 |
| 1.3203 | 13.66 | 1475 | 1.0376 |
| 1.2902 | 13.89 | 1500 | 1.0084 |
| 1.2356 | 14.12 | 1525 | 0.9760 |
| 1.2329 | 14.35 | 1550 | 0.9531 |
| 1.2039 | 14.58 | 1575 | 0.9343 |
| 1.1521 | 14.81 | 1600 | 0.9084 |
| 1.0754 | 15.05 | 1625 | 0.8786 |
| 1.0786 | 15.28 | 1650 | 0.8620 |
| 1.1052 | 15.51 | 1675 | 0.8395 |
| 1.0765 | 15.74 | 1700 | 0.8192 |
| 1.0817 | 15.97 | 1725 | 0.8002 |
| 1.0285 | 16.2 | 1750 | 0.7715 |
| 1.0313 | 16.44 | 1775 | 0.7612 |
| 0.9682 | 16.67 | 1800 | 0.7458 |
| 1.0025 | 16.9 | 1825 | 0.7267 |
| 0.9516 | 17.13 | 1850 | 0.7052 |
| 0.9475 | 17.36 | 1875 | 0.6952 |
| 0.8851 | 17.59 | 1900 | 0.6745 |
| 0.9463 | 17.82 | 1925 | 0.6602 |
| 0.8937 | 18.06 | 1950 | 0.6436 |
| 0.8135 | 18.29 | 1975 | 0.6316 |
| 0.8738 | 18.52 | 2000 | 0.6172 |
| 0.8585 | 18.75 | 2025 | 0.6072 |
| 0.8782 | 18.98 | 2050 | 0.5968 |
| 0.8324 | 19.21 | 2075 | 0.5789 |
| 0.7818 | 19.44 | 2100 | 0.5688 |
| 0.8375 | 19.68 | 2125 | 0.5602 |
| 0.7838 | 19.91 | 2150 | 0.5498 |
| 0.8015 | 20.14 | 2175 | 0.5369 |
| 0.724 | 20.37 | 2200 | 0.5299 |
| 0.7298 | 20.6 | 2225 | 0.5233 |
| 0.8079 | 20.83 | 2250 | 0.5141 |
| 0.77 | 21.06 | 2275 | 0.5058 |
| 0.7299 | 21.3 | 2300 | 0.4995 |
| 0.7152 | 21.53 | 2325 | 0.4893 |
| 0.6905 | 21.76 | 2350 | 0.4882 |
| 0.7492 | 21.99 | 2375 | 0.4779 |
| 0.6817 | 22.22 | 2400 | 0.4681 |
| 0.6893 | 22.45 | 2425 | 0.4652 |
| 0.7098 | 22.69 | 2450 | 0.4611 |
| 0.7063 | 22.92 | 2475 | 0.4582 |
| 0.6562 | 23.15 | 2500 | 0.4511 |
| 0.7083 | 23.38 | 2525 | 0.4474 |
| 0.6684 | 23.61 | 2550 | 0.4438 |
| 0.6688 | 23.84 | 2575 | 0.4398 |
| 0.6561 | 24.07 | 2600 | 0.4334 |
| 0.6664 | 24.31 | 2625 | 0.4318 |
| 0.6418 | 24.54 | 2650 | 0.4294 |
| 0.6723 | 24.77 | 2675 | 0.4249 |
| 0.6164 | 25.0 | 2700 | 0.4215 |
| 0.6348 | 25.23 | 2725 | 0.4203 |
| 0.6464 | 25.46 | 2750 | 0.4182 |
| 0.6392 | 25.69 | 2775 | 0.4171 |
| 0.6186 | 25.93 | 2800 | 0.4156 |
| 0.6447 | 26.16 | 2825 | 0.4138 |
| 0.6445 | 26.39 | 2850 | 0.4114 |
| 0.6037 | 26.62 | 2875 | 0.4109 |
| 0.6074 | 26.85 | 2900 | 0.4099 |
| 0.6509 | 27.08 | 2925 | 0.4092 |
| 0.6416 | 27.31 | 2950 | 0.4082 |
| 0.6391 | 27.55 | 2975 | 0.4075 |
| 0.594 | 27.78 | 3000 | 0.4071 |
| 0.6231 | 28.01 | 3025 | 0.4066 |
| 0.6151 | 28.24 | 3050 | 0.4061 |
| 0.6464 | 28.47 | 3075 | 0.4056 |
| 0.6024 | 28.7 | 3100 | 0.4054 |
| 0.6277 | 28.94 | 3125 | 0.4052 |
| 0.6017 | 29.17 | 3150 | 0.4052 |
| 0.6226 | 29.4 | 3175 | 0.4051 |
| 0.6084 | 29.63 | 3200 | 0.4051 |
| 0.639 | 29.86 | 3225 | 0.4051 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
linoyts/linoy_face_75_woman_no_snr | linoyts | 2023-12-29T11:49:16Z | 4 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2023-12-29T11:26:51Z | ---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'photo of a <s0><s1>'
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - LinoyTsaban/linoy_face_75_woman_no_snr
<Gallery />
## Model description
### These are LinoyTsaban/linoy_face_75_woman_no_snr LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('LinoyTsaban/linoy_face_75_woman_no_snr', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='LinoyTsaban/linoy_face_75_woman_no_snr', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('photo of a <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/LinoyTsaban/linoy_face_75_woman_no_snr/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/LinoyTsaban/linoy_face_75_woman_no_snr/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/LinoyTsaban/linoy_face_75_woman_no_snr/tree/main).
## Details
The weights were trained using the [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
jtmichels/jmimgdiff | jtmichels | 2023-12-29T11:45:03Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-29T11:02:28Z | ---
license: creativeml-openrail-m
---
|
li-ping/results_8_can_eos__checkpoint_8500 | li-ping | 2023-12-29T11:35:45Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:baichuan-inc/Baichuan2-7B-Chat",
"base_model:adapter:baichuan-inc/Baichuan2-7B-Chat",
"region:us"
] | null | 2023-12-29T11:11:16Z | ---
library_name: peft
base_model: baichuan-inc/Baichuan2-7B-Chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
jags/yolov8_model_segmentation-set | jags | 2023-12-29T11:34:13Z | 0 | 21 | null | [
"RyzenAI",
"object-detection",
"vision",
"YOLO",
"Pytorch",
"dataset:COCO",
"license:apache-2.0",
"region:us"
] | object-detection | 2023-12-09T04:19:18Z | ---
license: apache-2.0
tags:
- RyzenAI
- object-detection
- vision
- YOLO
- Pytorch
datasets:
- COCO
metrics:
- mAP
---
# YOLOv8m model trained on COCO for use in comfyUI nodes
YOLOv8m is the medium version of YOLOv8 model trained on COCO object detection (118k annotated images) at resolution 640x640.
It was released in [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics).
We have developed a modified version that is supported by ComfyUI nodes, as shown in this git repo.
For more information, please see the GitHub repository and its wiki: [https://github.com/jags111/ComfyUI_Jags_VectorMagic](https://github.com/jags111/ComfyUI_Jags_VectorMagic)
We provide separate nodes for detection and segmentation.
## Model description
Ultralytics YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility.
YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification and pose estimation tasks.
## Intended uses & limitations
You can use the raw model for object detection. See the [GitHub repository](https://github.com/jags111/ComfyUI_Jags_VectorMagic) to find all available YOLOv8 models.
## How to use
Please note that all nodes ending with, or containing, 'seg' in their name perform segmentation and produce masks.
All models with 'det' in their name perform detection on the image.
We generally focus on providing 8m-based models, but you can also use the 8l, 8n, or 8x models, as these are trained on different image sizes and image sets.
We generally use the COCO and OpenImages sets from Ultralytics for our ComfyUI testing.
More information and general-purpose models for other inference tasks will be added as they are trained.
For more details, check out [VECTOR MAGIC](https://github.com/jags111/ComfyUI_Jags_VectorMagic).
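Outside of ComfyUI, models of this family can also be exercised directly with the `ultralytics` Python API. The sketch below is only illustrative: the weight file name `yolov8m-seg.pt` and the image path are placeholders, not files shipped in this repository.
```python
from ultralytics import YOLO

# "yolov8m-seg.pt" is a stand-in: point this at whichever -seg or -det
# checkpoint you have downloaded from this repository.
model = YOLO("yolov8m-seg.pt")

# Run inference; -seg models return masks, -det models return boxes only.
results = model("image.jpg")
for r in results:
    print(r.boxes)   # detected bounding boxes
    print(r.masks)   # segmentation masks (None for detection-only models)
```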
### Installation
Follow instructions provided in the github pages for installation of the nodes and put the models in the required model folder.
### Conclusion
```bibtex
@software{yolov8_ultralytics,
author = {Glenn Jocher and Ayush Chaurasia and Jing Qiu},
title = {Ultralytics YOLOv8},
version = {8.0.0},
year = {2023},
url = {https://github.com/ultralytics/ultralytics},
orcid = {0000-0001-5950-6979, 0000-0002-7603-6750, 0000-0003-3783-7069},
license = {AGPL-3.0}
}
``` |
Wembo/ppo-Pyramids | Wembo | 2023-12-29T11:19:17Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-12-29T11:19:14Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Wembo/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Glitschiu/dqn-SpaceInvadersNoFrameskip-v4 | Glitschiu | 2023-12-29T11:05:43Z | 3 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-29T11:05:17Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 224.00 +/- 86.60
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Glitschiu -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Glitschiu -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Glitschiu
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ntc-ai/SDXL-LoRA-slider.kool-aid-man | ntc-ai | 2023-12-29T10:54:02Z | 10 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2023-12-29T10:53:59Z |
---
language:
- en
thumbnail: "images/evaluate/kool-aid man...person/kool-aid man_17_3.0.png"
widget:
- text: kool-aid man
output:
url: images/kool-aid man_17_3.0.png
- text: kool-aid man
output:
url: images/kool-aid man_19_3.0.png
- text: kool-aid man
output:
url: images/kool-aid man_20_3.0.png
- text: kool-aid man
output:
url: images/kool-aid man_21_3.0.png
- text: kool-aid man
output:
url: images/kool-aid man_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "kool-aid man"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - kool-aid man (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/kool-aid man_17_-3.0.png" width=256 height=256 /> | <img src="images/kool-aid man_17_0.0.png" width=256 height=256 /> | <img src="images/kool-aid man_17_3.0.png" width=256 height=256 /> |
| <img src="images/kool-aid man_19_-3.0.png" width=256 height=256 /> | <img src="images/kool-aid man_19_0.0.png" width=256 height=256 /> | <img src="images/kool-aid man_19_3.0.png" width=256 height=256 /> |
| <img src="images/kool-aid man_20_-3.0.png" width=256 height=256 /> | <img src="images/kool-aid man_20_0.0.png" width=256 height=256 /> | <img src="images/kool-aid man_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
kool-aid man
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.kool-aid-man', weight_name='kool-aid man.safetensors', adapter_name="kool-aid man")
# Activate the LoRA
pipe.set_adapters(["kool-aid man"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, kool-aid man"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 710+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
lutzvdb/PixelCopter | lutzvdb | 2023-12-29T10:52:31Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-29T10:52:10Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.70 +/- 17.56
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
CocoyTheDancer/Vindooly | CocoyTheDancer | 2023-12-29T10:45:01Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2023-12-29T10:43:13Z | ---
license: other
license_name: fortniterobloxguy
license_link: LICENSE
---
|
vivekdugale/zephyr-7B-alpha-GPTQ_chatbot_helios_E30 | vivekdugale | 2023-12-29T10:43:47Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | 2023-12-29T09:43:03Z | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-alpha-GPTQ
model-index:
- name: zephyr-7B-alpha-GPTQ_chatbot_helios_E30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7B-alpha-GPTQ_chatbot_helios_E30
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.16.0
- Tokenizers 0.15.0 |
exyl-drl-learn/q-Taxi-v3 | exyl-drl-learn | 2023-12-29T10:23:16Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-29T10:23:01Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.79
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="exyl-drl-learn/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Lazycuber/llama-7b-airoborohermes-test | Lazycuber | 2023-12-29T10:19:22Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-17T17:15:32Z | it sucks at following instructions |
teilomillet/MiniMerlin-3B | teilomillet | 2023-12-29T10:01:37Z | 1,488 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"fr",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-15T11:34:04Z | ---
license: apache-2.0
language:
- fr
- en
tags:
- code
widget:
- text: "<s> [|User|] Comment faire un bon plat ? </s>[|Assistant|]"
---
SFT on a custom synthetic French dataset (2k samples), ranging from general question answering and problem solving to code questions.
It's a proof of concept (POC).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
model = AutoModelForCausalLM.from_pretrained(
"teilomillet/MiniMerlin-3B",
revision="0.1",
return_dict=True,
torch_dtype=torch.bfloat16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained("teilomillet/MiniMerlin-3B")
tokenizer.pad_token = tokenizer.eos_token
text = "[|User|] Comment faire un bon plat ? </s>[|Assistant|]"
inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=800)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
``` |
numblilbug/whole-rubert-sentiment-model | numblilbug | 2023-12-29T09:55:45Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-29T09:50:17Z | ---
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: whole-rubert-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whole-rubert-sentiment-model
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2374
- Accuracy: 0.8982456140350877
- F1: 0.8307219662058372
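A minimal inference sketch (assuming the standard `transformers` text-classification pipeline; the example review is not from the card and the meaning of the output labels depends on the model config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="numblilbug/whole-rubert-sentiment-model")
# Example Russian review ("Great place, I recommend it to everyone!")
print(classifier("Отличное место, всем рекомендую!"))
```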
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
numblilbug/service-rubert-sentiment-model | numblilbug | 2023-12-29T09:50:23Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-29T09:46:50Z | ---
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: service-rubert-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# service-rubert-sentiment-model
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3496
- Accuracy: 0.8596491228070176
- F1: 0.7807017543859649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
numblilbug/price-rubert-sentiment-model | numblilbug | 2023-12-29T09:46:05Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-29T09:43:33Z | ---
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: price-rubert-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# price-rubert-sentiment-model
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3621
- Accuracy: 0.8385964912280702
- F1: 0.7478070175438596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
thebrownfrog/q-FrozenLake-v1-8x8-Slippery | thebrownfrog | 2023-12-29T09:44:24Z | 0 | 0 | null | [
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-29T09:44:19Z | ---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="thebrownfrog/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
numblilbug/interior-rubert-sentiment-model | numblilbug | 2023-12-29T09:42:30Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:DeepPavlov/rubert-base-cased",
"base_model:finetune:DeepPavlov/rubert-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-29T09:16:01Z | ---
base_model: DeepPavlov/rubert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: interior-rubert-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# interior-rubert-sentiment-model
This model is a fine-tuned version of [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3641
- Accuracy: 0.8666666666666667
- F1: 0.7916666666666666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
vorstcavry/luckyrealistic | vorstcavry | 2023-12-29T09:30:25Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-05T06:37:00Z | ---
license: creativeml-openrail-m
---
|
thebrownfrog/q-FrozenLake-v1-4x4-noSlippery | thebrownfrog | 2023-12-29T09:25:32Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-29T09:22:14Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="thebrownfrog/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Optikan/V3_Image_classification__points_durs__google_vit-base-patch16-224-in21k | Optikan | 2023-12-29T09:24:47Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-28T16:22:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: V3_Image_classification__points_durs__google_vit-base-patch16-224-in21k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V3_Image_classification__points_durs__google_vit-base-patch16-224-in21k
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0411
- Accuracy: 0.9927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6667 | 1.0 | 15 | 0.5893 | 0.9121 |
| 0.4394 | 2.0 | 30 | 0.3294 | 0.9487 |
| 0.2685 | 3.0 | 45 | 0.1365 | 0.9707 |
| 0.0936 | 4.0 | 60 | 0.0752 | 0.9853 |
| 0.0517 | 5.0 | 75 | 0.0553 | 0.9890 |
| 0.0436 | 6.0 | 90 | 0.0556 | 0.9890 |
| 0.018 | 7.0 | 105 | 0.0557 | 0.9890 |
| 0.0189 | 8.0 | 120 | 0.0457 | 0.9890 |
| 0.013 | 9.0 | 135 | 0.0343 | 0.9927 |
| 0.0115 | 10.0 | 150 | 0.0270 | 0.9963 |
| 0.0101 | 11.0 | 165 | 0.0355 | 0.9927 |
| 0.0085 | 12.0 | 180 | 0.0356 | 0.9927 |
| 0.0079 | 13.0 | 195 | 0.0259 | 0.9963 |
| 0.0069 | 14.0 | 210 | 0.0345 | 0.9927 |
| 0.0066 | 15.0 | 225 | 0.0360 | 0.9927 |
| 0.0061 | 16.0 | 240 | 0.0359 | 0.9927 |
| 0.0059 | 17.0 | 255 | 0.0360 | 0.9927 |
| 0.0055 | 18.0 | 270 | 0.0368 | 0.9927 |
| 0.0054 | 19.0 | 285 | 0.0375 | 0.9927 |
| 0.0051 | 20.0 | 300 | 0.0375 | 0.9927 |
| 0.0049 | 21.0 | 315 | 0.0380 | 0.9927 |
| 0.0047 | 22.0 | 330 | 0.0380 | 0.9927 |
| 0.0046 | 23.0 | 345 | 0.0383 | 0.9927 |
| 0.0044 | 24.0 | 360 | 0.0386 | 0.9927 |
| 0.0043 | 25.0 | 375 | 0.0388 | 0.9927 |
| 0.0041 | 26.0 | 390 | 0.0388 | 0.9927 |
| 0.0041 | 27.0 | 405 | 0.0391 | 0.9927 |
| 0.0039 | 28.0 | 420 | 0.0392 | 0.9927 |
| 0.0038 | 29.0 | 435 | 0.0396 | 0.9927 |
| 0.0037 | 30.0 | 450 | 0.0397 | 0.9927 |
| 0.0037 | 31.0 | 465 | 0.0397 | 0.9927 |
| 0.0036 | 32.0 | 480 | 0.0399 | 0.9927 |
| 0.0035 | 33.0 | 495 | 0.0401 | 0.9927 |
| 0.0034 | 34.0 | 510 | 0.0402 | 0.9927 |
| 0.0034 | 35.0 | 525 | 0.0403 | 0.9927 |
| 0.0033 | 36.0 | 540 | 0.0403 | 0.9927 |
| 0.0033 | 37.0 | 555 | 0.0405 | 0.9927 |
| 0.0032 | 38.0 | 570 | 0.0406 | 0.9927 |
| 0.0032 | 39.0 | 585 | 0.0406 | 0.9927 |
| 0.0031 | 40.0 | 600 | 0.0407 | 0.9927 |
| 0.0031 | 41.0 | 615 | 0.0408 | 0.9927 |
| 0.0031 | 42.0 | 630 | 0.0408 | 0.9927 |
| 0.003 | 43.0 | 645 | 0.0409 | 0.9927 |
| 0.003 | 44.0 | 660 | 0.0410 | 0.9927 |
| 0.003 | 45.0 | 675 | 0.0410 | 0.9927 |
| 0.003 | 46.0 | 690 | 0.0410 | 0.9927 |
| 0.003 | 47.0 | 705 | 0.0410 | 0.9927 |
| 0.0029 | 48.0 | 720 | 0.0411 | 0.9927 |
| 0.0029 | 49.0 | 735 | 0.0411 | 0.9927 |
| 0.0029 | 50.0 | 750 | 0.0411 | 0.9927 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.13.3
|
thebrownfrog/q-FrozenLake-v1-8x8-noSlippery | thebrownfrog | 2023-12-29T09:19:35Z | 0 | 0 | null | [
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-29T09:19:30Z | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="thebrownfrog/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bookworm88/vit224-2 | bookworm88 | 2023-12-29T09:18:03Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-29T09:14:28Z | - Usage: Image classification (see the inference sketch below)
- Project: Cover quilt
- Labels:
  - 11covered_with_a_quilt_and_only_the_head_exposed
  - 12covered_with_a_quilt_and_exposed_other_parts_of_the_body
- Indicators:
  - Accuracy: 0.9591836734693877
  - Precision: 0.9545454545454546
  - Recall: 0.9655172413793103
  - F1 Score: 0.9583333333333333
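A minimal inference sketch (assuming the standard `transformers` image-classification pipeline; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="bookworm88/vit224-2")
print(classifier("bed_photo.jpg"))  # placeholder path to a local image
```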
|
jagadeeshjagat/whisper-small-mr | jagadeeshjagat | 2023-12-29T09:11:18Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_14_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-12-29T05:57:10Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- common_voice_14_0
metrics:
- wer
model-index:
- name: whisper-small-mr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_14_0
type: common_voice_14_0
config: mr
split: test
args: mr
metrics:
- name: Wer
type: wer
value: 18.86409227683049
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-mr
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_14_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2674
- Wer Ortho: 47.4208
- Wer: 18.8641
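A minimal transcription sketch (assuming the standard `transformers` automatic-speech-recognition pipeline; the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jagadeeshjagat/whisper-small-mr")
print(asr("sample_marathi.wav"))  # placeholder path to a local audio file
```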
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.192 | 1.99 | 500 | 0.2674 | 47.4208 | 18.8641 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
Outer-Spatial/nathan2 | Outer-Spatial | 2023-12-29T09:10:57Z | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:Outer-Spatial/nathan2",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2023-12-29T08:24:53Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: abbeyy
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: false
datasets:
- Outer-Spatial/nathan2
---
# LoRA DreamBooth - Outer-Spatial/nathan2
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, trained with @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
abbeyy
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
pipe.to(device)
# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
'Outer-Spatial/nathan2',
weight_name = specific_safetensors,
# use_auth_token = True
)
prompt = "A majestic abbeyy jumping from a big stone at night"
image = pipe(
prompt=prompt,
num_inference_steps=50,
cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
|
Wei-K/D3_HW_model | Wei-K | 2023-12-29T08:47:12Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-12-29T06:19:18Z | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: D3_HW_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# D3_HW_model
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
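Since this repository contains PEFT adapter weights for Llama-2-7b-chat, a minimal (untested) loading sketch could look like this; access to the gated base model is required:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"  # gated; requires accepted access on the Hub
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
# Attach the adapter weights from this repo on top of the base model
model = PeftModel.from_pretrained(base, "Wei-K/D3_HW_model")
```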
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0 |
Patcas/v3_my_awesome_t5 | Patcas | 2023-12-29T08:34:40Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-base",
"base_model:finetune:Salesforce/codet5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-12-29T07:04:17Z | ---
license: apache-2.0
base_model: Salesforce/codet5-base
tags:
- generated_from_trainer
model-index:
- name: v3_my_awesome_t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v3_my_awesome_t5
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2913
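A minimal (untested) generation sketch; since the training data and expected input format are not documented, the example input below is only a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Patcas/v3_my_awesome_t5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")  # placeholder input
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```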
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 1.2882 |
| No log | 2.0 | 460 | 1.2610 |
| 0.059 | 3.0 | 690 | 1.2528 |
| 0.059 | 4.0 | 920 | 1.2447 |
| 0.0497 | 5.0 | 1150 | 1.2749 |
| 0.0497 | 6.0 | 1380 | 1.2795 |
| 0.0319 | 7.0 | 1610 | 1.2746 |
| 0.0319 | 8.0 | 1840 | 1.2849 |
| 0.0265 | 9.0 | 2070 | 1.2919 |
| 0.0265 | 10.0 | 2300 | 1.2913 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
Dharsh1821/Mistral_model_new_data | Dharsh1821 | 2023-12-29T08:21:29Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-12-29T07:50:02Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: Mistral_model_new_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral_model_new_data
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 350
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0 |
DavidCollier/dqn-SpaceInvadersNoFrameskip-v4 | DavidCollier | 2023-12-29T08:08:56Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-06-12T11:04:27Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 508.50 +/- 168.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DavidCollier -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga DavidCollier -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga DavidCollier
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
LoneStriker/Synthia-v3.0-11B-8.0bpw-h8-exl2 | LoneStriker | 2023-12-29T08:07:13Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-29T07:57:46Z | ---
license: apache-2.0
---
# Synthia-v3.0-11B
SynthIA-v3.0-11B (Synthetic Intelligent Agent) is a general-purpose Large Language Model (LLM). It was trained on the Synthia-v3.0 dataset, which contains the Generalized Tree-of-Thought prompt plus 10 additional long-form system contexts.
This model was trained on the principles of LIMA (Less Is More for Alignment) paper, with ~10K high-quality samples generated using GPT-4-Turbo. It has been fine-tuned for instruction following as well as having long-form conversations.
<br>

<br>
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```
## Evaluation
We evaluated Synthia-v3.0-11B on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Section to follow.
||||
|:------:|:--------:|:-------:|
|**Task**|**Metric**|**Value**|
|*arc_challenge*|acc_norm||
|*hellaswag*|acc_norm||
|*mmlu*|acc_norm||
|*truthfulqa_mc*|mc2||
|**Total Average**|-|||
<br>
## Example Usage
### Here is prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the difference between an Orca, Dolphin and a Seal?
ASSISTANT:
```
### Below shows a code example on how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/Synthia-v3.0-11B"
output_file_path = "./Synthia-v3.0-11B-conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
answer = string.split("USER:")[0].strip()
return f"{answer}"
conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}"
json_data = {"prompt": user_input, "answer": answer}
## Save your conversation
with open(output_file_path, "a") as output_file:
output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
|
LoneStriker/TinyLlama-1.1B-intermediate-step-1431k-3T-5.0bpw-h6-exl2 | LoneStriker | 2023-12-29T08:06:45Z | 4 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-29T08:04:35Z | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs. The training has started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Collection
This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40| 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80| 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11|
| TinyLlama-1.1B-intermediate-step-240k-503b| 503B | 49.56 |31.40 |55.80 |26.54 |48.32 |56.91 |69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| **TinyLlama-1.1B-intermediate-step-1195k-token-2.5T** | **2.5T** | **58.96** | **34.40** | **58.72** | **31.91** | **56.78** | **63.21** | **73.07** | **53.86**|
|
alexshengzhili/llava-lora-dpo-1227lrvtail2000_from_sft-self-sampled-beta-0.5-lr-5e-5-avg-False-epoch-3 | alexshengzhili | 2023-12-29T08:04:16Z | 1 | 0 | peft | [
"peft",
"llava",
"region:us"
] | null | 2023-12-29T08:03:50Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
jeiku/Rosa_v3_3B | jeiku | 2023-12-29T08:00:27Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"custom_code",
"en",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-12-28T21:48:53Z | ---
license: other
language:
- en
---
This model is pretty much the pinnacle of what I can achieve in this class. I used my v2 merge as a base, then fine-tuned three models over the v2 base using custom datasets. I then merged the resulting models with the DARE-TIES method.
This is intended to serve as my AI companion and you may not like some of my tuning.
GGUF available here: https://huggingface.co/jeiku/Rosa_v3_3B_GGUF |
ntc-ai/SDXL-LoRA-slider.in-a-blizzard | ntc-ai | 2023-12-29T07:53:49Z | 41 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2023-12-29T07:53:46Z |
---
language:
- en
thumbnail: "images/evaluate/in a blizzard.../in a blizzard_17_3.0.png"
widget:
- text: in a blizzard
output:
url: images/in a blizzard_17_3.0.png
- text: in a blizzard
output:
url: images/in a blizzard_19_3.0.png
- text: in a blizzard
output:
url: images/in a blizzard_20_3.0.png
- text: in a blizzard
output:
url: images/in a blizzard_21_3.0.png
- text: in a blizzard
output:
url: images/in a blizzard_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "in a blizzard"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - in a blizzard (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/in a blizzard_17_-3.0.png" width=256 height=256 /> | <img src="images/in a blizzard_17_0.0.png" width=256 height=256 /> | <img src="images/in a blizzard_17_3.0.png" width=256 height=256 /> |
| <img src="images/in a blizzard_19_-3.0.png" width=256 height=256 /> | <img src="images/in a blizzard_19_0.0.png" width=256 height=256 /> | <img src="images/in a blizzard_19_3.0.png" width=256 height=256 /> |
| <img src="images/in a blizzard_20_-3.0.png" width=256 height=256 /> | <img src="images/in a blizzard_20_0.0.png" width=256 height=256 /> | <img src="images/in a blizzard_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
in a blizzard
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.in-a-blizzard', weight_name='in a blizzard.safetensors', adapter_name="in a blizzard")
# Activate the LoRA
pipe.set_adapters(["in a blizzard"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, in a blizzard"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 710+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
PranavHonrao/a2c-PandaReachDense-v3 | PranavHonrao | 2023-12-29T07:51:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-29T07:47:40Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.26 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal, untested sketch (the checkpoint filename is assumed from the repo name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename is assumed; check the repo's file list if loading fails.
checkpoint = load_from_hub("PranavHonrao/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
LoneStriker/Synthia-v3.0-11B-6.0bpw-h6-exl2 | LoneStriker | 2023-12-29T07:49:01Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-29T07:45:38Z | ---
license: apache-2.0
---
# Synthia-v3.0-11B
SynthIA-v3.0-11B (Synthetic Intelligent Agent) is a general-purpose Large Language Model (LLM). It was trained on the Synthia-v3.0 dataset, which contains the Generalized Tree-of-Thought prompt plus 10 additional long-form system contexts.
This model was trained on the principles of LIMA (Less Is More for Alignment) paper, with ~10K high-quality samples generated using GPT-4-Turbo. It has been fine-tuned for instruction following as well as having long-form conversations.
<br>

<br>
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```
## Evaluation
We evaluated Synthia-v3.0-11B on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Section to follow.
||||
|:------:|:--------:|:-------:|
|**Task**|**Metric**|**Value**|
|*arc_challenge*|acc_norm||
|*hellaswag*|acc_norm||
|*mmlu*|acc_norm||
|*truthfulqa_mc*|mc2||
|**Total Average**|-|||
<br>
## Example Usage
### Here is prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the difference between an Orca, Dolphin and a Seal?
ASSISTANT:
```
### Below shows a code example on how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/Synthia-v3.0-11B"
output_file_path = "./Synthia-v3.0-11B-conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
answer = string.split("USER:")[0].strip()
return f"{answer}"
conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}"
json_data = {"prompt": user_input, "answer": answer}
## Save your conversation
with open(output_file_path, "a") as output_file:
output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
|
rahulmanuwas/joritdae | rahulmanuwas | 2023-12-29T07:48:42Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt_neox",
"text-generation",
"autotrain",
"doi:10.57967/hf/1534",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-29T07:32:29Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
mtgv/MobileLLaMA-2.7B-Base | mtgv | 2023-12-29T07:48:11Z | 37 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2312.16886",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-28T08:40:55Z | ---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
tags:
- llama
---
# Model Summery
MobileLLaMA-2.7B-Base is a Transformer with 2.7 billion parameters. We downscale LLaMA to facilitate off-the-shelf deployment. To make our work reproducible, all the models are trained on 1.3T tokens from the [RedPajama v1](https://www.together.ai/blog/redpajama) dataset only. This benefits further research by enabling controlled experiments.
We extensively assess our models on two standard natural language benchmarks, for language understanding and common-sense reasoning respectively. Experimental results show that our MobileLLaMA is on par with the most recent open-source models. MobileLLaMA 2.7B also demonstrates performance competitive with INCITE 3B (V1) and OpenLLaMA 3B (V1), while being about 40% faster than OpenLLaMA 3B on a Snapdragon 888 CPU, as shown in Table 5 of our [paper](https://arxiv.org/abs/2312.16886).
# Model Sources
- Repository: https://github.com/Meituan-AutoML/MobileVLM
- Paper: https://arxiv.org/abs/2312.16886
# How to Get Started with the Model
Model weights can be loaded with Hugging Face Transformers. Examples can be found at [Github](https://github.com/Meituan-AutoML/MobileVLM).
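A minimal loading sketch (not taken from the official examples; the prompt and generation settings here are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mtgv/MobileLLaMA-2.7B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```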
# Datasets and Training
For our training details, please refer to our paper in section 4.1: [MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices](https://arxiv.org/abs/2312.16886). |
sheldonzhu/Reinforce-v1 | sheldonzhu | 2023-12-29T07:35:30Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-29T07:35:20Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Mihir1108/deci-finetuned-json | Mihir1108 | 2023-12-29T07:31:48Z | 19 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deci",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"custom_code",
"base_model:Deci/DeciLM-7B-instruct",
"base_model:finetune:Deci/DeciLM-7B-instruct",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-12-29T06:47:51Z | ---
license: apache-2.0
base_model: Deci/DeciLM-7B-instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: deci-finetuned-json
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deci-finetuned-json
This model is a fine-tuned version of [Deci/DeciLM-7B-instruct](https://huggingface.co/Deci/DeciLM-7B-instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
jeiku/NSFW_Niche_Phi_2 | jeiku | 2023-12-29T07:27:17Z | 5 | 1 | peft | [
"peft",
"safetensors",
"en",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2023-12-29T00:42:58Z | ---
library_name: peft
base_model: models/TheBloke_phi-2-GPTQ
license: cc-by-sa-4.0
language:
- en
--- |
LoneStriker/Synthia-v3.0-11B-4.0bpw-h6-exl2 | LoneStriker | 2023-12-29T07:26:45Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-29T07:24:22Z | ---
license: apache-2.0
---
# Synthia-v3.0-11B
SynthIA-v3.0-11B (Synthetic Intelligent Agent) is a general-purpose Large Language Model (LLM). It was trained on the Synthia-v3.0 dataset, which contains the Generalized Tree-of-Thought prompt plus 10 additional long-form system contexts.
This model was trained on the principles of LIMA (Less Is More for Alignment) paper, with ~10K high-quality samples generated using GPT-4-Turbo. It has been fine-tuned for instruction following as well as having long-form conversations.
<br>

<br>
To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```
## Evaluation
We evaluated Synthia-v3.0-11B on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Section to follow.
||||
|:------:|:--------:|:-------:|
|**Task**|**Metric**|**Value**|
|*arc_challenge*|acc_norm||
|*hellaswag*|acc_norm||
|*mmlu*|acc_norm||
|*truthfulqa_mc*|mc2||
|**Total Average**|-||
<br>
## Example Usage
### Here is prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the difference between an Orca, Dolphin and a Seal?
ASSISTANT:
```
### Below shows a code example on how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/Synthia-v3.0-11B"
output_file_path = "./Synthia-v3.0-11B-conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
answer = string.split("USER:")[0].strip()
return f"{answer}"
conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}"
json_data = {"prompt": user_input, "answer": answer}
## Save your conversation
with open(output_file_path, "a") as output_file:
output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
|
Sarthak279/Intent | Sarthak279 | 2023-12-29T07:22:34Z | 5 | 0 | transformers | [
"transformers",
"tf",
"deberta",
"text-classification",
"generated_from_keras_callback",
"base_model:microsoft/deberta-base",
"base_model:finetune:microsoft/deberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-27T11:30:13Z | ---
license: mit
base_model: microsoft/deberta-base
tags:
- generated_from_keras_callback
model-index:
- name: INTENT
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# INTENT
This is an intent-classification model for customer order-service enquiries. It covers:
- placing, tracking, and managing orders
- payment issues such as making payments and requesting refunds
- delivery options, shipping addresses, and account management (editing, updating, and deleting an account)
- contacting a human agent
- submitting complaints
The model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0084
- Train Accuracy: 0.9987
- Validation Loss: 0.0019
- Validation Accuracy: 0.9995
- Epoch: 1
## Model description
Enter a customer query and the model returns the label number for the corresponding intent (a usage sketch follows the label list below):
- 'get_refund': 0,
- 'change_order': 1,
- 'contact_customer_service': 2,
- 'recover_password': 3,
- 'create_account': 4,
- 'check_invoices': 5,
- 'payment_issue': 6,
- 'place_order': 7,
- 'delete_account': 8,
- 'set_up_shipping_address': 9,
- 'delivery_options': 10,
- 'track_order': 11,
- 'change_shipping_address': 12,
- 'track_refund': 13,
- 'check_refund_policy': 14,
- 'review': 15,
- 'contact_human_agent': 16,
- 'delivery_period': 17,
- 'edit_account': 18,
- 'registration_problems': 19,
- 'get_invoice': 20,
- 'switch_account': 21,
- 'cancel_order': 22,
- 'check_payment_methods': 23,
- 'check_cancellation_fee': 24,
- 'newsletter_subscription': 25,
- 'complaint': 26
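A minimal usage sketch (untested; assumes TensorFlow and `transformers` are installed and that `Sarthak279/Intent` is the repo id of this model):
```python
# Minimal sketch: classify a customer query into one of the intent labels listed above.
# Uses the TensorFlow weights published in this repo.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
model_id = "Sarthak279/Intent"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)
inputs = tokenizer("Where is my package? I ordered it a week ago.", return_tensors="tf")
logits = model(**inputs).logits
label_id = int(tf.argmax(logits, axis=-1)[0])
print(label_id)  # e.g. 11 -> 'track_order' according to the mapping above
```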
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2690, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2113 | 0.9544 | 0.0056 | 0.9995 | 0 |
| 0.0084 | 0.9987 | 0.0019 | 0.9995 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0
|
lokaspire/deci-finetuned-alpaca-cleaned | lokaspire | 2023-12-29T07:10:51Z | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"custom_code",
"base_model:Deci/DeciLM-6b-instruct",
"base_model:finetune:Deci/DeciLM-6b-instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-28T11:03:25Z | ---
license: other
base_model: Deci/DeciLM-6b-instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: deci-finetuned-alpaca-cleaned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deci-finetuned-alpaca-cleaned
This model is a fine-tuned version of [Deci/DeciLM-6b-instruct](https://huggingface.co/Deci/DeciLM-6b-instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
aaneesai/openai-whisper-tiny-LORA-train_data_clustered_low_tiny-text_trg | aaneesai | 2023-12-29T06:55:20Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-tiny",
"base_model:adapter:openai/whisper-tiny",
"region:us"
] | null | 2023-12-29T06:55:19Z | ---
library_name: peft
base_model: openai/whisper-tiny
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
aaneesai/openai-whisper-tiny-LORA-train_data_norefer_low_1000_tiny-text_trg | aaneesai | 2023-12-29T06:53:40Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-tiny",
"base_model:adapter:openai/whisper-tiny",
"region:us"
] | null | 2023-12-29T06:53:39Z | ---
library_name: peft
base_model: openai/whisper-tiny
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
sujayC66/mistral-support-chatbot_helios_E1 | sujayC66 | 2023-12-29T06:43:09Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-12-29T05:05:22Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-support-chatbot_helios_E1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-support-chatbot_helios_E1
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.16.0
- Tokenizers 0.15.0 |
mike-ravkine/WizardCoder-15B-V1.0-GGUF | mike-ravkine | 2023-12-29T06:36:17Z | 93 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2023-12-29T06:16:08Z | This is [https://huggingface.co/WizardLM/WizardCoder-15B-V1.0](WizardCoder-15B-V1.0) quantized to GGUF with llama.cpp b1698
k-quants are not supported by starcoder, they can be created but inference does not work |
Wembo/Reinforce-PixelCopter | Wembo | 2023-12-29T06:32:40Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-27T03:07:35Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 32.00 +/- 19.49
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Crystalcareai/WasalesGen | Crystalcareai | 2023-12-29T06:30:41Z | 2 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-12-28T23:21:56Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
MJJK/finetuning-sentiment-model-3000-samples | MJJK | 2023-12-29T06:25:06Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-29T05:33:42Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.88
- name: F1
type: f1
value: 0.881578947368421
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3090
- Accuracy: 0.88
- F1: 0.8816
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.36.1
- Pytorch 2.1.2+cpu
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mtgv/MobileLLaMA-1.4B-Base | mtgv | 2023-12-29T06:11:55Z | 160 | 18 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2312.16886",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-28T09:52:27Z | ---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
tags:
- llama
---
# Model Summary
MobileLLaMA-1.4B-Base is a Transformer with 1.4 billion parameters. We downscale LLaMA to facilitate off-the-shelf deployment. To make our work reproducible, all
the models are trained on 1.3T tokens from the [RedPajama v1](https://www.together.ai/blog/redpajama) dataset only. This benefits further research by enabling controlled experiments.
We extensively assess our models on two standard natural language benchmarks, for language understanding and common-sense reasoning respectively. Experimental results show that our
MobileLLaMA 1.4B is on par with the most recent open-source models.
# Model Sources
- Repository: https://github.com/Meituan-AutoML/MobileVLM
- Paper: https://arxiv.org/abs/2312.16886
# How to Get Started with the Model
Model weights can be loaded with Hugging Face Transformers. Examples can be found at [Github](https://github.com/Meituan-AutoML/MobileVLM).
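As a quick illustration, here is a minimal (untested) loading sketch with Transformers; refer to the GitHub repository above for the official examples. It assumes a CUDA GPU is available; drop `.to("cuda")` and the `float16` dtype to run on CPU.
```python
# Minimal sketch: load MobileLLaMA-1.4B-Base and generate a short continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mtgv/MobileLLaMA-1.4B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
inputs = tokenizer("The capital of France is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```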
# Training Details
please refer to our paper in section 4.1: [MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices](https://arxiv.org/pdf/2312.16886.pdf). |
TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T | TinyLlama | 2023-12-29T06:04:50Z | 1,656 | 52 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-11T06:13:09Z | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Collection
This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40| 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80| 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11|
| TinyLlama-1.1B-intermediate-step-240k-503b| 503B | 49.56 |31.40 |55.80 |26.54 |48.32 |56.91 |69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| **TinyLlama-1.1B-intermediate-step-1195k-token-2.5T** | **2.5T** | **58.96** | **34.40** | **58.72** | **31.91** | **56.78** | **63.21** | **73.07** | **53.86**|
|
pkarypis/zephyr-7b-sft-filtered-95 | pkarypis | 2023-12-29T05:37:11Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-29T04:42:51Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: zephyr-7b-sft-filtered-95
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-filtered-95
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9398
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 512
- total_eval_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1317 | 1.0 | 256 | 0.9398 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
SecondTheFirst/dqn-SpaceInvadersNoFrameskip-v4 | SecondTheFirst | 2023-12-29T05:27:42Z | 7 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-29T05:27:10Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 493.50 +/- 118.64
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SecondTheFirst -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SecondTheFirst -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga SecondTheFirst
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
pitangent-ds/GPT2-genre-detection | pitangent-ds | 2023-12-29T05:25:45Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-classification",
"pytorch",
"GPT2ForSequenceClassification",
"generated_from_trainer",
"en",
"dataset:datadrivenscience/movie-genre-prediction",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-23T09:14:17Z | ---
language:
- en
license: mit
base_model: gpt2
tags:
- pytorch
- GPT2ForSequenceClassification
- generated_from_trainer
metrics:
- accuracy
- matthews_correlation
model-index:
- name: GPT2-genre-detection
results: []
library_name: transformers
pipeline_tag: text-classification
datasets:
- datadrivenscience/movie-genre-prediction
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPT2-genre-detection
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the datadrivenscience/movie-genre-prediction dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5267
- Accuracy: 0.4593
- Matthews Correlation: 0.1010
## Description
[Data-Driven Science](https://huggingface.co/datadrivenscience) organized a competition wherein the goal was to fine-tune a model that can predict the genre of a movie from a given synopsis. There were a total of 10 genres, as follows (a usage sketch follows the mapping):
```json
{
"0": "horror",
"1": "adventure",
"2": "action",
"3": "crime",
"4": "mystery",
"5": "family",
"6": "scifi",
"7": "thriller",
"8": "fantasy",
"9": "romance"
}
```
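A hedged usage sketch (untested) with the `transformers` pipeline; the returned label follows the mapping above, though it may be reported as the genre name or as `LABEL_<id>`, depending on the saved config.
```python
from transformers import pipeline
# Load the fine-tuned genre classifier from this repo.
classifier = pipeline("text-classification", model="pitangent-ds/GPT2-genre-detection")
result = classifier("A detective hunts a serial killer through the rain-soaked streets of the city.")
print(result)  # e.g. [{'label': 'thriller', 'score': ...}] -- the score shown here is illustrative only
```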
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 85855289
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------------------:|
| 1.5765 | 1.0 | 10125 | 1.5562 | 0.4589 | 0.0899 |
| 1.5058 | 2.0 | 20250 | 1.5267 | 0.4593 | 0.1010 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0 |
matbee/Mixtral-8x7B-Instruct-v0.1-structured-responses-adapters | matbee | 2023-12-29T05:15:39Z | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"region:us"
] | null | 2023-12-29T05:15:17Z | ---
library_name: peft
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
mindlywork/PSYPRO1 | mindlywork | 2023-12-29T04:54:53Z | 10 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:unknown",
"region:us"
] | text-to-image | 2023-12-29T04:52:11Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/_53c3c702-8f7f-45d0-91f5-3065a94b211f.jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: >-
PSYNOPRO1, white cute animated characters, soft round edges, looking at
viewer, white colour body, expressive eyes and expression, simple background,
2d, best quality, HD, 4k
license: unknown
---
# PSYPRO1
<Gallery />
## Model description
PSYPRO1
## Trigger words
Use the following phrases in your prompt to trigger the image generation: `PSYNOPRO1`, `white cute animated characters`, `soft round edges`, `looking at viewer`, `white colour body`, `expressive eyes and expression`, `simple background`, `2d`, `best quality`, `HD`, `4k`.
## Download model
Weights for this model are available in Safetensors format.
[Download](/dasdsff/PSYPRO1/tree/main) them in the Files & versions tab.
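A hedged loading sketch with `diffusers` (untested; assumes the LoRA safetensors in this repo uses standard naming so that `load_lora_weights` can resolve it directly from the repo id in the download link above):
```python
import torch
from diffusers import DiffusionPipeline
# Load the SDXL base model listed in the metadata above, then attach this LoRA.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("dasdsff/PSYPRO1")  # repo id taken from the download link above
prompt = (
    "PSYNOPRO1, white cute animated characters, soft round edges, looking at viewer, "
    "white colour body, expressive eyes and expression, simple background, 2d, best quality, HD, 4k"
)
image = pipe(prompt).images[0]
image.save("psypro1.png")
```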
|
Jiayuan32/ppo-Huggy | Jiayuan32 | 2023-12-29T04:46:42Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-12-29T04:46:38Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Jiayuan32/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hwhjones/distilhubertmk36 | hwhjones | 2023-12-29T04:41:03Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-12-29T02:06:03Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8185
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.668 | 1.0 | 225 | 0.5547 | 0.84 |
| 0.4179 | 2.0 | 450 | 0.7757 | 0.74 |
| 0.0298 | 3.0 | 675 | 0.7077 | 0.84 |
| 0.2144 | 4.0 | 900 | 0.6262 | 0.87 |
| 0.0079 | 5.0 | 1125 | 0.6068 | 0.88 |
| 0.0021 | 6.0 | 1350 | 0.8321 | 0.84 |
| 0.0014 | 7.0 | 1575 | 0.9661 | 0.84 |
| 0.0013 | 8.0 | 1800 | 0.7852 | 0.86 |
| 0.001 | 9.0 | 2025 | 0.8126 | 0.86 |
| 0.001 | 10.0 | 2250 | 0.8185 | 0.87 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.4.0
- Tokenizers 0.15.0
|
cfahlgren1/OpenHermes-Llama_3B | cfahlgren1 | 2023-12-29T04:33:06Z | 15 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"instruct",
"finetune",
"alpaca",
"gpt4",
"synthetic data",
"distillation",
"en",
"dataset:teknium/openhermes",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:finetune:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-28T22:44:23Z | ---
base_model: openlm-research/open_llama_3b_v2
tags:
- llama-2
- instruct
- finetune
- alpaca
- gpt4
- synthetic data
- distillation
datasets:
- teknium/openhermes
model-index:
- name: openhermes-7b
results: []
license: apache-2.0
language:
- en
---
# OpenHermes-Llama-3B
*An OpenLlama-3B finetune for role-play, instruction following, and code generation*
<img src="https://cdn-uploads.huggingface.co/production/uploads/648a374f00f7a3374ee64b99/bDQ8ybHVS--CibUf4rvFB.png" width="400" height="400">
*Logo AI Generated by Dalle*
For a guide on how this was fine tuned, check out this blog [post](https://calebfahlgren.com/post/how-to-finetune-a-large-language-model).
# Model description
**OpenHermes-Llama-3B** is a finetune trained on the Hermes dataset, which is fully open source!
It was trained on 15,000 entries of primarily GPT-4 generated data, from open datasets across the AI landscape, including:
- GPTeacher - General Instruct, Roleplay v1, Roleplay v2, and Code Instruct Datasets, by Teknium
- WizardLM (v1, evol_instruct 70k), by WizardLM Team/nlpxucan
- Airoboros GPT-4 (v1.0), by JonDurbin
- Camel-AI's domain expert datasets, by the Camel-AI Team
- CodeAlpaca, by Sahil2801
- GPT4-LLM and Unnatural Instructions, by Microsoft
**OpenHermes-Llama-3B** has much improved coding abilities and general instruction following compared to the OpenLlama-3B base model. You can see the before-and-after output comparison below.
# Example Outputs
### Writing Code
**Q:** Write a Fast API endpoint with ratelimiting
**OpenHermes-Llama-3B Output**
<img src="https://cdn-uploads.huggingface.co/production/uploads/648a374f00f7a3374ee64b99/ZZxphP0JN1Y6HfcGDRwJ1.png" width="600" height="600">
**Base Model Output**
*Create a new FastAPI app
*Create a new endpoint
*Create a new ratelimit middleware
*Create a new ratelimit rule
*Create a new ratelimit backend
*Create a new ratelimit backend
### Planning an Itinerary
**Q:** Create a travel itinery for italy for 3 days for me.
**OpenHermes-Llama-3B Output**
Hi,
I am happy to help you create an itinery for Italy for 3 days.
Here is an example itinery:
Day 1:
Morning: Arrive in Rome, check into your hotel.
Afternoon: Visit the Colosseum, the Roman Forum, and the Pantheon.
Evening: Have a dinner at a local restaurant and enjoy the nightlife in Rome.
Day 2:
Morning: Visit the Vatican Museums, St. Peter's Basilica, and the Sistine Chapel.
Afternoon: Take a guided tour of the Italian capital.
Evening: Have a dinner at a local restaurant and enjoy the nightlife in Rome.
Day 3:
Morning: Visit the Fontana di Trevi, the Pantheon, and the Colosseum.
Afternoon: Take a guided tour of the Italian capital.
Evening: Have a dinner at a local restaurant and enjoy the nightlife in Rome.
You can also add visits to other attractions such as the Capitoline Museums, the Palatine Hill, and the Spanish Steps.
I hope this itinery helps you plan your trip to Italy. Let me know if you have any questions or if you need any further assistance.
**Base Model Output**
Hi,
I am a travel agent and I can help you with your travel itinerary.
I can help you with the following:
1. Flight booking
2. Hotel booking
3. Car rental
4. Tour booking
5. Visa
6. Travel insurance
7. Airport transfers
8. Airport pick up
9. Airport drop off
10. Airport lounge
11. Airport lounge pick up
12. Airport lounge drop off
### Explaining Concepts
**Q:** Explain golang structs
**OpenHermes-Llama-3B Output**
A golang struct is a way to define a data type in Go. It is a collection of fields, each of which is a value of a specific type. The fields are arranged in a hierarchy and are connected using interfaces.
Example:
```go
type User struct {
Name string
Age int
Email string
}
```
In this example, the User struct contains a Name field, an Age field, an Email field, and a few other fields. The Name field is of type string, the Age field is of type int, and the Email field is of type string. The User struct is connected to other structs through interfaces, such as the interface for the User type.
The main advantage of structs in Go is that they can be used to represent data in a way that is more flexible and easier to use than traditional arrays or maps. Structs also allow for more complex data structures to be created without the need for nested arrays or maps.
Here's an example of how to use a User struct:
package main
```go
import (
"fmt"
"strings"
)
type User struct {
Name string
Age int
Email string
}
func main() {
user1 := User{"John", 25, "[email protected]"}
user2 := User{"Mary", 22, "[email protected]"}
// Print the users' names and ages
fmt.Printf("User 1: %s (%d)\nUser 2: %s (%d)\n", user1.Name, user1.Age, user2.Name, user2.Age)
// Print the users' emails separated by commas
email1 := user1.Email
email2 := user2.Email
fmt.Printf("User 1's email: %s\nUser 2's email: %s\n", email1, email2)
}
```
In this example, the User struct is used to represent two different users. The Name field is used to store the users' names, the Age field is used to store their ages, and the Email field is used to store their emails. The main function prints out the users' names and ages, along with their emails separated by commas.
In summary, a golang struct is a way to define data types in Go. It is a collection of fields, each of which is a value of a specific type. The fields are arranged in a hierarchy and are connected through interfaces. The main advantage of structs in Go is that they can be used to represent data in a way that is more flexible and easier to use than traditional arrays or maps.
**Base Model Output**
*A struct is a named collection of variables.
# Benchmark Comparison
## HumanEval
| Model | pass@1 |
| ------------------------------ | ------ |
| openllama-3b-v2 (base) | 8.1% |
| openhermes-llama-3b (finetune) | 9.8% | |
andrew-ye/q-FrozenLake-v1-4x4-Slippery | andrew-ye | 2023-12-29T04:05:56Z | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-29T04:05:54Z | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.03 +/- 0.17
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="andrew-ye/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
andrew-ye/ppo-Huggy | andrew-ye | 2023-12-29T03:32:42Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-12-29T03:32:37Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: andrew-ye/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
austinsilveria/tricksy-opt-6.7b | austinsilveria | 2023-12-29T03:29:58Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2023-12-29T03:13:42Z | ---
license: other
license_name: opt-license
license_link: LICENSE
---
|
hfl/chinese-llama-alpaca-2-awq | hfl | 2023-12-29T03:29:22Z | 0 | 2 | null | [
"zh",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-12-28T02:35:29Z | ---
license: apache-2.0
language:
- zh
- en
---
This folder contains pre-computed search results for Chinese-LLaMA-2 and Chinese-Alpaca-2 models, which are used to generate AWQ (Activation-aware Weight Quantization) models.
**WARNING: These models MUST BE used with the original weights.**
For usage, see:
- AWQ official github page: https://github.com/mit-han-lab/llm-awq
- llama.cpp github page: https://github.com/ggerganov/llama.cpp/tree/master/awq-py
|
JUNstats/bert-base-finetuned-code-classification-sub | JUNstats | 2023-12-29T03:28:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-12T07:21:07Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-finetuned-code-classification-sub
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-finetuned-code-classification-sub
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2428
- Accuracy: 0.9388
- F1: 0.9363
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3823 | 1.0 | 4931 | 0.2608 | 0.9334 | 0.9301 |
| 0.214 | 2.0 | 9862 | 0.2428 | 0.9388 | 0.9363 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.1.0+cu118
- Tokenizers 0.13.3
|
jan-hq/nitro-v1.2-e3 | jan-hq | 2023-12-29T03:07:53Z | 23 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-28T20:00:51Z | ---
license: apache-2.0
language:
- en
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a
>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Prompt template
ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
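For illustration (not part of the original card), here is a minimal Python sketch of how the ChatML prompt above can be assembled by hand; the system message and user prompt are placeholder values.

```python
# Minimal sketch: build the ChatML prompt string shown above.
system_message = "You are a helpful assistant."   # placeholder
user_prompt = "Hello! What can you do?"           # placeholder

chatml_prompt = (
    "<|im_start|>system\n"
    f"{system_message}<|im_end|>\n"
    "<|im_start|>user\n"
    f"{user_prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(chatml_prompt)
```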
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI-compatible endpoints
- 🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life. |
MimirArtificialMind/ppo-Huggy | MimirArtificialMind | 2023-12-29T03:06:44Z | 14 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-12-29T03:06:39Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: MimirArtificialMind/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
wenqiglantz/tts_en_fastpitch | wenqiglantz | 2023-12-29T02:51:25Z | 5 | 0 | nemo | [
"nemo",
"text-to-speech",
"speech",
"audio",
"Transformer",
"NeMo",
"pytorch",
"Riva",
"en",
"dataset:ljspeech",
"license:cc-by-4.0",
"model-index",
"region:us"
] | text-to-speech | 2023-12-29T02:47:57Z | ---
language:
- en
license: cc-by-4.0
library_name: nemo
tags:
- text-to-speech
- speech
- audio
- Transformer
- NeMo
- pytorch
- Riva
datasets:
- ljspeech
model-index:
- name: tts_en_fastpitch
results:
- task:
type: automatic-speech-recognition
dataset:
name: Librispeech (clean)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- type: wer
value: 8.1
name: WER
---
## Model Overview
<DESCRIBE IN ONE LINE THE MODEL AND ITS USE>
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.tts as nemo_tts  # FastPitch is a TTS (spectrogram generator) model, so it lives in the tts collection
asr_model = nemo_tts.models.FastPitchModel.from_pretrained("wenqiglantz/tts_en_fastpitch")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="wenqiglantz/tts_en_fastpitch" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16000 KHz Mono-channel Audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
<ADD SOME INFORMATION ABOUT THE ARCHITECTURE>
## Training
<ADD INFORMATION ABOUT HOW THE MODEL WAS TRAINED - HOW MANY EPOCHS, AMOUNT OF COMPUTE ETC>
### Datasets
<LIST THE NAME AND SPLITS OF DATASETS USED TO TRAIN THIS MODEL (ALONG WITH LANGUAGE AND ANY ADDITIONAL INFORMATION)>
## Performance
<LIST THE SCORES OF THE MODEL -
OR
USE THE Hugging Face Evaluate Library TO UPLOAD METRICS>
## Limitations
<DECLARE ANY POTENTIAL LIMITATIONS OF THE MODEL>
Eg:
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## References
<ADD ANY REFERENCES HERE AS NEEDED>
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
|
kjlkjl/swin-tiny-patch4-window7-224 | kjlkjl | 2023-12-29T02:41:23Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-29T02:37:43Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.14285714285714285
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1087
- Accuracy: 0.1429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 2.1404 | 0.0714 |
| No log | 2.0 | 2 | 2.1244 | 0.1429 |
| No log | 3.0 | 3 | 2.1087 | 0.1429 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
wenqiglantz/stt_zh_citrinet_1024_gamma_0_25 | wenqiglantz | 2023-12-29T02:38:45Z | 4 | 0 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"Citrinet",
"Transformer",
"NeMo",
"pytorch",
"Riva",
"en",
"dataset:aishell_2",
"license:cc-by-4.0",
"model-index",
"region:us"
] | automatic-speech-recognition | 2023-12-29T02:28:11Z | ---
language:
- en
license: cc-by-4.0
library_name: nemo
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- Citrinet
- Transformer
- NeMo
- pytorch
- Riva
datasets:
- aishell_2
model-index:
- name: stt_zh_citrinet_1024_gamma_0_25
results:
- task:
type: automatic-speech-recognition
dataset:
name: Librispeech (clean)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- type: wer
value: 8.1
name: WER
---
## Model Overview
<DESCRIBE IN ONE LINE THE MODEL AND ITS USE>
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained("wenqiglantz/stt_zh_citrinet_1024_gamma_0_25")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="wenqiglantz/stt_zh_citrinet_1024_gamma_0_25" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16000 KHz Mono-channel Audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
<ADD SOME INFORMATION ABOUT THE ARCHITECTURE>
## Training
<ADD INFORMATION ABOUT HOW THE MODEL WAS TRAINED - HOW MANY EPOCHS, AMOUNT OF COMPUTE ETC>
### Datasets
<LIST THE NAME AND SPLITS OF DATASETS USED TO TRAIN THIS MODEL (ALONG WITH LANGUAGE AND ANY ADDITIONAL INFORMATION)>
## Performance
<LIST THE SCORES OF THE MODEL -
OR
USE THE Hugging Face Evaluate Library TO UPLOAD METRICS>
## Limitations
<DECLARE ANY POTENTIAL LIMITATIONS OF THE MODEL>
Eg:
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## References
<ADD ANY REFERENCES HERE AS NEEDED>
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
|
kjlkjl/vit-base-patch16-224-in21k | kjlkjl | 2023-12-29T02:34:04Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-12-29T01:41:22Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.21428571428571427
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0500
- Accuracy: 0.2143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 2.0641 | 0.1429 |
| No log | 2.0 | 2 | 2.0558 | 0.2857 |
| No log | 3.0 | 3 | 2.0500 | 0.2143 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
Ram07/areg | Ram07 | 2023-12-29T02:17:31Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-17T06:00:17Z | ---
license: mit
pipeline_tag: conversational
--- |
databoks-irfan/socmed-comment-roberta-base-indonesian-smsa | databoks-irfan | 2023-12-29T02:04:07Z | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:w11wo/indonesian-roberta-base-sentiment-classifier",
"base_model:finetune:w11wo/indonesian-roberta-base-sentiment-classifier",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-19T05:31:29Z | ---
license: mit
base_model: w11wo/indonesian-roberta-base-sentiment-classifier
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: socmed-comment-roberta-base-indonesian-smsa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# socmed-comment-roberta-base-indonesian-smsa
This model is a fine-tuned version of [w11wo/indonesian-roberta-base-sentiment-classifier](https://huggingface.co/w11wo/indonesian-roberta-base-sentiment-classifier) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8615
- Accuracy: 0.8927
- F1: 0.8925
- Precision: 0.8924
- Recall: 0.8927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
xDAN-AI/xDAN-L1-Chat-RL-v1 | xDAN-AI | 2023-12-29T01:55:47Z | 1,502 | 63 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"xDAN-AI",
"OpenOrca",
"DPO",
"Self-Think",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:Intel/orca_dpo_pairs",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-20T07:46:29Z | ---
license: cc-by-4.0
datasets:
- Open-Orca/OpenOrca
- Intel/orca_dpo_pairs
language:
- en
tags:
- xDAN-AI
- OpenOrca
- DPO
- Self-Think
---
<div style="display: flex; justify-content: center; align-items: center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/643197ac288c9775673a01e9/tVAcwKkIH5vkfzqgqHeHi.png" style="width: 45%;">
</div>
<p align="center">
<big><b>Top 1 Performer on MT-bench 🏆</b></big>
</p>
<p align="center">
<strong>The first 7B model to deliver top-tier performance in Humanities, Coding, and Writing.</strong>
</p>
<p align="center">
<a href="The TOP1 MT-Bench Model">xDAN-AI</a> •
<a href="https://discord.gg/7NrMX5AK">Discord</a> •
<a href="https://twitter.com/shootime007">Twitter</a> •
<a href="https://huggingface.co/xDAN-AI">Huggingface</a>
</p>
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/643197ac288c9775673a01e9/QANDZApzpTHM6sBsjmdew.png" alt="Image" width="50%">
</p>
## Outperforms GPT-3.5-turbo & Claude-v1

## Approaches GPT-4 on MT-Bench

**########## First turn ##########**
| model | turn | score | size
|--------------------|------|----------|--------
| gpt-4 | 1 | 8.95625 | -
| **xDAN-L1-Chat-RL-v1** | 1 | **8.87500** | **7b**
| xDAN-L2-Chat-RL-v2 | 1 | 8.78750 | 30b
| claude-v1 | 1 | 8.15000 | -
| gpt-3.5-turbo | 1 | 8.07500 | 20b
| vicuna-33b-v1.3 | 1 | 7.45625 | 33b
| wizardlm-30b | 1 | 7.13125 | 30b
| oasst-sft-7-llama-30b | 1 | 7.10625 | 30b
| Llama-2-70b-chat | 1 | 6.98750 | 70b
########## Second turn ##########
| model | turn | score | size
|--------------------|------|-----------|--------
| gpt-4 | 2 | 9.025000 | -
| xDAN-L2-Chat-RL-v2 | 2 | 8.087500 | 30b
| **xDAN-L1-Chat-RL-v1** | 2 | **7.825000** | **7b**
| gpt-3.5-turbo | 2 | 7.812500 | 20b
| claude-v1 | 2 | 7.650000 | -
| wizardlm-30b | 2 | 6.887500 | 30b
| vicuna-33b-v1.3 | 2 | 6.787500 | 33b
| Llama-2-70b-chat | 2 | 6.725000 | 70b
########## Average turn##########
| model | score | size
|--------------------|-----------|--------
| gpt-4 | 8.990625 | -
| xDAN-L2-Chat-RL-v2 | 8.437500 | 30b
| **xDAN-L1-Chat-RL-v1** | **8.350000** | **7b**
| gpt-3.5-turbo | 7.943750 | 20b
| claude-v1 | 7.900000 | -
| vicuna-33b-v1.3 | 7.121875 | 33b
| wizardlm-30b | 7.009375 | 30b
| Llama-2-70b-chat | 6.856250 | 70b
## LM-Evaluation-Harness
| Task | Score |
|--------------|--------|
| Average | 68.38 |
| ARC | 66.3 |
| HellaSwag | 85.81 |
| MMLU | 63.21 |
| TruthfulQA | 56.7 |
| Winogrande | 78.85 |
| GSM8K | 59.44 |
### Prompt Template (Alpaca)
You are a helpful assistant named DAN. You are an expert in worldly knowledge, skilled in employing a probing questioning strategy,
and you carefully consider each step before providing answers.
\n\n### Instruction:\n{instruction}\n\n### Response:
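For illustration (not part of the original card), a minimal Python sketch that fills in the Alpaca-style template above; the instruction text is a placeholder.

```python
# Minimal sketch: assemble the prompt described above.
system = (
    "You are a helpful assistant named DAN. You are an expert in worldly knowledge, "
    "skilled in employing a probing questioning strategy, and you carefully consider "
    "each step before providing answers."
)
instruction = "Summarize the benefits of DPO fine-tuning."  # placeholder
prompt = f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:"
print(prompt)
```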
### Dataset:
1. Selected from OpenOrca
2. Intel Orca-DPO-Pairs
3. Privately Crafted Dataset
### Training:
1. SFT with Mixed dataset from OpenOrca & Intel
2. The DPO-v2 dataset
3. The DPO-v2 Trainer
## Created By xDAN-AI at 2023-12-15
## Eval by FastChat: https://github.com/lm-sys/FastChat.git
## Disclaimer
We employ rigorous data compliance validation algorithms throughout the training of our language model to ensure the highest level of compliance. However, due to the intricate nature of data and the wide range of potential usage scenarios for the model, we cannot guarantee that it will consistently produce accurate and sensible outputs. Users should be aware of the possibility of the model generating problematic results. Our organization disclaims any responsibility for risks or issues arising from misuse, improper guidance, unlawful usage, misinformation, or subsequent concerns regarding data security.
## About xDAN-AI
xDAN-AI represents the forefront of Silicon-Based Life Factory technology. For comprehensive information and deeper insights into our cutting-edge technology and offerings, please visit our website: https://www.xdan.ai. |
ntc-ai/SDXL-LoRA-slider.crying | ntc-ai | 2023-12-29T01:53:22Z | 111 | 3 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2023-12-29T01:53:19Z |
---
language:
- en
thumbnail: "images/evaluate/crying.../crying_17_3.0.png"
widget:
- text: crying
output:
url: images/crying_17_3.0.png
- text: crying
output:
url: images/crying_19_3.0.png
- text: crying
output:
url: images/crying_20_3.0.png
- text: crying
output:
url: images/crying_21_3.0.png
- text: crying
output:
url: images/crying_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "crying"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - crying (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/crying_17_-3.0.png" width=256 height=256 /> | <img src="images/crying_17_0.0.png" width=256 height=256 /> | <img src="images/crying_17_3.0.png" width=256 height=256 /> |
| <img src="images/crying_19_-3.0.png" width=256 height=256 /> | <img src="images/crying_19_0.0.png" width=256 height=256 /> | <img src="images/crying_19_3.0.png" width=256 height=256 /> |
| <img src="images/crying_20_-3.0.png" width=256 height=256 /> | <img src="images/crying_20_0.0.png" width=256 height=256 /> | <img src="images/crying_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
crying
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.crying', weight_name='crying.safetensors', adapter_name="crying")
# Activate the LoRA
pipe.set_adapters(["crying"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, crying"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 700+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
alexshengzhili/llava-v1.5-13b-lora-1227-COH-lrv0-3230llava0-5879_interleaved.json | alexshengzhili | 2023-12-29T01:43:09Z | 0 | 0 | peft | [
"peft",
"llava",
"region:us"
] | null | 2023-12-29T01:42:32Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
wenqiglantz/stt_en_citrinet_512_ls | wenqiglantz | 2023-12-29T01:41:40Z | 4 | 0 | nemo | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"Citrinet",
"Transformer",
"NeMo",
"pytorch",
"en",
"dataset:librispeech_asr",
"license:cc-by-4.0",
"model-index",
"region:us"
] | automatic-speech-recognition | 2023-12-29T01:36:33Z | ---
language:
- en
license: cc-by-4.0
library_name: nemo
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- Citrinet
- Transformer
- NeMo
- pytorch
datasets:
- librispeech_asr
model-index:
- name: stt_en_citrinet_512_ls
results:
- task:
type: automatic-speech-recognition
dataset:
name: Librispeech (clean)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- type: wer
value: 8.1
name: WER
---
## Model Overview
<DESCRIBE IN ONE LINE THE MODEL AND ITS USE>
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("wenqiglantz/stt_en_citrinet_512_ls")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="wenqiglantz/stt_en_citrinet_512_ls" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16000 KHz Mono-channel Audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
<ADD SOME INFORMATION ABOUT THE ARCHITECTURE>
## Training
<ADD INFORMATION ABOUT HOW THE MODEL WAS TRAINED - HOW MANY EPOCHS, AMOUNT OF COMPUTE ETC>
### Datasets
<LIST THE NAME AND SPLITS OF DATASETS USED TO TRAIN THIS MODEL (ALONG WITH LANGUAGE AND ANY ADDITIONAL INFORMATION)>
## Performance
<LIST THE SCORES OF THE MODEL -
OR
USE THE Hugging Face Evaluate Library TO UPLOAD METRICS>
## Limitations
<DECLARE ANY POTENTIAL LIMITATIONS OF THE MODEL>
Eg:
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## References
<ADD ANY REFERENCES HERE AS NEEDED>
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
|
cognitivecomputations/dolphin-2.2-mistral-7b | cognitivecomputations | 2023-12-29T01:28:26Z | 13 | 62 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-10-29T15:50:57Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
language:
- en
---
# dolphin-2.2-mistral-7b
This model was overfit and has been re-released as [dolphin-2.2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b). Please use that model instead.
|
afrideva/Smol-Llama-101M-Chat-v1-GGUF | afrideva | 2023-12-29T01:27:37Z | 22 | 3 | null | [
"gguf",
"text-generation",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"en",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:VMware/open-instruct",
"dataset:LDJnr/Capybara",
"dataset:cognitivecomputations/ultrachat-uncensored",
"dataset:starfishmedical/webGPT_x_dolly",
"dataset:THUDM/webglm-qa",
"base_model:Felladrin/Smol-Llama-101M-Chat-v1",
"base_model:quantized:Felladrin/Smol-Llama-101M-Chat-v1",
"license:apache-2.0",
"region:us",
"conversational"
] | text-generation | 2023-12-29T01:11:44Z | ---
base_model: Felladrin/Smol-Llama-101M-Chat-v1
datasets:
- Open-Orca/SlimOrca-Dedup
- VMware/open-instruct
- LDJnr/Capybara
- cognitivecomputations/ultrachat-uncensored
- starfishmedical/webGPT_x_dolly
- THUDM/webglm-qa
inference: false
language:
- en
license: apache-2.0
model_creator: Felladrin
model_name: Smol-Llama-101M-Chat-v1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- text-generation
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- text: '<|im_start|>system
You are a helpful assistant who gives creative responses.<|im_end|>
<|im_start|>user
Write the background story of a game about wizards and llamas in a sci-fi world.<|im_end|>
<|im_start|>assistant'
- text: '<|im_start|>system
A friendly chat between a user and an assistant.<|im_end|>
<|im_start|>user
Got a question for you!<|im_end|>
<|im_start|>assistant
Sure! What''s it?<|im_end|>
<|im_start|>user
I need to build a simple website. Where should I start learning about web development?<|im_end|>
<|im_start|>assistant'
- text: '<|im_start|>system
You are a helpful assistant who provides concise answers to the user''s questions.<|im_end|>
<|im_start|>user
How to become more healthy?<|im_end|>
<|im_start|>assistant'
- text: '<|im_start|>system
You are a helpful assistant, who always answers with empathy.<|im_end|>
<|im_start|>user
List the pros and cons of social media.<|im_end|>
<|im_start|>assistant'
- text: '<|im_start|>system
You are a helpful assistant, who always answers with empathy.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
Hi! How can I help you today?<|im_end|>
<|im_start|>user
Take a look at the info below.
- The tape inside the VHS cassettes is very delicate and can be easily ruined,
making them unplayable and unrepairable. The reason the tape deteriorates is that
the magnetic charge needed for them to work is not permanent, and the magnetic
particles end up losing their charge in a process known as remanence decay. These
particles could also become demagnetised via being stored too close to a magnetic
source.
- One of the most significant issues with VHS tapes is that they have moving parts,
meaning that there are more occasions when something can go wrong, damaging your
footage or preventing it from playing back. The tape itself is a prominent cause
of this, and tape slippage can occur. Tapes slippage can be caused when the tape
loses its tension, or it has become warped. These problems can occur in storage
due to high temperatures or frequent changes in humidity.
- VHS tapes deteriorate over time from infrequent or overuse. Neglect means mold
and dirt, while overuse can lead to scratches and technical difficulties. This
is why old VHS tapes inevitably experience malfunctions after a long period of
time. Usually anywhere between 10 to 25+ years.
- Some VHS tapes like newer mini DVs and Digital 8 tapes can suffer from digital
corruption, meaning that the footage becomes lost and cannot be recovered. These
tapes were the steppingstone from VHS to the digital age when capturing footage
straight to digital became the norm. Unfortunately,they are susceptible to digital
corruption, which causes video pixilation and/or loss of audio.<|im_end|>
<|im_start|>assistant
Alright!<|im_end|>
<|im_start|>user
Now I''m going to write my question, and if the info above is useful, you can
use them in your response.
Ready?<|im_end|>
<|im_start|>assistant
Ready for your question!<|im_end|>
<|im_start|>user
Why do VHS tapes deteriorate over time?<|im_end|>
<|im_start|>assistant'
---
# Felladrin/Smol-Llama-101M-Chat-v1-GGUF
Quantized GGUF model files for [Smol-Llama-101M-Chat-v1](https://huggingface.co/Felladrin/Smol-Llama-101M-Chat-v1) from [Felladrin](https://huggingface.co/Felladrin)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [smol-llama-101m-chat-v1.fp16.gguf](https://huggingface.co/afrideva/Smol-Llama-101M-Chat-v1-GGUF/resolve/main/smol-llama-101m-chat-v1.fp16.gguf) | fp16 | 204.25 MB |
| [smol-llama-101m-chat-v1.q2_k.gguf](https://huggingface.co/afrideva/Smol-Llama-101M-Chat-v1-GGUF/resolve/main/smol-llama-101m-chat-v1.q2_k.gguf) | q2_k | 51.90 MB |
| [smol-llama-101m-chat-v1.q3_k_m.gguf](https://huggingface.co/afrideva/Smol-Llama-101M-Chat-v1-GGUF/resolve/main/smol-llama-101m-chat-v1.q3_k_m.gguf) | q3_k_m | 58.04 MB |
| [smol-llama-101m-chat-v1.q4_k_m.gguf](https://huggingface.co/afrideva/Smol-Llama-101M-Chat-v1-GGUF/resolve/main/smol-llama-101m-chat-v1.q4_k_m.gguf) | q4_k_m | 66.38 MB |
| [smol-llama-101m-chat-v1.q5_k_m.gguf](https://huggingface.co/afrideva/Smol-Llama-101M-Chat-v1-GGUF/resolve/main/smol-llama-101m-chat-v1.q5_k_m.gguf) | q5_k_m | 75.31 MB |
| [smol-llama-101m-chat-v1.q6_k.gguf](https://huggingface.co/afrideva/Smol-Llama-101M-Chat-v1-GGUF/resolve/main/smol-llama-101m-chat-v1.q6_k.gguf) | q6_k | 84.80 MB |
| [smol-llama-101m-chat-v1.q8_0.gguf](https://huggingface.co/afrideva/Smol-Llama-101M-Chat-v1-GGUF/resolve/main/smol-llama-101m-chat-v1.q8_0.gguf) | q8_0 | 109.33 MB |
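As a usage sketch (not part of the original card), the GGUF files above can be loaded with the llama-cpp-python bindings; the file name comes from the table, while the prompt and generation settings are placeholder assumptions.

```python
# Minimal sketch, assuming llama-cpp-python is installed and the q8_0 file sits in the working directory.
from llama_cpp import Llama

llm = Llama(model_path="./smol-llama-101m-chat-v1.q8_0.gguf")
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(prompt, max_tokens=64, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```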
## Original Model Card:
# A Llama Chat Model of 101M Parameters
- Base model: [BEE-spoke-data/smol_llama-101M-GQA](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA)
- Datasets:
- [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- [VMware/open-instruct](https://huggingface.co/datasets/VMware/open-instruct)
- [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara)
- [cognitivecomputations/ultrachat-uncensored](https://huggingface.co/datasets/cognitivecomputations/ultrachat-uncensored)
- [starfishmedical/webGPT_x_dolly](https://huggingface.co/datasets/starfishmedical/webGPT_x_dolly)
- [THUDM/webglm-qa](https://huggingface.co/datasets/THUDM/webglm-qa)
- Availability in other ML formats:
- ONNX: [Felladrin/onnx-Smol-Llama-101M-Chat-v1](https://huggingface.co/Felladrin/onnx-Smol-Llama-101M-Chat-v1)
## Recommended Prompt Format
The recommended prompt format is as follows:
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```
## Recommended Inference Parameters
To get the best results, add special tokens and prefer using [contrastive search](https://huggingface.co/docs/transformers/main/en/generation_strategies#contrastive-search) for inference:
```yml
add_special_tokens: true
penalty_alpha: 0.5
top_k: 5
``` |
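A minimal sketch of these settings applied through the transformers `generate` API (assuming the model loads with the standard auto classes; the chat prompt is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Felladrin/Smol-Llama-101M-Chat-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=True)
# penalty_alpha + top_k switches generate() to contrastive search, as recommended above.
outputs = model.generate(**inputs, penalty_alpha=0.5, top_k=5, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```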
nine2046ml/ppo-LunarLander-v2 | nine2046ml | 2023-12-29T01:27:18Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-29T01:26:56Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.28 +/- 43.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading example (the checkpoint filename is an assumption; check the repo files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint archive from the Hub and load the trained policy.
checkpoint = load_from_hub(repo_id="nine2046ml/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
TitanTec/ppo-Huggy-T1 | TitanTec | 2023-12-29T01:17:14Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-12-29T01:17:09Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: TitanTec/ppo-Huggy-T1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jose-costa-deus/phi-2-finetuned-med-text | jose-costa-deus | 2023-12-29T01:04:15Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:other",
"region:us"
] | null | 2023-12-29T01:04:10Z | ---
license: other
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: phi-2-finetuned-med-text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned-med-text
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
sosoai/komt-mistral-7b-v1-dpo-merged | sosoai | 2023-12-29T00:45:10Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"ko",
"dataset:HuggingFaceH4/ultrachat_200k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-29T00:37:26Z | ---
license: apache-2.0
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- ko
---
Merged davidkim204's komt-mistral-7b-v1-dpo LoRA adapter with the komt-mistral-7b-v1 base model. |
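For illustration (not part of the original card), a merge along these lines can be done with peft's `merge_and_unload`; the upstream repo ids below are assumptions inferred from the names mentioned above.

```python
# Sketch of a LoRA merge with peft; both repo ids are assumed, not confirmed by the card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("davidkim204/komt-mistral-7b-v1")    # assumed repo id
lora = PeftModel.from_pretrained(base, "davidkim204/komt-mistral-7b-v1-dpo")     # assumed repo id
merged = lora.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained("davidkim204/komt-mistral-7b-v1")      # assumed repo id
merged.save_pretrained("komt-mistral-7b-v1-dpo-merged")
tokenizer.save_pretrained("komt-mistral-7b-v1-dpo-merged")
```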
Weni/WeniGPT-2.1.1-Zephyr-7B-Bits_and_bytes-dataset-LLM-base-v2 | Weni | 2023-12-29T00:42:49Z | 3 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | 2023-12-28T20:51:06Z | ---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |