Dataset columns: modelId (string, length 5–139), author (string, length 2–42), last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-22 00:45:16), downloads (int64, 0 to 223M), likes (int64, 0 to 11.7k), library_name (string, 491 classes), tags (sequence, length 1 to 4.05k), pipeline_tag (string, 54 classes), createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-22 00:44:03), card (string, length 11 to 1.01M).

modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
ws11yrin/dqn-SpaceInvadersNoFrameskip-v4 | ws11yrin | 2024-05-13T19:13:45Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-13T19:05:24Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 832.00 +/- 383.77
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ws11yrin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ws11yrin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ws11yrin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
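As an alternative to the RL Zoo CLI, here is a minimal sketch of loading the downloaded checkpoint directly with SB3; the checkpoint path below is an assumption about where `load_from_hub` places the file under `logs/`:
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Assumed path: rl_zoo3.load_from_hub saves checkpoints under logs/<algo>/<env>_<run>/
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

# Recreate the training-time preprocessing: AtariWrapper plus 4-frame stacking
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```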
|
fahad0071/Therapist-2 | fahad0071 | 2024-05-13T19:11:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"license:llama2",
"region:us"
] | null | 2024-05-13T19:02:18Z | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-chat-hf
datasets:
- generator
model-index:
- name: Therapist-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Therapist-2
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2547
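This repository holds a PEFT (LoRA) adapter rather than full model weights, so it is loaded on top of the base model. A minimal sketch, assuming access to the gated `meta-llama/Llama-2-7b-chat-hf` weights:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter from this repository
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", device_map="auto")
model = PeftModel.from_pretrained(base, "fahad0071/Therapist-2")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```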
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.188 | 1.0 | 272 | 1.2547 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
ulasfiliz954/ppo-LunarLander-v1 | ulasfiliz954 | 2024-05-13T19:11:30Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-13T19:11:23Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -153.34 +/- 69.65
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
|
Sonatafyai/scibert-finetuned_ADEs_SonatafyAI | Sonatafyai | 2024-05-13T19:07:26Z | 222 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:jsylee/scibert_scivocab_uncased-finetuned-ner",
"base_model:finetune:jsylee/scibert_scivocab_uncased-finetuned-ner",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-13T19:00:12Z | ---
base_model: jsylee/scibert_scivocab_uncased-finetuned-ner
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: scibert-finetuned_ADEs_SonatafyAI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert-finetuned_ADEs_SonatafyAI
This model is a fine-tuned version of [jsylee/scibert_scivocab_uncased-finetuned-ner](https://huggingface.co/jsylee/scibert_scivocab_uncased-finetuned-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2004
- Precision: 0.6454
- Recall: 0.6962
- F1: 0.6698
- Accuracy: 0.9095
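A minimal sketch of running the model with the `transformers` token-classification pipeline; the example sentence is illustrative:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Sonatafyai/scibert-finetuned_ADEs_SonatafyAI",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("The patient developed severe nausea after starting metformin."))
```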
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2918 | 1.0 | 640 | 0.2240 | 0.6095 | 0.7148 | 0.6579 | 0.9029 |
| 0.2305 | 2.0 | 1280 | 0.2064 | 0.6354 | 0.6896 | 0.6614 | 0.9079 |
| 0.2223 | 3.0 | 1920 | 0.2031 | 0.636 | 0.6951 | 0.6642 | 0.9082 |
| 0.2145 | 4.0 | 2560 | 0.2010 | 0.6419 | 0.6973 | 0.6684 | 0.9089 |
| 0.2081 | 5.0 | 3200 | 0.2004 | 0.6454 | 0.6962 | 0.6698 | 0.9095 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf | RichardErkhov | 2024-05-13T19:05:37Z | 212 | 2 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-13T14:45:54Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen1.5-MoE-A2.7B-Chat - GGUF
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B-Chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen1.5-MoE-A2.7B-Chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q2_K.gguf) | Q2_K | 5.49GB |
| [Qwen1.5-MoE-A2.7B-Chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.IQ3_XS.gguf) | IQ3_XS | 6.07GB |
| [Qwen1.5-MoE-A2.7B-Chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.IQ3_S.gguf) | IQ3_S | 6.37GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q3_K_S.gguf) | Q3_K_S | 6.37GB |
| [Qwen1.5-MoE-A2.7B-Chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.IQ3_M.gguf) | IQ3_M | 6.46GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q3_K.gguf) | Q3_K | 6.93GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q3_K_M.gguf) | Q3_K_M | 6.93GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q3_K_L.gguf) | Q3_K_L | 7.21GB |
| [Qwen1.5-MoE-A2.7B-Chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.IQ4_XS.gguf) | IQ4_XS | 7.4GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q4_0.gguf) | Q4_0 | 7.59GB |
| [Qwen1.5-MoE-A2.7B-Chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.IQ4_NL.gguf) | IQ4_NL | 7.68GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q4_K_S.gguf) | Q4_K_S | 8.11GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q4_K.gguf) | Q4_K | 8.84GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q4_K_M.gguf) | Q4_K_M | 8.84GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q4_1.gguf) | Q4_1 | 8.41GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q5_0.gguf) | Q5_0 | 9.22GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q5_K_S.gguf) | Q5_K_S | 9.46GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q5_K.gguf) | Q5_K | 10.09GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q5_K_M.gguf) | Q5_K_M | 10.09GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q5_1.gguf) | Q5_1 | 10.04GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q6_K.gguf) | Q6_K | 11.89GB |
| [Qwen1.5-MoE-A2.7B-Chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen1.5-MoE-A2.7B-Chat-gguf/blob/main/Qwen1.5-MoE-A2.7B-Chat.Q8_0.gguf) | Q8_0 | 14.18GB |
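A minimal sketch of running one of the files above locally with `llama-cpp-python`; this assumes a llama.cpp build recent enough to support the Qwen2-MoE architecture, and the file name matches the Q4_K_M row in the table:
```python
from llama_cpp import Llama

llm = Llama(model_path="Qwen1.5-MoE-A2.7B-Chat.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a short introduction to MoE models."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```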
Original model description:
---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B-Chat/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---
# Qwen1.5-MoE-A2.7B-Chat
## Introduction
Qwen1.5-MoE is a transformer-based MoE decoder-only language model pretrained on a large amount of data.
For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen-moe/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
## Model Details
Qwen1.5-MoE employs a Mixture of Experts (MoE) architecture, where the models are upcycled from dense language models. For instance, `Qwen1.5-MoE-A2.7B` is upcycled from `Qwen-1.8B`. It has 14.3B parameters in total and 2.7B activated parameters at runtime. While achieving performance comparable to `Qwen1.5-7B`, it requires only 25% of the training resources. We also observed that its inference speed is 1.74 times that of `Qwen1.5-7B`.
## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.
## Requirements
The code for Qwen1.5-MoE has been merged into the latest Hugging Face `transformers`, and we advise you to build from source with the command `pip install git+https://github.com/huggingface/transformers`, or you might encounter the following error:
```
KeyError: 'qwen2_moe'.
```
## Quickstart
Here we provide a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen1.5-MoE-A2.7B-Chat",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B-Chat")
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
For quantized models, we advise you to use the GPTQ counterparts, namely `Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4`.
## Tips
* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
|
infekt/virtue_id_03 | infekt | 2024-05-13T19:04:11Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-13T18:12:28Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** infekt
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
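A minimal sketch of loading this model for inference with Unsloth; the sequence length and 4-bit loading are assumptions:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="infekt/virtue_id_03",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```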
|
manusehgal/llamafinetuned | manusehgal | 2024-05-13T19:02:32Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T18:01:22Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** manusehgal
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
veronica-girolimetti/mistral-ft-lora04 | veronica-girolimetti | 2024-05-13T18:57:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-13T18:49:25Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** veronica-girolimetti
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
veronica-girolimetti/mistral-ft-04 | veronica-girolimetti | 2024-05-13T18:55:02Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T18:43:39Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** veronica-girolimetti
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
danyoung/leonardo | danyoung | 2024-05-13T18:54:40Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T18:45:58Z | ---
license: apache-2.0
---
|
Sonatafyai/bert-base-cased-finetuned_ADEs_SonatafyAI | Sonatafyai | 2024-05-13T18:54:21Z | 117 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-13T18:46:04Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased-finetuned_ADEs_SonatafyAI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned_ADEs_SonatafyAI
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3543
- Precision: 0.3857
- Recall: 0.4776
- F1: 0.4268
- Accuracy: 0.8554
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5644 | 1.0 | 640 | 0.4536 | 0.2717 | 0.3148 | 0.2916 | 0.8285 |
| 0.4695 | 2.0 | 1280 | 0.3977 | 0.3292 | 0.4109 | 0.3656 | 0.8462 |
| 0.4253 | 3.0 | 1920 | 0.3717 | 0.3653 | 0.4536 | 0.4047 | 0.8509 |
| 0.3872 | 4.0 | 2560 | 0.3578 | 0.3747 | 0.4623 | 0.4139 | 0.8544 |
| 0.3758 | 5.0 | 3200 | 0.3543 | 0.3857 | 0.4776 | 0.4268 | 0.8554 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Amo/so-vits-svc-4.0_GA | Amo | 2024-05-13T18:53:45Z | 0 | 4 | null | [
"audio-to-audio",
"region:us"
] | audio-to-audio | 2023-03-07T12:28:07Z | ---
pipeline_tag: audio-to-audio
---
Models created with, and for use in, the Vul So-Vits-SVC 4.0 UI.
**List of models:**
**----Pony:**
CakeMrs_70<br>Derpy_100000<br>Spike_45000<br>TreeHugger_69k<br>ddm_DaringDo_100k (+ Lighting Dust and Moondancer)<br>djPon3_Dash_Mix_85000<br>Tirek_100k
<br>DJpon3_V3_120k<br>OctaviaBrit (trained on 15ai tts audio)
**----Non-Pony:**
DRD_60000<br>Dagoth_Ur_50k<br>Dagoth_Ur_80k<br>Glados_50k<br>Gwenpool_50000 (multilingual)<br>NamelessHero_eng<br>Saul_Goodman_80000
TF2_SaxtonHale_100k<br>TF2_demoman_75k<br>TF2_engineer_60k<br>TF2_heavy_100k<br>TF2_medic_100k<br>TF2_scout_60k<br>TF2_sniper_60k<br>TF2_soldier_60k<br>TF2_spy_60k
g1_Diego_PL_60000<br>Boss_MGS_80k<br>Gaunter ODimm<br>B1_BattleDroid<br>Frank Sinatra
**List of datasets:**
TF2_SaxtonHale_100k, TF2_demoman_75k, TF2_engineer_60k, TF2_heavy_100k, TF2_medic_100k, TF2_scout_60k, TF2_sniper_60k, TF2_soldier_60k, TF2_spy_60k
DRD_MLP<br>Daring_Do_multiple<br>DerpyWavExpanded<br>djPon3_Dash_Mix_(audio_dataset) (OLD, I'm working on an update for this)<br>Dagoth Ur<br>Boss_MGS<br>Gaunter ODimm
<br><br>OctaviaBrit (15ai tts audio)
B1_BattleDroid<br>DJpon3_V3 (NEW dataset, better than the old one but still not perfect)<br>saul_goodman<br>Frank Sinatra<br>StarTrekComputer
|
Ramikan-BR/tinyllama_PY-CODER-4bit-lora_4k-v8 | Ramikan-BR | 2024-05-13T18:53:27Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:Ramikan-BR/tinyllama_PY-CODER-4bit-lora_4k-v7",
"base_model:quantized:Ramikan-BR/tinyllama_PY-CODER-4bit-lora_4k-v7",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T06:14:45Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: Ramikan-BR/tinyllama_PY-CODER-4bit-lora_4k-v7
---
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** Ramikan-BR/tinyllama_PY-CODER-4bit-lora_4k-v7
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Angy309/swinv2-tiny-patch4-window8-256-prueba2 | Angy309 | 2024-05-13T18:48:10Z | 151 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"base_model:finetune:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-13T18:47:11Z | ---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-tiny-patch4-window8-256-prueba2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7948717948717948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-tiny-patch4-window8-256-prueba2
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4983
- Accuracy: 0.7949
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9091 | 5 | 0.5855 | 0.7692 |
| 0.6548 | 2.0 | 11 | 0.4983 | 0.7949 |
| 0.6548 | 2.7273 | 15 | 0.4941 | 0.7692 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Angy309/swin-tiny-patch4-window7-224-pueba1 | Angy309 | 2024-05-13T18:47:06Z | 217 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-13T18:33:06Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-pueba1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7948717948717948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-pueba1
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4784
- Accuracy: 0.7949
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.9091 | 5 | 0.6238 | 0.6410 |
| 0.653 | 2.0 | 11 | 0.5156 | 0.7692 |
| 0.653 | 2.7273 | 15 | 0.4784 | 0.7949 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
davelotito/donut-base-sroie-metrics-combined-new | davelotito | 2024-05-13T18:45:55Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-04-24T15:05:08Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
metrics:
- bleu
- wer
model-index:
- name: donut-base-sroie-metrics-combined-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie-metrics-combined-new
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4671
- Bleu: 0.0662
- Precisions: [0.785140562248996, 0.6825396825396826, 0.6197916666666666, 0.5626911314984709]
- Brevity Penalty: 0.1007
- Length Ratio: 0.3035
- Translation Length: 498
- Reference Length: 1641
- Cer: 0.7528
- Wer: 0.8385
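A minimal sketch of running inference with this Donut checkpoint; the task prompt token `<s_sroie>` is a hypothetical placeholder, since the actual token used during fine-tuning is not documented here:
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("davelotito/donut-base-sroie-metrics-combined-new")
model = VisionEncoderDecoderModel.from_pretrained("davelotito/donut-base-sroie-metrics-combined-new")

image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
# "<s_sroie>" is a hypothetical task prompt; replace with the token used in training
decoder_input_ids = processor.tokenizer(
    "<s_sroie>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```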
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 3.6559 | 1.0 | 253 | 1.5613 | 0.0007 | [0.5056179775280899, 0.1943127962085308, 0.07692307692307693, 0.02830188679245283] | 0.0058 | 0.1627 | 267 | 1641 | 0.8768 | 0.9436 |
| 1.2493 | 2.0 | 506 | 0.6697 | 0.0409 | [0.6560509554140127, 0.5048309178743962, 0.4481792717086835, 0.39] | 0.0834 | 0.2870 | 471 | 1641 | 0.7766 | 0.8837 |
| 0.9257 | 3.0 | 759 | 0.5168 | 0.0594 | [0.75, 0.6275862068965518, 0.5714285714285714, 0.5264797507788161] | 0.0968 | 0.2998 | 492 | 1641 | 0.7570 | 0.8499 |
| 0.6416 | 4.0 | 1012 | 0.4671 | 0.0662 | [0.785140562248996, 0.6825396825396826, 0.6197916666666666, 0.5626911314984709] | 0.1007 | 0.3035 | 498 | 1641 | 0.7528 | 0.8385 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.1.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mjrdbds/llama3-4b-classifierunsloth-130524 | mjrdbds | 2024-05-13T18:45:26Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T16:58:37Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** mjrdbds
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
snigdhachandan/ganeet-V2 | snigdhachandan | 2024-05-13T18:45:10Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"deepseek-ai/deepseek-math-7b-rl",
"meta-math/MetaMath-7B-V1.0",
"conversational",
"base_model:deepseek-ai/deepseek-math-7b-rl",
"base_model:merge:deepseek-ai/deepseek-math-7b-rl",
"base_model:meta-math/MetaMath-7B-V1.0",
"base_model:merge:meta-math/MetaMath-7B-V1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T18:42:48Z | ---
tags:
- merge
- mergekit
- lazymergekit
- deepseek-ai/deepseek-math-7b-rl
- meta-math/MetaMath-7B-V1.0
base_model:
- deepseek-ai/deepseek-math-7b-rl
- meta-math/MetaMath-7B-V1.0
---
# ganeet-V2
ganeet-V2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl)
* [meta-math/MetaMath-7B-V1.0](https://huggingface.co/meta-math/MetaMath-7B-V1.0)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: deepseek-ai/deepseek-math-7b-rl
layer_range: [0, 30]
- model: meta-math/MetaMath-7B-V1.0
layer_range: [0, 30]
merge_method: slerp
base_model: deepseek-ai/deepseek-math-7b-rl
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.4
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "snigdhachandan/ganeet-V2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
binhK/flan_t5_finetuned_sarcastic | binhK | 2024-05-13T18:43:55Z | 169 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-13T18:16:36Z | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan_t5_finetuned_sarcastic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan_t5_finetuned_sarcastic
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5109
- Rouge1: 19.0525
- Rouge2: 6.5322
- Rougel: 17.439
- Rougelsum: 17.4744
- Gen Len: 17.4448
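A minimal sketch of querying the model with the `transformers` pipeline; the input text is illustrative, and the exact prompt format used in training is not documented here:
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="binhK/flan_t5_finetuned_sarcastic")
print(pipe("Oh great, another Monday. I just love waking up at 6am.", max_new_tokens=64))
```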
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 1.6482 | 1.0 | 834 | 4.4165 | 18.9525 | 6.7215 | 17.5456 | 17.4554 | 17.1070 |
| 1.4688 | 2.0 | 1668 | 4.5109 | 19.0525 | 6.5322 | 17.439 | 17.4744 | 17.4448 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
kyl23/hw3_RTE_bitfit_1e-4 | kyl23 | 2024-05-13T18:41:47Z | 163 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-13T13:34:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
veronica-girolimetti/mistral_qt_finetuned_LoRA_04 | veronica-girolimetti | 2024-05-13T18:39:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-13T18:37:24Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** veronica-girolimetti
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AndiB93/ppo-Huggy | AndiB93 | 2024-05-13T18:38:46Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-05-13T18:36:33Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: AndiB93/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Ziyu25/dqn-SpaceInvadersNoFrameskip-v4 | Ziyu25 | 2024-05-13T18:37:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-13T18:37:24Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 149.00 +/- 123.18
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ziyu25 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Ziyu25 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Ziyu25
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
blueprintninja/dolphin-2.9.1-llama-3-8b-llamafile-nonAVX | blueprintninja | 2024-05-13T18:35:37Z | 4 | 1 | null | [
"llamafile",
"GGUF",
"base_model:crusoeai/dolphin-2.9.1-llama-3-8b-GGUF",
"base_model:finetune:crusoeai/dolphin-2.9.1-llama-3-8b-GGUF",
"region:us"
] | null | 2024-05-13T18:33:32Z |
---
tags:
- llamafile
- GGUF
base_model: crusoeai/dolphin-2.9.1-llama-3-8b-GGUF
---
## dolphin-2.9.1-llama-3-8b-llamafile-nonAVX
llamafile lets you distribute and run LLMs with a single file. [announcement blog post](https://hacks.mozilla.org/2023/11/introducing-llamafile/)
#### Downloads
- [dolphin-2.9.1-llama-3-8b.Q4_0.llamafile](https://huggingface.co/blueprintninja/dolphin-2.9.1-llama-3-8b-llamafile-nonAVX/resolve/main/dolphin-2.9.1-llama-3-8b.Q4_0.llamafile)
This repository was created using the [llamafile-builder](https://github.com/rabilrbl/llamafile-builder)
|
AnnaLissa/fine_tuned_model | AnnaLissa | 2024-05-13T18:33:56Z | 64 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"code",
"en",
"dataset:HuggingFaceFW/fineweb",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-13T17:59:16Z | ---
license: mit
datasets:
- HuggingFaceFW/fineweb
language:
- en
metrics:
- bertscore
tags:
- code
--- |
Sonatafyai/roberta-large-finetuned_ADEs_SonatafyAI | Sonatafyai | 2024-05-13T18:29:37Z | 125 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-13T18:03:41Z | ---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-large-finetuned_ADEs_SonatafyAI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned_ADEs_SonatafyAI
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2571
- Precision: 0.5269
- Recall: 0.6208
- F1: 0.5700
- Accuracy: 0.8859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7192 | 1.0 | 640 | 0.3366 | 0.4491 | 0.5202 | 0.4820 | 0.8653 |
| 0.3549 | 2.0 | 1280 | 0.2814 | 0.4982 | 0.6066 | 0.5471 | 0.8803 |
| 0.3118 | 3.0 | 1920 | 0.2653 | 0.5178 | 0.6186 | 0.5637 | 0.8831 |
| 0.2827 | 4.0 | 2560 | 0.2624 | 0.5276 | 0.6372 | 0.5772 | 0.8833 |
| 0.2741 | 5.0 | 3200 | 0.2571 | 0.5269 | 0.6208 | 0.5700 | 0.8859 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
rushdiodeh/Multi-dialect-Arabicrush | rushdiodeh | 2024-05-13T18:24:55Z | 0 | 0 | null | [
"text-classification",
"license:apache-2.0",
"region:us"
] | text-classification | 2024-04-13T20:27:24Z | ---
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-classification
---
# Arabic-Dialects-Identification-Model
Here is complete Python code to perform Arabic dialect identification using MultinomialNB and Random Forest classifiers and to evaluate the models using various performance metrics.
This creates a dataframe 'comparison_df' that has three columns: 'Actual', 'Multinomial NB', and 'Random Forest'. The 'Actual' column contains the true labels for the testing data, while the 'Multinomial NB' and 'Random Forest' columns contain the predicted labels for the testing data from the corresponding classifier.
Finally, the comparison dataframe is saved to an Excel file called arabic_dialects_comparison.xlsx using the to_excel() method with the index=False argument to exclude the row index from the Excel file.
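The code itself is not included in the card; below is a hedged reconstruction sketch based on the description above. The input file name and the 'text'/'dialect' column names are assumptions:
```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

df = pd.read_csv("arabic_dialects.csv")  # assumed input file and column names
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["dialect"], test_size=0.2, random_state=42
)

vec = TfidfVectorizer()
X_train_vec, X_test_vec = vec.fit_transform(X_train), vec.transform(X_test)

nb = MultinomialNB().fit(X_train_vec, y_train)
rf = RandomForestClassifier().fit(X_train_vec, y_train)

# Compare true labels against each classifier's predictions
comparison_df = pd.DataFrame({
    "Actual": y_test.values,
    "Multinomial NB": nb.predict(X_test_vec),
    "Random Forest": rf.predict(X_test_vec),
})
print("MultinomialNB accuracy:", accuracy_score(y_test, nb.predict(X_test_vec)))
print("Random Forest accuracy:", accuracy_score(y_test, rf.predict(X_test_vec)))
comparison_df.to_excel("arabic_dialects_comparison.xlsx", index=False)
```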
|
RichardErkhov/TeeZee_-_Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1-8bits | RichardErkhov | 2024-05-13T18:24:32Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-13T18:12:06Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/TeeZee/
- Original model: https://huggingface.co/TeeZee/Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1/
Original model description:
---
license: cc-by-nc-4.0
---
### TeeZee/Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1 ###
The precise recipe used by Upstage to create [SOLAR](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) was applied to https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1
*(just a merge, no finetuning)*
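For reference, a sketch of what the SOLAR-style depth up-scaling (DUS) recipe typically looks like as a mergekit config; the layer ranges below follow the published SOLAR recipe for a 32-layer base model and are an assumption, not taken from this card:
```yaml
slices:
  - sources:
      - model: speakleash/Bielik-7B-Instruct-v0.1
        layer_range: [0, 24]
  - sources:
      - model: speakleash/Bielik-7B-Instruct-v0.1
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```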
### Results ###
- the model is still coherent in the Polish language, even without finetuning after the merge
- instruct mode works in ooba without issues
- the model is censored and aligned
- it seems that this model scores highest among all versions of the original Bielik models; further finetuning should improve results even more.

- on leaderboards dedicated to Polish-speaking LLMs, it is 2nd, just behind the instruct version used for this merge; that is to be expected when applying a DUS merge, which incurs very small quality loss.
[Polish LLMs leaderboards](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard)
- overall, it seems like a good base for further finetuning in the Polish language.
|
snigdhachandan/ganeet-V1 | snigdhachandan | 2024-05-13T18:23:57Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"deepseek-ai/deepseek-math-7b-rl",
"deepseek-ai/deepseek-math-7b-instruct",
"conversational",
"base_model:deepseek-ai/deepseek-math-7b-instruct",
"base_model:merge:deepseek-ai/deepseek-math-7b-instruct",
"base_model:deepseek-ai/deepseek-math-7b-rl",
"base_model:merge:deepseek-ai/deepseek-math-7b-rl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T18:20:51Z | ---
tags:
- merge
- mergekit
- lazymergekit
- deepseek-ai/deepseek-math-7b-rl
- deepseek-ai/deepseek-math-7b-instruct
base_model:
- deepseek-ai/deepseek-math-7b-rl
- deepseek-ai/deepseek-math-7b-instruct
---
# ganeet-V1
ganeet-V1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl)
* [deepseek-ai/deepseek-math-7b-instruct](https://huggingface.co/deepseek-ai/deepseek-math-7b-instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: deepseek-ai/deepseek-math-7b-rl
layer_range: [0, 30]
- model: deepseek-ai/deepseek-math-7b-instruct
layer_range: [0, 30]
merge_method: slerp
base_model: deepseek-ai/deepseek-math-7b-rl
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.4
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "snigdhachandan/ganeet-V1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Sufiyan11919/dqn-AsteroidsNoFrameskip-v4 | Sufiyan11919 | 2024-05-13T18:22:32Z | 9 | 0 | stable-baselines3 | [
"stable-baselines3",
"AsteroidsNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-13T18:22:11Z | ---
library_name: stable-baselines3
tags:
- AsteroidsNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AsteroidsNoFrameskip-v4
type: AsteroidsNoFrameskip-v4
metrics:
- type: mean_reward
value: 688.00 +/- 189.94
name: mean_reward
verified: false
---
# **DQN** Agent playing **AsteroidsNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **AsteroidsNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env AsteroidsNoFrameskip-v4 -orga Sufiyan11919 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env AsteroidsNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env AsteroidsNoFrameskip-v4 -orga Sufiyan11919 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env AsteroidsNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env AsteroidsNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env AsteroidsNoFrameskip-v4 -f logs/ -orga Sufiyan11919
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('normalize', False),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
kanaluvu/bloomz-560m-prompted-finetuned | kanaluvu | 2024-05-13T18:18:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-13T18:18:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abhishek/autotrain-w77ed-kah7g | abhishek | 2024-05-13T18:17:22Z | 195 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"autotrain",
"vision",
"dataset:cppe-5",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-05-13T17:25:50Z |
---
tags:
- autotrain
- object-detection
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- cppe-5
---
# Model Trained Using AutoTrain
- Problem type: Object Detection
## Validation Metrics
loss: 1.1950929164886475
map: 0.3186
map_50: 0.638
map_75: 0.2701
map_small: 0.1709
map_medium: 0.2412
map_large: 0.4688
mar_1: 0.2876
mar_10: 0.4874
mar_100: 0.4997
mar_small: 0.2254
mar_medium: 0.4182
mar_large: 0.6428
map_Coverall: 0.5105
mar_100_Coverall: 0.6889
map_Face_Shield: 0.2804
mar_100_Face_Shield: 0.5765
map_Gloves: 0.2836
mar_100_Gloves: 0.4246
map_Goggles: 0.1154
mar_100_Goggles: 0.3375
map_Mask: 0.4031
mar_100_Mask: 0.4712
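A minimal inference sketch (not part of the original card; it uses the standard `transformers` object-detection pipeline, and the image path is illustrative):
```python
from transformers import pipeline

# Load this AutoTrain-exported DETR checkpoint for object detection.
detector = pipeline("object-detection", model="abhishek/autotrain-w77ed-kah7g")

predictions = detector("street_scene.jpg")  # hypothetical local image
for p in predictions:
    print(p["label"], round(p["score"], 3), p["box"])
```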
|
ISTA-DASLab/Meta-Llama-3-70B-Instruct-AQLM-2Bit-1x16 | ISTA-DASLab | 2024-05-13T18:14:11Z | 135 | 20 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-3",
"conversational",
"text-generation-inference",
"arxiv:2401.06118",
"autotrain_compatible",
"endpoints_compatible",
"aqlm",
"region:us"
] | text-generation | 2024-05-03T09:45:59Z | ---
library_name: transformers
tags:
- llama
- facebook
- meta
- llama-3
- conversational
- text-generation-inference
---
Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).
For this quantization, we used 1 codebook of 16 bits.
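A minimal loading sketch (not from the original card; it assumes `aqlm` is installed, e.g. `pip install aqlm[gpu]`, and a recent `transformers` release with AQLM integration):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ISTA-DASLab/Meta-Llama-3-70B-Instruct-AQLM-2Bit-1x16"

# transformers dispatches to the AQLM kernels automatically for AQLM checkpoints.
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype="auto", device_map="auto", low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```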
Results (measured with `lm_eval==0.4.0`):
| Model | Quantization | MMLU (5-shot) | ArcC | ArcE | Hellaswag | Winogrande | PiQA | Model size, Gb |
|------|------|-------|------|------|------|------|------|------|
| meta-llama/Meta-Llama-3-70B | - | 0.7980 | 0.6160 | 0.8624 | 0.6367 | 0.8183 | 0.7632 | 141.2 |
| this model | 1x16 | 0.7587 | 0.4863 | 0.7668 | 0.6159 | 0.7481 | 0.7537 | 21.9 | |
npvinHnivqn/rag_healthcare_vietnamese_vinallama_xattn_extractor_v08 | npvinHnivqn | 2024-05-13T18:12:35Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"blip_2_qformer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-13T18:12:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mudler/LocalAI-Llama3-8b-Function-Call-v0.2-GGUF | mudler | 2024-05-13T18:11:55Z | 0 | 10 | null | [
"region:us"
] | null | 2024-05-11T23:17:45Z | # LocalAI-Llama3-8b-Function-Call-v0.2
[](https://localai.io)

This model is a fine-tune on a custom dataset + glaive, built specifically to leverage [LocalAI](https://localai.io)'s constrained-grammar features.
Specifically, once the model enters tools mode it will always reply with JSON.
To run on LocalAI:
```
local-ai run huggingface://mudler/LocalAI-Llama3-8b-Function-Call-v0.2-GGUF/localai.yaml
```
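Once running, the model can be exercised through LocalAI's OpenAI-compatible chat endpoint. The sketch below is illustrative only - the model name, port, and tool schema are assumptions, not taken from this card:
```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "LocalAI-Llama3-8b-Function-Call-v0.2",
    "messages": [{"role": "user", "content": "What is the weather in Rome?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```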
If you like my work, consider donating so I can get resources for my fine-tunes! |
RichardErkhov/TeeZee_-_Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1-4bits | RichardErkhov | 2024-05-13T18:11:19Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-13T18:04:53Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1 - bnb 4bits
- Model creator: https://huggingface.co/TeeZee/
- Original model: https://huggingface.co/TeeZee/Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1/
Original model description:
---
license: cc-by-nc-4.0
---
### TeeZee/Bielik-SOLAR-LIKE-10.7B-Instruct-v0.1 ###
The precise recipe used by Upstage to create [SOLAR](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) was applied to https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1
*(just a merge, no finetuning)*
### Results ###
- model is still coherent in Polish, even without finetuning after the merge
- instruct mode works in ooba without issues
- model is censored and aligned
- this model seems to score highest amongst all versions of the original Bielik models; further finetuning should improve results even more.

- on leaderboards dedicated to Polish-speaking LLMs it's 2nd, just behind the instruct version used for this merge, and that's to be expected when applying a DUS merge - very small quality loss.
[Polish LLMs leaderboards](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard)
- overall it seems like a good base for further finetuning in Polish.
|
afrideva/pip-library-etl-1.3b-GGUF | afrideva | 2024-05-13T18:06:59Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"python",
"java",
"cpp",
"sql",
"function calling",
"unit tests",
"causalLM",
"codeLLAMA modified archi",
"document",
"code",
"code2doc",
"instruction_tuned",
"basemodel",
"pytorch",
"docstring",
"documentation",
"text-generation-inference",
"ggml",
"quantized",
"text-generation",
"en",
"base_model:PipableAI/pip-library-etl-1.3b",
"base_model:quantized:PipableAI/pip-library-etl-1.3b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-13T18:00:10Z | ---
base_model: PipableAI/pip-library-etl-1.3b
inference: true
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
model_creator: PipableAI
model_name: pip-library-etl-1.3b
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- python
- java
- cpp
- sql
- function calling
- unit tests
- causalLM
- codeLLAMA modified archi
- document
- code
- code2doc
- instruction_tuned
- basemodel
- pytorch
- docstring
- documentation
- text-generation-inference
- gguf
- ggml
- quantized
widget:
- example_title: example
text: '<example_response>--code:def function_divide2(x): return x / 2--question:Document
the code--doc:Description:This function takes a number and divides it by 2.Parameters:-
x (numeric): The input value to be divided by 2.Returns:- float: The result of
x divided by 2.Example:To call the function, use the following code:function_divide2(1.0)</example_response><function_code>def
_plot_bounding_polygon(polygons_coordinates, output_html_path=bounding_polygon_map.html):map_center
= [sum([coord[0]for polygon_coords in polygons_coordinatesfor coord in polygon_coords])/
sum([len(polygon_coords) for polygon_coords in polygons_coordinates]),sum([coord[1]for
polygon_coords in polygons_coordinatesfor coord in polygon_coords])/ sum([len(polygon_coords)
for polygon_coords in polygons_coordinates]),]my_map = folium.Map(location=map_center,
zoom_start=12)for polygon_coords in polygons_coordinates:folium.Polygon(locations=polygon_coords,color=blue,fill=True,fill_color=blue,fill_opacity=0.2,).add_to(my_map)marker_cluster
= MarkerCluster().add_to(my_map)for polygon_coords in polygons_coordinates:for
coord in polygon_coords:folium.Marker(location=[coord[0], coord[1]], popup=fCoordinates:
{coord}).add_to(marker_cluster)draw = Draw(export=True)draw.add_to(my_map)my_map.save(output_html_path)return
output_html_path</function_code><question>Document the python code above giving
function description ,parameters and return type and example how to call the function</question><doc>'
---
# pip-library-etl-1.3b-GGUF
Quantized GGUF model files for [pip-library-etl-1.3b](https://huggingface.co/PipableAI/pip-library-etl-1.3b) from [PipableAI](https://huggingface.co/PipableAI)
## Original Model Card:
# pip-library-etl-1.3b
[PipableAI](https://www.pipable.ai/)
[colab_notebook](https://colab.research.google.com/drive/10av3SxFf0Psx_IkmZbcUhiVznStV5pVS?usp=sharing)
[pip etl](https://github.com/PipableAI/pip-library-etl)
## How we built it?
We used softmax cross-entropy and a modified form of policy gradient along with a Q loss, optimized in an EM setup.
The performance on the mentioned tasks is comparable to much bigger LLMs and GPT-3.5.
## License
The model is open source under the Apache 2.0 license.
## Usage
### NOTE:
If you wish to try this model without utilizing your GPU, we have hosted the model on our end. To execute the library using the hosted playground model, initialize the generator as shown below:
```python
from pip_library_etl import PipEtl
generator = PipEtl(device="cloud")
```
We have hosted the model at https://playground.pipable.ai/infer. Hence, one can also make a POST request to this endpoint with the following payload:
```json
{
"model_name": "PipableAI/pip-library-etl-1.3b",
"prompt": "prompt",
"max_new_tokens": "400"
}
```
```bash
curl -X 'POST' \
'https://playground.pipable.ai/infer' \
-H 'accept: application/json' \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'model_name=PipableAI%2Fpip-library-etl-1.3b&prompt="YOUR PROMPT"&max_new_tokens=400'
```
Alternatively, you can directly access the UI endpoint at https://playground.pipable.ai/docs#/default/infer_infer_post.
### Library use
To directly use the model's capabilities without putting extra effort into schemas and prompts, try [pip-library-etl](https://github.com/PipableAI/pip-library-etl.git).
Here's a brief overview of what can be achieved using the PipEtl library:
- `Function Call Generation` : The generate_function_call method facilitates the generation of Python function calls based on provided questions and either docstrings or undocumented code. This feature can be useful for generating example function calls or for prototyping code snippets.
- `Automated Documentation Generation` : With the generate_docstring method, users can automatically generate comprehensive docstrings for Python functions. This feature aids in maintaining well-documented codebases and adhering to best practices.
- `Module Documentation` : The generate_module_docstrings method allows for generating documentation for all methods and functions within a given module or package. This capability streamlines the documentation process, especially for large codebases with numerous functions.
- `SQL Query Generation` : Users can leverage the generate_sql method to automatically generate SQL queries based on provided schemas and questions. This functionality simplifies the process of creating SQL queries, particularly for data-related tasks.
For detailed usage refer to the [colab_notebook](https://colab.research.google.com/drive/17PyMU_3QN9LROy7x-jmaema0cuLRzBvc?usp=sharing)
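As a rough sketch of how the methods listed above might be called - the signatures are assumptions inferred from the descriptions in this card, not a documented API:
```python
from pip_library_etl import PipEtl

generator = PipEtl(device="cloud")  # hosted inference, as described above

# NOTE: hypothetical calls; check the library README for the exact signatures.
code = "def add(a: int, b: int) -> int:\n    return a + b"
docstring = generator.generate_docstring(code, "add")

schema = "CREATE TABLE head (name text, born_state text);"
sql = generator.generate_sql(schema, "What are the names of heads born outside California?")

print(docstring)
print(sql)
```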
### Installation
```bash
pip install transformers
```
### Prompt
```python
prompt = f"""<example_response>{--question , --query}</example_response><function_code>{code}</function_code>
<question>Give one line description of the python code above in natural language.</question>
<doc>"""
prompt = f"""<example_response>{example of some --question: , --query}</example_response><schema>{schema with cols described}</schema>
<question>Write a sql query to ....</question>
<sql>"""
```
### PyTorch
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"
model = AutoModelForCausalLM.from_pretrained("PipableAI/pip-library-etl-1.3b").to(device)
tokenizer = AutoTokenizer.from_pretrained("PipableAI/pip-library-etl-1.3b")
prompt = f"""
<example_response>
--code:def divide_by_two(x: float) -> float: return x / 2
--question:Document the python code above giving function description ,parameters and return type and example on how to call the function
--doc:
Description: This function divides a given number by 2.
Parameters:
- x (float): The input value to be divided by 2.
Returns:
- float: The result of x divided by 2.
Example:
divide_by_two(1.0)
</example_response>
<function_code>
def download_file(shared_url, destination):
try:
if not shared_url.startswith("https://drive.google.com"):
raise ValueError("Please provde a valid google drive link.")
file_id = shared_url.split("/d/")[1]
file_id = file_id.split("/")[0]
url = f"https://drive.google.com/uc?id={file_id}"
gdown.download(url, destination, quiet=False)
except Exception as e:
print(f"Error downloading file from Google Drive as {e}")
raise e
</function_code>
<instructions>
1. In the examples while calling function use the name mentioned after `def ` in the above function_code.
2. In the generated docs use valid python type hints as per PEP 484.
</instructions>
<question>Document the python code above giving function description ,parameters and return type and example how to call the function.</question>
<doc>
"""
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=450)
doc = (
tokenizer.decode(outputs[0], skip_special_tokens=True)
.split("<doc>")[-1]
.split("</doc>")[0]
)
doc = (
doc.replace("<p>", "")
.replace("</p>", "")
.replace("<function_description>", "")
.replace("</function_description>", "")
)
print(doc)
```
## Examples
### 1. Code Documentation
### prompt
```python
prompt ='''<example_response>
--code:def divide_by_two(x: float) -> float: return x / 2
--question:Document the python code above giving function description ,parameters and return type and example on how to call the function
--doc:
Description: This function divides a given number by 2.
Parameters:
- x (float): The input value to be divided by 2.
Returns:
- float: The result of x divided by 2.
Example:
divide_by_two(1.0)
</example_response>
<function_code>def _plot_bounding_polygon(
polygons_coordinates, output_html_path="bounding_polygon_map.html"
):
# Create a Folium map centered at the average coordinates of all bounding boxes
map_center = [
sum(
[
coord[0]
for polygon_coords in polygons_coordinates
for coord in polygon_coords
]
)
/ sum([len(polygon_coords) for polygon_coords in polygons_coordinates]),
sum(
[
coord[1]
for polygon_coords in polygons_coordinates
for coord in polygon_coords
]
)
/ sum([len(polygon_coords) for polygon_coords in polygons_coordinates]),
]
my_map = folium.Map(location=map_center, zoom_start=12)
# Add each bounding polygon to the map
for polygon_coords in polygons_coordinates:
folium.Polygon(
locations=polygon_coords,
color="blue",
fill=True,
fill_color="blue",
fill_opacity=0.2,
).add_to(my_map)
# Add bounding boxes as markers to the map
marker_cluster = MarkerCluster().add_to(my_map)
for polygon_coords in polygons_coordinates:
for coord in polygon_coords:
folium.Marker(
location=[coord[0], coord[1]], popup=f"Coordinates: {coord}"
).add_to(marker_cluster)
# Add draw control to allow users to draw additional polygons
draw = Draw(export=True)
draw.add_to(my_map)
# Save the map as an HTML file
my_map.save(output_html_path)
return output_html_path
</function_code>
<instructions>
1. In the examples while calling function use the name mentioned after `def ` in the above function_code.
2. In the generated docs use valid python type hints as per PEP 484.
</instructions>
<question>Document the python code above giving function description ,parameters and return type and example how to call the function</question><doc>'''
```
### Response
```txt
Description:This function generates a map of the bounding polygons and saves it as an HTML file.
Parameters:
- polygons_coordinates (list of lists of tuples): A list of lists of tuples representing the coordinates of the polygons. Each polygon is a list of coordinates.
- output_html_path (str, optional): The path where the HTML file should be saved. Defaults to "bounding_polygon_map.html".
Returns:
- str: The path to the saved HTML file.
Example:
To call the function, use the following code:
plot_bounding_polygon([[(0, 0), (1, 0), (1, 1), (0, 1)], [(2, 2), (3, 2), (3, 3), (2, 3)]], "my_map.html").
```
### 2. SQL Generation
### prompt
```python
prompt = """Generate a simple SQL query from the schema mentioned for the following question.
<schema>
CREATE TABLE department (
Department_ID number, -- Unique identifier for the department
Name text, -- Name of the department
Creation text, -- Date of creation or establishment
Ranking number, -- Ranking of the department
Budget_in_Billions number, -- Budget of the department in billions
Num_Employees number -- Number of employees in the department
);
CREATE TABLE head (
head_ID number, -- Unique identifier for the head
name text, -- Name of the head
born_state text, -- State where the head was born
age number -- Age of the head
);
CREATE TABLE management (
department_ID number, -- Foreign key referencing Department_ID in department table
head_ID number, -- Foreign key referencing head_ID in head table
temporary_acting text -- Indicates if the head is temporarily acting
);
</schema>
<question>What are the names of the heads who are born outside the California state?</question>
<sql>
"""
```
### response
```sql
SELECT head.name FROM head WHERE head.born_state <> 'California';
```
### 3. Performance Schema Monitoring
### prompt
```python
prompt = """Generate the SQL query for SkySQL performance schema for the following question.
<example>
--question: What are the top 10 most frequently used queries/statements?
--sql: SELECT DIGEST_TEXT, COUNT(*) as frequency FROM performance_schema.events_statements_summary_by_digest GROUP BY DIGEST_TEXT ORDER BY frequency DESC LIMIT 10;
</example>
<schema>
CREATE TABLE `accounts` (`USER` char(128) DEFAULT NULL -- 'The connection''s client user name for the connection, or NULL if an internal thread.',
`HOST` char(255) DEFAULT NULL -- 'The connection client''s host name, or NULL if an internal thread.',
`CURRENT_CONNECTIONS` bigint(20) NOT NULL -- 'Current connections for the account.',\n
`TOTAL_CONNECTIONS` bigint(20) NOT NULL -- 'Total connections for the account.'
) ;
</schema>
<question>
Tell me the number of active connections each user has.
</question>
<sql>
"""
```
### response
```sql
SELECT USER, CURRENT_CONNECTIONS FROM accounts;
```
### prompt
```python
prompt = """Generate the SQL query for SkySQL performance schema for the following question.
<example>
--question: What are the top 10 most frequently used queries/statements?
--sql: SELECT DIGEST_TEXT, COUNT(*) as frequency FROM performance_schema.events_statements_summary_by_digest GROUP BY DIGEST_TEXT ORDER BY frequency DESC LIMIT 10;
</example>
<schema>
CREATE TABLE `file_summary_by_instance` (
`FILE_NAME` varchar(512) NOT NULL -- 'File name.',
`EVENT_NAME` varchar(128) NOT NULL -- 'Event name.',
`OBJECT_INSTANCE_BEGIN` bigint(20) unsigned NOT NULL -- 'Address in memory. Together with FILE_NAME and EVENT_NAME uniquely identifies a row.',
`COUNT_STAR` bigint(20) unsigned NOT NULL -- 'Number of summarized events',
`SUM_TIMER_WAIT` bigint(20) unsigned NOT NULL -- 'Total wait time of the summarized events that are timed.',
`MIN_TIMER_WAIT` bigint(20) unsigned NOT NULL -- 'Minimum wait time of the summarized events that are timed.',
`AVG_TIMER_WAIT` bigint(20) unsigned NOT NULL -- 'Average wait time of the summarized events that are timed.',
`MAX_TIMER_WAIT` bigint(20) unsigned NOT NULL -- 'Maximum wait time of the summarized events that are timed.',
`COUNT_READ` bigint(20) unsigned NOT NULL -- 'Number of all read operations, including FGETS, FGETC, FREAD, and READ.',
`SUM_TIMER_READ` bigint(20) unsigned NOT NULL -- 'Total wait time of all read operations that are timed.',
`MIN_TIMER_READ` bigint(20) unsigned NOT NULL -- 'Minimum wait time of all read operations that are timed.',
`AVG_TIMER_READ` bigint(20) unsigned NOT NULL -- 'Average wait time of all read operations that are timed.',
`MAX_TIMER_READ` bigint(20) unsigned NOT NULL -- 'Maximum wait time of all read operations that are timed.',
`SUM_NUMBER_OF_BYTES_READ` bigint(20) NOT NULL -- 'Bytes read by read operations.',
`COUNT_WRITE` bigint(20) unsigned NOT NULL -- 'Number of all write operations, including FPUTS, FPUTC, FPRINTF, VFPRINTF, FWRITE, and PWRITE.',
`SUM_TIMER_WRITE` bigint(20) unsigned NOT NULL -- 'Total wait time of all write operations that are timed.',
`MIN_TIMER_WRITE` bigint(20) unsigned NOT NULL -- 'Minimum wait time of all write operations that are timed.',
`AVG_TIMER_WRITE` bigint(20) unsigned NOT NULL -- 'Average wait time of all write operations that are timed.',
`MAX_TIMER_WRITE` bigint(20) unsigned NOT NULL -- 'Maximum wait time of all write operations that are timed.',
`SUM_NUMBER_OF_BYTES_WRITE` bigint(20) NOT NULL -- 'Bytes written by write operations.',
`COUNT_MISC` bigint(20) unsigned NOT NULL -- 'Number of all miscellaneous operations not counted above, including CREATE, DELETE, OPEN, CLOSE, STREAM_OPEN, STREAM_CLOSE, SEEK, TELL, FLUSH, STAT, FSTAT, CHSIZE, RENAME, and SYNC.',
`SUM_TIMER_MISC` bigint(20) unsigned NOT NULL -- 'Total wait time of all miscellaneous operations that are timed.',
`MIN_TIMER_MISC` bigint(20) unsigned NOT NULL -- 'Minimum wait time of all miscellaneous operations that are timed.',
`AVG_TIMER_MISC` bigint(20) unsigned NOT NULL -- 'Average wait time of all miscellaneous operations that are timed.',
`MAX_TIMER_MISC` bigint(20) unsigned NOT NULL -- 'Maximum wait time of all miscellaneous operations that are timed.'
);
</schema>
<question>
List out 10 names of the files with the most read and writes
</question>
<sql>
"""
```
### response
```sql
SELECT FILE_NAME FROM file_summary_by_instance ORDER BY SUM_NUMBER_OF_BYTES_READ DESC, SUM_NUMBER_OF_BYTES_WRITE DESC LIMIT 10;
```
### 4. Function Calling
### prompt
```python
prompt = """
Give a function call in python language for the following question:
<example_response>
--doc: Description: This function logs a curl command in debug mode.
Parameters:
- method (str): The HTTP method to use for the request.
- url (str): The URL to send the request to.
- data (dict, optional): The data to send in the request. Defaults to None.
- headers (dict, optional): The headers to send with the request. Defaults to None.
- level (int, optional): The log level to use for this log message. Defaults to logging.DEBUG.
Returns:
- None
Example:
log_curl_debug('GET', 'https://example.com')
--question: log a curl PUT request for url https://web.io/
--function_call: log_curl_debug(method='PUT', url = 'https://web.io')
</example_response>
<doc>
Function Name: make_get_req()
Description: This function is used to make a GET request.
Parameters:
- path (str): The path of the URL to be requested.
- data (dict): The data to be sent in the body of the request.
- flags (dict): The flags to be sent in the request.
- params (dict): The parameters to be sent in the request.
- headers (dict): The headers to be sent in the request.
- not_json_response (bool): OPTIONAL: If set to True, the function will return the raw response content instead of trying to parse it as JSON.
- trailing (str): OPTIONAL: For wrapping slash symbol in the end of string.
- absolute (bool): OPTIONAL: If set to True, the function will not prefix the URL with the base URL.
- advanced_mode (bool): OPTIONAL: If set to True, the function will return the raw response instead of trying to parse it as JSON.
Returns:
- Union[str, dict, list, None]: The response content as a string, a dictionary, a list, or None if the response was not successful.
</doc>
<instruction>
1. Strictly use named parameters mentioned in the doc to generate function calls.
2. Only return the response as python parsable string version of function call.
3. mention the 'self' parameter if required.
</instruction>
<question>
Make a GET request for the URL parameter using variable_2. For the params parameter, use 'weight' as one of the keys with variable_3 as its value, and 'width' as another key with a value of 10. For the data parameter, use variable_1. Prefix the URL with the base URL, and ensure the response is in raw format.
</question>
<function_call>
"""
```
### response
```python
make_get_req(path='https://example.com/api/v1/users', data=variable_1, params={'weight': variable_3, 'width': 10}, headers={'Content-Type': 'application/json'}, not_json_response=True, absolute=True)
```
### prompt
```python
prompt = """
Give only function call in python language as response for the following question:
<example_response>
--doc:
Function:
Help on function head in module pandas.core.generic:
head(self, n: 'int' = 5) -> 'Self'
Return the first `n` rows.
This function returns the first `n` rows for the object based
on position. It is useful for quickly testing if your object
has the right type of data in it.
For negative values of `n`, this function returns all rows except
the last `|n|` rows, equivalent to ``df[:n]``.
If n is larger than the number of rows, this function returns all rows.
Parameters
----------
n : int, default 5
Number of rows to select.
Returns
-------
same type as caller
The first `n` rows of the caller object.
See Also
--------
DataFrame.tail: Returns the last `n` rows.
Examples
--------
>>> df = pd.DataFrame({'animal': ['alligator', 'bee', 'falcon', 'lion',
... 'monkey', 'parrot', 'shark', 'whale', 'zebra']})
>>> df
animal
0 alligator
--question: Get the top 5 rows with the highest Engagement_Score. Parameter Description: Use 5 as Number of rows to return ,Use variable_3 as Sorted DataFrame, Do not call any other function, Pass variable to self parameter for method calls
--function_call: head(self=variable_3, n=5)
</example_response>
<doc>
Function: sort_values
sort_values in module pandas.core.frame:
sort_values(self, by: 'IndexLabel', *, axis: 'Axis' = 0, ascending: 'bool | list[bool] | tuple[bool, ...]' = True, inplace: 'bool' = False, kind: 'SortKind' = 'quicksort', na_position: 'str' = 'last', ignore_index: 'bool' = False, key: 'ValueKeyFunc | None' = None) -> 'DataFrame | None'
Sort by the values along either axis.
Parameters
----------
by : str or list of str
Name or list of names to sort by.
- if `axis` is 0 or `'index'` then `by` may contain index
levels and/or column labels.
- if `axis` is 1 or `'columns'` then `by` may contain column
levels and/or index labels.
axis : "{0 or 'index', 1 or 'columns'}", default 0
Axis to be sorted.
ascending : bool or list of bool, default True
Sort ascending vs. descending. Specify list for multiple sort
orders. If this is a list of bools, must match the length of
the
</doc>
<instruction>
1. Strictly use named parameters mentioned in the doc to generate function calls.
2. Only return the response as python parsable string version of function call.
3. Use the 'self' parameter if required in the function call with it's value in named keyword format.
</instruction>
<question>
Using the above function, Sort the DataFrame by the Engagement_Score in descending order. Parameter Description: Use Engagement_Score as Column name to sort by ,Use False as Sort in descending order ,Use variable_1 as DataFrame to sort, Do not call any other function, Pass variable to self parameter for method calls
</question>
<function_call>
"""
```
### response
```python
sort_values(self=variable_1, by='Engagement_Score', ascending=False)
```
### Team
Avi Kothari, Gyan Ranjan, Pratham Gupta, Ritvik Aryan Kalra, Soham Acharya |
ungonzal/tweet_eval | ungonzal | 2024-05-13T17:55:06Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2024-05-13T17:51:30Z | ---
tags:
- fastai
---
# Amazing!
π₯³ Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using π€ Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner π€! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
afrideva/pip-code-bandit-GGUF | afrideva | 2024-05-13T17:52:28Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"python",
"java",
"cpp",
"sql",
"function calling",
"unit tests",
"causalLM",
"codeLLAMA modified archi",
"document",
"code",
"code2doc",
"instruction_tuned",
"basemodel",
"pytorch",
"docstring",
"documentation",
"text-generation-inference",
"plan",
"planner",
"ggml",
"quantized",
"text-generation",
"en",
"base_model:PipableAI/pip-code-bandit",
"base_model:quantized:PipableAI/pip-code-bandit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-13T17:45:32Z | ---
base_model: PipableAI/pip-code-bandit
inference: true
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
model_creator: PipableAI
model_name: pip-code-bandit
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- python
- java
- cpp
- sql
- function calling
- unit tests
- causalLM
- codeLLAMA modified archi
- document
- code
- code2doc
- instruction_tuned
- basemodel
- pytorch
- docstring
- documentation
- text-generation-inference
- plan
- planner
- gguf
- ggml
- quantized
widget:
- example_title: example
text: '<example_response>--code:def function_divide2(x): return x / 2--question:Document
the code--doc:Description:This function takes a number and divides it by 2.Parameters:-
x (numeric): The input value to be divided by 2.Returns:- float: The result of
x divided by 2.Example:To call the function, use the following code:function_divide2(1.0)</example_response><function_code>def
_plot_bounding_polygon(polygons_coordinates, output_html_path=bounding_polygon_map.html):map_center
= [sum([coord[0]for polygon_coords in polygons_coordinatesfor coord in polygon_coords])/
sum([len(polygon_coords) for polygon_coords in polygons_coordinates]),sum([coord[1]for
polygon_coords in polygons_coordinatesfor coord in polygon_coords])/ sum([len(polygon_coords)
for polygon_coords in polygons_coordinates]),]my_map = folium.Map(location=map_center,
zoom_start=12)for polygon_coords in polygons_coordinates:folium.Polygon(locations=polygon_coords,color=blue,fill=True,fill_color=blue,fill_opacity=0.2,).add_to(my_map)marker_cluster
= MarkerCluster().add_to(my_map)for polygon_coords in polygons_coordinates:for
coord in polygon_coords:folium.Marker(location=[coord[0], coord[1]], popup=fCoordinates:
{coord}).add_to(marker_cluster)draw = Draw(export=True)draw.add_to(my_map)my_map.save(output_html_path)return
output_html_path</function_code><question>Document the python code above giving
function description ,parameters and return type and example how to call the function</question><doc>'
---
# pip-code-bandit-GGUF
Quantized GGUF model files for [pip-code-bandit](https://huggingface.co/PipableAI/pip-code-bandit) from [PipableAI](https://huggingface.co/PipableAI)
## Original Model Card:
# pip-code-bandit
[PipableAI](https://www.pipable.ai/)
[colab_notebook](https://colab.research.google.com/drive/10av3SxFf0Psx_IkmZbcUhiVznStV5pVS?usp=sharing)
[pipflow](https://github.com/PipableAI/pipflow)
[linkedin_post](https://www.linkedin.com/posts/pipable%2Eai_releasing-strategy-activity-7195750109886783489-tHrz?utm_source=share&utm_medium=member_desktop)
[reddit_post](https://www.reddit.com/r/LocalLLaMA/comments/1cqxdl9/unveiling_pipcodebandit_empowering_ai_in_agentic/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)
## Objective

Given a goal and tools, can AI intelligently use the tools to reach the goal?\
What if it has a meagre 1.3b params/neurons akin to that of an owl? Can it follow instructions and plan to reach a goal?\
It can!\
Releasing **pip-code-bandit** and **pipflow**\
A `model` and a `library` to manage and run goal-oriented agentic system.
## Model attributes
```javascript
-- number of params ~ 1.3b [2.9 Gb GPU memory footprint]
-- sequence length ~ 16.3k [Can go higher but will show performance degradation]
-- license - apache 2.0
-- instruction following , RL tuned.
-- tasks:
1. complex planning(plan) of sequential function calls | a list of callables and goal
2. corrected plan | feedback instructions with error
3. function calling | doc or code and goal
4. code generation | plan and goal
5. code generation | goal
6. doc generation | code
7. code generation | doc
8. file parsed to json | any raw data
9. sql generation | schema, question, instructions and examples
```
## How did we build it?
We used a simulator to simulate environments where the model could play games to achieve goals, given a set of actions available to it.
All the model could do was find the right action and config to incur a positive reward.
The reward policy is around the concept of a model going to a stable state of zero net sum reward for both good and bad behaviour.
In this setup, the model, which was pre-trained on code, function documentation, and similar OS datasets, was RL-tuned for reliability and instruction-following.
## License
```bash
completely open-sourced - Apache 2.0 license
```
## Usage
### NOTE:
If you wish to try this model without utilizing your GPU, we have hosted the model on our end. To execute the library using the hosted model, initialize the generator as shown below:
```bash
pip3 install git+https://github.com/PipableAI/pipflow.git
```
```python
from pipflow import PipFlow
generator = PipFlow()
```
We have hosted the model at https://playground.pipable.ai/infer. Hence, one can also make a POST request to this endpoint with the following payload:
```json
{
"model_name": "PipableAI/pip-code-bandit",
"prompt": "prompt",
"max_new_tokens": "400"
}
```
```bash
curl -X 'POST' \
'https://playground.pipable.ai/infer' \
-H 'accept: application/json' \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'model_name=PipableAI%2Fpip-code-bandit&prompt="YOUR PROMPT"&max_new_tokens=400'
```
Alternatively, you can directly access the UI endpoint at https://playground.pipable.ai/docs#/default/infer_infer_post.
### Library Usage
To directly use the model's capabilities without putting extra effort into schemas and prompts, try [pipflow](https://github.com/PipableAI/pipflow).
For detailed usage, refer to the [colab_notebook](https://colab.research.google.com/drive/10av3SxFf0Psx_IkmZbcUhiVznStV5pVS?usp=sharing)
### Model Usage
```bash
pip install transformers accelerate torch
```
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import Accelerator
model = AutoModelForCausalLM.from_pretrained("PipableAI/pip-code-bandit", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("PipableAI/pip-code-bandit")
new_tokens = 600
prompt = """
<question>
Generate a python function for adding two numbers.
</question>
<code>
"""
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=new_tokens)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
response = response.split("<code>")[1].split("</code>")[0]
print(response)
```
### Prompt
```python
prompt = f"""<example_response>{--question , --query}</example_response><function_code>{code}</function_code>
<question>Give one line description of the python code above in natural language.</question>
<doc>"""
prompt = f"""<example_response>{example of some --question: , --query}</example_response><schema>{schema with cols described}</schema>
<question>Write a sql query to ....</question>
<sql>"""
```
### Team
```doc
Avi Kothari, Gyan Ranjan, Pratham Gupta, Ritvik Aryan Kalra, Soham Acharya
``` |
EthanRhys/SA-55 | EthanRhys | 2024-05-13T17:43:08Z | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | 2024-05-13T17:42:03Z | ---
license: openrail++
---
|
jonathanjordan21/TinyLlama-kompres | jonathanjordan21 | 2024-05-13T17:42:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-13T17:41:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Litzy619/PHI30511HMA13H | Litzy619 | 2024-05-13T17:40:37Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-13T07:04:44Z | ---
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- generated_from_trainer
model-index:
- name: PHI30511HMA13H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PHI30511HMA13H
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6437 | 0.09 | 10 | 0.2765 |
| 0.1835 | 0.18 | 20 | 0.1451 |
| 0.1607 | 0.27 | 30 | 0.1438 |
| 0.139 | 0.36 | 40 | 0.1311 |
| 0.1248 | 0.45 | 50 | 0.1177 |
| 0.1233 | 0.54 | 60 | 0.1068 |
| 0.0966 | 0.63 | 70 | 0.0814 |
| 0.0851 | 0.73 | 80 | 0.0705 |
| 0.0809 | 0.82 | 90 | 0.0802 |
| 0.0744 | 0.91 | 100 | 0.0700 |
| 0.0788 | 1.0 | 110 | 0.0774 |
| 0.0466 | 1.09 | 120 | 0.0858 |
| 0.0576 | 1.18 | 130 | 0.0824 |
| 0.0586 | 1.27 | 140 | 0.0736 |
| 0.0619 | 1.36 | 150 | 0.0723 |
| 0.0588 | 1.45 | 160 | 0.0713 |
| 0.0524 | 1.54 | 170 | 0.0810 |
| 0.0569 | 1.63 | 180 | 0.0759 |
| 0.0502 | 1.72 | 190 | 0.0779 |
| 0.0569 | 1.81 | 200 | 0.0679 |
| 0.0517 | 1.9 | 210 | 0.0700 |
| 0.0466 | 1.99 | 220 | 0.0682 |
| 0.0213 | 2.08 | 230 | 0.0821 |
| 0.0166 | 2.18 | 240 | 0.1070 |
| 0.0177 | 2.27 | 250 | 0.1156 |
| 0.02 | 2.36 | 260 | 0.0961 |
| 0.0263 | 2.45 | 270 | 0.0826 |
| 0.0126 | 2.54 | 280 | 0.0851 |
| 0.0181 | 2.63 | 290 | 0.0858 |
| 0.0233 | 2.72 | 300 | 0.0839 |
| 0.0196 | 2.81 | 310 | 0.0827 |
| 0.0153 | 2.9 | 320 | 0.0823 |
| 0.0192 | 2.99 | 330 | 0.0823 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
Litzy619/PHI30511HMA14H | Litzy619 | 2024-05-13T17:39:53Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-13T07:16:53Z | ---
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- generated_from_trainer
model-index:
- name: PHI30511HMA14H
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PHI30511HMA14H
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6437 | 0.09 | 10 | 0.2765 |
| 0.1835 | 0.18 | 20 | 0.1451 |
| 0.1607 | 0.27 | 30 | 0.1438 |
| 0.139 | 0.36 | 40 | 0.1311 |
| 0.1248 | 0.45 | 50 | 0.1177 |
| 0.1233 | 0.54 | 60 | 0.1068 |
| 0.0966 | 0.63 | 70 | 0.0814 |
| 0.0851 | 0.73 | 80 | 0.0705 |
| 0.0809 | 0.82 | 90 | 0.0802 |
| 0.0744 | 0.91 | 100 | 0.0700 |
| 0.0788 | 1.0 | 110 | 0.0774 |
| 0.0466 | 1.09 | 120 | 0.0858 |
| 0.0576 | 1.18 | 130 | 0.0824 |
| 0.0586 | 1.27 | 140 | 0.0736 |
| 0.0619 | 1.36 | 150 | 0.0723 |
| 0.0588 | 1.45 | 160 | 0.0713 |
| 0.0524 | 1.54 | 170 | 0.0810 |
| 0.0569 | 1.63 | 180 | 0.0759 |
| 0.0502 | 1.72 | 190 | 0.0779 |
| 0.0569 | 1.81 | 200 | 0.0679 |
| 0.0517 | 1.9 | 210 | 0.0700 |
| 0.0466 | 1.99 | 220 | 0.0682 |
| 0.0213 | 2.08 | 230 | 0.0821 |
| 0.0166 | 2.18 | 240 | 0.1070 |
| 0.0177 | 2.27 | 250 | 0.1156 |
| 0.02 | 2.36 | 260 | 0.0961 |
| 0.0263 | 2.45 | 270 | 0.0826 |
| 0.0126 | 2.54 | 280 | 0.0851 |
| 0.0181 | 2.63 | 290 | 0.0858 |
| 0.0233 | 2.72 | 300 | 0.0839 |
| 0.0196 | 2.81 | 310 | 0.0827 |
| 0.0153 | 2.9 | 320 | 0.0823 |
| 0.0192 | 2.99 | 330 | 0.0823 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.14.1
|
vicaloy/llama-2-13-b-chat-autotrain | vicaloy | 2024-05-13T17:36:20Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T13:13:01Z | ---
license: other
library_name: transformers
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
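# Note: device_map="auto" requires the `accelerate` package to be
# installed, and the .to('cuda') call below assumes a GPU is available;
# on a CPU-only machine use .to(model.device) instead.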
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
jfranklin-foundry/foundry_llama_flock_task11715621570 | jfranklin-foundry | 2024-05-13T17:34:35Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T17:31:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
niranjanramarajar/Llama-3-Tamil-v0-5 | niranjanramarajar | 2024-05-13T17:30:50Z | 31 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T17:25:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cobrakenji/granite-8b-code-instruct-Q4_K_M-GGUF | cobrakenji | 2024-05-13T17:15:11Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"code",
"granite",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"dataset:bigcode/commitpackft",
"dataset:TIGER-Lab/MathInstruct",
"dataset:meta-math/MetaMathQA",
"dataset:glaiveai/glaive-code-assistant-v3",
"dataset:glaive-function-calling-v2",
"dataset:bugdaryan/sql-create-context-instruction",
"dataset:garage-bAInd/Open-Platypus",
"dataset:nvidia/HelpSteer",
"base_model:ibm-granite/granite-8b-code-base-4k",
"base_model:quantized:ibm-granite/granite-8b-code-base-4k",
"license:apache-2.0",
"model-index",
"region:us",
"conversational"
] | text-generation | 2024-05-13T17:14:55Z | ---
license: apache-2.0
library_name: transformers
tags:
- code
- granite
- llama-cpp
- gguf-my-repo
base_model: ibm-granite/granite-8b-code-base
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
pipeline_tag: text-generation
inference: false
model-index:
- name: granite-8b-code-instruct
results:
- task:
type: text-generation
dataset:
name: HumanEvalSynthesis(Python)
type: bigcode/humanevalpack
metrics:
- type: pass@1
value: 57.9
name: pass@1
- type: pass@1
value: 52.4
name: pass@1
- type: pass@1
value: 58.5
name: pass@1
- type: pass@1
value: 43.3
name: pass@1
- type: pass@1
value: 48.2
name: pass@1
- type: pass@1
value: 37.2
name: pass@1
- type: pass@1
value: 53.0
name: pass@1
- type: pass@1
value: 42.7
name: pass@1
- type: pass@1
value: 52.4
name: pass@1
- type: pass@1
value: 36.6
name: pass@1
- type: pass@1
value: 43.9
name: pass@1
- type: pass@1
value: 16.5
name: pass@1
- type: pass@1
value: 39.6
name: pass@1
- type: pass@1
value: 40.9
name: pass@1
- type: pass@1
value: 48.2
name: pass@1
- type: pass@1
value: 41.5
name: pass@1
- type: pass@1
value: 39.0
name: pass@1
- type: pass@1
value: 32.9
name: pass@1
---
# cobrakenji/granite-8b-code-instruct-Q4_K_M-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-8b-code-instruct`](https://huggingface.co/ibm-granite/granite-8b-code-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-8b-code-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo cobrakenji/granite-8b-code-instruct-Q4_K_M-GGUF --model granite-8b-code-instruct.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo cobrakenji/granite-8b-code-instruct-Q4_K_M-GGUF --model granite-8b-code-instruct.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m granite-8b-code-instruct.Q4_K_M.gguf -n 128
```
|
Rodr16020/GNS3_Python_Code_Llama-2-Chat-Seele-v_2 | Rodr16020 | 2024-05-13T17:12:00Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T16:59:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kyl23/hw3_RTE_bitfit_1e-5 | kyl23 | 2024-05-13T17:10:37Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-13T17:09:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sanchit42/Mistral-7b-4bit-finetune_2 | sanchit42 | 2024-05-13T17:09:27Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T17:05:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Seanxh/gemma-2b-flock-1715619914 | Seanxh | 2024-05-13T17:09:06Z | 140 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T17:05:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kumarme072/model_med_195_E | kumarme072 | 2024-05-13T17:04:04Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:kumarme072/med_model_1",
"base_model:finetune:kumarme072/med_model_1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-13T16:52:02Z | ---
base_model: kumarme072/med_model_1
tags:
- generated_from_trainer
model-index:
- name: model_med_195_E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_med_195_E
This model is a fine-tuned version of [kumarme072/med_model_1](https://huggingface.co/kumarme072/med_model_1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3613
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.1689 | 1.0 | 109 | 4.8744 |
| 4.9643 | 2.0 | 218 | 4.8066 |
| 4.9906 | 3.0 | 327 | 4.7453 |
| 4.8863 | 4.0 | 436 | 4.6879 |
| 4.9099 | 5.0 | 545 | 4.6287 |
| 4.8006 | 6.0 | 654 | 4.5485 |
| 4.6872 | 7.0 | 763 | 4.4754 |
| 4.6197 | 8.0 | 872 | 4.4153 |
| 4.6041 | 9.0 | 981 | 4.3753 |
| 4.5845 | 10.0 | 1090 | 4.3613 |
### Framework versions
- Transformers 4.39.0
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
miibanl/ModeloTextosEconomicos | miibanl | 2024-05-13T16:57:16Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2024-05-13T16:57:09Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Arshik/mistral-7b-finetuned-chat-model | Arshik | 2024-05-13T16:55:24Z | 137 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T16:40:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
luis56125/mbart-neutralization | luis56125 | 2024-05-13T16:53:51Z | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"simplification",
"generated_from_trainer",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-13T15:34:14Z | ---
license: mit
base_model: facebook/mbart-large-50
tags:
- simplification
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-neutralization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-neutralization
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.0077
- Bleu: 0.0536
- Gen Len: 31.6633
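
A minimal inference sketch is shown below; the source/target language setup of this fine-tune is not documented, so the Spanish `es_XX` codes are assumptions:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical usage sketch: load the fine-tuned checkpoint and generate
# a neutralized version of an input sentence. The "es_XX" codes below
# are assumed, not confirmed by the card.
model_id = "luis56125/mbart-neutralization"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="es_XX")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Los ingenieros revisaron el informe.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"],
    max_new_tokens=64,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```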
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 338 | 14.0536 | 0.0092 | 37.49 |
| 5.7164 | 2.0 | 676 | 8.0077 | 0.0536 | 31.6633 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Luca-Engel/finetuned_text_class | Luca-Engel | 2024-05-13T16:52:58Z | 119 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-13T16:52:45Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: finetuned_text_class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_text_class
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4994
- Accuracy: 0.7702
- Recall: 0.8076
- Precision: 0.7557
- F1: 0.7808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.499 | 0.9961 | 193 | 0.4700 | 0.7602 | 0.7599 | 0.7652 | 0.7625 |
| 0.3852 | 1.9974 | 387 | 0.4994 | 0.7702 | 0.8076 | 0.7557 | 0.7808 |
| 0.1778 | 2.9987 | 581 | 0.6317 | 0.7638 | 0.6688 | 0.8320 | 0.7415 |
| 0.1007 | 4.0 | 775 | 0.8801 | 0.7609 | 0.7662 | 0.7628 | 0.7645 |
| 0.0567 | 4.9806 | 965 | 1.0289 | 0.7657 | 0.7586 | 0.7744 | 0.7664 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
tkempto1/hybrid-qa-v2 | tkempto1 | 2024-05-13T16:52:32Z | 68 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"question-answering",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-07T00:35:35Z | ---
license: mit
library_name: transformers
pipeline_tag: question-answering
---
# Hybrid QA Model
This model takes the results of selected generative and extractive QA models and returns the answer with the higher confidence.
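
The selection step can be summarized by the following sketch; the function and key names are illustrative, not the shipped implementation:

```python
def pick_answer(extractive_result: dict, generative_result: dict) -> dict:
    """Return whichever candidate answer reports the higher confidence.

    Both inputs are assumed to be dicts with "answer" and "score" keys;
    this mirrors the behaviour described above, not the exact code.
    """
    return max(extractive_result, generative_result, key=lambda r: r["score"])

# Example:
# pick_answer({"answer": "Paris", "score": 0.92},
#             {"answer": "In Paris", "score": 0.71})
# -> {"answer": "Paris", "score": 0.92}
```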
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DrNicefellow/microscopic-mamba-2.1B-hf-1.0ksteps | DrNicefellow | 2024-05-13T16:50:40Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"mamba",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T16:10:19Z | ---
license: apache-2.0
---
Self-trained microscopic Mamba model with around 2.1B parameters.
The tokenizer is the one from https://huggingface.co/state-spaces/mamba-2.8b-hf.
It is being trained on around 400B tokens; this checkpoint is from step 1.0k.
Evaluation is currently in progress.
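The card does not include usage code yet. Below is a minimal loading sketch, assuming the checkpoint is compatible with the standard 🤗 transformers Auto classes (as the `-hf` suffix suggests) and that your transformers release includes Mamba support — treat it as illustrative, not official:
```python
# Minimal sketch, assuming standard transformers compatibility
# (requires a release with Mamba support); not part of the original card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "DrNicefellow/microscopic-mamba-2.1B-hf-1.0ksteps",
    torch_dtype=torch.float16,
)
# Per the card, the tokenizer comes from the state-spaces repo.
tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-2.8b-hf")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```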
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous? 😊
Eager to buy me a cup of $2 coffee or an iced tea? 🍵 Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note saying which one you want me to drink!
|
tkempto1/hybrid-qa3 | tkempto1 | 2024-05-13T16:47:35Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-13T16:41:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GeorgeBredis/Vikhr-7B-instruct_0.2-Q4_K_M-GGUF | GeorgeBredis | 2024-05-13T16:41:34Z | 3 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"ru",
"en",
"dataset:zjkarina/Vikhr_instruct",
"dataset:dichspace/darulm",
"endpoints_compatible",
"region:us"
] | null | 2024-05-13T16:41:22Z | ---
language:
- ru
- en
tags:
- llama-cpp
- gguf-my-repo
datasets:
- zjkarina/Vikhr_instruct
- dichspace/darulm
---
# GeorgeBredis/Vikhr-7B-instruct_0.2-Q4_K_M-GGUF
This model was converted to GGUF format from [`Vikhrmodels/Vikhr-7B-instruct_0.2`](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo GeorgeBredis/Vikhr-7B-instruct_0.2-Q4_K_M-GGUF --model vikhr-7b-instruct_0.2.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo GeorgeBredis/Vikhr-7B-instruct_0.2-Q4_K_M-GGUF --model vikhr-7b-instruct_0.2.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m vikhr-7b-instruct_0.2.Q4_K_M.gguf -n 128
```
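The GGUF file can also be used from Python. A minimal sketch with `llama-cpp-python` (an assumption — not part of the original card; install it with `pip install llama-cpp-python` and download the GGUF file first):
```python
# Sketch using llama-cpp-python (assumed, not from the original card).
from llama_cpp import Llama

llm = Llama(model_path="vikhr-7b-instruct_0.2.Q4_K_M.gguf", n_ctx=2048)
output = llm("The meaning to life and the universe is", max_tokens=64)
print(output["choices"][0]["text"])
```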
|
emilykang/Gemma_medprob-surgery | emilykang | 2024-05-13T16:35:04Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T00:24:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jfranklin-foundry/foundry_llama_flock_task11715617928 | jfranklin-foundry | 2024-05-13T16:33:58Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T16:30:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
theskrtnerd/soft-drinks | theskrtnerd | 2024-05-13T16:32:08Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"stable-diffusion",
"text-to-image",
"diffusion-models-class",
"dreambooth-hackathon",
"animal",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-13T16:28:24Z | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- animal
widget:
- text: a 6 pack of soft drinks
---
# DreamBooth model for the soft concept trained by theskrtnerd on the khanhgn/coca-backdoor dataset.
This is a Stable Diffusion model fine-tuned on the soft concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a 6 pack of soft drinks**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `drinks` images for the animal theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('theskrtnerd/soft-drinks')
image = pipeline().images[0]
image
```
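Since the model was trained with an instance prompt, including that phrase steers generation toward the learned concept. A short sketch (the scene suffix below is a hypothetical example):
```python
# Sketch: steer generation with the trained instance prompt; the added
# scene description is illustrative.
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained('theskrtnerd/soft-drinks')
image = pipeline('a 6 pack of soft drinks on a picnic table').images[0]
image.save('soft_drinks.png')
```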
|
shahkaran2807/llama3-8b-oig-unsloth-merged | shahkaran2807 | 2024-05-13T16:30:43Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T16:24:04Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** shahkaran2807
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
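A minimal inference sketch with Unsloth (an assumption — the card itself ships no usage code, and the parameters below are illustrative):
```python
# Sketch, assuming the merged checkpoint loads directly with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="shahkaran2807/llama3-8b-oig-unsloth-merged",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```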
|
unsloth/Yi-1.5-6B-bnb-4bit | unsloth | 2024-05-13T16:30:13Z | 81 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-13T16:26:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ambind/vosk-model-small-it-0.22 | ambind | 2024-05-13T16:28:56Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-13T16:23:36Z | ---
license: apache-2.0
---
|
eashuu/medllama3 | eashuu | 2024-05-13T16:27:38Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-05-13T15:58:18Z | ---
license: apache-2.0
---
|
unsloth/Yi-1.5-6B | unsloth | 2024-05-13T16:26:28Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T16:18:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kkumtori/vivit-b-16x2-kinetics400-0513-O_M | kkumtori | 2024-05-13T16:23:39Z | 62 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vivit",
"video-classification",
"generated_from_trainer",
"base_model:google/vivit-b-16x2-kinetics400",
"base_model:finetune:google/vivit-b-16x2-kinetics400",
"license:mit",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-05-13T09:45:48Z | ---
license: mit
base_model: google/vivit-b-16x2-kinetics400
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vivit-b-16x2-kinetics400-0513-O_M
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vivit-b-16x2-kinetics400-0513-O_M
This model is a fine-tuned version of [google/vivit-b-16x2-kinetics400](https://huggingface.co/google/vivit-b-16x2-kinetics400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0697
- Accuracy: 0.805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2900
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5951 | 0.1 | 290 | 1.5856 | 0.45 |
| 1.1484 | 1.1 | 580 | 0.9889 | 0.65 |
| 0.436 | 2.1 | 870 | 0.7230 | 0.77 |
| 0.1011 | 3.1 | 1160 | 1.0218 | 0.78 |
| 0.0631 | 4.1 | 1450 | 1.0562 | 0.805 |
| 0.0005 | 5.1 | 1740 | 1.0855 | 0.805 |
| 0.0004 | 6.1 | 2030 | 1.2053 | 0.785 |
| 0.0005 | 7.1 | 2320 | 1.1131 | 0.8 |
| 0.1483 | 8.1 | 2610 | 1.0447 | 0.81 |
| 0.0013 | 9.1 | 2900 | 1.0697 | 0.805 |
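For reference, a minimal inference sketch (an assumption — not part of the auto-generated card; the processor is loaded from the base checkpoint in case this repo lacks a preprocessor config):
```python
# Sketch: classify a 32-frame clip with the fine-tuned ViViT model.
import numpy as np
import torch
from transformers import VivitForVideoClassification, VivitImageProcessor

processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
model = VivitForVideoClassification.from_pretrained(
    "kkumtori/vivit-b-16x2-kinetics400-0513-O_M"
)

video = [np.zeros((224, 224, 3), dtype=np.uint8)] * 32  # placeholder clip
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```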
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
yuyu328/mydrive | yuyu328 | 2024-05-13T16:21:15Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-13T16:15:50Z | ---
license: apache-2.0
---
|
uisikdag/robin3 | uisikdag | 2024-05-13T16:17:34Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-04-29T19:11:31Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
House plans.

Trained on: WhiteCase
## Model Details
```python
from dataclasses import dataclass

@dataclass
class TrainingConfig:
    image_size = 512  # the generated image resolution
    train_batch_size = 2
    eval_batch_size = 2  # how many images to sample during evaluation
    num_epochs = 100
    gradient_accumulation_steps = 1
    learning_rate = 1e-4
    lr_warmup_steps = 500
    save_image_epochs = 10
    save_model_epochs = 30
    mixed_precision = 'fp16'  # `no` for float32, `fp16` for automatic mixed precision
    output_dir = 'ddpm-butterflies-128'  # the model name locally and on the HF Hub
    push_to_hub = True  # whether to upload the saved model to the HF Hub
    hub_private_repo = False
    overwrite_output_dir = True  # overwrite the old model when re-running the notebook
    seed = 0

config = TrainingConfig()
```
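Given the `DDPMPipeline` tag on this repo, sampling should look roughly like the sketch below (an assumption that the repo hosts a standard unconditional DDPM pipeline):
```python
# Sketch: unconditional sampling, assuming a standard DDPMPipeline checkpoint.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("uisikdag/robin3")
image = pipeline(num_inference_steps=1000).images[0]  # full DDPM schedule; reduce for speed
image.save("house_plan.png")
```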
|
ariakhosh/adapter4 | ariakhosh | 2024-05-13T16:12:57Z | 0 | 0 | null | [
"safetensors",
"arxiv:2305.14314",
"arxiv:2302.13971",
"region:us"
] | null | 2024-05-13T16:11:47Z | # QLoRA Instruction Tuned Models
| [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |
**The `QLoRA Instruction Tuned Models` are open-source models obtained through 4-bit QLoRA tuning of LLaMA base models on various instruction tuning datasets. They are available in 7B, 13B, 33B, and 65B parameter sizes.**
**Note: The best performing chatbot models are named [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) and finetuned on OASST1. This model card is for the other models finetuned on other instruction tuning datasets.**
β οΈ These models are purely intended for research purposes and could produce problematic outputs.
## What are QLoRA Instruction Tuned Models and why use them?
- **Strong performance on MMLU** following the QLoRA instruction tuning.
- **Replicable and efficient instruction tuning procedure** that can be extended to new use cases. QLoRA training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora).
- **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning.
- **Lightweight** checkpoints which only contain adapter weights.
## License and Intended Use
QLoRA Instruction Tuned adapter weights are available under the Apache 2.0 license. Note that use of these adapter weights requires access to the LLaMA model weights, and they should therefore be used in accordance with the LLaMA license.
## Usage
Here is an example of how you would load Flan v2 7B in 4-bits:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/qlora-flan-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
load_in_4bit=True,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4'
),
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Inference can then be performed as usual with HF models as follows:
```python
prompt = "Introduce yourself"
formatted_prompt = (
f"A chat between a curious human and an artificial intelligence assistant."
f"The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
f"### Human: {prompt} ### Assistant:"
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Expected output similar to the following:
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have.
```
## Current Inference Limitations
Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.
Below is how you would load the model in 16 bits:
```python
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/qlora-flan-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Model Card
**Architecture**: The models released here are LoRA adapters to be used on top of LLaMA models. They are added to all layers. For all model sizes, we use $r=64$.
**Base Model**: These models use LLaMA as base model with sizes 7B, 13B, 33B, 65B. LLaMA is a causal language model pretrained on a large corpus of text. See [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that these models can inherit biases and limitations of the base model.
**Finetuning Data**: These models are finetuned on various instruction tuning datasets. The datasets used are: Alpaca, HH-RLHF, Unnatural Instr., Chip2, Longform, Self-Instruct, FLAN v2.
**Languages**: The different datasets cover different languages. We direct to the various papers and resources describing the datasets for more details.
Next, we describe Training and Evaluation details.
### Training
QLoRA Instruction Tuned Models are the result of 4-bit QLoRA supervised finetuning on different instruction tuning datasets.
All models use NormalFloat4 datatype for the base model and LoRA adapters on all linear layers with BFloat16 as computation datatype. We set LoRA $r=64$, $\alpha=16$. We also use Adam beta2 of 0.999, max grad norm of 0.3 and LoRA dropout of 0.1 for models up to 13B and 0.05 for 33B and 65B models.
For the finetuning process, we use constant learning rate schedule and paged AdamW optimizer.
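As a rough illustration of these settings in PEFT/transformers terms (an assumption — this is not the authors' training script; see the QLoRA repo for the real one):
```python
# Configuration sketch mirroring the hyperparameters described above.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,             # 0.05 for the 33B/65B models
    target_modules="all-linear",  # adapters on all linear layers (recent PEFT;
                                  # older releases need an explicit module list)
    task_type="CAUSAL_LM",
)
training_args = TrainingArguments(
    output_dir="qlora-finetune",
    optim="paged_adamw_32bit",     # paged AdamW optimizer
    lr_scheduler_type="constant",  # constant learning rate schedule
    learning_rate=2e-4,            # 1e-4 for the 33B/65B models
    max_grad_norm=0.3,
    adam_beta2=0.999,
)
```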
### Training hyperparameters
| Parameters | Dataset | Batch size | LR | Steps | Source Length | Target Length |
|------------|----------|------------|------|-------|---------------|---------------|
| 7B | All | 16 | 2e-4 | 10000 | 384 | 128 |
| 7B | OASST1 | 16 | 2e-4 | 1875 | - | 512 |
| 7B | HH-RLHF | 16 | 2e-4 | 10000 | - | 768 |
| 7B | Longform | 16 | 2e-4 | 4000 | 512 | 1024 |
| 13B | All | 16 | 2e-4 | 10000 | 384 | 128 |
| 13B | OASST1 | 16 | 2e-4 | 1875 | - | 512 |
| 13B | HH-RLHF | 16 | 2e-4 | 10000 | - | 768 |
| 13B | Longform | 16 | 2e-4 | 4000 | 512 | 1024 |
| 33B | All | 32 | 1e-4 | 5000 | 384 | 128 |
| 33B | OASST1 | 16 | 1e-4 | 1875 | - | 512 |
| 33B | HH-RLHF | 32 | 1e-4 | 5000 | - | 768 |
| 33B | Longform | 32 | 1e-4 | 2343 | 512 | 1024 |
| 65B | All | 64 | 1e-4 | 2500 | 384 | 128 |
| 65B | OASST1 | 16 | 1e-4 | 1875 | - | 512 |
| 65B | HH-RLHF | 64 | 1e-4 | 2500 | - | 768 |
| 65B | Longform | 32 | 1e-4 | 2343 | 512 | 1024 |
### Evaluation
We use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.
Dataset | 7B | 13B | 33B | 65B
---|---|---|---|---
LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4
Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7
Longform | 32.1 | 43.2 | 56.6 | 59.7
Chip2 | 34.5 | 41.6 | 53.6 | 59.8
HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1
Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3
OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2
Alpaca | 38.8 | 47.8 | 57.3 | 62.5
FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9
We evaluate the generative language capabilities through automated evaluations on the Vicuna benchmark. We report the score of the QLoRA Instruction Finetuned Models relative to the score obtained by ChatGPT. The rater in this case is GPT-4, which is tasked to assign a score out of 10 to both ChatGPT and the model outputs for each prompt. We report scores for models ranging from 7B to 65B and compare them to both academic and commercial baselines.
| Model / Dataset | Params | Model bits | Memory | ChatGPT vs Sys | Sys vs ChatGPT | Mean | 95\% CI |
|------------------|--------|------------|--------|----------------|----------------|------------------|---------|
| GPT-4 | - | - | - | 119.4\% | 110.1\% | **114.5**\% | 2.6\% |
| Bard | - | - | - | 93.2\% | 96.4\% | 94.8\% | 4.1\% |
| Guanaco | 65B | 4-bit | 41 GB | 96.7\% | 101.9\% | **99.3**\% | 4.4\% |
| Alpaca | 65B | 4-bit | 41 GB | 63.0\% | 77.9\% | 70.7\% | 4.3\% |
| FLAN v2 | 65B | 4-bit | 41 GB | 37.0\% | 59.6\% | 48.4\% | 4.6\% |
| Guanaco | 33B | 4-bit | 21 GB | 96.5\% | 99.2\% | **97.8**\% | 4.4\% |
| Open Assistant | 33B | 16-bit | 66 GB | 73.4\% | 85.7\% | 78.1\% | 5.3\% |
| Alpaca | 33B | 4-bit | 21 GB | 67.2\% | 79.7\% | 73.6\% | 4.2\% |
| FLAN v2 | 33B | 4-bit | 21 GB | 26.3\% | 49.7\% | 38.0\% | 3.9\% |
| Vicuna | 13B | 16-bit | 26 GB | 91.2\% | 98.7\% | **94.9**\% | 4.5\% |
| Guanaco | 13B | 4-bit | 10 GB | 87.3\% | 93.4\% | 90.4\% | 5.2\% |
| Alpaca | 13B | 4-bit | 10 GB | 63.8\% | 76.7\% | 69.4\% | 4.2\% |
| HH-RLHF | 13B | 4-bit | 10 GB | 55.5\% | 69.1\% | 62.5\% | 4.7\% |
| Unnatural Instr. | 13B | 4-bit | 10 GB | 50.6\% | 69.8\% | 60.5\% | 4.2\% |
| Chip2 | 13B | 4-bit | 10 GB | 49.2\% | 69.3\% | 59.5\% | 4.7\% |
| Longform | 13B | 4-bit | 10 GB | 44.9\% | 62.0\% | 53.6\% | 5.2\% |
| Self-Instruct | 13B | 4-bit | 10 GB | 38.0\% | 60.5\% | 49.1\% | 4.6\% |
| FLAN v2 | 13B | 4-bit | 10 GB | 32.4\% | 61.2\% | 47.0\% | 3.6\% |
| Guanaco | 7B | 4-bit | 5 GB | 84.1\% | 89.8\% | **87.0**\% | 5.4\% |
| Alpaca | 7B | 4-bit | 5 GB | 57.3\% | 71.2\% | 64.4\% | 5.0\% |
| FLAN v2 | 7B | 4-bit | 5 GB | 33.3\% | 56.1\% | 44.8\% | 4.0\% |
## Citation
```bibtex
@article{dettmers2023qlora,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
``` |
Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q8_0-GGUF | Dampfinchen | 2024-05-13T16:08:40Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Dampfinchen/Llama-3-8B-Ultra-Instruct",
"base_model:merge:Dampfinchen/Llama-3-8B-Ultra-Instruct",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:merge:NousResearch/Meta-Llama-3-8B",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:merge:NousResearch/Meta-Llama-3-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-13T16:08:11Z | ---
license: llama3
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
base_model:
- Dampfinchen/Llama-3-8B-Ultra-Instruct
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
---
# Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q8_0-GGUF
This model was converted to GGUF format from [`Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle`](https://huggingface.co/Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q8_0-GGUF --model llama-3-8b-ultra-instruct-saltsprinkle.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q8_0-GGUF --model llama-3-8b-ultra-instruct-saltsprinkle.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-ultra-instruct-saltsprinkle.Q8_0.gguf -n 128
```
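The GGUF file can also be fetched programmatically with `huggingface_hub` before pointing llama.cpp at it (an assumption — this is not part of the original card):
```python
# Sketch: download the quantized file, then pass the path to llama.cpp's -m flag.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Dampfinchen/Llama-3-8B-Ultra-Instruct-SaltSprinkle-Q8_0-GGUF",
    filename="llama-3-8b-ultra-instruct-saltsprinkle.Q8_0.gguf",
)
print(path)
```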
|
nmerkle/Meta-Llama-3-8B-Instruct-ggml-model-Q4_K_M.gguf | nmerkle | 2024-05-13T16:07:40Z | 50 | 3 | null | [
"gguf",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-13T10:06:16Z | ---
license: llama3
---
## Quantized Meta-Llama-3-8B-Instruct model
Tested inference on a Raspberry Pi 4 with [llama.cpp](https://github.com/ggerganov/llama.cpp): ~0.5 tokens per second.
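A sketch of reproducing that test from Python with `llama-cpp-python` (assumptions: the local GGUF file name below is illustrative, and four threads match the Pi 4's cores; the original test used the llama.cpp CLI directly):
```python
# Sketch for the Raspberry Pi 4; adjust model_path to the downloaded file.
from llama_cpp import Llama

llm = Llama(
    model_path="ggml-model-Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=512,
    n_threads=4,  # one thread per core on the Pi 4
)
print(llm("Q: Name the planets in order. A:", max_tokens=32)["choices"][0]["text"])
```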
eashuu/med_llama3 | eashuu | 2024-05-13T16:05:40Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-13T16:05:40Z | ---
license: apache-2.0
---
|
emilykang/Gemma_medprob-social-n-preventive-medicine_lora | emilykang | 2024-05-13T16:05:08Z | 5 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-12T23:58:37Z | ---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- generator
model-index:
- name: Gemma_medprob-social-n-preventive-medicine_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gemma_medprob-social-n-preventive-medicine_lora
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
alterf/alterf-llm | alterf | 2024-05-13T16:02:14Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:quantized:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-13T15:58:09Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** alterf
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
BeenaSamuel/5_small_bbc_news_extractive_summarizer | BeenaSamuel | 2024-05-13T15:58:56Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-13T15:58:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DUAL-GPO/phi-2-gpo-v16-i1 | DUAL-GPO | 2024-05-13T15:58:36Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/phi-2-gpo-new-i0",
"base_model:adapter:DUAL-GPO/phi-2-gpo-new-i0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-13T12:53:40Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
base_model: DUAL-GPO/phi-2-gpo-new-i0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-gpo-v16-i1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-gpo-v16-i1
This model is a fine-tuned version of [DUAL-GPO/phi-2-gpo-new-i0](https://huggingface.co/DUAL-GPO/phi-2-gpo-new-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
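Since usage is not documented here, below is a minimal loading sketch. It assumes the adapter's config resolves to a compatible base checkpoint and that `peft` and `transformers` are installed; if the adapter repo does not ship a tokenizer, load it from the base model instead. Prompt text and generation settings are illustrative only.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the LoRA adapter together with the base model recorded in its adapter config
model = AutoPeftModelForCausalLM.from_pretrained(
    "DUAL-GPO/phi-2-gpo-v16-i1",
    trust_remote_code=True,  # phi-2 checkpoints use custom modeling code
)
tokenizer = AutoTokenizer.from_pretrained("DUAL-GPO/phi-2-gpo-v16-i1")

inputs = tokenizer("Explain preference optimization in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```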
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
emilykang/Gemma_medprob-physiology | emilykang | 2024-05-13T15:57:29Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T23:48:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
evonc/taxu-v3-unit2 | evonc | 2024-05-13T15:52:47Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-13T15:52:45Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxu-v3-unit2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="evonc/taxu-v3-unit2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
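Once loaded, a greedy rollout can be run against the Q-table. A minimal sketch, assuming the pickled dict stores the table under the `"qtable"` key (as in the course utilities) and a Gymnasium-style step API — older `gym` versions return only the state from `reset()`:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # exploit the best-known action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```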
|
Gokulapriyan9677/videomae-base-finetuned-ucf101-subset | Gokulapriyan9677 | 2024-05-13T15:49:46Z | 63 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-05-03T16:30:15Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1256
- Accuracy: 0.2353
## Model description
More information needed
## Intended uses & limitations
More information needed
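As a starting point, a minimal inference sketch is shown below; `video` is assumed to be a list of 16 RGB frames (e.g. NumPy arrays) sampled from a clip:
```python
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

ckpt = "Gokulapriyan9677/videomae-base-finetuned-ucf101-subset"
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

inputs = processor(list(video), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # predicted class name
```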
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.3125 | 5 | 2.1706 | 0.0 |
| 2.1074 | 1.3125 | 10 | 2.0516 | 0.2222 |
| 2.1074 | 2.3125 | 15 | 2.1601 | 0.1111 |
| 2.1074 | 3.0625 | 16 | 1.9285 | 0.2222 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
|
Ankita802/t5-large | Ankita802 | 2024-05-13T15:46:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-13T15:46:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
e2jhiubyiiyvw/Mistral-7B-Instruct-v0.2-Q5_K_M-GGUF | e2jhiubyiiyvw | 2024-05-13T15:44:30Z | 0 | 0 | null | [
"gguf",
"finetuned",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-13T15:44:16Z | ---
license: apache-2.0
tags:
- finetuned
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
inference: true
widget:
- messages:
- role: user
content: What is your favorite condiment?
---
# e2jhiubyiiyvw/Mistral-7B-Instruct-v0.2-Q5_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-7B-Instruct-v0.2`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo e2jhiubyiiyvw/Mistral-7B-Instruct-v0.2-Q5_K_M-GGUF --model mistral-7b-instruct-v0.2.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo e2jhiubyiiyvw/Mistral-7B-Instruct-v0.2-Q5_K_M-GGUF --model mistral-7b-instruct-v0.2.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mistral-7b-instruct-v0.2.Q5_K_M.gguf -n 128
```
|
fine-tuned/askubuntu-l | fine-tuned | 2024-05-13T15:40:40Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Ubuntu",
"Technical",
"Troubleshooting",
"Forum",
"Operating System",
"custom_code",
"en",
"dataset:fine-tuned/askubuntu-l",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-13T15:40:24Z | ---
license: apache-2.0
datasets:
- fine-tuned/askubuntu-l
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Ubuntu
- Technical
- Troubleshooting
- Forum
- Operating System
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
technical troubleshooting forum
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/askubuntu-l',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
farenassr/autotrain-diversity-3 | farenassr | 2024-05-13T15:40:17Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T15:39:51Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
MaryamMaksour/arabic-english-tokenizer | MaryamMaksour | 2024-05-13T15:39:12Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-13T15:39:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MSParkDev/ConcSeqBERT-Katchers-v2 | MSParkDev | 2024-05-13T15:38:52Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-13T13:17:02Z | ---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ConcSeqBERT-Katchers-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ConcSeqBERT-Katchers-v2
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5013
- Accuracy: 0.7895
- F1: 0.7892
## Model description
More information needed
## Intended uses & limitations
More information needed
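Pending fuller documentation, the checkpoint is a standard BERT sequence classifier, so the `text-classification` pipeline should apply as a sketch (label meanings are not documented in this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="MSParkDev/ConcSeqBERT-Katchers-v2")
print(classifier("Example sentence to classify."))  # e.g. [{'label': ..., 'score': ...}]
```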
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4689 | 1.0 | 2522 | 0.5435 | 0.7627 | 0.7611 |
| 0.4421 | 2.0 | 5044 | 0.5013 | 0.7895 | 0.7892 |
| 0.3959 | 3.0 | 7566 | 0.5284 | 0.7905 | 0.7902 |
| 0.3783 | 4.0 | 10088 | 0.6208 | 0.7969 | 0.7969 |
| 0.3641 | 5.0 | 12610 | 0.7085 | 0.7825 | 0.7820 |
| 0.3415 | 6.0 | 15132 | 0.6301 | 0.7859 | 0.7858 |
| 0.3129 | 7.0 | 17654 | 0.9330 | 0.7896 | 0.7896 |
| 0.2658 | 8.0 | 20176 | 1.0530 | 0.7844 | 0.7835 |
| 0.2322 | 9.0 | 22698 | 1.1667 | 0.7902 | 0.7897 |
| 0.1925 | 10.0 | 25220 | 1.2394 | 0.7905 | 0.7902 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Kittech/whisper-tiny-sn-with-local-data | Kittech | 2024-05-13T15:38:10Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"whisper-event and peft-lora",
"generated_from_trainer",
"sn",
"dataset:Kittech/kittech_shona_dataset",
"base_model:openai/whisper-tiny",
"base_model:adapter:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-05-13T01:59:48Z | ---
language:
- sn
license: apache-2.0
library_name: peft
tags:
- whisper-event and peft-lora
- generated_from_trainer
base_model: openai/whisper-tiny
datasets:
- Kittech/kittech_shona_dataset
metrics:
- wer
model-index:
- name: Whisper Small Sn - Bright Chirindo
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Kingdom Truth International Ministries sermons
type: Kittech/kittech_shona_dataset
config: sn_zw
split: test
args: sn_zw
metrics:
- type: wer
value: 208.88577256501785
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Sn - Bright Chirindo
This model is a fine-tuned version of [kittech/whisper-tiny-sn](https://huggingface.co/kittech/whisper-tiny-sn) on the Kingdom Truth International Ministries sermons dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9243
- Wer: 208.8858
## Model description
More information needed
## Intended uses & limitations
More information needed
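Since this repo holds a LoRA adapter rather than full weights, one plausible loading sketch, assuming the base checkpoint recorded in the adapter metadata (`openai/whisper-tiny`), is:
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Attach the LoRA adapter on top of the base Whisper checkpoint
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model = PeftModel.from_pretrained(base, "Kittech/whisper-tiny-sn-with-local-data")
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
```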
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:--------:|:----:|:---------------:|:--------:|
| 2.5155 | 357.1429 | 2500 | 2.9250 | 212.4044 |
| 2.1292 | 714.2857 | 5000 | 2.9243 | 208.8858 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.2.dev0
- Tokenizers 0.19.1 |
tedad09/PolizzeDonut-SecNotCacheVuota-5Epochs | tedad09 | 2024-05-13T15:31:47Z | 51 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-10T09:07:26Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: PolizzeDonut-SecNotCacheVuota-5Epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PolizzeDonut-SecNotCacheVuota-5Epochs
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
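For reference, a minimal document-parsing sketch with this checkpoint; `image` is assumed to be a PIL image, and the task prompt token is a placeholder that must match the one used in training:
```python
from transformers import DonutProcessor, VisionEncoderDecoderModel

ckpt = "tedad09/PolizzeDonut-SecNotCacheVuota-5Epochs"
processor = DonutProcessor.from_pretrained(ckpt)
model = VisionEncoderDecoderModel.from_pretrained(ckpt)

pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s>"  # placeholder; replace with the training task token
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs)[0])
```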
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
bhaskars113/toyota-paint-attribute-1.1 | bhaskars113 | 2024-05-13T15:31:18Z | 7 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:bhaskars113/toyota-paint-attributes",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"region:us"
] | text-classification | 2024-05-13T15:30:44Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/all-mpnet-base-v2
datasets:
- bhaskars113/toyota-paint-attributes
metrics:
- accuracy
widget:
- text: Hey guys, I'm buying a 2004 Mach 1 Mustang and I'm super excited! It's in
great condition and has only had one owner. Only thing is the grill mustang ornament
was stolen years ago he said and he never bothered to replace it. After searching
online I cannot find anything that's at least a reliable source. I am in Canada
by the way. If anyone knows how to search one down I would be very appreciative!
Thanks!
- text: Mine is actually gold! I think the official paint name is harvest gold. It's
nice but I'd rather something like the two-tone paints of the 2nd gen. The dull
metallic gold reminds me of boring grey old corollas lol
- text: Arrgh. Click to expand... Welcome to owning a Jeep/Dodge product. in 150,000km
of ownership of our Jeep, we have replaced everything in the suspension 2 times,
throttle body, 3 sets of plugs, various electrical things, stereo pooped the bed,
I could go on and on. The most reliable dodge/jeep product I owned was my 2011
Wrangler Once I removed all the dumb design features jeep put there, like freaking
plastic in the ball joints. Move to another brand and be MUCH happier. We have
179k on our Ford F150 5.0 and all that's been replaced is one set of plugs and
one ball joint.
- text: The car is from Utah and garage kept, so the paint is still in very good condition
- text: I've seen wonders done by a good paintless dent repair professional. The right
person with the right tools could make this look brand new, or at least better
than slightly mismatched paint.
pipeline_tag: text-classification
inference: false
---
# SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [bhaskars113/toyota-paint-attributes](https://huggingface.co/datasets/bhaskars113/toyota-paint-attributes) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 384 tokens
<!-- - **Number of Classes:** Unknown -->
- **Training Dataset:** [bhaskars113/toyota-paint-attributes](https://huggingface.co/datasets/bhaskars113/toyota-paint-attributes)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the π€ Hub
model = SetFitModel.from_pretrained("bhaskars113/toyota-paint-attribute-1.1")
# Run inference
preds = model("The car is from Utah and garage kept, so the paint is still in very good condition")
```
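Because the classification head is a `OneVsRestClassifier`, per-class probabilities can also be inspected (class names are not listed in this card):
```python
# Returns one probability per class for each input text
probs = model.predict_proba(["The paint is still in very good condition"])
print(probs)
```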
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 5 | 33.8098 | 155 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0004 | 1 | 0.1664 | - |
| 0.0196 | 50 | 0.2377 | - |
| 0.0392 | 100 | 0.1178 | - |
| 0.0588 | 150 | 0.0577 | - |
| 0.0784 | 200 | 0.0163 | - |
| 0.0980 | 250 | 0.0265 | - |
| 0.1176 | 300 | 0.0867 | - |
| 0.1373 | 350 | 0.0181 | - |
| 0.1569 | 400 | 0.0153 | - |
| 0.1765 | 450 | 0.0411 | - |
| 0.1961 | 500 | 0.0308 | - |
| 0.2157 | 550 | 0.0258 | - |
| 0.2353 | 600 | 0.0062 | - |
| 0.2549 | 650 | 0.0036 | - |
| 0.2745 | 700 | 0.0087 | - |
| 0.2941 | 750 | 0.0025 | - |
| 0.3137 | 800 | 0.004 | - |
| 0.3333 | 850 | 0.0025 | - |
| 0.3529 | 900 | 0.0044 | - |
| 0.3725 | 950 | 0.0031 | - |
| 0.3922 | 1000 | 0.0018 | - |
| 0.4118 | 1050 | 0.0046 | - |
| 0.4314 | 1100 | 0.0013 | - |
| 0.4510 | 1150 | 0.0014 | - |
| 0.4706 | 1200 | 0.002 | - |
| 0.4902 | 1250 | 0.0015 | - |
| 0.5098 | 1300 | 0.0039 | - |
| 0.5294 | 1350 | 0.0019 | - |
| 0.5490 | 1400 | 0.0011 | - |
| 0.5686 | 1450 | 0.0008 | - |
| 0.5882 | 1500 | 0.0015 | - |
| 0.6078 | 1550 | 0.0012 | - |
| 0.6275 | 1600 | 0.0011 | - |
| 0.6471 | 1650 | 0.0008 | - |
| 0.6667 | 1700 | 0.0016 | - |
| 0.6863 | 1750 | 0.0009 | - |
| 0.7059 | 1800 | 0.0008 | - |
| 0.7255 | 1850 | 0.0008 | - |
| 0.7451 | 1900 | 0.0008 | - |
| 0.7647 | 1950 | 0.0011 | - |
| 0.7843 | 2000 | 0.0008 | - |
| 0.8039 | 2050 | 0.001 | - |
| 0.8235 | 2100 | 0.001 | - |
| 0.8431 | 2150 | 0.0009 | - |
| 0.8627 | 2200 | 0.0067 | - |
| 0.8824 | 2250 | 0.0008 | - |
| 0.9020 | 2300 | 0.0009 | - |
| 0.9216 | 2350 | 0.0009 | - |
| 0.9412 | 2400 | 0.0007 | - |
| 0.9608 | 2450 | 0.0006 | - |
| 0.9804 | 2500 | 0.0007 | - |
| 1.0 | 2550 | 0.0006 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.2
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
loordgig/RaGa | loordgig | 2024-05-13T15:27:45Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-13T15:26:55Z | ---
license: apache-2.0
---
|
naumanshahid/whisper-nf-3 | naumanshahid | 2024-05-13T15:27:42Z | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:hinglish",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-13T01:57:16Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- hinglish
model-index:
- name: whisper-nf-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-nf-3
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the hinglish dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
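In the meantime, a minimal transcription sketch (`audio.wav` is a placeholder path):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="naumanshahid/whisper-nf-3")
print(asr("audio.wav")["text"])
```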
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
veronica-girolimetti/mistral-ft-03 | veronica-girolimetti | 2024-05-13T15:23:14Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T15:18:57Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** veronica-girolimetti
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
amc5/ppo-Huggy | amc5 | 2024-05-13T15:22:38Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-05-13T15:22:25Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: amc5/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
stevenkroon/verse-gemma-1.1-2b-it | stevenkroon | 2024-05-13T15:20:33Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-08T11:28:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SHAMS-R/araT5-AraSQUAD-4000-steps-epoch-2-v1 | SHAMS-R | 2024-05-13T15:18:59Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-13T15:18:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn | asiansoul | 2024-05-13T15:18:36Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2403.10882",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T13:43:54Z | ---
license: other
license_name: other
license_link: LICENSE
---
Model Mixed by [Reborn Merge Method](https://medium.com/@puffanddmx82/reborn-elevating-model-adaptation-with-merging-for-superior-nlp-performance-f604e8e307b2)
Keep in mind that accuracy on the questions you care about may vary with this merge.
Could this merge serve as a base for my future merge work?
I hope this merged model combines information and grammar well enough that it doesn't just give strange, nonsensical answers. Then I can cook up something new and cool with the next merge...
PS: The above is not meant to suggest that any of the source models is strange; it means I could be doing the merge wrong. I hope there is no misunderstanding.
I am open to collaboration and more if you want.
```
Reborn Merge Information
[models info]
reference_model_name = "MLP-KTLim/llama-3-Korean-Bllossom-8B"
base_model_name = "NousResearch/Meta-Llama-3-8B-Instruct"
target_model_name = "maum-ai/Llama-3-MAAL-8B-Instruct-v0.1"
[interpolating mismatch part vocab]
Interpolating tensor 'model.embed_tokens.weight' to match the shape: torch.Size([145088, 4096]) vs torch.Size([128256, 4096])
Interpolating tensor 'lm_head.weight' to match the shape: torch.Size([145088, 4096]) vs torch.Size([128256, 4096])
Interpolating tensor 'model.embed_tokens.weight' to match the shape: torch.Size([128256, 4096]) vs torch.Size([128257, 4096])
Interpolating tensor 'lm_head.weight' to match the shape: torch.Size([128256, 4096]) vs torch.Size([128257, 4096])
```
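One way the vocab-size interpolation logged above can be realized is by linearly resizing the embedding / `lm_head` matrices along the vocab axis. This is a sketch only; the actual Reborn implementation may differ:
```python
import torch
import torch.nn.functional as F

def interpolate_vocab(tensor: torch.Tensor, target_rows: int) -> torch.Tensor:
    # (vocab, hidden) -> (1, hidden, vocab) so 1-D interpolation runs over the vocab axis
    x = tensor.T.unsqueeze(0)
    x = F.interpolate(x, size=target_rows, mode="linear", align_corners=True)
    return x.squeeze(0).T

embed = torch.randn(145088, 4096)
print(interpolate_vocab(embed, 128256).shape)  # torch.Size([128256, 4096])
```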
Ollama Create
```
jaylee@lees-MacBook-Pro-2 % ./ollama create Joah -f ./gguf/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn/Modelfile_Q5_K_M
transferring model data
creating model layer
creating template layer
creating system layer
creating parameters layer
creating config layer
using already created layer sha256:4eadb53f0c70683aeab133c60d76b8ffc9f41ca5d49524d4b803c19e5ce7e3a5
using already created layer sha256:8ab4849b038cf0abc5b1c9b8ee1443dca6b93a045c2272180d985126eb40bf6f
writing layer sha256:ae2974c64ea5d6f488eeb1b10717a270f48fb3452432589db6f5e60472ae96ac
writing layer sha256:74ef6315972b317734fe01e7e1ad5b49fce1fa8ed3978cb66501ecb8c3a2e984
writing layer sha256:83882a5e957b8ce0d454f26bcedb2819413b49d6b967b28d60edb8ac61edfa58
writing manifest
success
```
MODELFILE
```
FROM joah-llama-3-maal-mlp-koen-8b-reborn-Q5_K_M.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
SYSTEM """
친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.
"""
PARAMETER num_keep 24
PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```
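Once created, the model can also be queried from Python; a minimal sketch using the `ollama` Python client (`pip install ollama`), with an illustrative prompt:

```python
# Minimal sketch using the ollama Python client; the prompt is illustrative.
import ollama

response = ollama.chat(
    model="Joah",
    messages=[{"role": "user", "content": "한국의 수도는 어디인가요?"}],  # "What is the capital of Korea?"
)
print(response["message"]["content"])
```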
## Citation
**Language Model**
```text
@misc{bllossom,
author = {ChangSu Choi, Yongbin Jeong, Seoyoon Park, InHo Won, HyeonSeok Lim, SangMin Kim, Yejee Kang, Chanhyuk Yoon, Jaewan Park, Yiseul Lee, HyeJin Lee, Younggyun Hahm, Hansaem Kim, KyungTae Lim},
title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean},
year = {2024},
journal = {LREC-COLING 2024},
paperLink = {\url{https://arxiv.org/pdf/2403.10882}},
}
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
``` |
GodsonNtungi/swahili_llm_v090 | GodsonNtungi | 2024-05-13T15:16:06Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T15:09:48Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** GodsonNtungi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
muzedotso/encoder-image-trocr | muzedotso | 2024-05-13T15:13:32Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-feature-extraction | 2024-05-13T15:09:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
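In the meantime, a hedged sketch based on this repo's `vit` and `image-feature-extraction` tags (an assumption, not an official example from the authors):

```python
# Hedged sketch; assumes the checkpoint is a standard ViT encoder, as the
# repo's `vit` / `image-feature-extraction` tags suggest.
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests

repo = "muzedotso/encoder-image-trocr"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # sample image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
features = model(**inputs).last_hidden_state  # (batch, num_patches + 1, hidden)
print(features.shape)
```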
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |