modelId (string, lengths 5–138) | author (string, lengths 2–42) | last_modified (date, 2020-02-15 11:33:14 – 2025-04-17 00:37:10) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 428 classes) | tags (sequence, lengths 1 – 4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 – 2025-04-17 00:33:35) | card (string, lengths 11 – 1.01M)
---|---|---|---|---|---|---|---|---|---|
PakanunNoa/rl_course_vizdoom_health_gathering_supreme | PakanunNoa | "2023-03-16T13:46:51Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-15T17:31:38Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.35 +/- 5.74
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r PakanunNoa/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
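For example, a hedged sketch of the upload command (the `--hf_repository` flag follows the Sample-Factory Hugging Face docs; substitute your own username/repo):
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --push_to_hub --hf_repository=<your_username>/rl_course_vizdoom_health_gathering_supreme
```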
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to set `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
RogerB/afro-xlmr-base-finetuned-kintweetsB | RogerB | "2023-07-06T10:59:26Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-07-06T09:53:42Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-base-finetuned-kintweetsB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-base-finetuned-kintweetsB
This model is a fine-tuned version of [Davlan/afro-xlmr-base](https://huggingface.co/Davlan/afro-xlmr-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1700
## Model description
More information needed
## Intended uses & limitations
More information needed
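As a minimal usage sketch (assuming the standard 🤗 Transformers fill-mask pipeline; the example sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for masked-token prediction.
fill_mask = pipeline("fill-mask", model="RogerB/afro-xlmr-base-finetuned-kintweetsB")

# XLM-R-based tokenizers use <mask> as the mask token.
print(fill_mask("The weather today is <mask>."))
```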
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4711 | 1.0 | 900 | 2.2431 |
| 2.3238 | 2.0 | 1800 | 2.2116 |
| 2.2725 | 3.0 | 2700 | 2.1590 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MaziyarPanahi/smol-7b-Mistral-7B-Instruct-v0.1 | MaziyarPanahi | "2024-01-17T15:21:11Z" | 17 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"rishiraj/smol-7b",
"generated_from_trainer",
"en",
"dataset:HuggingFaceH4/no_robots",
"base_model:openchat/openchat_3.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-01-17T15:16:08Z" | ---
license: apache-2.0
tags:
- Safetensors
- mistral
- text-generation-inference
- merge
- mistral
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- rishiraj/smol-7b
- transformers
- safetensors
- mistral
- text-generation
- generated_from_trainer
- en
- dataset:HuggingFaceH4/no_robots
- base_model:openchat/openchat_3.5
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
---
# smol-7b-Mistral-7B-Instruct-v0.1
smol-7b-Mistral-7B-Instruct-v0.1 is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [rishiraj/smol-7b](https://huggingface.co/rishiraj/smol-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.1
layer_range: [0, 32]
- model: rishiraj/smol-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first (e.g. `pip install -qU transformers accelerate`)
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/smol-7b-Mistral-7B-Instruct-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
sd-concepts-library/jojo-bizzare-adventure-manga-lineart | sd-concepts-library | "2022-09-21T15:03:39Z" | 0 | 1 | null | [
"license:mit",
"region:us"
] | null | "2022-09-21T15:03:33Z" | ---
license: mit
---
### JoJo's Bizarre Adventure manga lineart on Stable Diffusion
This is the `<JoJo_lineart>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:















|
ZidanSink/Kayess | ZidanSink | "2023-07-15T04:35:29Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-06-29T07:27:11Z" | ---
license: creativeml-openrail-m
---
|
rizvi-rahil786/bert-base-canadaWildfire | rizvi-rahil786 | "2024-03-13T12:11:46Z" | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-13T08:33:43Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-base-canadaWildfire
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-canadaWildfire
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2575
## Model description
More information needed
## Intended uses & limitations
More information needed
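As a minimal usage sketch (assuming the standard text-classification pipeline; the label names are whatever the training run configured):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text classifier.
classifier = pipeline("text-classification", model="rizvi-rahil786/bert-base-canadaWildfire")
print(classifier("Smoke from the wildfire has reached the city overnight."))
```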
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5586 | 1.0 | 3008 | 0.4758 |
| 0.2217 | 2.0 | 6016 | 0.2575 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
dctrain/sd-class-butterflies-32 | dctrain | "2023-03-31T16:07:10Z" | 30 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2023-03-31T16:06:29Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('dctrain/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
aminlouhichi/gemma-3-merged_8bit | aminlouhichi | "2025-03-25T16:11:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | image-text-to-text | "2025-03-25T16:03:44Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
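As a hedged starting point (the repo is tagged `gemma3`, `image-text-to-text`, and 8-bit `bitsandbytes`, so the standard Transformers loading path below is an assumption, not a documented recipe):
```python
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "aminlouhichi/gemma-3-merged_8bit"
processor = AutoProcessor.from_pretrained(model_id)
# The 8-bit bitsandbytes weights should load directly with device_map="auto".
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")
```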
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ranimeree/Me | ranimeree | "2024-12-20T10:10:24Z" | 6 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-12-20T09:28:04Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Rani
---
# Me
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Rani` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ranimeree/Me', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
PromptKing/GTA5_PROCESS_LEARNING_AI | PromptKing | "2023-04-12T13:22:44Z" | 0 | 5 | null | [
"code",
"graph-ml",
"license:gpl-3.0",
"region:us"
] | graph-ml | "2023-04-12T13:13:00Z" | ---
license: gpl-3.0
pipeline_tag: graph-ml
tags:
- code
---
```python
import contextlib
import os
from matplotlib import pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import requests
from torchvision import datasets, transforms
import psutil
import time
import subprocess
import onnxruntime as ort
import numexpr as ne
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("janpase97/codeformer-pretrained")
model = AutoModelForSeq2SeqLM.from_pretrained("janpase97/codeformer-pretrained")
def check_graphics_api(target_app_name):
graphics_api = None
with contextlib.suppress(subprocess.CalledProcessError):
output = subprocess.check_output(['tasklist', '/FI', f'imagename eq {target_app_name}', '/M']).decode('utf-8')
if "opengl32.dll" in output:
graphics_api = "OpenGL"
elif "d3d11.dll" in output:
graphics_api = "DirectX11"
elif "d3d12.dll" in output:
graphics_api = "DirectX12"
elif "vulkan" in output:
graphics_api = "VULKAN"
return graphics_api
# Get the target application's process object
def get_target_app_process(target_app_name):
return next(
(
process
for process in psutil.process_iter(['name'])
if process.info['name'] == target_app_name
),
None,
)
# Attach the AI to the application's process by PID
def attach_ai_to_app_pid(target_app_process):
if target_app_process is not None:
print(f"AI is attached to the application's process with PID: {target_app_process.pid}")
return True
else:
print("Could not find the target application's process to attach the AI.")
return False
# Check if the targeted application is running
def is_target_app_running(target_app_name):
return any(
process.info['name'] == target_app_name
for process in psutil.process_iter(['name'])
)
# Create the directory if it doesn't exist
directory = r"G:\Epic Games\GTAV\GTA5_AI\trained_models"
if not os.path.exists(directory):
os.makedirs(directory)
# Define the neural network model
class NanoCircuit(nn.Module):
def __init__(self):
super(NanoCircuit, self).__init__()
self.fc1 = nn.Linear(784, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = x.view(-1, 784) # Reshape the input from (batch_size, 28, 28) to (batch_size, 784)
x = torch.relu(self.fc1(x))
x = self.fc2(x)
return x
# Set the device to GPU if available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Load the MNIST dataset
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
# Initialize the model and move it to the GPU
model = NanoCircuit().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# Train the model on the GPU with a data cap
def train_with_data_cap(model, data_loader, criterion, optimizer, device, data_cap_gb):
data_processed = 0
data_cap_bytes = data_cap_gb * (1024 ** 3)
epoch = 0
while data_processed < data_cap_bytes:
running_loss = 0.0
for i, data in enumerate(data_loader, 0):
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
# Update the amount of data processed
data_processed += inputs.nelement() * inputs.element_size()
if data_processed >= data_cap_bytes:
break
optimizer.zero_grad()
outputs = model(inputs.view(-1, 28 * 28))
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
epoch += 1
print(f"Epoch {epoch}, Loss: {running_loss / (i + 1)}")
print(f"Data processed: {data_processed / (1024 ** 3):.2f} GB")
return model
# Save the updated model as a .onnx file
def save_model(model, filepath):
dummy_input = torch.randn(1, 1, 28, 28).to(device)
torch.onnx.export(model, dummy_input, filepath, input_names=['input'], output_names=['output'], opset_version=11)
# Train the model with a 50 GB data cap
trained_model = train_with_data_cap(model, train_loader, criterion, optimizer, device, data_cap_gb=50)
save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
target_app_name = "GTA5_TRAINED.exe"
save_interval_seconds = 5 * 60
application_was_running = False
while True:
if is_target_app_running(target_app_name):
print("Target application is running. Training and updating the model...")
trained_model = train_with_data_cap(model, train_loader, criterion, optimizer, device, data_cap_gb=.1)
save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
application_was_running = True
elif application_was_running:
print("Target application has exited. Saving the model...")
save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
print("Finished training and saved the model.")
break
else:
print("Target application is not running. Waiting to start training and updating the model...")
time.sleep(save_interval_seconds)
def train_with_data_cap(model, data_loader, criterion, optimizer, device, data_cap_gb):
data_processed = 0
data_cap_bytes = data_cap_gb * (1024 ** 3)
epoch = 0
while data_processed < data_cap_bytes:
running_loss = 0.0
for i, data in enumerate(data_loader, 0):
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
# Update the amount of data processed
data_processed += inputs.nelement() * inputs.element_size()
if data_processed >= data_cap_bytes:
break
            optimizer.zero_grad()
            # The original numexpr round-trip detached the tensors from the autograd
            # graph and referenced an undefined `loss_grad`, so no gradients could
            # flow; compute the cross-entropy loss directly in PyTorch instead.
            outputs = model(inputs.view(-1, 28 * 28))
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
running_loss += loss.item()
epoch += 1
print(f"Epoch {epoch}, Loss: {running_loss / (i + 1)}")
print(f"Data processed: {data_processed / (1024 ** 3):.2f} GB")
return model
# Train the model with a 10 GB data cap
trained_model = train_with_data_cap(model, train_loader, criterion, optimizer, device, data_cap_gb=10)
save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
target_app_name = "GTA5.exe"
save_interval_seconds = 5 * 60
application_was_running = False
while True:
if is_target_app_running(target_app_name):
print("Target application is running. Training and updating the model...")
        trained_model = train_with_data_cap(model, train_loader, criterion, optimizer, device, data_cap_gb=10)
save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
application_was_running = True
elif application_was_running:
print("Target application has exited. Saving the model...")
save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
print("Finished training and saved the model.")
break
else:
print("Target application is not running. Waiting to start training and updating the model...")
time.sleep(save_interval_seconds)
target_app_name = "GTA5.exe"
save_interval_seconds = 1 * 60
application_was_running = False
while True:
if is_target_app_running(target_app_name):
print("Target application is running. Training and updating the model...")
trained_model = train_with_data_cap(model, train_loader, criterion, optimizer, device, data_cap_gb=10)
save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
application_was_running = True
elif application_was_running:
print("Target application has exited. Saving the model...")
save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
print("Finished training and saved the model.")
break
else:
start_time = time.time()
print("Target application is not running. Waiting to detect the graphics API...")
while (time.time() - start_time) < 5:
if is_target_app_running(target_app_name):
if graphics_api := check_graphics_api(target_app_name):
print(f"Detected {graphics_api} in the target application.")
break
else:
print("Could not detect the graphics API used in the target application.")
time.sleep(1)
if not is_target_app_running(target_app_name):
print("Target application not detected in 5 seconds. Shutting down the AI.")
break
while True:
if is_target_app_running(target_app_name):
if graphics_api := check_graphics_api(target_app_name):
print(f"Detected {graphics_api} in the target application.")
else:
print("Could not detect the graphics API used in the target application.")
else:
start_time = time.time()
print("Target application is not running. Waiting to start training and updating the model...")
while (time.time() - start_time) < 5:
if is_target_app_running(target_app_name):
print(f"Detected {graphics_api} in the target application.")
break
time.sleep(1)
if not is_target_app_running(target_app_name):
print("Target application not detected in 5 seconds. Shutting down the AI.")
break
#Generate some random data for the boxplots
np.random.seed(0)
original_data = np.random.normal(0, 1, 100)
trained_data = np.random.normal(0.5, 1, 100)
while True:
if is_target_app_running(target_app_name):
print("Target application is running. Training and updating the model...")
trained_model = train_with_data_cap(model, train_loader, criterion, optimizer, device, data_cap_gb=10)
save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
# Create a box plot of the original and trained data
plt.figure()
plt.boxplot([original_data, trained_data], labels=["Original Data", "Trained Data"])
plt.title("Boxplot of Original and Trained Data")
plt.ylabel("Values")
plt.show()
# Save the box plot as an image
plt.savefig(r"G:\Epic Games\GTAV\GTA5_AI\Plot Box Comparison\boxplot_comparison.png")
application_was_running = True
elif application_was_running:
print("Target application has exited. Saving the model...")
save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
print("Finished training and saved the model.")
break
else:
start_time = time.time()
print("Target application is not running. Waiting to detect the graphics API...")
while (time.time() - start_time) < 5:
if is_target_app_running(target_app_name):
if graphics_api := check_graphics_api(target_app_name):
print(f"Detected {graphics_api} in the target application.")
break
else:
print("Could not detect the graphics API used in the target application.")
time.sleep(1)
if not is_target_app_running(target_app_name):
print("Target application not detected in 5 seconds. Shutting down the AI.")
            break
``` |
ClainBill/omnimaxe-gpt108 | ClainBill | "2023-04-08T05:44:43Z" | 142 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-08T01:36:31Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: omnimaxe-gpt108
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# omnimaxe-gpt108
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
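A minimal generation sketch (note the NaN validation losses reported below, so outputs may be degenerate):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ClainBill/omnimaxe-gpt108")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```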
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4012 | 2.97 | 3000 | nan |
| 3.2798 | 5.95 | 6000 | nan |
| 2.655 | 8.92 | 9000 | nan |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
laquythang/f175c962-778f-4bc0-8f79-ca170999efbb | laquythang | "2025-01-12T03:22:09Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"base_model:adapter:rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-12T02:30:37Z" | ---
library_name: peft
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f175c962-778f-4bc0-8f79-ca170999efbb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 385868bf2431c92c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/385868bf2431c92c_train_data.json
type:
field_input: context
field_instruction: question
field_output: final_decision
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/f175c962-778f-4bc0-8f79-ca170999efbb
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/385868bf2431c92c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5598c581-845d-4fb0-a7bb-ad00d799e5d3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5598c581-845d-4fb0-a7bb-ad00d799e5d3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f175c962-778f-4bc0-8f79-ca170999efbb
This model is a fine-tuned version of [rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28](https://huggingface.co/rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0434
## Model description
More information needed
## Intended uses & limitations
More information needed
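A hedged loading sketch (standard PEFT adapter usage; the base model is the one named in the config above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "rayonlabs/merged-merged-af6dd40b-32e1-43b1-adfd-8ce14d65d738-PubMedQA-138437bf-44bd-4b03-8801-d05451a9ff28"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the LoRA adapter trained in this run.
model = PeftModel.from_pretrained(base, "laquythang/f175c962-778f-4bc0-8f79-ca170999efbb")
```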
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0002 | 0.0080 | 200 | 0.0434 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lzyvegetable/stable-video-diffusion-img2vid | lzyvegetable | "2024-09-02T03:13:57Z" | 18 | 1 | diffusers | [
"diffusers",
"safetensors",
"image-to-video",
"license:other",
"diffusers:StableVideoDiffusionPipeline",
"region:us"
] | image-to-video | "2024-09-02T03:00:22Z" | ---
pipeline_tag: image-to-video
license: other
license_name: stable-video-diffusion-community
license_link: LICENSE.md
---
# Stable Video Diffusion Image-to-Video Model Card
<!-- Provide a quick summary of what the model is/does. -->

Stable Video Diffusion (SVD) Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it.
Please note: For commercial use of this model, please refer to https://stability.ai/license.
## Model Details
### Model Description
(SVD) Image-to-Video is a latent diffusion model trained to generate short video clips from an image conditioning.
This model was trained to generate 14 frames at resolution 576x1024 given a context frame of the same size.
We also finetune the widely used [f8-decoder](https://huggingface.co/docs/diffusers/api/models/autoencoderkl#loading-from-the-original-format) for temporal consistency.
For convenience, we additionally provide the model with the
standard frame-wise decoder [here](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid/blob/main/svd_image_decoder.safetensors).
- **Developed by:** Stability AI
- **Funded by:** Stability AI
- **Model type:** Generative image-to-video model
### Model Sources
For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models),
which implements the most popular diffusion frameworks (both training and inference).
- **Repository:** https://github.com/Stability-AI/generative-models
- **Paper:** https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets
## Evaluation

The chart above evaluates user preference for SVD-Image-to-Video over [GEN-2](https://research.runwayml.com/gen2) and [PikaLabs](https://www.pika.art/).
SVD-Image-to-Video is preferred by human voters in terms of video quality. For details on the user study, we refer to the [research paper](https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets)
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events,
and therefore using the model to generate such content is out-of-scope for the abilities of this model.
The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy).
## Limitations and Bias
### Limitations
- The generated videos are rather short (<= 4sec), and the model does not achieve perfect photorealism.
- The model may generate videos without motion, or very slow camera pans.
- The model cannot be controlled through text.
- The model cannot render legible text.
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Recommendations
The model is intended for research purposes only.
## How to Get Started with the Model
Check out https://github.com/Stability-AI/generative-models
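For a quick diffusers-based sketch (this repo ships a `StableVideoDiffusionPipeline`; the conditioning-image path is a placeholder):
```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "lzyvegetable/stable-video-diffusion-img2vid", torch_dtype=torch.float16
).to("cuda")

# The model expects a 1024x576 conditioning frame; path is a placeholder.
image = load_image("conditioning_frame.png")
frames = pipe(image, num_frames=14, decode_chunk_size=4).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```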
# Appendix:
All considered potential data sources were included for final training, with none held out as the proposed data filtering methods described in the SVD paper handle the quality control/filtering of the dataset. With regards to safety/NSFW filtering, sources considered were either deemed safe or filtered with the in-house NSFW filters. No explicit human labor is involved in training data preparation. However, human evaluation for model outputs and quality was extensively used to evaluate model quality and performance. The evaluations were performed with third-party contractor platforms (Amazon Sagemaker, Amazon Mechanical Turk, Prolific) with fluent English-speaking contractors from various countries, primarily from the USA, UK, and Canada. Each worker was paid $12/hr for the time invested in the evaluation.
No other third party was involved in the development of this model; the model was fully developed in-house at Stability AI. Training the SVD checkpoints required a total of approximately 200,000 A100 80GB hours. The majority of the training occurred on 48 * 8 A100s, while some stages took more/less than that. The resulting CO2 emission is ~19,000kg CO2 eq., and energy consumed is ~64000 kWh.
The released checkpoints (SVD/SVD-XT) are image-to-video models that generate short videos/animations closely following the given input image. Since the model relies on an existing supplied image, the potential risks of disclosing specific material or novel unsafe content are minimal. This was also evaluated by third-party independent red-teaming services, which agree with our conclusion to a high degree of confidence (>90% in various areas of safety red-teaming). The external evaluations were also performed for trustworthiness, leading to >95% confidence in real, trustworthy videos.
With the default settings at the time of release, SVD takes ~100s for generation, and SVD-XT takes ~180s on an A100 80GB card. Several optimizations to trade off quality / memory / speed can be done to perform faster inference or inference on lower VRAM cards. The information related to the model and its development process and usage protocols can be found in the GitHub repo, associated research paper, and HuggingFace model page/cards. The released model inference & demo code has image-level watermarking enabled by default, which can be used to detect the outputs. This is done via the imWatermark Python library.
The model can be used to generate videos from static initial images. However, we prohibit unlawful, obscene, or misleading uses of the model consistent with the terms of our license and Acceptable Use Policy. For the open-weights release, our training data filtering mitigations alleviate this risk to some extent. These restrictions are explicitly enforced on user-facing interfaces at stablevideo.com, where a warning is issued. We do not take any responsibility for third-party interfaces. Submitting initial images that bypass input filters to tease out offensive or inappropriate content listed above is also prohibited. Safety filtering checks at stablevideo.com run on model inputs and outputs independently. More details on our user-facing interfaces can be found here: https://www.stablevideo.com/faq. Beyond the Acceptable Use Policy and other mitigations and conditions described here, the model is not subject to additional model behavior interventions of the type described in the Foundation Model Transparency Index.
For stablevideo.com, we store preference data in the form of upvotes/downvotes on user-generated videos, and we have a pairwise ranker that runs while a user generates videos. This usage data is solely used for improving Stability AI’s future image/video models and services. No other third-party entities are given access to the usage data beyond Stability AI and maintainers of stablevideo.com. For usage statistics of SVD, we refer interested users to HuggingFace model download/usage statistics as a primary indicator. Third-party applications also have reported model usage statistics. We might also consider releasing aggregate usage statistics of stablevideo.com on reaching some milestones. |
0xtinuviel/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-deadly_yawning_emu | 0xtinuviel | "2025-04-15T08:07:38Z" | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am deadly yawning emu",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-13T01:31:43Z" | (model card unavailable: the dump captured a Hugging Face "429" rate-limit error page instead of the card) |
Nima-nlc/farzan_newtokv1 | Nima-nlc | "2023-11-28T13:04:34Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:NEU-HAI/Llama-2-7b-alpaca-cleaned",
"base_model:finetune:NEU-HAI/Llama-2-7b-alpaca-cleaned",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2023-11-28T13:04:03Z" | ---
license: cc-by-nc-4.0
base_model: NEU-HAI/Llama-2-7b-alpaca-cleaned
tags:
- generated_from_trainer
model-index:
- name: farzan_newtokv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# farzan_newtokv1
This model is a fine-tuned version of [NEU-HAI/Llama-2-7b-alpaca-cleaned](https://huggingface.co/NEU-HAI/Llama-2-7b-alpaca-cleaned) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
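A hedged loading sketch, assuming the checkpoint follows the standard Transformers causal-LM layout of its Llama-2 base:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nima-nlc/farzan_newtokv1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```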
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Owhslp/nous_researcher_tuning_2_8 | Owhslp | "2024-03-08T07:59:08Z" | 89 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-08T07:37:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
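A minimal sketch, assuming the standard causal-LM loading path implied by the repo's `gemma`/`text-generation` tags:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Owhslp/nous_researcher_tuning_2_8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```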
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AdrianPerez3/covnets_ExamenFinal_Adrian | AdrianPerez3 | "2025-04-15T18:29:40Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-15T17:55:59Z" | (model card unavailable: the dump captured a Hugging Face "429" rate-limit error page instead of the card) |
JacksonBrune/4260ae2d-b2a3-4350-9840-4721d76012dd | JacksonBrune | "2025-01-24T08:39:04Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | "2025-01-24T08:37:40Z" | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4260ae2d-b2a3-4350-9840-4721d76012dd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-Instruct-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c01979ddb4da0832_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c01979ddb4da0832_train_data.json
type:
field_input: multi_turn_queries
field_instruction: actor_name
field_output: plain_query
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/4260ae2d-b2a3-4350-9840-4721d76012dd
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c01979ddb4da0832_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9be2fa80-5334-44a7-9635-f45f0f7880d5
wandb_project: birthdya-sn56-18-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9be2fa80-5334-44a7-9635-f45f0f7880d5
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4260ae2d-b2a3-4350-9840-4721d76012dd
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
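
As a starting point, the adapter can be loaded on top of the base model with PEFT. The sketch below is a minimal, hypothetical example assuming the standard `peft`/`transformers` APIs (the prompt is a placeholder, and note that the evaluation loss reported below is `nan`, so outputs may not be meaningful):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "JacksonBrune/4260ae2d-b2a3-4350-9840-4721d76012dd"

# Loads codellama/CodeLlama-7b-Instruct-hf and applies this LoRA adapter on top
model = AutoPeftModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("Write a function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```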
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0057 | 1 | nan |
| 0.0 | 0.0171 | 3 | nan |
| 0.0 | 0.0341 | 6 | nan |
| 0.0 | 0.0512 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dragiychev/dqn-SpaceInvadersNoFrameskip-v4 | dragiychev | "2025-03-11T14:32:32Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-03-11T14:24:13Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 554.00 +/- 157.02
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dragiychev -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dragiychev -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dragiychev
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
abhishekkuber/step1_encoder_en_anchor_seq_cf | abhishekkuber | "2025-02-26T15:44:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-26T15:44:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pm390/q-FrozenLake-v1-4x4-no_slippery | pm390 | "2022-05-20T16:08:40Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-05-20T16:08:34Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-no_slippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helpers defined in the Hugging Face
# Deep RL course notebook this card follows; they are not part of a published package.
model = load_from_hub(repo_id="pm390/q-FrozenLake-v1-4x4-no_slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
morit/arabic_xlm_xnli | morit | "2023-01-24T08:44:50Z" | 484 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"ar",
"dataset:xnli",
"arxiv:1911.02116",
"arxiv:2104.12250",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | "2023-01-06T12:25:54Z" | ---
license: mit
datasets:
- xnli
language:
- ar
metrics:
- accuracy
pipeline_tag: zero-shot-classification
---
# XLM-ROBERTA-BASE-XNLI-AR
## Model description
This model takes the XLM-RoBERTa-base model, which has been further pre-trained on a large corpus of multilingual Twitter data.
It was developed following a strategy similar to the one introduced in the [Tweet Eval](https://github.com/cardiffnlp/tweeteval) framework.
The model is then fine-tuned on the Arabic part of the XNLI training dataset.
## Intended Usage
This model was developed for zero-shot text classification, with a focus on hate speech detection. It targets Arabic, as it was fine-tuned on data in that language. Since the base model was pre-trained on 100 different languages, it has also shown some effectiveness in other languages; please refer to the list of languages in the [XLM Roberta paper](https://arxiv.org/abs/1911.02116).
### Usage with Zero-Shot Classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="morit/arabic_xlm_xnli")
```
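
The classifier can then be called with a premise and a set of candidate labels. A minimal sketch follows; the Arabic example text, labels, and hypothesis template are illustrative placeholders, not from the model authors:

```python
sequence = "أنا سعيد جدا اليوم"        # placeholder input: "I am very happy today"
candidate_labels = ["إيجابي", "سلبي"]  # placeholder labels: "positive", "negative"

result = classifier(sequence, candidate_labels,
                    hypothesis_template="هذا المثال هو {}")  # "This example is {}"
print(result["labels"], result["scores"])
```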
## Training
This model was pre-trained on a set of 100 languages, followed by further training on 198M multilingual tweets as described in the original [paper](https://arxiv.org/abs/2104.12250). It was then fine-tuned on the Arabic training set of the XNLI dataset, a machine-translated version of the MNLI dataset. Training ran for 5 epochs over the XNLI train set, with evaluation on the XNLI eval set at the end of every epoch; the checkpoint with the highest eval accuracy was selected.

- learning rate: 2e-5
- batch size: 32
- max sequence: length 128
Training used a single NVIDIA GeForce RTX 3090 GPU, resulting in a training time of 1 h 47 min.
## Evaluation
The best-performing model was evaluated on the XNLI test set to obtain a comparable result:
```
predict_accuracy = 74.19 %
``` |
magnifi/Phi3_intent_v31_2_epoch_10_lr_0.002 | magnifi | "2024-08-21T21:57:05Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-21T21:54:53Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/Kukedlc_-_LLaMa-3-8b-Spanish-slerp-8bits | RichardErkhov | "2025-03-27T06:27:37Z" | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-27T06:19:47Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
LLaMa-3-8b-Spanish-slerp - bnb 8bits
- Model creator: https://huggingface.co/Kukedlc/
- Original model: https://huggingface.co/Kukedlc/LLaMa-3-8b-Spanish-slerp/
Original model description:
---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/LLaMa-3-8b-en-es-v1
- Kukedlc/LLaMa-3-8b-Spanish-RAG-v1
base_model:
- Kukedlc/LLaMa-3-8b-en-es-v1
- Kukedlc/LLaMa-3-8b-Spanish-RAG-v1
---
# LLaMa-3-8b-Spanish-slerp
LLaMa-3-8b-Spanish-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/LLaMa-3-8b-en-es-v1](https://huggingface.co/Kukedlc/LLaMa-3-8b-en-es-v1)
* [Kukedlc/LLaMa-3-8b-Spanish-RAG-v1](https://huggingface.co/Kukedlc/LLaMa-3-8b-Spanish-RAG-v1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Kukedlc/LLaMa-3-8b-en-es-v1
layer_range: [0, 32]
- model: Kukedlc/LLaMa-3-8b-Spanish-RAG-v1
layer_range: [0, 32]
merge_method: slerp
base_model: Kukedlc/LLaMa-3-8b-Spanish-RAG-v1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/LLaMa-3-8b-Spanish-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
LarryAIDraw/fern-10 | LarryAIDraw | "2023-11-19T06:12:19Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-11-19T06:10:20Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/205098/fern-sousou-no-frieren-lora |
vania2911/esp-to-lsm-model | vania2911 | "2025-02-23T14:00:05Z" | 4 | 0 | null | [
"pytorch",
"tensorboard",
"marian",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | "2024-10-22T14:39:37Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
- rouge
model-index:
- name: esp-to-lsm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esp-to-lsm-model
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-es](https://huggingface.co/Helsinki-NLP/opus-mt-es-es) on a Spanish to Mexican Sign Language (LSM) glosses dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5224
- Bleu: 74.2913
- Rouge: {'rouge1': 0.9064168152109326, 'rouge2': 0.8341349206349207, 'rougeL': 0.9018725808505224, 'rougeLsum': 0.9021191961633139}
- Ter Score: 14.6840
## Model description
More information needed
## Intended uses & limitations
More information needed
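
Since the card provides no usage snippet, here is a minimal inference sketch, assuming the standard MarianMT API from `transformers` (the Spanish input sentence is a hypothetical placeholder):

```python
from transformers import MarianMTModel, MarianTokenizer

repo = "vania2911/esp-to-lsm-model"
tokenizer = MarianTokenizer.from_pretrained(repo)
model = MarianMTModel.from_pretrained(repo)

# Placeholder Spanish sentence; the model is trained to output LSM glosses
inputs = tokenizer(["el niño come una manzana"], return_tensors="pt", padding=True)
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```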
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | Ter Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------------------------------------------------------------------------------------------------------------------------:|:---------:|
| 2.5487 | 1.0 | 75 | 1.8275 | 33.3311 | {'rouge1': 0.7125697572837667, 'rouge2': 0.5131076015487782, 'rougeL': 0.6740261156112557, 'rougeLsum': 0.6730658531068747} | 48.9777 |
| 1.417 | 2.0 | 150 | 1.2236 | 58.3622 | {'rouge1': 0.8070335129553401, 'rouge2': 0.6696746733658498, 'rougeL': 0.7904133765844297, 'rougeLsum': 0.7895317227205776} | 29.4610 |
| 0.9666 | 3.0 | 225 | 0.9751 | 68.5295 | {'rouge1': 0.8502113964466904, 'rouge2': 0.7350681448181451, 'rougeL': 0.8411302357772945, 'rougeLsum': 0.8410883914560386} | 21.4684 |
| 0.8217 | 4.0 | 300 | 0.8450 | 44.5871 | {'rouge1': 0.8678535408519932, 'rouge2': 0.7697804232804234, 'rougeL': 0.8597202956428964, 'rougeLsum': 0.8600501068132649} | 30.2974 |
| 0.7691 | 5.0 | 375 | 0.7586 | 45.8903 | {'rouge1': 0.8777863634187164, 'rouge2': 0.7896996151996154, 'rougeL': 0.8714760522701701, 'rougeLsum': 0.8710761150614097} | 28.8104 |
| 0.5557 | 6.0 | 450 | 0.6913 | 60.0358 | {'rouge1': 0.8811041790453555, 'rouge2': 0.8024246031746034, 'rougeL': 0.8775582647200295, 'rougeLsum': 0.8773233525733528} | 21.2825 |
| 0.5462 | 7.0 | 525 | 0.6471 | 59.0748 | {'rouge1': 0.8826582635813243, 'rouge2': 0.8028015873015873, 'rougeL': 0.8787765851180174, 'rougeLsum': 0.8785213589101055} | 21.8401 |
| 0.4446 | 8.0 | 600 | 0.6160 | 40.9211 | {'rouge1': 0.8939967405639866, 'rouge2': 0.8149416786916788, 'rougeL': 0.8905721678257397, 'rougeLsum': 0.890523253679749} | 30.8550 |
| 0.3959 | 9.0 | 675 | 0.5945 | 42.2774 | {'rouge1': 0.894224230018348, 'rouge2': 0.8151240981240981, 'rougeL': 0.8909062049062051, 'rougeLsum': 0.8915671958760194} | 30.1115 |
| 0.3249 | 10.0 | 750 | 0.5759 | 70.2959 | {'rouge1': 0.9012842030237667, 'rouge2': 0.8230316257816259, 'rougeL': 0.8965130854983795, 'rougeLsum': 0.8970404413388284} | 16.7286 |
| 0.3459 | 11.0 | 825 | 0.5514 | 43.2915 | {'rouge1': 0.90225049025049, 'rouge2': 0.8307122122122121, 'rougeL': 0.8987950948833301, 'rougeLsum': 0.8987281601840429} | 28.9033 |
| 0.3153 | 12.0 | 900 | 0.5405 | 44.9816 | {'rouge1': 0.9047931538206682, 'rouge2': 0.8333689107827039, 'rougeL': 0.9006491566975439, 'rougeLsum': 0.9009697546988817} | 27.5093 |
| 0.2851 | 13.0 | 975 | 0.5381 | 72.0806 | {'rouge1': 0.9056758296170062, 'rouge2': 0.8312087542087543, 'rougeL': 0.9011036006477184, 'rougeLsum': 0.9014392073068547} | 15.7063 |
| 0.2526 | 14.0 | 1050 | 0.5349 | 75.0117 | {'rouge1': 0.90289756104462, 'rouge2': 0.8248306878306879, 'rougeL': 0.898266601590131, 'rougeLsum': 0.8983403573550632} | 14.9628 |
| 0.2209 | 15.0 | 1125 | 0.5281 | 74.3845 | {'rouge1': 0.9036245755878107, 'rouge2': 0.8278015873015876, 'rougeL': 0.8997443447075799, 'rougeLsum': 0.8999785990153637} | 14.7770 |
| 0.2668 | 16.0 | 1200 | 0.5265 | 74.2756 | {'rouge1': 0.9030526660159015, 'rouge2': 0.8251984126984128, 'rougeL': 0.8979846999405824, 'rougeLsum': 0.8985619854002207} | 14.8699 |
| 0.2314 | 17.0 | 1275 | 0.5258 | 74.5417 | {'rouge1': 0.9059293459808169, 'rouge2': 0.8316084656084658, 'rougeL': 0.9013539031774327, 'rougeLsum': 0.9015474139150612} | 14.5911 |
| 0.2069 | 18.0 | 1350 | 0.5225 | 74.5623 | {'rouge1': 0.9067485180941064, 'rouge2': 0.8356613756613757, 'rougeL': 0.9022319058936705, 'rougeLsum': 0.9027956773618538} | 14.6840 |
| 0.187 | 19.0 | 1425 | 0.5225 | 74.2989 | {'rouge1': 0.9060216096539625, 'rouge2': 0.832691798941799, 'rougeL': 0.9016076450782335, 'rougeLsum': 0.9017442739722153} | 14.7770 |
| 0.2413 | 20.0 | 1500 | 0.5224 | 74.2913 | {'rouge1': 0.9064168152109326, 'rouge2': 0.8341349206349207, 'rougeL': 0.9018725808505224, 'rougeLsum': 0.9021191961633139} | 14.6840 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.13.3
|
zhangtaolab/plant-dnabert-6mer-H3K27me3 | zhangtaolab | "2024-10-14T03:41:18Z" | 5 | 0 | null | [
"safetensors",
"bert",
"DNA",
"biology",
"genomics",
"dataset:zhangtaolab/plant-multi-species-histone-modifications",
"base_model:zhangtaolab/plant-dnabert-6mer",
"base_model:finetune:zhangtaolab/plant-dnabert-6mer",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2024-10-06T03:10:31Z" | ---
license: cc-by-nc-sa-4.0
widget:
- text: >-
AATTTTAACTAGCCCCTTCGGCCCTTCCCATCGACATATATACGAAGAGACAAAACAACATATCAACAGAATGTCAGAATTACAGACACCACGCTTGACATGTCTGTGACGCAGACCATAGAGGATGTGTCATGTTCATGTGTCCAATGGGGGCAATGGTATTGCAAGGGCACAAAATACTGCTAACATGTTTCGTAGCGCTATAGGTTACAGAGGTCATGACGTTAT
tags:
- DNA
- biology
- genomics
datasets:
- zhangtaolab/plant-multi-species-histone-modifications
metrics:
- accuracy
base_model:
- zhangtaolab/plant-dnabert-6mer
---
# Plant foundation DNA large language models
The plant DNA large language models (LLMs) contain a series of foundation models based on different model architectures, which are pre-trained on various plant reference genomes.
All the models have a comparable size, between 90 MB and 150 MB; a BPE tokenizer is used for tokenization, with a vocabulary of 8,000 tokens.
**Developed by:** zhangtaolab
### Model Sources
- **Repository:** [Plant DNA LLMs](https://github.com/zhangtaolab/plant_DNA_LLMs)
- **Manuscript:** [Versatile applications of foundation DNA language models in plant genomes]()
### Architecture
The model is trained on top of the zhihan1996/DNABERT-2-117M model with a modified tokenizer.
This model is fine-tuned for predicting H3K27me3 histone modification.
### How to use
Install the runtime library first:
```bash
pip install transformers
```
Here is a simple code for inference:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = 'plant-dnabert-6mer-H3K27me3'
# load model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(f'zhangtaolab/{model_name}', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(f'zhangtaolab/{model_name}', trust_remote_code=True)
# inference
sequences = ['ATCTTTTAAACCCTACTTTTCTTCACATTATTCATAATAGGCACTCTCAACTCATGGTTTAGTGGAGTTACACAATACCCAAGGTTGGGTCAAGGCCAAGACGTGATTGGTTTCTTCATTGGGCACCCTCAACTTCTGATTTTGTCCTAAGTTGAGGTAAACATGTGCAAATCTTGAATCTCCAACACCACCCGACGGAAAACTCTTCCTTTTGCCTAACGCTTTTGCTTAGCGATTGTATATGT',
'GCATAATCGAGCTTGATGCCCATGTTTTTGCACCAGAGTTTTACCTCGTCGGCCGTAAAGTTCGTGCCGTTATCAGTGATGATGTTGTGGGGGACGCCGTAACAGTGTACAACCCCGGATATAAAGTCTATCACCGGTCCAGATTCGGCCGTCTCAACAGGCTTGGCTTCTATCCATTTGGT']
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer,
trust_remote_code=True, top_k=None)
results = pipe(sequences)
print(results)
```
### Training data
We use BertForSequenceClassification to fine-tune the model.
The detailed training procedure can be found in our manuscript.
#### Hardware
The model was trained on an NVIDIA GTX 1080 Ti GPU (11 GB). |
BernardOng/Banking-FT-Bong-v1 | BernardOng | "2023-07-10T21:29:24Z" | 13 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-30T02:19:43Z" | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [h2oai/h2ogpt-oig-oasst1-512-6.9b](https://huggingface.co/h2oai/h2ogpt-oig-oasst1-512-6.9b)
- Caution: This is only an experimental model used mainly for research and testing purposes. It is not meant for production use.
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="BernardOng/Banking-FT-Bong-v1",
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(8.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"BernardOng/Banking-FT-Bong-v1",
use_fast=True,
padding_side="left"
)
model = AutoModelForCausalLM.from_pretrained(
"BernardOng/Banking-FT-Bong-v1",
torch_dtype=torch.float16,
device_map={"": "cuda:0"}
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(8.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "BernardOng/Banking-FT-Bong-v1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(8.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50432, 4096)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=4096, out_features=12288, bias=True)
(dense): Linear(in_features=4096, out_features=4096, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=4096, out_features=16384, bias=True)
(dense_4h_to_h): Linear(in_features=16384, out_features=4096, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=4096, out_features=50432, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=BernardOng/Banking-FT-Bong-v1 --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
MayBashendy/ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k4_task5_organization | MayBashendy | "2025-01-20T21:06:27Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-20T11:59:56Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k4_task5_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k4_task5_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6411
- Qwk: 0.5891
- Mse: 0.6411
- Rmse: 0.8007
## Model description
More information needed
## Intended uses & limitations
More information needed
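
A minimal inference sketch, assuming the standard `transformers` sequence-classification API and a single-logit regression head (an assumption suggested by the Qwk/MSE/RMSE metrics above); the Arabic input is a placeholder:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "MayBashendy/ArabicNewSplits7_usingALLEssays_FineTuningAraBERT_run1_AugV5_k4_task5_organization"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("نص المقال هنا", return_tensors="pt")  # placeholder essay text
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # read the single logit as the organization score
print(score)
```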
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.1333 | 2 | 4.0760 | -0.0323 | 4.0760 | 2.0189 |
| No log | 0.2667 | 4 | 2.2768 | 0.0705 | 2.2768 | 1.5089 |
| No log | 0.4 | 6 | 1.6016 | -0.0180 | 1.6016 | 1.2655 |
| No log | 0.5333 | 8 | 1.6261 | 0.0532 | 1.6261 | 1.2752 |
| No log | 0.6667 | 10 | 1.9060 | 0.0535 | 1.9060 | 1.3806 |
| No log | 0.8 | 12 | 2.1386 | 0.1065 | 2.1386 | 1.4624 |
| No log | 0.9333 | 14 | 1.5975 | 0.0408 | 1.5975 | 1.2639 |
| No log | 1.0667 | 16 | 1.3989 | -0.0032 | 1.3989 | 1.1827 |
| No log | 1.2 | 18 | 1.1537 | 0.1028 | 1.1537 | 1.0741 |
| No log | 1.3333 | 20 | 1.0718 | 0.1725 | 1.0718 | 1.0353 |
| No log | 1.4667 | 22 | 1.1341 | 0.0523 | 1.1341 | 1.0650 |
| No log | 1.6 | 24 | 1.3075 | 0.0883 | 1.3075 | 1.1435 |
| No log | 1.7333 | 26 | 1.5539 | 0.1339 | 1.5539 | 1.2465 |
| No log | 1.8667 | 28 | 1.6367 | 0.1843 | 1.6367 | 1.2793 |
| No log | 2.0 | 30 | 1.3398 | 0.1612 | 1.3398 | 1.1575 |
| No log | 2.1333 | 32 | 1.1341 | 0.2547 | 1.1341 | 1.0649 |
| No log | 2.2667 | 34 | 1.3812 | 0.2962 | 1.3812 | 1.1752 |
| No log | 2.4 | 36 | 1.4182 | 0.3323 | 1.4182 | 1.1909 |
| No log | 2.5333 | 38 | 0.9954 | 0.3697 | 0.9954 | 0.9977 |
| No log | 2.6667 | 40 | 0.8495 | 0.2967 | 0.8495 | 0.9217 |
| No log | 2.8 | 42 | 0.9741 | 0.3027 | 0.9741 | 0.9870 |
| No log | 2.9333 | 44 | 0.8568 | 0.3647 | 0.8568 | 0.9257 |
| No log | 3.0667 | 46 | 1.1190 | 0.3478 | 1.1190 | 1.0578 |
| No log | 3.2 | 48 | 1.4883 | 0.3096 | 1.4883 | 1.2200 |
| No log | 3.3333 | 50 | 1.3317 | 0.3617 | 1.3317 | 1.1540 |
| No log | 3.4667 | 52 | 1.0212 | 0.3203 | 1.0212 | 1.0106 |
| No log | 3.6 | 54 | 0.8089 | 0.4936 | 0.8089 | 0.8994 |
| No log | 3.7333 | 56 | 0.8214 | 0.4159 | 0.8214 | 0.9063 |
| No log | 3.8667 | 58 | 0.8306 | 0.4832 | 0.8306 | 0.9114 |
| No log | 4.0 | 60 | 0.7586 | 0.4922 | 0.7586 | 0.8710 |
| No log | 4.1333 | 62 | 1.0410 | 0.5272 | 1.0410 | 1.0203 |
| No log | 4.2667 | 64 | 1.2692 | 0.3929 | 1.2692 | 1.1266 |
| No log | 4.4 | 66 | 1.0733 | 0.4994 | 1.0733 | 1.0360 |
| No log | 4.5333 | 68 | 0.8097 | 0.5336 | 0.8097 | 0.8998 |
| No log | 4.6667 | 70 | 0.7636 | 0.4118 | 0.7636 | 0.8738 |
| No log | 4.8 | 72 | 0.8359 | 0.4792 | 0.8359 | 0.9143 |
| No log | 4.9333 | 74 | 0.9638 | 0.5272 | 0.9638 | 0.9818 |
| No log | 5.0667 | 76 | 1.1012 | 0.4107 | 1.1012 | 1.0494 |
| No log | 5.2 | 78 | 1.2320 | 0.3902 | 1.2320 | 1.1100 |
| No log | 5.3333 | 80 | 0.9282 | 0.4738 | 0.9282 | 0.9634 |
| No log | 5.4667 | 82 | 0.7437 | 0.5274 | 0.7437 | 0.8624 |
| No log | 5.6 | 84 | 0.7529 | 0.5345 | 0.7529 | 0.8677 |
| No log | 5.7333 | 86 | 0.8054 | 0.4710 | 0.8054 | 0.8975 |
| No log | 5.8667 | 88 | 0.7585 | 0.4754 | 0.7585 | 0.8709 |
| No log | 6.0 | 90 | 0.7574 | 0.5315 | 0.7574 | 0.8703 |
| No log | 6.1333 | 92 | 0.8130 | 0.4998 | 0.8130 | 0.9016 |
| No log | 6.2667 | 94 | 0.8073 | 0.4861 | 0.8073 | 0.8985 |
| No log | 6.4 | 96 | 0.8246 | 0.5390 | 0.8246 | 0.9081 |
| No log | 6.5333 | 98 | 0.8478 | 0.5363 | 0.8478 | 0.9208 |
| No log | 6.6667 | 100 | 0.8655 | 0.5363 | 0.8655 | 0.9303 |
| No log | 6.8 | 102 | 0.8201 | 0.5401 | 0.8201 | 0.9056 |
| No log | 6.9333 | 104 | 0.7944 | 0.5545 | 0.7944 | 0.8913 |
| No log | 7.0667 | 106 | 0.8176 | 0.5627 | 0.8176 | 0.9042 |
| No log | 7.2 | 108 | 0.8133 | 0.5131 | 0.8133 | 0.9018 |
| No log | 7.3333 | 110 | 0.8508 | 0.5257 | 0.8508 | 0.9224 |
| No log | 7.4667 | 112 | 0.8781 | 0.4502 | 0.8781 | 0.9371 |
| No log | 7.6 | 114 | 0.9161 | 0.5033 | 0.9161 | 0.9572 |
| No log | 7.7333 | 116 | 0.9983 | 0.5841 | 0.9983 | 0.9991 |
| No log | 7.8667 | 118 | 1.0011 | 0.4602 | 1.0011 | 1.0006 |
| No log | 8.0 | 120 | 1.0511 | 0.4629 | 1.0511 | 1.0252 |
| No log | 8.1333 | 122 | 1.0276 | 0.4992 | 1.0276 | 1.0137 |
| No log | 8.2667 | 124 | 1.0063 | 0.4848 | 1.0063 | 1.0032 |
| No log | 8.4 | 126 | 0.9935 | 0.4584 | 0.9935 | 0.9968 |
| No log | 8.5333 | 128 | 0.9570 | 0.4383 | 0.9570 | 0.9782 |
| No log | 8.6667 | 130 | 0.8919 | 0.5025 | 0.8919 | 0.9444 |
| No log | 8.8 | 132 | 0.8838 | 0.5059 | 0.8838 | 0.9401 |
| No log | 8.9333 | 134 | 0.8786 | 0.5246 | 0.8786 | 0.9374 |
| No log | 9.0667 | 136 | 0.8513 | 0.5451 | 0.8513 | 0.9226 |
| No log | 9.2 | 138 | 0.8438 | 0.5508 | 0.8438 | 0.9186 |
| No log | 9.3333 | 140 | 0.8475 | 0.5675 | 0.8475 | 0.9206 |
| No log | 9.4667 | 142 | 0.8834 | 0.6082 | 0.8834 | 0.9399 |
| No log | 9.6 | 144 | 0.8686 | 0.5679 | 0.8686 | 0.9320 |
| No log | 9.7333 | 146 | 0.8132 | 0.5919 | 0.8132 | 0.9018 |
| No log | 9.8667 | 148 | 0.7898 | 0.5224 | 0.7898 | 0.8887 |
| No log | 10.0 | 150 | 0.7920 | 0.5374 | 0.7920 | 0.8900 |
| No log | 10.1333 | 152 | 0.7793 | 0.5263 | 0.7793 | 0.8828 |
| No log | 10.2667 | 154 | 0.8016 | 0.5275 | 0.8016 | 0.8953 |
| No log | 10.4 | 156 | 0.9663 | 0.5243 | 0.9663 | 0.9830 |
| No log | 10.5333 | 158 | 0.9622 | 0.5243 | 0.9622 | 0.9809 |
| No log | 10.6667 | 160 | 0.8372 | 0.5298 | 0.8372 | 0.9150 |
| No log | 10.8 | 162 | 0.7793 | 0.5600 | 0.7793 | 0.8828 |
| No log | 10.9333 | 164 | 0.7873 | 0.5796 | 0.7873 | 0.8873 |
| No log | 11.0667 | 166 | 0.7606 | 0.5742 | 0.7606 | 0.8721 |
| No log | 11.2 | 168 | 0.7194 | 0.6001 | 0.7194 | 0.8482 |
| No log | 11.3333 | 170 | 0.7043 | 0.5638 | 0.7043 | 0.8392 |
| No log | 11.4667 | 172 | 0.6960 | 0.5771 | 0.6960 | 0.8342 |
| No log | 11.6 | 174 | 0.7220 | 0.5427 | 0.7220 | 0.8497 |
| No log | 11.7333 | 176 | 0.7039 | 0.5548 | 0.7039 | 0.8390 |
| No log | 11.8667 | 178 | 0.6669 | 0.6230 | 0.6669 | 0.8167 |
| No log | 12.0 | 180 | 0.6557 | 0.6311 | 0.6557 | 0.8097 |
| No log | 12.1333 | 182 | 0.6600 | 0.6374 | 0.6600 | 0.8124 |
| No log | 12.2667 | 184 | 0.7073 | 0.5404 | 0.7073 | 0.8410 |
| No log | 12.4 | 186 | 0.6639 | 0.6246 | 0.6639 | 0.8148 |
| No log | 12.5333 | 188 | 0.7037 | 0.6511 | 0.7037 | 0.8389 |
| No log | 12.6667 | 190 | 0.7789 | 0.6459 | 0.7789 | 0.8825 |
| No log | 12.8 | 192 | 0.7010 | 0.5832 | 0.7010 | 0.8373 |
| No log | 12.9333 | 194 | 0.6687 | 0.5736 | 0.6687 | 0.8177 |
| No log | 13.0667 | 196 | 0.8852 | 0.5182 | 0.8852 | 0.9409 |
| No log | 13.2 | 198 | 0.9364 | 0.5295 | 0.9364 | 0.9677 |
| No log | 13.3333 | 200 | 0.7974 | 0.4922 | 0.7974 | 0.8930 |
| No log | 13.4667 | 202 | 0.6790 | 0.5735 | 0.6790 | 0.8240 |
| No log | 13.6 | 204 | 0.7517 | 0.5498 | 0.7517 | 0.8670 |
| No log | 13.7333 | 206 | 0.8241 | 0.5560 | 0.8241 | 0.9078 |
| No log | 13.8667 | 208 | 0.7425 | 0.6293 | 0.7425 | 0.8617 |
| No log | 14.0 | 210 | 0.7801 | 0.5823 | 0.7801 | 0.8833 |
| No log | 14.1333 | 212 | 0.8842 | 0.5384 | 0.8842 | 0.9403 |
| No log | 14.2667 | 214 | 0.8284 | 0.5384 | 0.8284 | 0.9102 |
| No log | 14.4 | 216 | 0.7341 | 0.5654 | 0.7341 | 0.8568 |
| No log | 14.5333 | 218 | 0.6761 | 0.5921 | 0.6761 | 0.8223 |
| No log | 14.6667 | 220 | 0.6791 | 0.6055 | 0.6791 | 0.8241 |
| No log | 14.8 | 222 | 0.6986 | 0.5949 | 0.6986 | 0.8358 |
| No log | 14.9333 | 224 | 0.7376 | 0.5459 | 0.7376 | 0.8588 |
| No log | 15.0667 | 226 | 0.7264 | 0.5774 | 0.7264 | 0.8523 |
| No log | 15.2 | 228 | 0.6867 | 0.6154 | 0.6867 | 0.8287 |
| No log | 15.3333 | 230 | 0.6838 | 0.6076 | 0.6838 | 0.8269 |
| No log | 15.4667 | 232 | 0.6834 | 0.6498 | 0.6834 | 0.8267 |
| No log | 15.6 | 234 | 0.6943 | 0.6187 | 0.6943 | 0.8332 |
| No log | 15.7333 | 236 | 0.7636 | 0.5279 | 0.7636 | 0.8738 |
| No log | 15.8667 | 238 | 0.8158 | 0.4836 | 0.8158 | 0.9032 |
| No log | 16.0 | 240 | 0.7740 | 0.5489 | 0.7740 | 0.8797 |
| No log | 16.1333 | 242 | 0.7419 | 0.5684 | 0.7419 | 0.8613 |
| No log | 16.2667 | 244 | 0.8096 | 0.5207 | 0.8096 | 0.8998 |
| No log | 16.4 | 246 | 0.7947 | 0.5483 | 0.7947 | 0.8914 |
| No log | 16.5333 | 248 | 0.7560 | 0.5264 | 0.7560 | 0.8695 |
| No log | 16.6667 | 250 | 0.7723 | 0.5178 | 0.7723 | 0.8788 |
| No log | 16.8 | 252 | 0.9080 | 0.4722 | 0.9080 | 0.9529 |
| No log | 16.9333 | 254 | 0.9118 | 0.4492 | 0.9118 | 0.9549 |
| No log | 17.0667 | 256 | 0.8163 | 0.5173 | 0.8163 | 0.9035 |
| No log | 17.2 | 258 | 0.7230 | 0.5585 | 0.7230 | 0.8503 |
| No log | 17.3333 | 260 | 0.7011 | 0.5845 | 0.7011 | 0.8373 |
| No log | 17.4667 | 262 | 0.6985 | 0.5735 | 0.6985 | 0.8358 |
| No log | 17.6 | 264 | 0.6845 | 0.5368 | 0.6845 | 0.8274 |
| No log | 17.7333 | 266 | 0.6963 | 0.5959 | 0.6963 | 0.8345 |
| No log | 17.8667 | 268 | 0.7653 | 0.5383 | 0.7653 | 0.8748 |
| No log | 18.0 | 270 | 0.8114 | 0.4938 | 0.8114 | 0.9008 |
| No log | 18.1333 | 272 | 0.7895 | 0.5383 | 0.7895 | 0.8886 |
| No log | 18.2667 | 274 | 0.7264 | 0.5173 | 0.7264 | 0.8523 |
| No log | 18.4 | 276 | 0.6928 | 0.5847 | 0.6928 | 0.8323 |
| No log | 18.5333 | 278 | 0.7065 | 0.5274 | 0.7065 | 0.8405 |
| No log | 18.6667 | 280 | 0.7146 | 0.5060 | 0.7146 | 0.8453 |
| No log | 18.8 | 282 | 0.7076 | 0.5274 | 0.7076 | 0.8412 |
| No log | 18.9333 | 284 | 0.7026 | 0.5249 | 0.7026 | 0.8382 |
| No log | 19.0667 | 286 | 0.7086 | 0.5142 | 0.7086 | 0.8418 |
| No log | 19.2 | 288 | 0.7251 | 0.5364 | 0.7251 | 0.8515 |
| No log | 19.3333 | 290 | 0.7406 | 0.5795 | 0.7406 | 0.8606 |
| No log | 19.4667 | 292 | 0.7202 | 0.5364 | 0.7202 | 0.8487 |
| No log | 19.6 | 294 | 0.7040 | 0.5475 | 0.7040 | 0.8391 |
| No log | 19.7333 | 296 | 0.7039 | 0.5129 | 0.7039 | 0.8390 |
| No log | 19.8667 | 298 | 0.6833 | 0.5475 | 0.6833 | 0.8266 |
| No log | 20.0 | 300 | 0.6798 | 0.5923 | 0.6798 | 0.8245 |
| No log | 20.1333 | 302 | 0.6650 | 0.5594 | 0.6650 | 0.8155 |
| No log | 20.2667 | 304 | 0.7076 | 0.4974 | 0.7076 | 0.8412 |
| No log | 20.4 | 306 | 0.7633 | 0.5279 | 0.7633 | 0.8736 |
| No log | 20.5333 | 308 | 0.7390 | 0.5173 | 0.7390 | 0.8596 |
| No log | 20.6667 | 310 | 0.6553 | 0.5450 | 0.6553 | 0.8095 |
| No log | 20.8 | 312 | 0.6455 | 0.6488 | 0.6455 | 0.8034 |
| No log | 20.9333 | 314 | 0.6309 | 0.6154 | 0.6309 | 0.7943 |
| No log | 21.0667 | 316 | 0.6352 | 0.6291 | 0.6352 | 0.7970 |
| No log | 21.2 | 318 | 0.7366 | 0.5163 | 0.7366 | 0.8583 |
| No log | 21.3333 | 320 | 0.7568 | 0.5266 | 0.7568 | 0.8699 |
| No log | 21.4667 | 322 | 0.7071 | 0.5279 | 0.7071 | 0.8409 |
| No log | 21.6 | 324 | 0.6719 | 0.5585 | 0.6719 | 0.8197 |
| No log | 21.7333 | 326 | 0.6315 | 0.5960 | 0.6315 | 0.7947 |
| No log | 21.8667 | 328 | 0.6059 | 0.6025 | 0.6059 | 0.7784 |
| No log | 22.0 | 330 | 0.5988 | 0.6491 | 0.5988 | 0.7738 |
| No log | 22.1333 | 332 | 0.6069 | 0.6347 | 0.6069 | 0.7791 |
| No log | 22.2667 | 334 | 0.6291 | 0.6446 | 0.6291 | 0.7931 |
| No log | 22.4 | 336 | 0.6303 | 0.6446 | 0.6303 | 0.7939 |
| No log | 22.5333 | 338 | 0.6482 | 0.6073 | 0.6482 | 0.8051 |
| No log | 22.6667 | 340 | 0.6724 | 0.6073 | 0.6724 | 0.8200 |
| No log | 22.8 | 342 | 0.6963 | 0.5558 | 0.6963 | 0.8345 |
| No log | 22.9333 | 344 | 0.7235 | 0.5605 | 0.7235 | 0.8506 |
| No log | 23.0667 | 346 | 0.7434 | 0.5103 | 0.7434 | 0.8622 |
| No log | 23.2 | 348 | 0.7497 | 0.5516 | 0.7497 | 0.8658 |
| No log | 23.3333 | 350 | 0.7339 | 0.5858 | 0.7339 | 0.8567 |
| No log | 23.4667 | 352 | 0.7058 | 0.5585 | 0.7058 | 0.8401 |
| No log | 23.6 | 354 | 0.6842 | 0.5261 | 0.6842 | 0.8272 |
| No log | 23.7333 | 356 | 0.6785 | 0.5396 | 0.6785 | 0.8237 |
| No log | 23.8667 | 358 | 0.6691 | 0.5614 | 0.6691 | 0.8180 |
| No log | 24.0 | 360 | 0.6733 | 0.5485 | 0.6733 | 0.8205 |
| No log | 24.1333 | 362 | 0.6801 | 0.6014 | 0.6801 | 0.8247 |
| No log | 24.2667 | 364 | 0.7146 | 0.5729 | 0.7146 | 0.8453 |
| No log | 24.4 | 366 | 0.7238 | 0.5729 | 0.7238 | 0.8507 |
| No log | 24.5333 | 368 | 0.6828 | 0.5740 | 0.6828 | 0.8263 |
| No log | 24.6667 | 370 | 0.6510 | 0.6143 | 0.6510 | 0.8069 |
| No log | 24.8 | 372 | 0.6575 | 0.6143 | 0.6575 | 0.8109 |
| No log | 24.9333 | 374 | 0.6845 | 0.5986 | 0.6845 | 0.8273 |
| No log | 25.0667 | 376 | 0.7155 | 0.6092 | 0.7155 | 0.8459 |
| No log | 25.2 | 378 | 0.7561 | 0.5622 | 0.7561 | 0.8695 |
| No log | 25.3333 | 380 | 0.7806 | 0.5591 | 0.7806 | 0.8835 |
| No log | 25.4667 | 382 | 0.7439 | 0.5504 | 0.7439 | 0.8625 |
| No log | 25.6 | 384 | 0.6852 | 0.5688 | 0.6852 | 0.8278 |
| No log | 25.7333 | 386 | 0.6678 | 0.5905 | 0.6678 | 0.8172 |
| No log | 25.8667 | 388 | 0.6734 | 0.5905 | 0.6734 | 0.8206 |
| No log | 26.0 | 390 | 0.6913 | 0.5740 | 0.6913 | 0.8315 |
| No log | 26.1333 | 392 | 0.7289 | 0.5266 | 0.7289 | 0.8538 |
| No log | 26.2667 | 394 | 0.8116 | 0.5475 | 0.8116 | 0.9009 |
| No log | 26.4 | 396 | 0.7909 | 0.5591 | 0.7909 | 0.8893 |
| No log | 26.5333 | 398 | 0.6971 | 0.5498 | 0.6971 | 0.8350 |
| No log | 26.6667 | 400 | 0.6132 | 0.6360 | 0.6132 | 0.7831 |
| No log | 26.8 | 402 | 0.6070 | 0.5831 | 0.6070 | 0.7791 |
| No log | 26.9333 | 404 | 0.6085 | 0.5833 | 0.6085 | 0.7801 |
| No log | 27.0667 | 406 | 0.6201 | 0.6032 | 0.6201 | 0.7875 |
| No log | 27.2 | 408 | 0.6327 | 0.5774 | 0.6327 | 0.7954 |
| No log | 27.3333 | 410 | 0.6523 | 0.5751 | 0.6523 | 0.8077 |
| No log | 27.4667 | 412 | 0.6657 | 0.5516 | 0.6657 | 0.8159 |
| No log | 27.6 | 414 | 0.6452 | 0.5855 | 0.6452 | 0.8032 |
| No log | 27.7333 | 416 | 0.6329 | 0.5763 | 0.6329 | 0.7956 |
| No log | 27.8667 | 418 | 0.6381 | 0.6237 | 0.6381 | 0.7988 |
| No log | 28.0 | 420 | 0.6460 | 0.6215 | 0.6460 | 0.8038 |
| No log | 28.1333 | 422 | 0.6490 | 0.6186 | 0.6490 | 0.8056 |
| No log | 28.2667 | 424 | 0.6386 | 0.6284 | 0.6386 | 0.7991 |
| No log | 28.4 | 426 | 0.6360 | 0.6389 | 0.6360 | 0.7975 |
| No log | 28.5333 | 428 | 0.6415 | 0.5969 | 0.6415 | 0.8009 |
| No log | 28.6667 | 430 | 0.6654 | 0.5634 | 0.6654 | 0.8157 |
| No log | 28.8 | 432 | 0.6467 | 0.5645 | 0.6467 | 0.8042 |
| No log | 28.9333 | 434 | 0.6226 | 0.6219 | 0.6226 | 0.7891 |
| No log | 29.0667 | 436 | 0.6193 | 0.6014 | 0.6193 | 0.7869 |
| No log | 29.2 | 438 | 0.6604 | 0.5634 | 0.6604 | 0.8126 |
| No log | 29.3333 | 440 | 0.6924 | 0.5516 | 0.6924 | 0.8321 |
| No log | 29.4667 | 442 | 0.7176 | 0.5622 | 0.7176 | 0.8471 |
| No log | 29.6 | 444 | 0.7135 | 0.5622 | 0.7135 | 0.8447 |
| No log | 29.7333 | 446 | 0.6655 | 0.5634 | 0.6655 | 0.8158 |
| No log | 29.8667 | 448 | 0.6424 | 0.5645 | 0.6424 | 0.8015 |
| No log | 30.0 | 450 | 0.6228 | 0.5863 | 0.6228 | 0.7892 |
| No log | 30.1333 | 452 | 0.6171 | 0.5887 | 0.6171 | 0.7856 |
| No log | 30.2667 | 454 | 0.6179 | 0.5964 | 0.6179 | 0.7861 |
| No log | 30.4 | 456 | 0.6475 | 0.5516 | 0.6475 | 0.8047 |
| No log | 30.5333 | 458 | 0.7406 | 0.5622 | 0.7406 | 0.8606 |
| No log | 30.6667 | 460 | 0.8610 | 0.5458 | 0.8610 | 0.9279 |
| No log | 30.8 | 462 | 0.9250 | 0.5208 | 0.9250 | 0.9618 |
| No log | 30.9333 | 464 | 0.9091 | 0.5106 | 0.9091 | 0.9535 |
| No log | 31.0667 | 466 | 0.8227 | 0.5147 | 0.8227 | 0.9071 |
| No log | 31.2 | 468 | 0.7087 | 0.5622 | 0.7087 | 0.8418 |
| No log | 31.3333 | 470 | 0.6599 | 0.5546 | 0.6599 | 0.8123 |
| No log | 31.4667 | 472 | 0.6444 | 0.5455 | 0.6444 | 0.8028 |
| No log | 31.6 | 474 | 0.6467 | 0.5455 | 0.6467 | 0.8042 |
| No log | 31.7333 | 476 | 0.6683 | 0.5855 | 0.6683 | 0.8175 |
| No log | 31.8667 | 478 | 0.7170 | 0.5410 | 0.7170 | 0.8468 |
| No log | 32.0 | 480 | 0.7389 | 0.5516 | 0.7389 | 0.8596 |
| No log | 32.1333 | 482 | 0.7090 | 0.5528 | 0.7090 | 0.8420 |
| No log | 32.2667 | 484 | 0.6800 | 0.5678 | 0.6800 | 0.8246 |
| No log | 32.4 | 486 | 0.6779 | 0.5480 | 0.6779 | 0.8234 |
| No log | 32.5333 | 488 | 0.6814 | 0.5932 | 0.6814 | 0.8255 |
| No log | 32.6667 | 490 | 0.6927 | 0.5798 | 0.6927 | 0.8323 |
| No log | 32.8 | 492 | 0.6886 | 0.5798 | 0.6886 | 0.8298 |
| No log | 32.9333 | 494 | 0.7018 | 0.5528 | 0.7018 | 0.8378 |
| No log | 33.0667 | 496 | 0.6977 | 0.5528 | 0.6977 | 0.8353 |
| No log | 33.2 | 498 | 0.6912 | 0.5528 | 0.6912 | 0.8314 |
| 0.2408 | 33.3333 | 500 | 0.6704 | 0.6043 | 0.6704 | 0.8188 |
| 0.2408 | 33.4667 | 502 | 0.6636 | 0.5669 | 0.6636 | 0.8146 |
| 0.2408 | 33.6 | 504 | 0.6554 | 0.5902 | 0.6554 | 0.8095 |
| 0.2408 | 33.7333 | 506 | 0.6493 | 0.5887 | 0.6493 | 0.8058 |
| 0.2408 | 33.8667 | 508 | 0.6446 | 0.6028 | 0.6446 | 0.8029 |
| 0.2408 | 34.0 | 510 | 0.6411 | 0.5891 | 0.6411 | 0.8007 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
cite-text-analysis/case-analysis-distilbert-base-cased | cite-text-analysis | "2024-05-10T14:55:00Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-10T13:33:38Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: distilbert/distilbert-base-cased
metrics:
- accuracy
- precision
- recall
model-index:
- name: case-analysis-distilbert-base-cased
results: []
---
## Metrics
- loss: 1.8402
- accuracy: 0.8085
- precision: 0.7983
- recall: 0.8085
- precision_macro: 0.6608
- recall_macro: 0.6429
- macro_fpr: 0.0935
- weighted_fpr: 0.0732
- weighted_specificity: 0.8548
- macro_specificity: 0.9158
- weighted_sensitivity: 0.8085
- macro_sensitivity: 0.6429
- f1_micro: 0.8085
- f1_macro: 0.6478
- f1_weighted: 0.8018
- runtime: 131.6318
- samples_per_second: 3.4110
- steps_per_second: 0.4330
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# case-analysis-distilbert-base-cased
This model is a fine-tuned version of [distilbert/distilbert-base-cased](https://huggingface.co/distilbert/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8402
- Accuracy: 0.8085
- Precision: 0.7983
- Recall: 0.8085
- Precision Macro: 0.6461
- Recall Macro: 0.6218
- Macro Fpr: 0.0984
- Weighted Fpr: 0.0771
- Weighted Specificity: 0.8479
- Macro Specificity: 0.9119
- Weighted Sensitivity: 0.7996
- Macro Sensitivity: 0.6218
- F1 Micro: 0.7996
- F1 Macro: 0.6245
- F1 Weighted: 0.7887
## Model description
More information needed
## Intended uses & limitations
More information needed
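
A minimal inference sketch, assuming the standard `transformers` text-classification pipeline (the input sentence is a placeholder; the actual label set comes from the model config):

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="cite-text-analysis/case-analysis-distilbert-base-cased")

# Placeholder input; real inputs would be case-analysis text
print(classifier("The court held that the appeal must be dismissed."))
```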
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:---------------:|:------------:|:---------:|:------------:|:--------------------:|:-----------------:|:--------------------:|:-----------------:|:--------:|:--------:|:-----------:|
| No log | 1.0 | 224 | 0.7001 | 0.7661 | 0.7311 | 0.7661 | 0.5791 | 0.5137 | 0.1330 | 0.0923 | 0.7614 | 0.8819 | 0.7661 | 0.5137 | 0.7661 | 0.5270 | 0.7333 |
| No log | 2.0 | 448 | 0.7388 | 0.7751 | 0.7315 | 0.7751 | 0.5585 | 0.5464 | 0.1208 | 0.0882 | 0.7908 | 0.8915 | 0.7751 | 0.5464 | 0.7751 | 0.5487 | 0.7493 |
| 0.7066 | 3.0 | 672 | 0.7229 | 0.8018 | 0.7605 | 0.8018 | 0.5932 | 0.5708 | 0.1076 | 0.0761 | 0.8090 | 0.9027 | 0.8018 | 0.5708 | 0.8018 | 0.5767 | 0.7760 |
| 0.7066 | 4.0 | 896 | 0.8331 | 0.8062 | 0.7896 | 0.8062 | 0.6675 | 0.6115 | 0.1018 | 0.0742 | 0.8218 | 0.9070 | 0.8062 | 0.6115 | 0.8062 | 0.6301 | 0.7934 |
| 0.3654 | 5.0 | 1120 | 1.2300 | 0.7684 | 0.7699 | 0.7684 | 0.6085 | 0.6131 | 0.1066 | 0.0913 | 0.8542 | 0.9056 | 0.7684 | 0.6131 | 0.7684 | 0.5896 | 0.7611 |
| 0.3654 | 6.0 | 1344 | 1.0698 | 0.8129 | 0.7940 | 0.8129 | 0.6864 | 0.6153 | 0.0957 | 0.0712 | 0.8406 | 0.9134 | 0.8129 | 0.6153 | 0.8129 | 0.6300 | 0.7972 |
| 0.2047 | 7.0 | 1568 | 1.3300 | 0.7884 | 0.7960 | 0.7884 | 0.6412 | 0.5959 | 0.1044 | 0.0821 | 0.8421 | 0.9076 | 0.7884 | 0.5959 | 0.7884 | 0.6141 | 0.7892 |
| 0.2047 | 8.0 | 1792 | 1.3870 | 0.8107 | 0.7861 | 0.8107 | 0.6467 | 0.6063 | 0.0983 | 0.0722 | 0.8318 | 0.9106 | 0.8107 | 0.6063 | 0.8107 | 0.6163 | 0.7947 |
| 0.0795 | 9.0 | 2016 | 1.5031 | 0.7951 | 0.7719 | 0.7951 | 0.6275 | 0.5969 | 0.1040 | 0.0791 | 0.8320 | 0.9068 | 0.7951 | 0.5969 | 0.7951 | 0.6036 | 0.7803 |
| 0.0795 | 10.0 | 2240 | 1.6304 | 0.7728 | 0.7796 | 0.7728 | 0.6171 | 0.6233 | 0.1060 | 0.0892 | 0.8561 | 0.9072 | 0.7728 | 0.6233 | 0.7728 | 0.6196 | 0.7759 |
| 0.0795 | 11.0 | 2464 | 1.6553 | 0.8040 | 0.7802 | 0.8040 | 0.6405 | 0.6047 | 0.1003 | 0.0751 | 0.8333 | 0.9093 | 0.8040 | 0.6047 | 0.8040 | 0.6097 | 0.7884 |
| 0.0309 | 12.0 | 2688 | 1.6668 | 0.7996 | 0.7776 | 0.7996 | 0.6247 | 0.6084 | 0.0999 | 0.0771 | 0.8431 | 0.9107 | 0.7996 | 0.6084 | 0.7996 | 0.6073 | 0.7861 |
| 0.0309 | 13.0 | 2912 | 1.7548 | 0.8040 | 0.7724 | 0.8040 | 0.6059 | 0.5847 | 0.1030 | 0.0751 | 0.8216 | 0.9064 | 0.8040 | 0.5847 | 0.8040 | 0.5912 | 0.7846 |
| 0.0225 | 14.0 | 3136 | 1.6691 | 0.8107 | 0.7736 | 0.8107 | 0.5965 | 0.6044 | 0.0974 | 0.0722 | 0.8336 | 0.9111 | 0.8107 | 0.6044 | 0.8107 | 0.5998 | 0.7909 |
| 0.0225 | 15.0 | 3360 | 1.8751 | 0.8040 | 0.7897 | 0.8040 | 0.6516 | 0.6081 | 0.1007 | 0.0751 | 0.8322 | 0.9091 | 0.8040 | 0.6081 | 0.8040 | 0.6251 | 0.7939 |
| 0.0048 | 16.0 | 3584 | 1.8402 | 0.8085 | 0.7983 | 0.8085 | 0.6608 | 0.6429 | 0.0935 | 0.0732 | 0.8548 | 0.9158 | 0.8085 | 0.6429 | 0.8085 | 0.6478 | 0.8018 |
| 0.0048 | 17.0 | 3808 | 1.9124 | 0.7951 | 0.7871 | 0.7951 | 0.6331 | 0.6237 | 0.1001 | 0.0791 | 0.8456 | 0.9102 | 0.7951 | 0.6237 | 0.7951 | 0.6250 | 0.7891 |
| 0.0069 | 18.0 | 4032 | 1.8857 | 0.7973 | 0.7794 | 0.7973 | 0.6268 | 0.5972 | 0.1048 | 0.0781 | 0.8240 | 0.9053 | 0.7973 | 0.5972 | 0.7973 | 0.6062 | 0.7847 |
| 0.0069 | 19.0 | 4256 | 1.9492 | 0.8062 | 0.7813 | 0.8062 | 0.6467 | 0.6015 | 0.1006 | 0.0742 | 0.8281 | 0.9086 | 0.8062 | 0.6015 | 0.8062 | 0.6107 | 0.7895 |
| 0.0069 | 20.0 | 4480 | 1.8994 | 0.8085 | 0.7849 | 0.8085 | 0.6417 | 0.6067 | 0.0988 | 0.0732 | 0.8322 | 0.9102 | 0.8085 | 0.6067 | 0.8085 | 0.6144 | 0.7932 |
| 0.0034 | 21.0 | 4704 | 1.9819 | 0.8040 | 0.7898 | 0.8040 | 0.6748 | 0.6325 | 0.0976 | 0.0751 | 0.8439 | 0.9120 | 0.8040 | 0.6325 | 0.8040 | 0.6429 | 0.7942 |
| 0.0034 | 22.0 | 4928 | 2.0181 | 0.8062 | 0.7880 | 0.8062 | 0.6736 | 0.6204 | 0.0977 | 0.0742 | 0.8408 | 0.9118 | 0.8062 | 0.6204 | 0.8062 | 0.6293 | 0.7930 |
| 0.0001 | 23.0 | 5152 | 2.0305 | 0.8062 | 0.7880 | 0.8062 | 0.6736 | 0.6204 | 0.0977 | 0.0742 | 0.8408 | 0.9118 | 0.8062 | 0.6204 | 0.8062 | 0.6293 | 0.7930 |
| 0.0001 | 24.0 | 5376 | 2.0249 | 0.8040 | 0.7801 | 0.8040 | 0.6448 | 0.6004 | 0.1019 | 0.0751 | 0.8256 | 0.9074 | 0.8040 | 0.6004 | 0.8040 | 0.6092 | 0.7877 |
| 0.0 | 25.0 | 5600 | 2.0139 | 0.8018 | 0.7848 | 0.8018 | 0.6514 | 0.6226 | 0.0984 | 0.0761 | 0.8438 | 0.9114 | 0.8018 | 0.6226 | 0.8018 | 0.6272 | 0.7908 |
| 0.0 | 26.0 | 5824 | 2.0075 | 0.8040 | 0.7868 | 0.8040 | 0.6586 | 0.6281 | 0.0961 | 0.0751 | 0.8487 | 0.9132 | 0.8040 | 0.6281 | 0.8040 | 0.6305 | 0.7926 |
| 0.0026 | 27.0 | 6048 | 2.0155 | 0.8040 | 0.7868 | 0.8040 | 0.6586 | 0.6281 | 0.0961 | 0.0751 | 0.8487 | 0.9132 | 0.8040 | 0.6281 | 0.8040 | 0.6305 | 0.7926 |
| 0.0026 | 28.0 | 6272 | 2.0191 | 0.8040 | 0.7865 | 0.8040 | 0.6586 | 0.6237 | 0.0970 | 0.0751 | 0.8463 | 0.9126 | 0.8040 | 0.6237 | 0.8040 | 0.6283 | 0.7923 |
| 0.0026 | 29.0 | 6496 | 2.0225 | 0.8040 | 0.7865 | 0.8040 | 0.6586 | 0.6237 | 0.0970 | 0.0751 | 0.8463 | 0.9126 | 0.8040 | 0.6237 | 0.8040 | 0.6283 | 0.7923 |
| 0.0 | 30.0 | 6720 | 2.0343 | 0.7996 | 0.7821 | 0.7996 | 0.6461 | 0.6218 | 0.0984 | 0.0771 | 0.8479 | 0.9119 | 0.7996 | 0.6218 | 0.7996 | 0.6245 | 0.7887 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
madelineoliver/ToolsBaer-OLM-to-MSG-Conversion | madelineoliver | "2024-04-23T12:23:23Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-23T12:22:36Z" | The ToolsBaer OLM to MSG Conversion application allows users to quickly and safely convert OLM files to the MSG file format in large quantities. This software converts OLM to MSG files and can easily handle OLM files of any size or quality. Users can export an OLM file to MSG format in a few easy steps. Following these easy steps doesn't require any prior technical expertise from the user anyone, even with little experience, can complete them without extra help or guidance. The topic, CC, BCC, To, From, Images, Links, and Attachments are among the components of an email that can be exported. Outlook versions 2010, 2013, 2016, 2019, and 2021 are all compatible with this application. By utilizing the software's demo version, users can convert the first 10 emails from every folder. The conversion goal can be reliably fulfilled by it. Windows 11, 10, 8.1, 8, 7, and all earlier versions are included in the list of Windows versions. Before choosing to license, users can download and check out the ToolsBaer OLM to MSG Conversion demo edition.
Read more: http://www.toolsbaer.com/olm-to-msg-conversion/ |
flpelerin/TinyLlama-1.1b-slimorca-10k | flpelerin | "2024-05-28T11:26:22Z" | 134 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-28T11:20:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TathagatAgrawal/HiNER_DI | TathagatAgrawal | "2024-04-08T08:57:49Z" | 99 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-03-22T08:18:32Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: HiNER_DI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HiNER_DI
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1542
- Precision: 0.8287
- Recall: 0.8180
- F1: 0.8233
- Accuracy: 0.9535
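A minimal inference sketch using the 🤗 `pipeline` API (the example sentence is illustrative only):
```python
from transformers import pipeline

# Load the fine-tuned token-classification model from the Hub
ner = pipeline(
    "token-classification",
    model="TathagatAgrawal/HiNER_DI",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Narendra Modi visited New Delhi on Monday."))
```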
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1528 | 2.11 | 10000 | 0.1542 | 0.8287 | 0.8180 | 0.8233 | 0.9535 |
### Framework versions
- Transformers 4.39.0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
pfunk/CartPole-v1-CP_DQPN_x100-seed888 | pfunk | "2023-03-20T19:40:31Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-20T19:40:28Z" | ---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 10.12 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **CartPole-v1**
This is a trained model of a DQPN_freq agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/CP_DQPN_x100.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[CP_DQPN_x100]"
python -m cleanrl_utils.enjoy --exp-name CP_DQPN_x100 --env-id CartPole-v1
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x100-seed888/raw/main/dqpn_freq.py
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x100-seed888/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/CartPole-v1-CP_DQPN_x100-seed888/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name CP_DQPN_x100 --policy-network-frequency 10000 --seed 888
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq.py',
'batch_size': 256,
'buffer_size': 300000,
'capture_video': True,
'cuda': True,
'end_e': 0.1,
'env_id': 'CartPole-v1',
'exp_name': 'CP_DQPN_x100',
'exploration_fraction': 0.2,
'gamma': 1.0,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 1000,
'policy_network_frequency': 10000,
'policy_tau': 1.0,
'save_model': True,
'seed': 888,
'start_e': 1.0,
'target_network_frequency': 100,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 500000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_4 | Melo1512 | "2025-01-16T16:15:44Z" | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit_msn",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/vit-msn-small",
"base_model:finetune:facebook/vit-msn-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2025-01-16T15:58:14Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/vit-msn-small
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-msn-small-lateral_flow_ivalidation_train_test_4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8937728937728938
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-msn-small-lateral_flow_ivalidation_train_test_4
This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3980
- Accuracy: 0.8938
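A minimal inference sketch (the image path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned ViT-MSN classifier from the Hub
classifier = pipeline(
    "image-classification",
    model="Melo1512/vit-msn-small-lateral_flow_ivalidation_train_test_4",
)
print(classifier("lateral_flow_strip.jpg"))  # placeholder image path
```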
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8038 | 1.0 | 13 | 0.8368 | 0.4029 |
| 0.6874 | 2.0 | 26 | 0.8356 | 0.4029 |
| 0.6487 | 3.0 | 39 | 0.8336 | 0.3810 |
| 0.773 | 4.0 | 52 | 0.8307 | 0.3700 |
| 0.7002 | 5.0 | 65 | 0.8270 | 0.3480 |
| 0.6991 | 6.0 | 78 | 0.8223 | 0.3407 |
| 0.6809 | 7.0 | 91 | 0.8164 | 0.3480 |
| 0.7359 | 8.0 | 104 | 0.8093 | 0.3516 |
| 0.771 | 9.0 | 117 | 0.8017 | 0.3443 |
| 0.6855 | 10.0 | 130 | 0.7934 | 0.3443 |
| 0.6674 | 11.0 | 143 | 0.7851 | 0.3480 |
| 0.6296 | 12.0 | 156 | 0.7746 | 0.3810 |
| 0.5597 | 13.0 | 169 | 0.7643 | 0.3956 |
| 0.5636 | 14.0 | 182 | 0.7519 | 0.4066 |
| 0.5718 | 15.0 | 195 | 0.7382 | 0.4432 |
| 0.5527 | 16.0 | 208 | 0.7256 | 0.4579 |
| 0.5646 | 17.0 | 221 | 0.7115 | 0.5055 |
| 0.4843 | 18.0 | 234 | 0.6966 | 0.5275 |
| 0.492 | 19.0 | 247 | 0.6805 | 0.5788 |
| 0.4865 | 20.0 | 260 | 0.6630 | 0.6117 |
| 0.4198 | 21.0 | 273 | 0.6448 | 0.6410 |
| 0.4203 | 22.0 | 286 | 0.6280 | 0.6740 |
| 0.4547 | 23.0 | 299 | 0.6083 | 0.6923 |
| 0.3916 | 24.0 | 312 | 0.5909 | 0.7143 |
| 0.4329 | 25.0 | 325 | 0.5768 | 0.7289 |
| 0.4645 | 26.0 | 338 | 0.5629 | 0.7399 |
| 0.3376 | 27.0 | 351 | 0.5536 | 0.7436 |
| 0.4417 | 28.0 | 364 | 0.5417 | 0.7729 |
| 0.3908 | 29.0 | 377 | 0.5262 | 0.7619 |
| 0.3715 | 30.0 | 390 | 0.5130 | 0.7729 |
| 0.438 | 31.0 | 403 | 0.5059 | 0.7912 |
| 0.2937 | 32.0 | 416 | 0.4937 | 0.8022 |
| 0.2944 | 33.0 | 429 | 0.4871 | 0.8022 |
| 0.3474 | 34.0 | 442 | 0.4820 | 0.8059 |
| 0.2302 | 35.0 | 455 | 0.4776 | 0.7949 |
| 0.3543 | 36.0 | 468 | 0.4690 | 0.8022 |
| 0.3325 | 37.0 | 481 | 0.4640 | 0.8059 |
| 0.4004 | 38.0 | 494 | 0.4584 | 0.8095 |
| 0.3031 | 39.0 | 507 | 0.4548 | 0.8132 |
| 0.4862 | 40.0 | 520 | 0.4520 | 0.8095 |
| 0.2609 | 41.0 | 533 | 0.4498 | 0.8278 |
| 0.1859 | 42.0 | 546 | 0.4450 | 0.8462 |
| 0.2712 | 43.0 | 559 | 0.4408 | 0.8462 |
| 0.221 | 44.0 | 572 | 0.4387 | 0.8425 |
| 0.2328 | 45.0 | 585 | 0.4371 | 0.8498 |
| 0.3004 | 46.0 | 598 | 0.4339 | 0.8425 |
| 0.2036 | 47.0 | 611 | 0.4318 | 0.8462 |
| 0.1925 | 48.0 | 624 | 0.4299 | 0.8498 |
| 0.4543 | 49.0 | 637 | 0.4266 | 0.8498 |
| 0.4056 | 50.0 | 650 | 0.4251 | 0.8462 |
| 0.2326 | 51.0 | 663 | 0.4247 | 0.8498 |
| 0.327 | 52.0 | 676 | 0.4224 | 0.8571 |
| 0.2385 | 53.0 | 689 | 0.4193 | 0.8571 |
| 0.2876 | 54.0 | 702 | 0.4183 | 0.8571 |
| 0.2257 | 55.0 | 715 | 0.4162 | 0.8718 |
| 0.252 | 56.0 | 728 | 0.4150 | 0.8755 |
| 0.4299 | 57.0 | 741 | 0.4129 | 0.8645 |
| 0.3146 | 58.0 | 754 | 0.4124 | 0.8755 |
| 0.1993 | 59.0 | 767 | 0.4124 | 0.8755 |
| 0.2507 | 60.0 | 780 | 0.4118 | 0.8791 |
| 0.324 | 61.0 | 793 | 0.4101 | 0.8535 |
| 0.2303 | 62.0 | 806 | 0.4090 | 0.8718 |
| 0.2767 | 63.0 | 819 | 0.4072 | 0.8608 |
| 0.3318 | 64.0 | 832 | 0.4071 | 0.8681 |
| 0.1946 | 65.0 | 845 | 0.4064 | 0.8681 |
| 0.4204 | 66.0 | 858 | 0.4055 | 0.8608 |
| 0.3351 | 67.0 | 871 | 0.4031 | 0.8608 |
| 0.2772 | 68.0 | 884 | 0.4013 | 0.8645 |
| 0.2969 | 69.0 | 897 | 0.4000 | 0.8681 |
| 0.2755 | 70.0 | 910 | 0.4021 | 0.8901 |
| 0.2835 | 71.0 | 923 | 0.4005 | 0.8608 |
| 0.2487 | 72.0 | 936 | 0.3998 | 0.8608 |
| 0.2447 | 73.0 | 949 | 0.3987 | 0.8571 |
| 0.3512 | 74.0 | 962 | 0.3970 | 0.8718 |
| 0.2303 | 75.0 | 975 | 0.3975 | 0.8681 |
| 0.2271 | 76.0 | 988 | 0.3976 | 0.8791 |
| 0.2325 | 77.0 | 1001 | 0.3980 | 0.8938 |
| 0.2517 | 78.0 | 1014 | 0.3965 | 0.8901 |
| 0.2839 | 79.0 | 1027 | 0.3956 | 0.8938 |
| 0.1994 | 80.0 | 1040 | 0.3940 | 0.8828 |
| 0.4525 | 81.0 | 1053 | 0.3934 | 0.8864 |
| 0.2178 | 82.0 | 1066 | 0.3930 | 0.8828 |
| 0.2784 | 83.0 | 1079 | 0.3929 | 0.8901 |
| 0.1956 | 84.0 | 1092 | 0.3930 | 0.8901 |
| 0.2713 | 85.0 | 1105 | 0.3922 | 0.8828 |
| 0.2331 | 86.0 | 1118 | 0.3920 | 0.8828 |
| 0.3294 | 87.0 | 1131 | 0.3917 | 0.8864 |
| 0.2998 | 88.0 | 1144 | 0.3911 | 0.8864 |
| 0.3767 | 89.0 | 1157 | 0.3909 | 0.8864 |
| 0.3126 | 90.0 | 1170 | 0.3908 | 0.8828 |
| 0.2427 | 91.0 | 1183 | 0.3903 | 0.8791 |
| 0.2696 | 92.0 | 1196 | 0.3898 | 0.8828 |
| 0.2664 | 93.0 | 1209 | 0.3897 | 0.8828 |
| 0.3718 | 94.0 | 1222 | 0.3898 | 0.8828 |
| 0.2813 | 95.0 | 1235 | 0.3899 | 0.8828 |
| 0.3105 | 96.0 | 1248 | 0.3898 | 0.8828 |
| 0.2452 | 97.0 | 1261 | 0.3901 | 0.8828 |
| 0.2775 | 98.0 | 1274 | 0.3900 | 0.8828 |
| 0.3814 | 99.0 | 1287 | 0.3901 | 0.8828 |
| 0.2861 | 100.0 | 1300 | 0.3901 | 0.8828 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1
|
TechxGenus/Mistral-7B-v0.2-hf-GPTQ | TechxGenus | "2024-03-26T12:33:17Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-03-26T11:52:25Z" | GPTQ quantized version of the Mistral-7B-v0.2-hf model.
---
~~Mistral 7b v0.2 with attention_dropout=0.6, for training purposes~~
Conversion process:
1. Download original weights from https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar
2. Convert with https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/convert_mistral_weights_to_hf.py
3. You may need to copy the `tokenizer.model` from the Mistral-7B-Instruct-v0.2 repo.
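A minimal loading sketch for the resulting GPTQ checkpoint (assumes `optimum` and `auto-gptq` are installed, in which case transformers can load the quantized weights directly):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TechxGenus/Mistral-7B-v0.2-hf-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```
|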
ijin07/wav2vec2-large-xlsr-53-korean | ijin07 | "2024-05-16T16:28:17Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-16T15:35:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TinyPixel/20m | TinyPixel | "2024-04-25T14:27:56Z" | 134 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-25T14:27:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MaziyarPanahi/M7Yamshadowexperiment28_Experiment28Experiment24 | MaziyarPanahi | "2024-04-10T00:13:30Z" | 20 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"base_model:automerger/Experiment28Experiment24-7B",
"base_model:merge:automerger/Experiment28Experiment24-7B",
"base_model:automerger/M7Yamshadowexperiment28-7B",
"base_model:merge:automerger/M7Yamshadowexperiment28-7B",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-04-09T23:58:06Z" | ---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: M7Yamshadowexperiment28_Experiment28Experiment24
base_model:
- automerger/M7Yamshadowexperiment28-7B
- automerger/Experiment28Experiment24-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# M7Yamshadowexperiment28_Experiment28Experiment24
M7Yamshadowexperiment28_Experiment28Experiment24 is a merge of the following models:
* [automerger/M7Yamshadowexperiment28-7B](https://huggingface.co/automerger/M7Yamshadowexperiment28-7B)
* [automerger/Experiment28Experiment24-7B](https://huggingface.co/automerger/Experiment28Experiment24-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/M7Yamshadowexperiment28_Experiment28Experiment24"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Lvxue/distilled-mt5-small-010099_8 | Lvxue | "2022-08-10T03:32:16Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"en",
"ro",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-08-10T02:24:27Z" | ---
language:
- en
- ro
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: distilled-mt5-small-010099_8
results:
- task:
name: Translation
type: translation
dataset:
name: wmt16 ro-en
type: wmt16
args: ro-en
metrics:
- name: Bleu
type: bleu
value: 6.231
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-010099_8
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9641
- Bleu: 6.231
- Gen Len: 50.1911
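A minimal inference sketch (whether the model expects a task prefix is an assumption; none is used here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Lvxue/distilled-mt5-small-010099_8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Acesta este un exemplu.", return_tensors="pt")  # Romanian input
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```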
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
zkabar/a2c-cartpole | zkabar | "2025-03-17T20:44:37Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-03-17T20:44:27Z" | ---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 468.30 +/- 14.64
name: mean_reward
verified: false
---
# **A2C** Agent playing **CartPole-v1**
This is a trained model of a **A2C** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename in the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the filename is assumed
checkpoint = load_from_hub("zkabar/a2c-cartpole", "a2c-CartPole-v1.zip")
model = A2C.load(checkpoint)
```
|
YOYO-AI/QwQ-instruct-32B | YOYO-AI | "2025-03-20T14:43:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:Qwen/QwQ-32B",
"base_model:merge:Qwen/QwQ-32B",
"base_model:Qwen/Qwen2.5-32B",
"base_model:merge:Qwen/Qwen2.5-32B",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:merge:Qwen/Qwen2.5-32B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-20T10:03:57Z" | ---
base_model:
- Qwen/QwQ-32B
- Qwen/Qwen2.5-32B
- Qwen/Qwen2.5-32B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as a base.
### Models Merged
The following models were included in the merge:
* [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)
* [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: sce
models:
# Pivot model
- model: Qwen/Qwen2.5-32B
# Target models
- model: Qwen/QwQ-32B
- model: Qwen/Qwen2.5-32B-Instruct
base_model: Qwen/Qwen2.5-32B
parameters:
select_topk: 1
dtype: bfloat16
tokenizer_source: Qwen/QwQ-32B
normalize: true
int8_mask: true
```
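To reproduce the merge, this configuration is typically saved to a file (e.g. `config.yaml`, a placeholder name) and run with mergekit's CLI: `mergekit-yaml config.yaml ./merged-model --cuda`.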
|
Weyaxi/Einstein-v4-7B | Weyaxi | "2024-07-23T21:09:49Z" | 142 | 48 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"axolotl",
"generated_from_trainer",
"Mistral",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"conversational",
"en",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:glaiveai/glaive-code-assistant",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-22T12:40:38Z" | ---
language:
- en
license: other
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
base_model: mistralai/Mistral-7B-v0.1
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- glaiveai/glaive-code-assistant
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
model-index:
- name: Einstein-v4-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 64.68
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.31
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.15
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.62
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 47.08
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 14.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 1.74
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.25
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.02
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 13.99
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
name: Open LLM Leaderboard
---

# 🔬 Einstein-v4-7B
This model is a fully fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on diverse datasets.
It was finetuned on `7xRTX3090` + `1xRTXA6000` using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).
This model's training was sponsored by [sablo.ai](https://sablo.ai).
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false
chat_template: chatml
datasets:
- path: data/merged_all.json
ds_type: json
type: alpaca
conversation: chatml
- path: data/capybara_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/synthia-v1.3_sharegpt_12500.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/cot_alpaca_gpt4_extracted_openhermes_2.5_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/slimorca_dedup_filtered_95k_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
- path: data/airoboros_3.2_without_contextual_slimorca_orca_sharegpt.json
ds_type: json
type: sharegpt
conversation: chatml
dataset_prepared_path: last_run_prepared
val_set_size: 0.005
output_dir: ./Einstein-v4-model
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
wandb_project: Einstein
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
hub_model_id: Weyaxi/Einstein-v4-7B
save_safetensors: true
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 1.5
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 2 # changed
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 4
debug:
deepspeed: zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "<|im_end|>"
unk_token: "<unk>"
tokens:
- "<|im_start|>"
resume_from_checkpoint: Einstein-v4-model/checkpoint-521
```
</details><br>
# 💬 Prompt Template
You can use this prompt template while using the model:
### ChatML
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{asistant}<|im_end|>
```
This prompt template is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are helpful AI asistant."},
{"role": "user", "content": "Hello!"}
]
gen_input = tokenizer.apply_chat_template(message, return_tensors="pt")
model.generate(**gen_input)
```
# 🔄 Quantized versions
Quantized versions of this model are available.
## GGUF [@LoneStriker](https://huggingface.co/LoneStriker)
- https://huggingface.co/LoneStriker/Einstein-v4-7B-GGUF
## AWQ [@solidrust](https://huggingface.co/solidrust)
- https://huggingface.co/solidrust/Einstein-v4-7B-AWQ
## Exl2 [@bartowski](https://hf.co/bartowski):
- https://huggingface.co/bartowski/Einstein-v4-7B-exl2
# 🎯 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v4-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |66.62|
|AI2 Reasoning Challenge (25-Shot)|64.68|
|HellaSwag (10-Shot) |83.75|
|MMLU (5-Shot) |62.31|
|TruthfulQA (0-shot) |55.15|
|Winogrande (5-shot) |76.24|
|GSM8k (5-shot) |57.62|
# 🎯 [Open LLM Leaderboard v2 Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Einstein-v4-7B)
| Metric |Value|
|-------------------|----:|
|Avg. |16.73|
|IFEval (0-Shot) |47.08|
|BBH (3-Shot) |14.30|
|MATH Lvl 5 (4-Shot)| 1.74|
|GPQA (0-shot) | 4.25|
|MuSR (0-shot) |19.02|
|MMLU-PRO (5-shot) |13.99|
# 📚 Some resources, discussions and reviews about this model
#### 🐦 Announcement tweet:
https://twitter.com/Weyaxi/status/1765851433448944125
#### 🔍 Reddit post in r/LocalLLaMA:
- https://www.reddit.com/r/LocalLLaMA/comments/1b9gmvl/meet_einsteinv47b_mistralbased_sft_model_using/
#### ▶️ Youtube Videos
- https://www.youtube.com/watch?v=-3YWgHJIORE&t=18s
- https://www.youtube.com/watch?v=Xo2ySU8gja0
# 🤖 Additional information about training
This model was fully fine-tuned for 1.5 epochs.
Total number of steps was 1562.
<details><summary>Loss graph</summary>

</details><br>
# 🤝 Acknowledgments
Thanks to [sablo.ai](https://sablo.ai) for sponsoring this model.
Thanks to all the dataset authors mentioned in the datasets section.
Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for making the repository I used to make this model.
Thanks to the entire open source AI community.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
|
mradermacher/MathCoder-Llama3.1-8B-cot-GGUF | mradermacher | "2024-08-18T03:05:20Z" | 5 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:EpistemeAI/MathCoder-Llama3.1-8B-cot",
"base_model:quantized:EpistemeAI/MathCoder-Llama3.1-8B-cot",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-08-18T02:11:45Z" | ---
base_model: EpistemeAI/MathCoder-Llama3.1-8B-cot
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/EpistemeAI/MathCoder-Llama3.1-8B-cot
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
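A minimal local-inference sketch with `llama-cpp-python` (the filename matches the Q4_K_M quant listed below; the context size is illustrative):
```python
from llama_cpp import Llama

# Point model_path at a GGUF file downloaded from this repo
llm = Llama(model_path="MathCoder-Llama3.1-8B-cot.Q4_K_M.gguf", n_ctx=4096)
result = llm("Question: What is 12 * 9? Answer:", max_tokens=64)
print(result["choices"][0]["text"])
```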
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MathCoder-Llama3.1-8B-cot-GGUF/resolve/main/MathCoder-Llama3.1-8B-cot.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
fedge/DeepSeek-R1-Medical-COT-Fedge | fedge | "2025-02-24T08:13:07Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-24T08:08:00Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fedge
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
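A minimal inference sketch with Unsloth (sequence length and prompt are illustrative):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="fedge/DeepSeek-R1-Medical-COT-Fedge",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

prompt = "A 45-year-old patient presents with chest pain. What should be assessed first?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```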
|
Sophie-Rain-Spiderman-Video-Youtube-Free-1/Sophie.Rain.Spider-Man.Video.Official | Sophie-Rain-Spiderman-Video-Youtube-Free-1 | "2025-03-23T02:33:15Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-23T02:25:41Z" | [►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️](https://tinyurl.com/jnjwyafx)
[🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️](https://tinyurl.com/jnjwyafx)
[WATCH NOW](https://tinyurl.com/jnjwyafx)

|
demohong/5c730679-0709-4b6d-9348-0a4cd62066e1 | demohong | "2025-01-18T03:28:31Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/sqlcoder-7b-2",
"base_model:adapter:defog/sqlcoder-7b-2",
"license:cc-by-sa-4.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-18T02:43:34Z" | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/sqlcoder-7b-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5c730679-0709-4b6d-9348-0a4cd62066e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/sqlcoder-7b-2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4b6ca972ceb37da3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4b6ca972ceb37da3_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: demohong/5c730679-0709-4b6d-9348-0a4cd62066e1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/4b6ca972ceb37da3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 00bd15d8-3c31-42eb-9ad4-50ea7ef181e0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 00bd15d8-3c31-42eb-9ad4-50ea7ef181e0
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5c730679-0709-4b6d-9348-0a4cd62066e1
This model is a fine-tuned version of [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6437
## Model description
More information needed
## Intended uses & limitations
More information needed
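Pending author details, a hedged sketch for loading the adapter at inference time (this is standard PEFT usage inferred from the axolotl config above, not an author-provided snippet):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the config, then attach the LoRA adapter
base = AutoModelForCausalLM.from_pretrained("defog/sqlcoder-7b-2", device_map="auto")
model = PeftModel.from_pretrained(base, "demohong/5c730679-0709-4b6d-9348-0a4cd62066e1")
tokenizer = AutoTokenizer.from_pretrained("defog/sqlcoder-7b-2")
```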
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes, `adamw_bnb_8bit`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1533 | 0.0201 | 200 | 1.6437 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
hsohn3/cchs-bert-visit-uncased-wordlevel-block512-batch4-ep100 | hsohn3 | "2022-07-06T06:03:07Z" | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-07-05T19:36:06Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hsohn3/cchs-bert-visit-uncased-wordlevel-block512-batch4-ep100
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hsohn3/cchs-bert-visit-uncased-wordlevel-block512-batch4-ep100
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7195
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
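Since this is a TensorFlow masked-language-model checkpoint, a hedged fill-mask sketch follows (it assumes the tokenizer keeps the standard `[MASK]` token; the example sentence only imitates clinical visit notes):

```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="hsohn3/cchs-bert-visit-uncased-wordlevel-block512-batch4-ep100",
    framework="tf",  # the repo ships TensorFlow weights
)
print(fill("the patient was transferred to the [MASK] unit ."))
```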
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 3.8730 | 0 |
| 3.0562 | 1 |
| 3.0168 | 2 |
| 3.0032 | 3 |
| 2.9954 | 4 |
| 2.9951 | 5 |
| 2.9904 | 6 |
| 2.9765 | 7 |
| 2.9788 | 8 |
| 2.9692 | 9 |
| 2.9656 | 10 |
| 2.9761 | 11 |
| 2.9643 | 12 |
| 2.9393 | 13 |
| 2.9026 | 14 |
| 2.8685 | 15 |
| 2.8438 | 16 |
| 2.8279 | 17 |
| 2.8107 | 18 |
| 2.7896 | 19 |
| 2.7716 | 20 |
| 2.7458 | 21 |
| 2.7118 | 22 |
| 2.6519 | 23 |
| 2.5933 | 24 |
| 2.4702 | 25 |
| 2.2842 | 26 |
| 2.0712 | 27 |
| 1.8406 | 28 |
| 1.6374 | 29 |
| 1.4836 | 30 |
| 1.3824 | 31 |
| 1.3079 | 32 |
| 1.2538 | 33 |
| 1.2054 | 34 |
| 1.1700 | 35 |
| 1.1432 | 36 |
| 1.1122 | 37 |
| 1.0939 | 38 |
| 1.0645 | 39 |
| 1.0465 | 40 |
| 1.0248 | 41 |
| 1.0069 | 42 |
| 0.9902 | 43 |
| 0.9769 | 44 |
| 0.9510 | 45 |
| 0.9394 | 46 |
| 0.9316 | 47 |
| 0.9181 | 48 |
| 0.9090 | 49 |
| 0.9010 | 50 |
| 0.8934 | 51 |
| 0.8791 | 52 |
| 0.8759 | 53 |
| 0.8652 | 54 |
| 0.8566 | 55 |
| 0.8511 | 56 |
| 0.8414 | 57 |
| 0.8373 | 58 |
| 0.8302 | 59 |
| 0.8241 | 60 |
| 0.8246 | 61 |
| 0.8207 | 62 |
| 0.8110 | 63 |
| 0.8081 | 64 |
| 0.8010 | 65 |
| 0.7995 | 66 |
| 0.7965 | 67 |
| 0.7941 | 68 |
| 0.7849 | 69 |
| 0.7866 | 70 |
| 0.7874 | 71 |
| 0.7796 | 72 |
| 0.7742 | 73 |
| 0.7706 | 74 |
| 0.7687 | 75 |
| 0.7686 | 76 |
| 0.7663 | 77 |
| 0.7586 | 78 |
| 0.7554 | 79 |
| 0.7563 | 80 |
| 0.7541 | 81 |
| 0.7527 | 82 |
| 0.7482 | 83 |
| 0.7460 | 84 |
| 0.7436 | 85 |
| 0.7423 | 86 |
| 0.7422 | 87 |
| 0.7385 | 88 |
| 0.7367 | 89 |
| 0.7321 | 90 |
| 0.7320 | 91 |
| 0.7354 | 92 |
| 0.7271 | 93 |
| 0.7270 | 94 |
| 0.7210 | 95 |
| 0.7236 | 96 |
| 0.7263 | 97 |
| 0.7237 | 98 |
| 0.7195 | 99 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gchhablani/fnet-base-finetuned-sst2 | gchhablani | "2021-11-13T08:23:41Z" | 29 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"rust",
"fnet",
"text-classification",
"generated_from_trainer",
"fnet-bert-base-comparison",
"en",
"dataset:glue",
"arxiv:2105.03824",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: fnet-base-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8944954128440367
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fnet-base-finetuned-sst2
This model is a fine-tuned version of [google/fnet-base](https://huggingface.co/google/fnet-base) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4674
- Accuracy: 0.8945
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
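Inference should nonetheless work through the standard text-classification pipeline (label names come from the model config and may be the generic `LABEL_0`/`LABEL_1`):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="gchhablani/fnet-base-finetuned-sst2")
print(clf("a gripping, well-acted thriller."))  # SST-2 style sentiment prediction
```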
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
    --model_name_or_path google/fnet-base \
    --task_name sst2 \
    --do_train \
    --do_eval \
    --max_seq_length 512 \
    --per_device_train_batch_size 16 \
    --learning_rate 2e-5 \
    --num_train_epochs 3 \
    --output_dir fnet-base-finetuned-sst2 \
    --push_to_hub \
    --hub_strategy all_checkpoints \
    --logging_strategy epoch \
    --save_strategy epoch \
    --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.2956 | 1.0 | 4210 | 0.8819 | 0.3128 |
| 0.1746 | 2.0 | 8420 | 0.8979 | 0.3850 |
| 0.1204 | 3.0 | 12630 | 0.8945 | 0.4674 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
gyr66/RoBERTa-ext-large-lora-updated-chinese-finetuned-ner | gyr66 | "2024-01-03T12:55:50Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:gyr66/RoBERTa-ext-large-chinese-finetuned-ner",
"base_model:finetune:gyr66/RoBERTa-ext-large-chinese-finetuned-ner",
"region:us"
] | null | "2024-01-03T12:55:48Z" | ---
base_model: gyr66/RoBERTa-ext-large-chinese-finetuned-ner
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: RoBERTa-ext-large-lora-updated-chinese-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-ext-large-lora-updated-chinese-finetuned-ner
This model is a fine-tuned version of [gyr66/RoBERTa-ext-large-chinese-finetuned-ner](https://huggingface.co/gyr66/RoBERTa-ext-large-chinese-finetuned-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9586
- Precision: 0.7016
- Recall: 0.7518
- F1: 0.7258
- Accuracy: 0.9154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0034 | 1.0 | 252 | 1.0787 | 0.6753 | 0.7523 | 0.7117 | 0.9121 |
| 0.0032 | 2.0 | 504 | 1.0376 | 0.6830 | 0.7490 | 0.7145 | 0.9141 |
| 0.0018 | 3.0 | 756 | 1.0547 | 0.6731 | 0.7573 | 0.7127 | 0.9126 |
| 0.0032 | 4.0 | 1008 | 1.0262 | 0.6829 | 0.7384 | 0.7096 | 0.9126 |
| 0.0027 | 5.0 | 1260 | 0.9613 | 0.6898 | 0.7445 | 0.7161 | 0.9118 |
| 0.0027 | 6.0 | 1512 | 0.9481 | 0.6780 | 0.7550 | 0.7145 | 0.9120 |
| 0.0019 | 7.0 | 1764 | 0.9328 | 0.6917 | 0.7513 | 0.7203 | 0.9150 |
| 0.0008 | 8.0 | 2016 | 0.9570 | 0.6976 | 0.7520 | 0.7238 | 0.9143 |
| 0.0005 | 9.0 | 2268 | 0.9586 | 0.7016 | 0.7518 | 0.7258 | 0.9154 |
| 0.0003 | 10.0 | 2520 | 0.9565 | 0.6945 | 0.7520 | 0.7221 | 0.9151 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
pnikoulis/dqn-SpaceInvadersNoFrameskip-v4 | pnikoulis | "2023-11-29T15:03:35Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-29T15:03:04Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 232.00 +/- 130.85
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pnikoulis -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pnikoulis -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga pnikoulis
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
KingKazma/xsum_gpt2_p_tuning_500_4_50000_6_e0_s6789_v4_l4_v100 | KingKazma | "2023-09-02T01:26:10Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-02T01:26:08Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
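The card lists only the PEFT version; a hedged loading sketch (the GPT-2 base is implied by the repo name, and `AutoPeftModelForCausalLM` resolves the actual base model from the adapter config):

```python
from peft import AutoPeftModelForCausalLM

# Loads the p-tuning adapter together with its base model in one call
model = AutoPeftModelForCausalLM.from_pretrained(
    "KingKazma/xsum_gpt2_p_tuning_500_4_50000_6_e0_s6789_v4_l4_v100"
)
```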
|
guilxus/a40926a4-0276-43d5-a0b2-bb7f5c8cb483 | guilxus | "2025-02-09T17:06:44Z" | 33 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-2-Theta-Llama-3-8B",
"base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-09T16:10:24Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a40926a4-0276-43d5-a0b2-bb7f5c8cb483
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-2-Theta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c9b7db6e5effa927_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c9b7db6e5effa927_train_data.json
type:
field_input: category
field_instruction: tools
field_output: task
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: guilxus/a40926a4-0276-43d5-a0b2-bb7f5c8cb483
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.2
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 600
micro_batch_size: 2
mlflow_experiment_name: /tmp/c9b7db6e5effa927_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 20c1b47d-d9fe-4e65-b212-e710c7c0a52f
wandb_project: Gradients-On-11
wandb_run: your_name
wandb_runid: 20c1b47d-d9fe-4e65-b212-e710c7c0a52f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# a40926a4-0276-43d5-a0b2-bb7f5c8cb483
This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes, `adamw_bnb_8bit`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 600
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2292 | 0.7687 | 600 | 0.3271 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Hemorphage/ppo-LunarLander-v2 | Hemorphage | "2023-02-13T20:52:54Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-13T20:44:35Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 53.79 +/- 76.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal, hedged loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `huggingface_sb3` naming):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo (filename assumed) and load it
checkpoint = load_from_hub("Hemorphage/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Mihaiii/TinyLlama-1.1B-Chat-v1.0-optimum-intel | Mihaiii | "2024-01-23T10:44:38Z" | 86 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-23T09:46:41Z" | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
inference: false
---
Optimum quantization using the command:
```bash
optimum-cli inc quantize --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --output ./TinyLlama
```
Usage example:
```python
from optimum.intel import INCModelForCausalLM
from transformers import AutoTokenizer, pipeline
import torch
model_id = "Mihaiii/TinyLlama-1.1B-Chat-v1.0-optimum-intel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = INCModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.0001, repetition_penalty=1.2)
print(outputs[0]["generated_text"])
``` |
mradermacher/mpt-30b-i1-GGUF | mradermacher | "2025-02-03T10:22:27Z" | 259 | 0 | transformers | [
"transformers",
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"en",
"dataset:allenai/c4",
"dataset:mc4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack-dedup",
"dataset:allenai/s2orc",
"base_model:mosaicml/mpt-30b",
"base_model:quantized:mosaicml/mpt-30b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-09-10T08:05:50Z" | ---
base_model: mosaicml/mpt-30b
datasets:
- allenai/c4
- mc4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack-dedup
- allenai/s2orc
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mosaicml/mpt-30b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/mpt-30b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
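For a quick local test, a hedged sketch with `llama-cpp-python` (the file name is taken from the table below; context size and sampling settings are illustrative):

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table below has already been downloaded
llm = Llama(model_path="mpt-30b.i1-Q4_K_M.gguf", n_ctx=2048)
out = llm("Once upon a time", max_tokens=128)
print(out["choices"][0]["text"])
```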
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-IQ1_S.gguf) | i1-IQ1_S | 6.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-IQ1_M.gguf) | i1-IQ1_M | 7.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-IQ2_S.gguf) | i1-IQ2_S | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-IQ2_M.gguf) | i1-IQ2_M | 10.2 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-Q2_K.gguf) | i1-Q2_K | 11.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 11.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-IQ3_S.gguf) | i1-IQ3_S | 13.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 13.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-IQ3_M.gguf) | i1-IQ3_M | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 15.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 16.1 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-Q4_0.gguf) | i1-Q4_0 | 17.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 17.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 19.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 22.4 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-30b-i1-GGUF/resolve/main/mpt-30b.i1-Q6_K.gguf) | i1-Q6_K | 24.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
oz1115/hate | oz1115 | "2024-07-31T01:02:25Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"electra",
"text-classification",
"generated_from_trainer",
"base_model:beomi/KcELECTRA-base",
"base_model:finetune:beomi/KcELECTRA-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-31T01:01:59Z" | ---
license: mit
base_model: beomi/KcELECTRA-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hate
This model is a fine-tuned version of [beomi/KcELECTRA-base](https://huggingface.co/beomi/KcELECTRA-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3904
- Accuracy: 0.8278
## Model description
More information needed
## Intended uses & limitations
More information needed
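A minimal, hedged usage sketch (the label mapping is not documented on this card, so outputs may use generic `LABEL_0`/`LABEL_1` names):

```python
from transformers import pipeline

# Korean hate-speech classifier fine-tuned from KcELECTRA
clf = pipeline("text-classification", model="oz1115/hate")
print(clf("이 문장은 예시입니다."))
```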
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4578 | 0.2002 | 734 | 0.3904 | 0.8278 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
amyy78/u4 | amyy78 | "2023-11-03T05:45:45Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-03T05:45:40Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: u4
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 48.40 +/- 33.78
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
IDPZEro/dummy-model | IDPZEro | "2024-04-30T13:04:00Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-04-30T13:02:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ItsMaxNorm/lora-trained-xl | ItsMaxNorm | "2025-04-14T17:56:58Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"lora",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2025-04-13T20:56:59Z" | ---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks dog
tags:
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - ItsMaxNorm/lora-trained-xl
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
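Pending the author's snippet above, a hedged sketch using standard diffusers LoRA loading (the instance prompt comes from this card; everything else is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA was trained against, then attach the adapter
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ItsMaxNorm/lora-trained-xl")

image = pipe("a photo of sks dog", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```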
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
TheBloke/Vigogne-2-13B-Instruct-AWQ | TheBloke | "2023-11-09T18:20:36Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"LLM",
"llama-2",
"fr",
"base_model:bofenghuang/vigogne-2-13b-instruct",
"base_model:quantized:bofenghuang/vigogne-2-13b-instruct",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2023-09-19T04:42:19Z" | ---
language:
- fr
license: llama2
library_name: transformers
tags:
- LLM
- llama
- llama-2
model_name: Vigogne 2 13B Instruct
base_model: bofenghuang/vigogne-2-13b-instruct
inference: false
model_creator: bofenghuang
model_type: llama
pipeline_tag: text-generation
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Vigogne 2 13B Instruct - AWQ
- Model creator: [bofenghuang](https://huggingface.co/bofenghuang)
- Original model: [Vigogne 2 13B Instruct](https://huggingface.co/bofenghuang/vigogne-2-13b-instruct)
<!-- description start -->
## Description
This repo contains AWQ model files for [bofenghuang's Vigogne 2 13B Instruct](https://huggingface.co/bofenghuang/vigogne-2-13b-instruct).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF)
* [bofenghuang's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bofenghuang/vigogne-2-13b-instruct)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-AWQ/tree/main) | 4 | 128 | [French news](https://huggingface.co/datasets/gustavecortal/diverse_french_news) | 4096 | 7.25 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Vigogne-2-13B-Instruct-AWQ --quantization awq
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Vigogne-2-13B-Instruct-AWQ", quantization="awq")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/Vigogne-2-13B-Instruct-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: bofenghuang's Vigogne 2 13B Instruct
<p align="center" width="100%">
<img src="https://huggingface.co/bofenghuang/vigogne-2-13b-instruct/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
</p>
# Vigogne-2-13B-Instruct: A Llama-2 based French instruction-following model
Vigogne-2-13B-Instruct is a model based on [LLaMA-2-13B](https://ai.meta.com/llama) that has been fine-tuned to follow French instructions.
For more information, please visit the Github repo: https://github.com/bofenghuang/vigogne
**Usage and License Notices**: Vigogne-2-13B-Instruct follows the same usage policy as Llama-2, which can be found [here](https://ai.meta.com/llama/use-policy).
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from vigogne.preprocess import generate_instruct_prompt
model_name_or_path = "bofenghuang/vigogne-2-13b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto")
user_query = "Expliquez la différence entre DoS et phishing."
prompt = generate_instruct_prompt(user_query)
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
input_length = input_ids.shape[1]
generated_outputs = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(
temperature=0.1,
do_sample=True,
repetition_penalty=1.0,
max_new_tokens=512,
),
return_dict_in_generate=True,
)
generated_tokens = generated_outputs.sequences[0, input_length:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(generated_text)
```
You can also infer this model by using the following Google Colab Notebook.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Example Outputs
*todo*
## Limitations
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
|
Chi666/mistralai_Mixtral-8x22B-Instruct-v0.1_finetune_20250210 | Chi666 | "2025-02-11T12:03:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-02-11T11:46:27Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nhung03/9875129d-4dc5-41af-828c-88f35b745fb6 | nhung03 | "2025-01-13T17:19:20Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Coder-7B",
"base_model:adapter:unsloth/Qwen2.5-Coder-7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-13T16:52:52Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Coder-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9875129d-4dc5-41af-828c-88f35b745fb6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-Coder-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 73747b81bdd59b67_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/73747b81bdd59b67_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung03/9875129d-4dc5-41af-828c-88f35b745fb6
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/73747b81bdd59b67_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 16119331-0f9c-49d9-888b-3d979ba41c25
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 16119331-0f9c-49d9-888b-3d979ba41c25
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9875129d-4dc5-41af-828c-88f35b745fb6
This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B](https://huggingface.co/unsloth/Qwen2.5-Coder-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4827
## Model description
More information needed
## Intended uses & limitations
More information needed
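As a minimal usage sketch, the LoRA adapter can be attached to the base model with PEFT for inference (the prompt and generation settings below are illustrative, not taken from the training setup):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Qwen2.5-Coder-7B"
adapter_id = "nhung03/9875129d-4dc5-41af-828c-88f35b745fb6"

# Load the base model, then attach the LoRA adapter weights on top
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Write a function that reverses a string.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```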
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3674 | 0.0325 | 200 | 1.4827 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Nexspear/29eb08d5-0ff6-4863-ae3d-293ec46ae81a | Nexspear | "2025-01-13T16:14:33Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"bloom",
"axolotl",
"generated_from_trainer",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2025-01-13T16:07:03Z" | ---
library_name: peft
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloomz-560m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 29eb08d5-0ff6-4863-ae3d-293ec46ae81a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: bigscience/bloomz-560m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f1bc7e9faf5b03b2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f1bc7e9faf5b03b2_train_data.json
type:
field_input: real_abstract
field_instruction: title
field_output: generated_abstract
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: Nexspear/29eb08d5-0ff6-4863-ae3d-293ec46ae81a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 100
micro_batch_size: 8
mlflow_experiment_name: /tmp/f1bc7e9faf5b03b2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 4267907d-a9d0-4f7a-ad94-b6ffd47bc6ff
wandb_project: Gradients-On-Four
wandb_run: your_name
wandb_runid: 4267907d-a9d0-4f7a-ad94-b6ffd47bc6ff
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 29eb08d5-0ff6-4863-ae3d-293ec46ae81a
This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0034 | 1 | 2.3144 |
| 8.9964 | 0.0309 | 9 | 2.2398 |
| 8.3309 | 0.0619 | 18 | 2.1041 |
| 8.0886 | 0.0928 | 27 | 2.0422 |
| 7.8037 | 0.1237 | 36 | 2.0057 |
| 7.8449 | 0.1546 | 45 | 1.9821 |
| 7.9978 | 0.1856 | 54 | 1.9646 |
| 7.5581 | 0.2165 | 63 | 1.9571 |
| 7.7959 | 0.2474 | 72 | 1.9517 |
| 7.4536 | 0.2784 | 81 | 1.9476 |
| 7.7221 | 0.3093 | 90 | 1.9463 |
| 7.6559 | 0.3402 | 99 | 1.9455 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mini1013/master_cate_fi10 | mini1013 | "2025-01-21T21:02:48Z" | 1,255 | 0 | setfit | [
"setfit",
"safetensors",
"roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:mini1013/master_domain",
"base_model:finetune:mini1013/master_domain",
"model-index",
"region:us"
] | text-classification | "2025-01-21T21:02:26Z" | ---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 미드센추리 투명 아크릴 스테인리스 트롤리 이동식 거실 테이블 가구/인테리어>주방가구>왜건/카트
- text: 복고풍 황동철제 에메랄드 인테리어 리빙 고급 대리석 사각테이블 가구/인테리어>주방가구>식탁/의자>식탁테이블
- text: 스칸디아 우디 800 1200 반타원형 수납 원목 테이블 식탁 착불배송 가구/인테리어>주방가구>식탁/의자>식탁테이블
- text: 퍼니코 어반 라미네이트 반타원 1200 4인용 식탁 세트 식탁 의자2P 벤치1P 1000 벤치 포인트체어 1200X800 가구/인테리어>주방가구>식탁/의자>식탁세트
- text: 모던 홈 바 테이블 세트 아일랜드 식탁 100 240cm-길이 200 폭 30 높이 100 가구/인테리어>주방가구>식탁/의자>아일랜드식탁
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: mini1013/master_domain
model-index:
- name: SetFit with mini1013/master_domain
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 1.0
name: Accuracy
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 6 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2.0 | <ul><li>'밥솥다이 전자렌지선반 광파오븐장 1200 라빈화이트 가구/인테리어>주방가구>레인지대'</li><li>'가구느낌 전자레인지 수납장 4단 밥솥 다이 렌지 선반 가구/인테리어>주방가구>레인지대'</li><li>'가구레시피 한정이벤트 조립식 시그니처 진열장형 5단 선반장 렌지대 주방수납장 밥솥다이 가구/인테리어>주방가구>레인지대'</li></ul> |
| 0.0 | <ul><li>'장미맨숀 마르틴 원목 그릇장 가구/인테리어>주방가구>그릇장/컵보드'</li><li>'찻잔 장식장 다기 진열 홈카페 수납장 주방 선반 그 -17 오동나무 12칸 벽걸이형 가구/인테리어>주방가구>그릇장/컵보드'</li><li>'찬장 원목 그릇장 빈티지 주방 수납장 엔틱 미닫이 진열장 가구/인테리어>주방가구>그릇장/컵보드'</li></ul> |
| 5.0 | <ul><li>'아이엔지홈 킨포크 주방수납장 1200 가구/인테리어>주방가구>주방수납장'</li><li>'리바트키친 트루 주방 수납장 가구/인테리어>주방가구>주방수납장'</li><li>'화이트 수납 캐비닛 주방 지중해 갤러리 찬장 가구/인테리어>주방가구>주방수납장'</li></ul> |
| 4.0 | <ul><li>'이동식 트롤리 바퀴달린 리어카 선반 다이닝카 다층선반 미드센추리 가구/인테리어>주방가구>왜건/카트'</li><li>'진료 선반 병원 카트 치과 트레이 드레싱 장비 수납 가구/인테리어>주방가구>왜건/카트'</li><li>'밀스턴 튼튼한 이동식 트롤리 3단 가구/인테리어>주방가구>왜건/카트'</li></ul> |
| 3.0 | <ul><li>'화이트 엣지 600 원형 18T 라운딩 테이블 가구/인테리어>주방가구>식탁/의자>식탁테이블'</li><li>'600x2000 키큰 주방 렌지대 겸 접이식 식탁 밥솥 다이 가구/인테리어>주방가구>식탁/의자>레인지대겸용식탁'</li><li>'웰퍼니쳐 클로이 고무나무 원목 6인 식탁세트 의자6 가구/인테리어>주방가구>식탁/의자>식탁세트'</li></ul> |
| 1.0 | <ul><li>'흡수가 빠른 씽크대선반건조대 규조토드라잉매트 가구/인테리어>주방가구>기타주방가구'</li><li>'업소용 싱크대 영업용 식당 스텐 주방 씽크대 개수대 가구/인테리어>주방가구>기타주방가구'</li><li>'주방 식당 스텐 배수 조리대 작업대 테이블 싱크대 업소용 스테인레스 가구/인테리어>주방가구>기타주방가구'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_fi10")
# Run inference
preds = model("미드센추리 투명 아크릴 스테인리스 트롤리 이동식 거실 테이블 가구/인테리어>주방가구>왜건/카트")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 9.0476 | 15 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 70 |
| 1.0 | 70 |
| 2.0 | 70 |
| 3.0 | 70 |
| 4.0 | 70 |
| 5.0 | 70 |
### Training Hyperparameters
- batch_size: (256, 256)
- num_epochs: (30, 30)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 50
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:----:|:-------------:|:---------------:|
| 0.0120 | 1 | 0.494 | - |
| 0.6024 | 50 | 0.4972 | - |
| 1.2048 | 100 | 0.4906 | - |
| 1.8072 | 150 | 0.1734 | - |
| 2.4096 | 200 | 0.0195 | - |
| 3.0120 | 250 | 0.0002 | - |
| 3.6145 | 300 | 0.0 | - |
| 4.2169 | 350 | 0.0 | - |
| 4.8193 | 400 | 0.0001 | - |
| 5.4217 | 450 | 0.0 | - |
| 6.0241 | 500 | 0.0 | - |
| 6.6265 | 550 | 0.0 | - |
| 7.2289 | 600 | 0.0 | - |
| 7.8313 | 650 | 0.0 | - |
| 8.4337 | 700 | 0.0 | - |
| 9.0361 | 750 | 0.0 | - |
| 9.6386 | 800 | 0.0 | - |
| 10.2410 | 850 | 0.0 | - |
| 10.8434 | 900 | 0.0 | - |
| 11.4458 | 950 | 0.0 | - |
| 12.0482 | 1000 | 0.0 | - |
| 12.6506 | 1050 | 0.0 | - |
| 13.2530 | 1100 | 0.0 | - |
| 13.8554 | 1150 | 0.0 | - |
| 14.4578 | 1200 | 0.0 | - |
| 15.0602 | 1250 | 0.0 | - |
| 15.6627 | 1300 | 0.0 | - |
| 16.2651 | 1350 | 0.0 | - |
| 16.8675 | 1400 | 0.0 | - |
| 17.4699 | 1450 | 0.0 | - |
| 18.0723 | 1500 | 0.0 | - |
| 18.6747 | 1550 | 0.0 | - |
| 19.2771 | 1600 | 0.0 | - |
| 19.8795 | 1650 | 0.0 | - |
| 20.4819 | 1700 | 0.0 | - |
| 21.0843 | 1750 | 0.0 | - |
| 21.6867 | 1800 | 0.0 | - |
| 22.2892 | 1850 | 0.0 | - |
| 22.8916 | 1900 | 0.0 | - |
| 23.4940 | 1950 | 0.0 | - |
| 24.0964 | 2000 | 0.0 | - |
| 24.6988 | 2050 | 0.0 | - |
| 25.3012 | 2100 | 0.0 | - |
| 25.9036 | 2150 | 0.0 | - |
| 26.5060 | 2200 | 0.0 | - |
| 27.1084 | 2250 | 0.0 | - |
| 27.7108 | 2300 | 0.0 | - |
| 28.3133 | 2350 | 0.0 | - |
| 28.9157 | 2400 | 0.0 | - |
| 29.5181 | 2450 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
LoneStriker/Blue-Orchid-2x7b-8.0bpw-h8-exl2 | LoneStriker | "2024-02-03T05:26:08Z" | 9 | 5 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-01T20:00:17Z" | ---
license: apache-2.0
---
**Blue-Orchid-2x7b**
GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF
Roleplaying-focused MoE Mistral model.
One expert is a merge of mostly RP models; the other is a merge of mostly storywriting models, so it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.
- Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot.
- Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot.
## Prompt template (LimaRP):
```
### Instruction:
{system prompt}
### Input:
User: {prompt}
### Response:
Character:
```
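For example, the template above can be filled in programmatically (a minimal sketch; the system prompt, user message, and character name are placeholders):

```python
def build_limarp_prompt(system_prompt: str, user_message: str, character: str = "Character") -> str:
    # Assemble the Instruction/Input/Response blocks exactly as in the template above
    return (
        "### Instruction:\n"
        f"{system_prompt}\n"
        "### Input:\n"
        f"User: {user_message}\n"
        "### Response:\n"
        f"{character}:"
    )

print(build_limarp_prompt("You are a fantasy storyteller.", "Describe the tavern."))
```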
Alpaca prompt template should work fine too. |
ttnksm/lease_sk_ner_emph_non_emph_26_01 | ttnksm | "2025-01-27T14:18:18Z" | 79 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:gerulata/slovakbert",
"base_model:finetune:gerulata/slovakbert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-01-27T13:59:51Z" | ---
library_name: transformers
license: mit
base_model: gerulata/slovakbert
tags:
- generated_from_trainer
model-index:
- name: lease_sk_ner_emph_non_emph_26_01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lease_sk_ner_emph_non_emph_26_01
This model is a fine-tuned version of [gerulata/slovakbert](https://huggingface.co/gerulata/slovakbert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1265 | 1.0 | 570 | 0.0390 |
| 0.0317 | 2.0 | 1140 | 0.0283 |
| 0.0198 | 3.0 | 1710 | 0.0230 |
| 0.0148 | 4.0 | 2280 | 0.0230 |
| 0.0119 | 5.0 | 2850 | 0.0246 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
ZeroUniqueness/qlora-llama-2-13b-code | ZeroUniqueness | "2023-08-16T02:59:42Z" | 27 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-02T16:13:08Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
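For reference, the listed settings correspond roughly to the following `BitsAndBytesConfig` (a minimal sketch; parameter names follow the current transformers API and may differ slightly from the version used for this run):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the settings above: 4-bit NF4 quantization with double
# quantization and bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Pass as quantization_config=bnb_config to AutoModelForCausalLM.from_pretrained(...)
```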
### Framework versions
- PEFT 0.5.0.dev0
|
huggingtweets/chrisevans-robertdowneyjr | huggingtweets | "2022-06-16T20:34:01Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-06-16T20:32:28Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/chrisevans-robertdowneyjr/1655411636421/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1353806309397655553/0zEtkDvx_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1320917504013848577/-VTJLuI9_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Robert Downey Jr & Chris Evans</div>
<div style="text-align: center; font-size: 14px;">@chrisevans-robertdowneyjr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Robert Downey Jr & Chris Evans.
| Data | Robert Downey Jr | Chris Evans |
| --- | --- | --- |
| Tweets downloaded | 875 | 2075 |
| Retweets | 154 | 684 |
| Short tweets | 70 | 209 |
| Tweets kept | 651 | 1182 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2a0abddd/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chrisevans-robertdowneyjr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/hfbdxz6f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/hfbdxz6f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chrisevans-robertdowneyjr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
abhishek/zephyr-beta-math | abhishek | "2023-11-09T13:56:26Z" | 1,509 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mistral",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-27T08:53:16Z" | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
Hello, this is a long description now. How about it? Hello, this is a long description now. How about it? Hello, this is a long description now. How about it? Hello, this is a long description now. How about it? Hello, this is a long description now. How about it? Hello, this is a long description now. How about it? Hello, this is a long description now. How about it? Hello, this is a long description now. How about it? Hello, this is a long description now. How about it? |
mradermacher/deepseek-coder-33b-base-GGUF | mradermacher | "2024-09-08T20:26:10Z" | 16 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:deepseek-ai/deepseek-coder-33b-base",
"base_model:quantized:deepseek-ai/deepseek-coder-33b-base",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-09-06T03:17:15Z" | ---
base_model: deepseek-ai/deepseek-coder-33b-base
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: deepseek-license
no_imatrix: nan detected in blk.61.attn_q.weight
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/deepseek-ai/deepseek-coder-33b-base
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
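As a concrete starting point, a single-part quant from this repo can be fetched and loaded roughly like this (a minimal sketch assuming `huggingface_hub` and `llama-cpp-python` are installed; the chosen quant file is just one of the options listed below):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file, e.g. the recommended Q4_K_M variant
model_path = hf_hub_download(
    repo_id="mradermacher/deepseek-coder-33b-base-GGUF",
    filename="deepseek-coder-33b-base.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("def fibonacci(n):", max_tokens=64)
print(out["choices"][0]["text"])
```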
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.Q2_K.gguf) | Q2_K | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.IQ3_XS.gguf) | IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.IQ3_S.gguf) | IQ3_S | 14.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.IQ3_M.gguf) | IQ3_M | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.Q3_K_M.gguf) | Q3_K_M | 16.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.Q3_K_L.gguf) | Q3_K_L | 17.7 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.IQ4_XS.gguf) | IQ4_XS | 18.1 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.Q4_K_S.gguf) | Q4_K_S | 19.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.Q5_K_S.gguf) | Q5_K_S | 23.1 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.Q5_K_M.gguf) | Q5_K_M | 23.6 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.Q6_K.gguf) | Q6_K | 27.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-coder-33b-base-GGUF/resolve/main/deepseek-coder-33b-base.Q8_0.gguf) | Q8_0 | 35.5 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bigband/ResilientHorus | bigband | "2025-02-21T05:32:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-21T05:32:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ghegfield/Llama-2-7b-chat-hf-formula-peft | ghegfield | "2023-10-26T00:20:36Z" | 0 | 0 | null | [
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:finetune:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | "2023-10-21T13:17:40Z" | ---
base_model: NousResearch/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: Llama-2-7b-chat-hf-formula-peft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-chat-hf-formula-peft
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1878 | 1.43 | 10 | 3.6596 |
| 2.8437 | 2.86 | 20 | 2.6466 |
| 1.8635 | 4.29 | 30 | 2.2266 |
| 1.4052 | 5.71 | 40 | 2.1136 |
| 1.2186 | 7.14 | 50 | 2.0805 |
| 0.8835 | 8.57 | 60 | 2.0733 |
| 0.6991 | 10.0 | 70 | 2.0809 |
| 0.5608 | 11.43 | 80 | 2.0862 |
| 0.4188 | 12.86 | 90 | 2.1078 |
| 0.3897 | 14.29 | 100 | 2.1089 |
| 0.2748 | 15.71 | 110 | 2.1333 |
| 0.2582 | 17.14 | 120 | 2.1383 |
| 0.2394 | 18.57 | 130 | 2.1440 |
| 0.2392 | 20.0 | 140 | 2.1452 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
brixeus/55b0e08a-8a9b-4394-9f80-7d8739261d02 | brixeus | "2025-02-26T12:26:52Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:adapter:NousResearch/Yarn-Mistral-7b-128k",
"license:apache-2.0",
"region:us"
] | null | "2025-02-26T10:47:41Z" | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Yarn-Mistral-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 55b0e08a-8a9b-4394-9f80-7d8739261d02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Mistral-7b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ec3a1fa9097f209b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ec3a1fa9097f209b_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
ddp_timeout: 1800
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
group_by_length: true
hub_model_id: brixeus/55b0e08a-8a9b-4394-9f80-7d8739261d02
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 1800
micro_batch_size: 4
mlflow_experiment_name: /tmp/ec3a1fa9097f209b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
relora_prune_ratio: 0.9
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: acopia-grant
wandb_mode: online
wandb_name: 2ff8dc3a-3ca3-4651-9759-228db073299b
wandb_project: Gradients-On-60
wandb_run: your_name
wandb_runid: 2ff8dc3a-3ca3-4651-9759-228db073299b
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 55b0e08a-8a9b-4394-9f80-7d8739261d02
This model is a fine-tuned version of [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1502
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 1800
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | 0.4296 |
| 0.8485 | 0.0481 | 150 | 0.2456 |
| 0.7233 | 0.0962 | 300 | 0.2319 |
| 0.7551 | 0.1443 | 450 | 0.2190 |
| 0.6373 | 0.1924 | 600 | 0.2016 |
| 0.7612 | 0.2406 | 750 | 0.1909 |
| 0.5026 | 0.2887 | 900 | 0.1827 |
| 0.6856 | 0.3368 | 1050 | 0.1737 |
| 0.5957 | 0.3849 | 1200 | 0.1703 |
| 0.5332 | 0.4330 | 1350 | 0.1633 |
| 0.5696 | 0.4811 | 1500 | 0.1560 |
| 0.4823 | 0.5292 | 1650 | 0.1476 |
| 0.4592 | 0.5773 | 1800 | 0.1502 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
macarious/torgo_xlsr_finetune_M01_old | macarious | "2024-03-05T03:00:31Z" | 60 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-11-19T07:39:12Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: torgo_xlsr_finetune_M01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# torgo_xlsr_finetune_M01
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8655
- Wer: 0.3060
## Model description
More information needed
## Intended uses & limitations
More information needed
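A minimal inference sketch (assuming 16 kHz input audio, matching the wav2vec2 pretraining rate; the file path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="macarious/torgo_xlsr_finetune_M01_old",
)
# Input should be 16 kHz mono audio
print(asr("sample.wav")["text"])
```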
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4346 | 0.89 | 1000 | 3.3570 | 1.0 |
| 1.3708 | 1.79 | 2000 | 1.5774 | 0.7569 |
| 0.7783 | 2.69 | 3000 | 1.6546 | 0.6103 |
| 0.5676 | 3.58 | 4000 | 1.3849 | 0.5216 |
| 0.4476 | 4.48 | 5000 | 1.5294 | 0.5 |
| 0.4264 | 5.37 | 6000 | 1.5832 | 0.4534 |
| 0.3434 | 6.27 | 7000 | 1.4397 | 0.4233 |
| 0.3371 | 7.16 | 8000 | 1.4635 | 0.4129 |
| 0.3268 | 8.06 | 9000 | 1.5989 | 0.3828 |
| 0.2623 | 8.95 | 10000 | 1.5145 | 0.3836 |
| 0.2755 | 9.85 | 11000 | 1.6695 | 0.3569 |
| 0.2304 | 10.74 | 12000 | 1.4313 | 0.3397 |
| 0.2052 | 11.64 | 13000 | 1.4242 | 0.3466 |
| 0.199 | 12.53 | 14000 | 1.7287 | 0.3405 |
| 0.2124 | 13.43 | 15000 | 1.4715 | 0.3086 |
| 0.1858 | 14.32 | 16000 | 1.6835 | 0.3086 |
| 0.1667 | 15.22 | 17000 | 1.6080 | 0.3233 |
| 0.1551 | 16.11 | 18000 | 1.6151 | 0.3293 |
| 0.1638 | 17.01 | 19000 | 1.5014 | 0.3034 |
| 0.1584 | 17.9 | 20000 | 1.7036 | 0.3233 |
| 0.1486 | 18.8 | 21000 | 1.6527 | 0.3207 |
| 0.1337 | 19.7 | 22000 | 1.6947 | 0.3181 |
| 0.201 | 20.59 | 23000 | 1.9110 | 0.3431 |
| 0.2058 | 21.49 | 24000 | 1.6260 | 0.3560 |
| 0.1776 | 22.38 | 25000 | 1.8602 | 0.3483 |
| 0.1779 | 23.28 | 26000 | 2.0418 | 0.3578 |
| 0.1401 | 24.17 | 27000 | 2.0262 | 0.3371 |
| 0.1533 | 25.07 | 28000 | 1.7442 | 0.3069 |
| 0.1476 | 25.96 | 29000 | 1.7283 | 0.3190 |
| 0.1414 | 26.86 | 30000 | 1.7655 | 0.3181 |
| 0.1522 | 27.75 | 31000 | 1.6772 | 0.3103 |
| 0.146 | 28.65 | 32000 | 1.4420 | 0.3 |
| 0.1363 | 29.54 | 33000 | 1.5955 | 0.3276 |
| 0.1306 | 30.44 | 34000 | 1.7269 | 0.3336 |
| 0.1241 | 31.33 | 35000 | 1.7725 | 0.3216 |
| 0.1155 | 32.23 | 36000 | 1.8232 | 0.3086 |
| 0.117 | 33.12 | 37000 | 1.8145 | 0.3052 |
| 0.0973 | 34.02 | 38000 | 2.0621 | 0.3216 |
| 0.1181 | 34.91 | 39000 | 1.6758 | 0.2957 |
| 0.1063 | 35.81 | 40000 | 1.6431 | 0.2983 |
| 0.094 | 36.71 | 41000 | 1.7967 | 0.3069 |
| 0.0937 | 37.6 | 42000 | 1.8469 | 0.3052 |
| 0.0931 | 38.5 | 43000 | 1.8364 | 0.3017 |
| 0.0897 | 39.39 | 44000 | 1.8655 | 0.3060 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.13.3
|
KingKazma/cnn_dailymail_gpt2_lora_500_10_3000_8_e6_s6789_v3_l4_r2 | KingKazma | "2023-08-12T21:16:34Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-12T21:16:29Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Hamza-Ziyard/sinMT5-tuned | Hamza-Ziyard | "2023-05-08T13:26:54Z" | 21 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | "2023-05-07T00:17:03Z" | ---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: sinMT5-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sinMT5-tuned
This model is a fine-tuned version of [csebuetnlp/mT5_multilingual_XLSum](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8573
- Rouge1: 20.2531
- Rouge2: 8.1307
- Rougel: 19.3917
- Rougelsum: 20.0592
## Model description
More information needed
## Intended uses & limitations
More information needed
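A minimal usage sketch (the article text is a placeholder and the generation settings are illustrative, not taken from the training setup):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Hamza-Ziyard/sinMT5-tuned")
model = AutoModelForSeq2SeqLM.from_pretrained("Hamza-Ziyard/sinMT5-tuned")

article = "..."  # text to summarize
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=84, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```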
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00015652249866150822
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 1.8651 | 1.0 | 1500 | 1.8070 | 17.676 | 7.1418 | 16.8638 | 17.457 |
| 1.5527 | 2.0 | 3000 | 1.7804 | 21.1357 | 8.1386 | 20.122 | 20.8652 |
| 1.3755 | 3.0 | 4500 | 1.7769 | 21.4151 | 8.5692 | 20.3204 | 21.1152 |
| 1.2473 | 4.0 | 6000 | 1.7937 | 21.2434 | 8.2325 | 20.1332 | 21.0657 |
| 1.1548 | 5.0 | 7500 | 1.8035 | 20.4298 | 8.2314 | 19.5909 | 20.2116 |
| 1.0835 | 6.0 | 9000 | 1.8367 | 20.5427 | 8.2226 | 19.6134 | 20.2918 |
| 1.0387 | 7.0 | 10500 | 1.8573 | 20.2531 | 8.1307 | 19.3917 | 20.0592 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
jondurbin/airoboros-l2-13b-gpt4-1.4.1 | jondurbin | "2023-08-04T20:50:37Z" | 1,430 | 12 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-24T08:18:44Z" | ---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---
### Overview
Llama 2 13b fine tune using https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1
See the previous llama 65b model card for info:
https://hf.co/jondurbin/airoboros-65b-gpt4-1.4
### Licence and usage restrictions
This model was built on llama-2, which has a proprietary/custom Meta license.
- See the LICENSE.txt file attached for the original license, along with USE_POLICY.md which was also provided by Meta.
The data used to fine-tune the llama-2-13b-hf model was generated by GPT4 via OpenAI API calls, using [airoboros](https://github.com/jondurbin/airoboros)
- The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
  - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise unallowably licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact that you must comply with the original Meta license) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me. |
memeviss/unjust_5 | memeviss | "2025-03-21T07:32:11Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-03-20T12:20:28Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Voicemod/fastspeech2-en-ljspeech | Voicemod | "2022-05-22T22:54:24Z" | 6 | 8 | fairseq | [
"fairseq",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:2006.04558",
"arxiv:2109.06912",
"region:us"
] | text-to-speech | "2022-05-19T13:25:18Z" | ---
library_name: fairseq
task: text-to-speech
tags:
- fairseq
- audio
- text-to-speech
language: en
datasets:
- ljspeech
widget:
- text: "Hello, this is a test run."
example_title: "Hello, this is a test run."
---
# fastspeech2-en-ljspeech
[FastSpeech 2](https://arxiv.org/abs/2006.04558) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- English
- Single-speaker female voice
- Trained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/fastspeech2-en-ljspeech",
arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator(model, cfg)
text = "Hello, this is a test run."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
```
See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/ljspeech_example.md).
## Citation
```bibtex
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
|
sarthaksavvy/flux-lora-train | sarthaksavvy | "2024-09-03T07:23:03Z" | 77 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-09-03T06:59:10Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: sarthaksavvy
---
# Flux Lora Train
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `sarthaksavvy` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('sarthaksavvy/flux-lora-train', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
microsoft/cvt-13 | microsoft | "2023-09-17T16:00:37Z" | 9,762 | 11 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"cvt",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2103.15808",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-04-04T11:32:10Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Convolutional Vision Transformer (CvT)
CvT-13 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Wu et al. and first released in [this repository](https://github.com/microsoft/CvT).
Disclaimer: The team releasing CvT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Usage
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoFeatureExtractor, CvtForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained('microsoft/cvt-13')
model = CvtForImageClassification.from_pretrained('microsoft/cvt-13')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
``` |
LHRuig/filmsx | LHRuig | "2025-01-20T06:16:38Z" | 7 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-01-20T06:16:18Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: man
---
# filmsx
<Gallery />
## Model description
filmsx lora
## Trigger words
You should use `man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/filmsx/tree/main) them in the Files & versions tab.
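A minimal loading sketch with the 🧨 diffusers library (the weight filename below is an assumption — check the Files & versions tab for the actual name):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name is assumed; verify it against the Files & versions tab
pipeline.load_lora_weights("LHRuig/filmsx", weight_name="lora.safetensors")
image = pipeline("man in a film noir scene").images[0]
```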
|
diaenra/0d5a2237-f543-43e1-be3f-401c67f1c812 | diaenra | "2025-01-21T09:01:16Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"region:us"
] | null | "2025-01-21T05:15:05Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0d5a2237-f543-43e1-be3f-401c67f1c812
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6b44e678a82a701b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6b44e678a82a701b_train_data.json
type:
field_input: context
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: diaenra/0d5a2237-f543-43e1-be3f-401c67f1c812
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GB
micro_batch_size: 4
mlflow_experiment_name: /tmp/6b44e678a82a701b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: diaenra-tao-miner
wandb_mode: online
wandb_name: 4254bcf7-12ab-45f0-9f3c-2b8d77287b02
wandb_project: tao
wandb_run: diaenra
wandb_runid: 4254bcf7-12ab-45f0-9f3c-2b8d77287b02
warmup_steps: 10
weight_decay: 0.0
xformers_attention: true
```
</details><br>
# 0d5a2237-f543-43e1-be3f-401c67f1c812
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8864 | 0.9996 | 1668 | 0.8303 |
| 0.8148 | 1.9995 | 3336 | 0.8147 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
trangtrannnnn/23d7bf23-536a-49c2-9966-f1db4f75054e | trangtrannnnn | "2025-01-27T07:56:12Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-3B",
"base_model:adapter:Qwen/Qwen2.5-3B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-27T07:30:40Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 23d7bf23-536a-49c2-9966-f1db4f75054e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e3f7343345b9b21f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e3f7343345b9b21f_train_data.json
type:
field_instruction: description
field_output: code
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: trangtrannnnn/23d7bf23-536a-49c2-9966-f1db4f75054e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e3f7343345b9b21f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: db8f3cf6-8e27-4f7a-a1cc-9f92fa694ab2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: db8f3cf6-8e27-4f7a-a1cc-9f92fa694ab2
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 23d7bf23-536a-49c2-9966-f1db4f75054e
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5356
## Model description
More information needed
## Intended uses & limitations
More information needed
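If a standalone checkpoint is preferred, the LoRA adapter can be merged into the base model; a sketch assuming standard PEFT usage (not an official snippet from the trainer):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Attach this LoRA adapter to the base model from the config above, then merge it in.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B", device_map="auto")
model = PeftModel.from_pretrained(base, "trangtrannnnn/23d7bf23-536a-49c2-9966-f1db4f75054e")
merged = model.merge_and_unload()  # plain transformers model, no PEFT needed at inference
merged.save_pretrained("qwen2.5-3b-merged")
```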
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5265 | 0.0214 | 200 | 0.5356 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
cleanrl/InvertedPendulum-v4-ppo_continuous_action-seed1 | cleanrl | "2023-10-15T20:08:34Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"InvertedPendulum-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-15T20:08:28Z" | ---
tags:
- InvertedPendulum-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: InvertedPendulum-v4
type: InvertedPendulum-v4
metrics:
- type: mean_reward
value: 5.30 +/- 0.46
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **InvertedPendulum-v4**
This is a trained model of a PPO agent playing InvertedPendulum-v4.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_continuous_action.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ppo_continuous_action --env-id InvertedPendulum-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/InvertedPendulum-v4-ppo_continuous_action-seed1/raw/main/ppo_continuous_action.py
curl -OL https://huggingface.co/cleanrl/InvertedPendulum-v4-ppo_continuous_action-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/InvertedPendulum-v4-ppo_continuous_action-seed1/raw/main/poetry.lock
poetry install --all-extras
python ppo_continuous_action.py --track --save-model --upload-model --hf-entity cleanrl --env-id InvertedPendulum-v4 --seed 1
```
# Hyperparameters
```python
{'anneal_lr': True,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.2,
'clip_vloss': True,
'cuda': True,
'ent_coef': 0.0,
'env_id': 'InvertedPendulum-v4',
'exp_name': 'ppo_continuous_action',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0003,
'max_grad_norm': 0.5,
'minibatch_size': 64,
'norm_adv': True,
'num_envs': 1,
'num_minibatches': 32,
'num_steps': 2048,
'save_model': True,
'seed': 1,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 1000000,
'track': True,
'update_epochs': 10,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
tiiuae/Falcon3-3B-Instruct-GPTQ-Int4 | tiiuae | "2025-01-13T08:04:10Z" | 76 | 0 | null | [
"safetensors",
"llama",
"falcon3",
"en",
"fr",
"es",
"pt",
"base_model:tiiuae/Falcon3-3B-Instruct",
"base_model:quantized:tiiuae/Falcon3-3B-Instruct",
"license:other",
"4-bit",
"gptq",
"region:us"
] | null | "2024-12-14T09:22:22Z" | ---
base_model: tiiuae/Falcon3-3B-Instruct
language:
- en
- fr
- es
- pt
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
tags:
- falcon3
---
<div align="center">
<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/>
</div>
# Falcon3-3B-Instruct-GPTQ-Int4
The **Falcon3** family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
**Falcon3-3B-Instruct** achieves strong results on reasoning, language understanding, instruction following, code and mathematics tasks.
Falcon3-3B-Instruct supports 4 languages (English, French, Spanish, Portuguese) and a context length of up to 32K.
This repository contains the GPTQ-quantized 4-bit instruction-tuned 3B Falcon3 model.
## Model Details
- Architecture
- Transformer-based causal decoder-only architecture
- 22 decoder blocks
- Grouped Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
- Wider head dimension: 256
- High RoPE value to support long context understanding: 1000042
- Uses SwiGLU and RMSNorm
- 32K context length
- 131K vocab size
- Pruned and healed from Falcon3-7B-Base on only 100 Gigatokens of datasets comprising web, code, STEM, high-quality and multilingual data, using 1024 H100 GPU chips
- Post-trained on 1.2 million samples of STEM, conversational, code, safety and function-call data
- Supports EN, FR, ES, PT
- Developed by [Technology Innovation Institute](https://www.tii.ae)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
- Quantization: GPTQ 4-bit
## Getting started
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tiiuae/Falcon3-3B-Instruct-GPTQ-Int4"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many hours in one day?"
messages = [
{"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=1024
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
</details>
<br>
## Benchmarks
We report our internal pipeline benchmarks in the following table:
<table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
<colgroup>
<col style="width: 10%;">
<col style="width: 10%;">
<col style="width: 10%;">
<col style="width: 10%;">
<col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
</colgroup>
<thead>
<tr>
<th>Benchmark</th>
<th>Falcon3-3B-Instruct</th>
<th>Falcon3-3B-Instruct-GPTQ-Int8</th>
<th>Falcon3-3B-Instruct-AWQ</th>
<th>Falcon3-3B-Instruct-GPTQ-Int4</th>
</tr>
</thead>
<tbody>
<tr>
<td>MMLU</td>
<td>55.7</td>
<td>55.8</td>
<td>53.3</td>
<td>53.3</td>
</tr>
<tr>
<td>MMLU-PRO</td>
<td>30.0</td>
<td>30.3</td>
<td>28.4</td>
<td>25.9</td>
</tr>
<tr>
<td>IFEval</td>
<td>69.1</td>
<td>68.4</td>
<td>67.9</td>
<td>62.9</td>
</tr>
</tbody>
</table>
## Useful links
- View our [release blogpost](https://huggingface.co/blog/falcon3).
- Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or to interact with our researchers and developers.
## Technical Report
Coming soon....
## Citation
If the Falcon3 family of models was helpful to your work, feel free to cite us.
```
@misc{Falcon3,
title = {The Falcon 3 Family of Open Models},
url = {https://huggingface.co/blog/falcon3},
author = {Falcon-LLM Team},
month = {December},
year = {2024}
}
``` |
yanka9/Reinforce-PixelCopter-PLE-v0 | yanka9 | "2023-10-20T16:24:37Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-19T21:41:17Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 35.40 +/- 24.68
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
PrunaAI/NexaAIDev-Octopus-v2-HQQ-8bit-smashed | PrunaAI | "2025-03-29T01:52:40Z" | 3 | 0 | null | [
"gemma",
"pruna-ai",
"hqq",
"region:us"
] | null | "2025-03-22T05:21:22Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json` and after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo ORIGINAL_REPO_NAME are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
# Try the HQQ causal-LM wrapper first, then fall back to the generic HQQ loader.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/NexaAIDev-Octopus-v2-HQQ-8bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/NexaAIDev-Octopus-v2-HQQ-8bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
osanseviero/autotrain-sbpr1-6z22v | osanseviero | "2024-03-21T13:46:31Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:other",
"region:us"
] | text-generation | "2024-03-21T13:27:47Z" | ---
tags:
- autotrain
- text-generation
library_name: peft
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
base_model: meta-llama/Llama-2-7b-hf
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
Gumibit/q-FrozenLake-v1-4x4-Slippery_ex02 | Gumibit | "2023-01-20T23:47:48Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-20T23:47:38Z" | ---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery_ex02
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.73 +/- 0.44
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course notebook, not a package import.
model = load_from_hub(repo_id="Gumibit/q-FrozenLake-v1-4x4-Slippery_ex02", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
mrferr3t/bcd5cc34-cc6e-493e-8113-e50b4921c185 | mrferr3t | "2025-01-31T04:14:29Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/zephyr-sft",
"base_model:adapter:unsloth/zephyr-sft",
"license:apache-2.0",
"region:us"
] | null | "2025-01-31T04:11:37Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/zephyr-sft
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bcd5cc34-cc6e-493e-8113-e50b4921c185
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/zephyr-sft
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6ed6c277f11bbb64_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6ed6c277f11bbb64_train_data.json
type:
field_input: text
field_instruction: instruction
field_output: correct_answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 50
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/bcd5cc34-cc6e-493e-8113-e50b4921c185
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 99
micro_batch_size: 2
mlflow_experiment_name: /tmp/6ed6c277f11bbb64_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0d002147-7eec-4e71-9fab-a11266c28fd8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0d002147-7eec-4e71-9fab-a11266c28fd8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bcd5cc34-cc6e-493e-8113-e50b4921c185
This model is a fine-tuned version of [unsloth/zephyr-sft](https://huggingface.co/unsloth/zephyr-sft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7752
## Model description
More information needed
## Intended uses & limitations
More information needed
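For quick experiments, the adapter can also be attached through the `transformers` PEFT integration; a sketch (assumes a recent `transformers` with `peft` installed, not an official snippet from the trainer):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model from the config above and attach this repo as a PEFT adapter.
model = AutoModelForCausalLM.from_pretrained("unsloth/zephyr-sft", device_map="auto")
model.load_adapter("mrferr3t/bcd5cc34-cc6e-493e-8113-e50b4921c185")
tokenizer = AutoTokenizer.from_pretrained("unsloth/zephyr-sft")
```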
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 99
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.8501 | 0.0012 | 1 | 1.0013 |
| 3.8639 | 0.0619 | 50 | 0.7752 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
werent4/w4Llama3_ukr_eng | werent4 | "2024-05-25T16:18:05Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-25T15:44:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
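As a placeholder, a minimal sketch inferred from the `llama` / conversational tags (an assumption, not an official snippet; the prompt is illustrative only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "werent4/w4Llama3_ukr_eng"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The model name suggests Ukrainian/English use; this prompt is a hypothetical example.
messages = [{"role": "user", "content": "Translate to Ukrainian: good morning"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```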
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[werent4](https://huggingface.co/werent4)
## Model Card Contact
[More Information Needed] |
peter2000/bmz_topics10 | peter2000 | "2022-09-14T12:13:21Z" | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-09-14T12:12:58Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# peter2000/bmz_topics10
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('peter2000/bmz_topics10')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('peter2000/bmz_topics10')
model = AutoModel.from_pretrained('peter2000/bmz_topics10')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=peter2000/bmz_topics10)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 83 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.dataloader._InfiniteConstantSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.BatchHardTripletLoss.BatchHardTripletLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1660,
"warmup_steps": 166,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Grekkla/MedChmtsStyleLORA | Grekkla | "2024-01-24T17:09:07Z" | 21 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:unknown",
"region:us"
] | text-to-image | "2024-01-24T16:43:11Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
character concept of a medieval soldier, he is wearing a platemail armor,
    shoulderguards, pauldrons, shoulder armor, and a brown leather kilt, in
the style of medchmts, white background <lora:medchmtsStyleSDXL-000003:1>
parameters:
negative_prompt: ' unaestheticXL_hk1'
output:
url: images/00000-2574209897.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: medchmts style
license: unknown
---
# MedchmtsStyle
<Gallery />
## Trigger words
You should use `medchmts style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Grekkla/MedChmtsStyleLORA/tree/main) them in the Files & versions tab.
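## Use it with 🧨 diffusers
A minimal usage sketch (not from the original author; it assumes the weights load with standard `diffusers` LoRA support):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model named in this card, then apply the LoRA from this repo.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Grekkla/MedChmtsStyleLORA")

# `medchmts style` is the trigger phrase listed above.
image = pipe("character concept of a medieval soldier, medchmts style").images[0]
image.save("medchmts-sample.png")
```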
|
mradermacher/Fett-uccine-11B-Experiment-GGUF | mradermacher | "2024-11-28T03:55:23Z" | 85 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:saishf/Fett-uccine-11B-Experiment",
"base_model:quantized:saishf/Fett-uccine-11B-Experiment",
"license:agpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-27T17:31:24Z" | ---
base_model: saishf/Fett-uccine-11B-Experiment
language:
- en
library_name: transformers
license: agpl-3.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/saishf/Fett-uccine-11B-Experiment
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
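As a quick sketch, a downloaded file can be run directly with a local llama.cpp build (the file name here is taken from the table below; `-m`, `-p`, and `-n` are standard llama.cpp options):
```bash
./llama-cli -m Fett-uccine-11B-Experiment.Q4_K_M.gguf -p "Hello," -n 128
```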
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.Q2_K.gguf) | Q2_K | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.Q3_K_S.gguf) | Q3_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.Q3_K_L.gguf) | Q3_K_L | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.IQ4_XS.gguf) | IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.Q4_0_4_4.gguf) | Q4_0_4_4 | 6.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.Q5_K_S.gguf) | Q5_K_S | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.Q5_K_M.gguf) | Q5_K_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.Q6_K.gguf) | Q6_K | 8.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Fett-uccine-11B-Experiment-GGUF/resolve/main/Fett-uccine-11B-Experiment.f16.gguf) | f16 | 21.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
radce/Llama-3.2-3B | radce | "2025-02-26T11:07:22Z" | 54 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-02T14:52:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
N0de/ppo-LunarLander-v2_1 | N0de | "2024-03-28T08:23:36Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-28T08:19:36Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -134.44 +/- 94.92
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 100000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'N0de/ppo-LunarLander-v2_1'
'batch_size': 512
'minibatch_size': 128}
```
|
yam-peleg/Experiment23-7B | yam-peleg | "2024-02-27T21:30:01Z" | 48 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-24T02:01:18Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---
**Experiment23-7B**
An experiment for testing and refining a specific training and evaluation pipeline research framework.
This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.
The goal is to evaluate the effectiveness of a new training / evaluation pipeline for LLMs.
The experiment will explore adjustments in data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.
More details will follow in future experiments.
|
fanzru/t5-small-finetuned-xsum-introduction | fanzru | "2022-11-21T12:45:51Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-11-21T11:56:20Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-introduction
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.1828
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-introduction
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4784
- Rouge1: 28.1828
- Rouge2: 7.6948
- Rougel: 22.1413
- Rougelsum: 22.1467
- Gen Len: 18.8272
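For quick inference, a minimal sketch (standard `transformers` pipeline usage, not from the original training setup; the article text is a placeholder):
```python
from transformers import pipeline

# Summarization pipeline built on this fine-tuned checkpoint.
summarizer = pipeline("summarization", model="fanzru/t5-small-finetuned-xsum-introduction")
article = "Your long article text goes here..."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```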
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7155 | 1.0 | 12753 | 2.4784 | 28.1828 | 7.6948 | 22.1413 | 22.1467 | 18.8272 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.11.0a0+b6df043
- Datasets 2.6.1
- Tokenizers 0.10.3
|