modelId (string, length 5–138) | author (string, length 2–42) | last_modified (date, 2020-02-15 11:33:14 to 2025-04-13 18:27:00) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 425 classes) | tags (sequence, length 1–4.05k) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-04-13 18:24:29) | card (string, length 11–1.01M)
---|---|---|---|---|---|---|---|---|---|
Triangle104/Unsloth-Llama-3.2-3B-Instruct-Q4_K_S-GGUF | Triangle104 | "2024-11-11T01:40:44Z" | 9 | 0 | transformers | [
"transformers",
"gguf",
"llama-3",
"llama",
"meta",
"facebook",
"unsloth",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-11T01:40:05Z" | ---
base_model: unsloth/Llama-3.2-3B-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-3.2-3B-Instruct-Q4_K_S-GGUF
This model was converted to GGUF format from [`unsloth/Llama-3.2-3B-Instruct`](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) for more details on the model.
---
## Model details
Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
### Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
### Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open-source and closed chat models on common industry benchmarks.
**Model developer:** Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the Llama 3.2 Community License (a custom, commercial license agreement).
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go here.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q4_K_S-GGUF --hf-file llama-3.2-3b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q4_K_S-GGUF --hf-file llama-3.2-3b-instruct-q4_k_s.gguf -c 2048
```
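Once the server is up, you can query it from Python. A minimal sketch, assuming the default port (8080) and the OpenAI-compatible chat endpoint that llama-server exposes:
```python
# Minimal sketch: query the llama-server instance started above.
# Assumptions: default host/port (127.0.0.1:8080) and the server's
# OpenAI-compatible /v1/chat/completions endpoint.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Explain GGUF in one sentence."}],
        "max_tokens": 128,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```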
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q4_K_S-GGUF --hf-file llama-3.2-3b-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.2-3B-Instruct-Q4_K_S-GGUF --hf-file llama-3.2-3b-instruct-q4_k_s.gguf -c 2048
```
|
StepLaw/StepLaw-N_214M-D_11.0B-LR1.105e-02-BS2097152 | StepLaw | "2025-04-06T00:01:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"step1",
"text-generation",
"StepLaw",
"causal-lm",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-05T23:59:50Z" | ---
license: apache-2.0
tags:
- StepLaw
- causal-lm
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: step2v2_0618_h960_ffnh9368_numh15_numl7_lr1.105e-02_bs1024_ti5426_mlr1.00e-05
results: []
---
# Wandb Model Name: step2v2_0618_h960_ffnh9368_numh15_numl7_lr1.105e-02_bs1024_ti5426_mlr1.00e-05
This model is part of the [StepLaw-N_214M-D_11.0B](https://huggingface.co/collections/StepLaw/StepLaw-N_214M-D_11.0B) collection.
## Model Specifications
### Architecture
- **Hidden size (H)**: 960
- **Feed-forward network size (FFN)**: 9368
- **Attention heads**: 15
- **Layers**: 7
- **Parameter count**: 214M
### Training Parameters
- **Learning rate (lr)**: 1.105e-02
- **Batch size (bs)**: 1024
- **Training iterations**: 5426
- **Training tokens (D)**: 11.4B
## Model Description
StepLaw models are trained with various hyperparameter settings to enable research on scaling laws and hyperparameter optimization. This specific model was trained with learning rate 1.105e-02 and batch size 1024 for 5426 iterations, using a total of 11.4B training tokens.
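As a rough consistency check (assuming the `BS2097152` in the repository name counts tokens per optimizer step, i.e. 1024 sequences of 2048 tokens each; the sequence length is not stated in this card):
```python
# Token-count sanity check under the assumptions stated above.
tokens_per_step = 1024 * 2048          # 2,097,152 tokens per optimizer step
total_tokens = tokens_per_step * 5426  # training iterations
print(f"{total_tokens / 1e9:.2f}B tokens")  # ~11.38B, consistent with the reported 11.4B
```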
## Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "StepLaw/StepLaw-N_214M-D_11.0B-LR1.105e-02-BS2097152"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
# Generate text
inputs = tokenizer("A long time ago in a galaxy far, far away", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Part of StepLaw Project
StepLaw is an initiative to provide thousands of models for optimal hyperparameter research.
Visit [StepLaw Project](https://step-law.github.io/) for more information.
|
KappaNeuro/dripping-art | KappaNeuro | "2023-09-14T09:34:12Z" | 11 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"design",
"style",
"liquid",
"dripping art",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-09-14T09:34:08Z" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- design
- style
- liquid
- dripping art
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Dripping Art page
widget:
- text: "Dripping Art - large drip token logo. Dripping into psychedelic world."
- text: "Dripping Art - rainbow goth abstract 3d dripping pattern illustration"
- text: "Dripping Art - Design an artistic image that symbolizes love, using a dripping paint effect to enhance the visual impact. The image may incorporate a heart or another emblem of love, with paint dripping from it in a visually appealing manner. Use bright and attention-grabbing colors to make the artwork stand out, while the dripping paint effect adds an imaginative and innovative flair, brick wall background"
- text: "Dripping Art - A high angle shot of an AI drip art piece, showing vibrant colors trickling down the canvas. The drips create a visually compelling pattern against a white background. The image is brightly lit to highlight the contrast of colors. The camera used is a DSLR with a standard lens, the shot level is high"
- text: "Dripping Art - driptober driptastic dripetition to dripfind the dripdrippiest dripdripdripperdrip in dripville"
- text: "Dripping Art - solid white backgound with vibrant colored paint dripping down from the top of the image"
- text: "Dripping Art - design of very high quality of drips and splashes in vibrant colors and pastel color s"
- text: "Dripping Art - cool design made of dripping paint that looks like graffiti bright colors"
- text: "Dripping Art - solid white backgound with neon colored paint dripping down from the top"
- text: "Dripping Art - paint dripping - abstract - splatter - contrast - bold - bright - q 5"
---
# Dripping Art ([CivitAI](https://civitai.com/models/153740))

> Dripping Art - large drip token logo. Dripping into psychedelic world.
Dripping art, also known as drip painting or action painting, is a style of abstract expressionism associated with artists such as Jackson Pollock.
This style emerged in the mid-20th century and is characterized by the spontaneous and gestural application of paint onto a canvas. Instead of using traditional brushes, artists would often use various tools or even pour, drip, or fling paint onto the canvas.
Dripping art emphasizes the physicality and movement of the artist, with the process of creating the artwork becoming as important as the final result. The dripping and splattering of paint onto the canvas create dynamic and unpredictable patterns and textures.
Artists working in this style often explored notions of chance, improvisation, and the subconscious. The process allowed for a direct and immediate expression of emotions and energy.
Dripping art challenged conventional notions of painting and shifted the focus from the subject matter to the act of creation itself. It emphasized the physicality of the artist's movements and the spontaneous interaction between the artist and the canvas.
Jackson Pollock is perhaps the most renowned artist associated with this style, as his innovative techniques and energetic compositions came to define the drip painting approach.
Dripping art continues to be influential and has inspired subsequent generations of artists to explore new possibilities in abstract expressionism and the process-driven nature of creation.
## Image examples for the model:

> Dripping Art - rainbow goth abstract 3d dripping pattern illustration

> Dripping Art - Design an artistic image that symbolizes love, using a dripping paint effect to enhance the visual impact. The image may incorporate a heart or another emblem of love, with paint dripping from it in a visually appealing manner. Use bright and attention-grabbing colors to make the artwork stand out, while the dripping paint effect adds an imaginative and innovative flair, brick wall background

> Dripping Art - A high angle shot of an AI drip art piece, showing vibrant colors trickling down the canvas. The drips create a visually compelling pattern against a white background. The image is brightly lit to highlight the contrast of colors. The camera used is a DSLR with a standard lens, the shot level is high

> Dripping Art - driptober driptastic dripetition to dripfind the dripdrippiest dripdripdripperdrip in dripville

> Dripping Art - solid white backgound with vibrant colored paint dripping down from the top of the image

> Dripping Art - design of very high quality of drips and splashes in vibrant colors and pastel color s

> Dripping Art - cool design made of dripping paint that looks like graffiti bright colors

> Dripping Art - solid white backgound with neon colored paint dripping down from the top

> Dripping Art - paint dripping - abstract - splatter - contrast - bold - bright - q 5
|
MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1 | MaziyarPanahi | "2024-01-16T14:49:49Z" | 18 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us",
"conversational"
] | text-generation | "2024-01-16T14:44:57Z" | ---
license: apache-2.0
tags:
- Safetensors
- mistral
- text-generation-inference
- merge
- mistral
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- mlabonne/NeuralHermes-2.5-Mistral-7B
- transformers
- safetensors
- mistral
- text-generation
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
- en
- dataset:mlabonne/chatml_dpo_pairs
- base_model:teknium/OpenHermes-2.5-Mistral-7B
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
---
# NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1
NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1 is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.1
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
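For reference, `slerp` here is spherical linear interpolation applied tensor-by-tensor between the two models, with `t` following the per-filter schedules above (`t = 0` keeps the base model, `t = 1` takes NeuralHermes). A minimal NumPy sketch of the operation, illustrative only and not mergekit's exact implementation:
```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors (sketch)."""
    a_flat, b_flat = a.ravel(), b.ravel()
    cos_omega = np.dot(a_flat, b_flat) / (
        np.linalg.norm(a_flat) * np.linalg.norm(b_flat) + eps
    )
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < eps:  # nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```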
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/NeuralHermes-2.5-Mistral-7B-Mistral-7B-Instruct-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
mradermacher/SmolLM2-FT-DPO-GGUF | mradermacher | "2024-12-12T20:27:43Z" | 65 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"dpo",
"en",
"base_model:mdagost/SmolLM2-FT-DPO",
"base_model:quantized:mdagost/SmolLM2-FT-DPO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-12T20:26:30Z" | ---
base_model: mdagost/SmolLM2-FT-DPO
language:
- en
library_name: transformers
model_name: SmolLM2-FT-DPO
quantized_by: mradermacher
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mdagost/SmolLM2-FT-DPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
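Alternatively, a minimal Python sketch (assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; the filename below is the Q4_K_M quant from the table that follows):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quants listed below and load it with llama-cpp-python.
gguf_path = hf_hub_download(
    repo_id="mradermacher/SmolLM2-FT-DPO-GGUF",
    filename="SmolLM2-FT-DPO.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
print(llm("The meaning of life is", max_tokens=64)["choices"][0]["text"])
```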
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-FT-DPO-GGUF/resolve/main/SmolLM2-FT-DPO.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-FT-DPO-GGUF/resolve/main/SmolLM2-FT-DPO.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-FT-DPO-GGUF/resolve/main/SmolLM2-FT-DPO.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-FT-DPO-GGUF/resolve/main/SmolLM2-FT-DPO.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-FT-DPO-GGUF/resolve/main/SmolLM2-FT-DPO.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-FT-DPO-GGUF/resolve/main/SmolLM2-FT-DPO.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-FT-DPO-GGUF/resolve/main/SmolLM2-FT-DPO.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-FT-DPO-GGUF/resolve/main/SmolLM2-FT-DPO.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-FT-DPO-GGUF/resolve/main/SmolLM2-FT-DPO.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-FT-DPO-GGUF/resolve/main/SmolLM2-FT-DPO.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-FT-DPO-GGUF/resolve/main/SmolLM2-FT-DPO.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM2-FT-DPO-GGUF/resolve/main/SmolLM2-FT-DPO.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Yuyi-Tech/12345-dialogue-keywords-t5 | Yuyi-Tech | "2024-07-19T09:10:02Z" | 8 | 1 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-07-17T14:46:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
seregadgl101/baii_v12_8ep | seregadgl101 | "2024-05-20T19:34:54Z" | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-05-20T19:33:04Z" | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# seregadgl101/baii_v12_8ep
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('seregadgl101/baii_v12_8ep')
embeddings = model.encode(sentences)
print(embeddings)
```
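For semantic search or clustering, the resulting embeddings can be compared directly. A small sketch using the cosine-similarity helper bundled with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('seregadgl101/baii_v12_8ep')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])
# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```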
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=seregadgl101/baii_v12_8ep)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Fsoft-AIC/XMAiNframe-instruct-10.5b | Fsoft-AIC | "2024-08-23T12:18:38Z" | 35 | 3 | null | [
"safetensors",
"llama",
"code",
"dataset:Fsoft-AIC/MainframeBench",
"arxiv:2408.04660",
"license:mit",
"region:us"
] | null | "2024-08-02T11:54:05Z" | ---
license: mit
datasets:
- Fsoft-AIC/MainframeBench
tags:
- code
---
<p align="center">
<img src="./asset/XMAiNframe.png" width="560px" alt="logo">
</p>
<div align="center">
# XMAiNframe: A Large Language Model for Mainframe Modernization
</div>
## Introduction
We are introducing **XMAiNframe**, a state-of-the-art large language model (LLM) specifically designed with knowledge of mainframe legacy systems and COBOL codebases. XMAiNframe is built on top of DeepSeek-Coder 7B and is available with 7B and 10.5B parameters.
Additionally, we present [MainframeBench](https://huggingface.co/datasets/Fsoft-AIC/MainframeBench), a comprehensive benchmark for assessing mainframe knowledge, including multiple-choice questions, question answering, and COBOL code summarization. Our empirical evaluations demonstrate that XMAiNframe consistently outperforms existing state-of-the-art LLMs across these tasks. Specifically, XMAiNframe achieves 30% higher accuracy than DeepSeek-Coder on multiple-choice questions, doubles the BLEU score of Mixtral-Instruct 8x7B on question answering, and scores six times higher than GPT-3.5 on COBOL summarization. Our work highlights the potential of XMAiNframe to drive significant advancements in managing and modernizing legacy systems, thereby enhancing productivity and saving time for software developers.
## Model Versions
We release XMAiNframe with 7B and 10.5B parameters, including base and instruct models, to the public. XMAiNframe 10.5B is expanded from DeepSeek-Coder 7B by the depth up-scaling method without introducing additional modules or dynamic expert selection methods.
<div align="center">
| **Model** | **Download** |
| :-----------------------------: | :----------------------------------------------------------: |
| XMAiNframe-base-7b | [🤗 HuggingFace](https://huggingface.co/Fsoft-AIC/XMAiNframe-base-7b) |
| XMAiNframe-instruct-7b | [🤗 HuggingFace](https://huggingface.co/Fsoft-AIC/XMAiNframe-instruct-7b) |
| XMAiNframe-base-10.5b | [🤗 HuggingFace](https://huggingface.co/Fsoft-AIC/XMAiNframe-base-10.5b) |
| XMAiNframe-instruct-10.5b | [🤗 HuggingFace](https://huggingface.co/Fsoft-AIC/XMAiNframe-instruct-10.5b) |
</div>
## Quickstart
Here is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and generate content.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Fsoft-AIC/XMAiNframe-instruct-10.5b")
model = AutoModelForCausalLM.from_pretrained("Fsoft-AIC/XMAiNframe-instruct-10.5b")
messages=[
{'from':'system', 'value': "You are a helpful assistant"},
{'from': 'human', 'value': 'What is the future of Mainframe?'}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
## Additional Information
### Other Resources:
- Github: https://github.com/FSoft-AI4Code/XMainframe
- Paper: https://arxiv.org/abs/2408.04660
### License
[MIT License](LICENSE)
### Citation Information
More details can be found in our [paper](https://arxiv.org/abs/2408.04660).
If you're using XMAiNframe, please cite using this BibTeX:
```
@misc{dau2024xmainframelargelanguagemodel,
title={XMainframe: A Large Language Model for Mainframe Modernization},
author={Anh T. V. Dau and Hieu Trung Dao and Anh Tuan Nguyen and Hieu Trung Tran and Phong X. Nguyen and Nghi D. Q. Bui},
year={2024},
eprint={2408.04660},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.04660},
}
```
# Contact us
If you have any questions, comments or suggestions, please do not hesitate to contact us.
- Website: [fpt-aicenter](https://www.fpt-aicenter.com/ai-residency/)
- Email: [email protected] |
reza-alipour/ft5 | reza-alipour | "2023-11-28T05:40:38Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:reza-alipour/Text-Edit-Instruct-Preprocessed-4m",
"base_model:reza-alipour/ft5",
"base_model:finetune:reza-alipour/ft5",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-11-25T12:12:18Z" | ---
base_model: reza-alipour/ft5
tags:
- generated_from_trainer
datasets:
- reza-alipour/Text-Edit-Instruct-Preprocessed-4m
metrics:
- rouge
model-index:
- name: ft5
results:
- task:
name: Summarization
type: summarization
dataset:
name: reza-alipour/Text-Edit-Instruct-Preprocessed-4m
type: reza-alipour/Text-Edit-Instruct-Preprocessed-4m
metrics:
- name: Rouge1
type: rouge
value: 86.5313
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft5
This model is a fine-tuned version of [reza-alipour/ft5](https://huggingface.co/reza-alipour/ft5) on the reza-alipour/Text-Edit-Instruct-Preprocessed-4m dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3032
- Rouge1: 86.5313
- Rouge2: 75.3351
- Rougel: 85.9565
- Rougelsum: 85.9785
- Gen Len: 24.5688
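A minimal inference sketch (assuming the checkpoint loads with the standard 🤗 text2text-generation pipeline; the exact instruction format used for the text-edit task is not documented in this card, so the prompt below is illustrative):
```python
from transformers import pipeline

editor = pipeline("text2text-generation", model="reza-alipour/ft5")
# Hypothetical instruction-style input; adapt to the format used in
# reza-alipour/Text-Edit-Instruct-Preprocessed-4m.
print(editor("Fix grammar: She go to school yesterday.", max_new_tokens=64))
```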
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1337
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.3601 | 1.0 | 31042 | 0.3032 | 86.5313 | 75.3351 | 85.9565 | 85.9785 | 24.5688 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1
- Datasets 2.15.0
- Tokenizers 0.15.0
|
StaticOwl/Flan-t5-Model | StaticOwl | "2023-11-09T04:47:20Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-11-09T04:30:14Z" | ---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: Flan-t5-Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flan-t5-Model
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Ridealist/xlm-roberta-base-finetuned-panx-all | Ridealist | "2023-09-10T16:47:36Z" | 112 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-09-10T16:38:34Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1745
- F1: 0.8577
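A minimal usage sketch with the token-classification pipeline (assuming the checkpoint keeps the standard PAN-X/WikiANN entity labels such as PER, ORG and LOC):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Ridealist/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Angela Merkel besuchte Siemens in München."))
```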
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2915 | 1.0 | 835 | 0.1859 | 0.8171 |
| 0.1544 | 2.0 | 1670 | 0.1631 | 0.8509 |
| 0.1014 | 3.0 | 2505 | 0.1745 | 0.8577 |
### Framework versions
- Transformers 4.33.1
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf | RichardErkhov | "2025-04-09T00:23:49Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-08T21:45:48Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
vinallama-7b-finetune - GGUF
- Model creator: https://huggingface.co/brown1808/
- Original model: https://huggingface.co/brown1808/vinallama-7b-finetune/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [vinallama-7b-finetune.Q2_K.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q2_K.gguf) | Q2_K | 2.42GB |
| [vinallama-7b-finetune.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.IQ3_XS.gguf) | IQ3_XS | 2.67GB |
| [vinallama-7b-finetune.IQ3_S.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.IQ3_S.gguf) | IQ3_S | 2.81GB |
| [vinallama-7b-finetune.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q3_K_S.gguf) | Q3_K_S | 2.81GB |
| [vinallama-7b-finetune.IQ3_M.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.IQ3_M.gguf) | IQ3_M | 2.97GB |
| [vinallama-7b-finetune.Q3_K.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q3_K.gguf) | Q3_K | 3.14GB |
| [vinallama-7b-finetune.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q3_K_M.gguf) | Q3_K_M | 3.14GB |
| [vinallama-7b-finetune.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q3_K_L.gguf) | Q3_K_L | 3.42GB |
| [vinallama-7b-finetune.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.IQ4_XS.gguf) | IQ4_XS | 3.47GB |
| [vinallama-7b-finetune.Q4_0.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q4_0.gguf) | Q4_0 | 3.64GB |
| [vinallama-7b-finetune.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.IQ4_NL.gguf) | IQ4_NL | 3.66GB |
| [vinallama-7b-finetune.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q4_K_S.gguf) | Q4_K_S | 3.67GB |
| [vinallama-7b-finetune.Q4_K.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q4_K.gguf) | Q4_K | 3.88GB |
| [vinallama-7b-finetune.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q4_K_M.gguf) | Q4_K_M | 3.88GB |
| [vinallama-7b-finetune.Q4_1.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q4_1.gguf) | Q4_1 | 4.03GB |
| [vinallama-7b-finetune.Q5_0.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q5_0.gguf) | Q5_0 | 4.41GB |
| [vinallama-7b-finetune.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q5_K_S.gguf) | Q5_K_S | 4.41GB |
| [vinallama-7b-finetune.Q5_K.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q5_K.gguf) | Q5_K | 4.54GB |
| [vinallama-7b-finetune.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q5_K_M.gguf) | Q5_K_M | 4.54GB |
| [vinallama-7b-finetune.Q5_1.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q5_1.gguf) | Q5_1 | 4.8GB |
| [vinallama-7b-finetune.Q6_K.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q6_K.gguf) | Q6_K | 5.24GB |
| [vinallama-7b-finetune.Q8_0.gguf](https://huggingface.co/RichardErkhov/brown1808_-_vinallama-7b-finetune-gguf/blob/main/vinallama-7b-finetune.Q8_0.gguf) | Q8_0 | 6.79GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hendrydong/llama3b-r-rejectallcorrect-step320 | hendrydong | "2025-04-01T06:32:01Z" | 0 | 0 | null | [
"safetensors",
"llama",
"region:us"
] | null | "2025-04-01T06:29:16Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
nandmnnit/testModel | nandmnnit | "2024-10-16T04:14:02Z" | 127 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-16T04:13:06Z" | ---
library_name: transformers
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: testModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testModel
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ktrocks/merged_peft_kt2 | ktrocks | "2024-12-29T04:56:56Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-12-29T04:55:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lakshyasoni/my_awesome_qa_model | lakshyasoni | "2023-05-26T05:02:08Z" | 63 | 1 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-05-22T09:27:26Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: lakshyasoni/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lakshyasoni/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.1581
- Validation Loss: 3.8699
- Epoch: 2
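A minimal usage sketch (assuming the repository hosts the TensorFlow weights saved by the Keras callback, hence `framework="tf"`; the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="lakshyasoni/my_awesome_qa_model",
    framework="tf",  # the checkpoint was trained and saved with TensorFlow/Keras
)
result = qa(
    question="What does DistilBERT distill?",
    context="DistilBERT is a smaller language model distilled from BERT.",
)
print(result["answer"], result["score"])
```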
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16288, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.9502 | 3.3306 | 0 |
| 3.5638 | 3.6374 | 1 |
| 3.1581 | 3.8699 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
RichardErkhov/semcoder_-_semcoder_s-8bits | RichardErkhov | "2025-03-16T09:26:40Z" | 0 | 0 | null | [
"safetensors",
"llama",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-16T09:23:09Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
semcoder_s - bnb 8bits
- Model creator: https://huggingface.co/semcoder/
- Original model: https://huggingface.co/semcoder/semcoder_s/
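A minimal loading sketch (an assumption, not part of the original card): the serialized bitsandbytes 8-bit weights should load directly with `transformers`, provided `bitsandbytes` and a CUDA GPU are available.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/semcoder_-_semcoder_s-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The checkpoint already carries its bitsandbytes quantization config,
# so no extra quantization arguments are passed at load time.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```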
Original model description:
---
license: mit
---
This checkpoint is outdated. Please refer to our latest checkpoint: semcoder/semcoder_s_1030
|
JacksonBrune/961735c8-cafa-4d48-ac3e-bd24f80005d5 | JacksonBrune | "2025-02-08T18:55:18Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | "2025-02-08T18:23:53Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 961735c8-cafa-4d48-ac3e-bd24f80005d5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 961735c8-cafa-4d48-ac3e-bd24f80005d5
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
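As a usage sketch (not part of the original card), the LoRA adapter in this repository can be applied on top of the stated base model with `peft`; the prompt is arbitrary and enough GPU memory for the 8B base in bfloat16 is assumed.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Meta-Llama-3.1-8B-Instruct"
adapter_id = "JacksonBrune/961735c8-cafa-4d48-ac3e-bd24f80005d5"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA weights from this repository to the base model.
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Summarise what a LoRA adapter is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```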
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
VictorZhu/results | VictorZhu | "2022-06-03T17:17:57Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-06-03T17:10:04Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1194
## Model description
More information needed
## Intended uses & limitations
More information needed
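A minimal inference sketch (an illustration rather than documented usage; the input sentence is arbitrary):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="VictorZhu/results")
# The card does not document the label set, so expect generic ids such as
# LABEL_0 / LABEL_1 unless the config defines human-readable names.
print(classifier("This is an example sentence."))
```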
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1428 | 1.0 | 510 | 0.1347 |
| 0.0985 | 2.0 | 1020 | 0.1189 |
| 0.0763 | 3.0 | 1530 | 0.1172 |
| 0.0646 | 4.0 | 2040 | 0.1194 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
|
camidenecken/RM2-RoBERTa-rm-v3-SBERT_7 | camidenecken | "2024-11-11T18:22:41Z" | 160 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-11-11T18:22:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
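In the absence of a documented snippet, one hypothetical way to extract embeddings from this RoBERTa checkpoint is sketched below; the mean-pooling step is an illustrative choice, not something stated by the authors.

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "camidenecken/RM2-RoBERTa-rm-v3-SBERT_7"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)

inputs = tokenizer("An example sentence.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, tokens, hidden)
embedding = hidden.mean(dim=1)                  # simple mean pooling (assumption)
print(embedding.shape)
```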
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Gladiator-Mini-Exp-1221-3B-Instruct-V2-GGUF | mradermacher | "2025-04-01T17:32:24Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:MultivexAI/Gladiator-Mini-Exp-1221-3B-Instruct-V2",
"base_model:quantized:MultivexAI/Gladiator-Mini-Exp-1221-3B-Instruct-V2",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-01T17:04:31Z" | Temporary Redirect. Redirecting to /api/resolve-cache/models/mradermacher/Gladiator-Mini-Exp-1221-3B-Instruct-V2-GGUF/c0d1cf0869adb087e140e0c6940d727b28ef5631/README.md?%2Fmradermacher%2FGladiator-Mini-Exp-1221-3B-Instruct-V2-GGUF%2Fresolve%2Fmain%2FREADME.md=&etag=%2281a3a9f211f1b700992f4714f61476a0a504a176%22 |
gkMSDA/Finchat298-Mistral7B-FULL | gkMSDA | "2024-06-16T19:46:21Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-16T19:34:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
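As a placeholder for the missing snippet, a generic causal-LM loading sketch is given below; the prompt format and intended domain are not documented in this card, so the example question is only illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "gkMSDA/Finchat298-Mistral7B-FULL"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

inputs = tokenizer("What does a balance sheet show?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```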
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ljynhhzw/author_extract | ljynhhzw | "2025-02-14T07:31:55Z" | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2025-02-14T07:19:01Z" | ---
license: apache-2.0
---
|
OwOOwO/dumbo-krillin53 | OwOOwO | "2024-04-18T13:00:27Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-18T12:57:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FounderOfHuggingface/gpt2_gen_lora_r16_dbpedia_14_t18_e5_non_member_shadow14 | FounderOfHuggingface | "2024-01-10T07:07:15Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2024-01-10T07:07:13Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
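Since the snippet is missing, a hedged sketch for loading this GPT-2 LoRA adapter with `peft` follows; the prompt is arbitrary and the adapter's intended task is not documented here.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "FounderOfHuggingface/gpt2_gen_lora_r16_dbpedia_14_t18_e5_non_member_shadow14"
# AutoPeftModelForCausalLM reads the adapter config, pulls the gpt2 base
# model, and attaches the LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

inputs = tokenizer("The article is about", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```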
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
huggingtweets/alexspoodiary-apesahoy-nsp_gpt2 | huggingtweets | "2022-09-26T22:08:29Z" | 117 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-09-26T22:03:52Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/alexspoodiary-apesahoy-nsp_gpt2/1664230104622/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/774937495691722752/OHoU0clu_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1196519479364268034/5QpniWSP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1218028522939113479/0VrO0Rko_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Alex's Poo Diary & Humongous Ape MP & Ninja Sex Party but AI</div>
<div style="text-align: center; font-size: 14px;">@alexspoodiary-apesahoy-nsp_gpt2</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Alex's Poo Diary & Humongous Ape MP & Ninja Sex Party but AI.
| Data | Alex's Poo Diary | Humongous Ape MP | Ninja Sex Party but AI |
| --- | --- | --- | --- |
| Tweets downloaded | 1859 | 3246 | 692 |
| Retweets | 3 | 178 | 13 |
| Short tweets | 5 | 625 | 44 |
| Tweets kept | 1851 | 2443 | 635 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28kotecb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @alexspoodiary-apesahoy-nsp_gpt2's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2thnv3rd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2thnv3rd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/alexspoodiary-apesahoy-nsp_gpt2')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
KappaNeuro/alphonse-mucha-style | KappaNeuro | "2023-09-14T02:31:59Z" | 30 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"art",
"alphonse mucha",
"style",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-09-14T02:31:52Z" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- art
- alphonse mucha
- style
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Alphonse Mucha Style page
widget:
- text: Alphonse Mucha Style - a portrait of persephone in stained glass alphonse mucha art style
- text: Alphonse Mucha Style - handsome thirty-something sleeping prince in the style of alphonse mucha
- text: Alphonse Mucha Style - Create vintage poster in the style of Alphonse Mucha, using 64K digital artwork of a regal Aphrodite full standing gracefully in a marble room with a series of tall columns. The statue should have a serene expression, while the gown should be draped around her, complementing the beauty of her figure. The lighting should be warm and natural, with highlights bringing out the detail of her skin, hair, and jewelry. Use a medium format camera with a high-resolution sensor to capture the intricate details of the columns and the soft texture of the statue's gown.
- text: Alphonse Mucha Style - intricate art nouveau red haired man in toga with beautiful vector patterns
- text: Alphonse Mucha Style - white background, art nouveau in style of Mucha, Thick Lines, high-contrast, black, white, no gray, high-resolution high definition in details and ink outlines, no shading, intricate detailed frame border drawn by Alphonse Mucha, Korea, white Elephant, Flowers, artistic, final fantasy, natural light, detailed, realistic forest, Coloring Page, Thick Lines, Low Detail, No Shading, Fantasy, final fantasy character, Realistic detail, Coloring book, thick lines, white background, 8k
- text: Alphonse Mucha Style - Generate an Art Nouveau pattern design in the style of Alphonse Mucha. The design should be vibrant, intricate, and incorporate elements such as flowing lines, floral motifs, and ornate details. The color palette should include rich, jewel tones with accents of gold and silver. The composition should be balanced and symmetrical, with a central focal point and organic shapes radiating outward. The overall aesthetic should evoke a sense of elegance, grace, and beauty.
- text: Alphonse Mucha Style - Draw inspiration from Alphonse Mucha's art style, characterized by flowing lines, intricate patterns, and decorative motifs. Include elements like delicate floral patterns, ethereal hair adorned with flowers or jewels, and ornate accessories to enhance the royal aesthetic.
- text: Alphonse Mucha Style - Clipart illustrated corner borders in the style of Alphonse Mucha with a man on the left and a woman on the right. They should be facing each other and have a white space between them for text
- text: Alphonse Mucha Style - Ultra HD, Stunning Laetitia Casta as flowers widow in art nouveau style, designed by patrick nagel + bill sienkiewicz inspired by Mucha.
- text: Alphonse Mucha Style - 1960's Hollywood girl's face on a poster in the early style of Alfons Mucha late 19th century. Intense Mucha style background
---
# Alphonse Mucha Style

> Alphonse Mucha Style - a portrait of persephone in stained glass alphonse mucha art style
<p>Alphonse Mucha was a Czech Art Nouveau artist and designer known for his distinctive style, characterized by intricate and ornamental compositions, flowing lines, and decorative motifs.</p><p>Mucha's artwork often featured elegant women adorned with intricate patterns and flowing hair, surrounded by botanical elements and symbolic imagery. His illustrations became iconic representations of the Art Nouveau movement, which emerged in the late 19th century as a reaction against the industrialization and mass production of the time.</p><p>One of Mucha's most famous series is "The Seasons," which consists of four decorative panels representing spring, summer, autumn, and winter. These panels showcase Mucha's meticulous attention to detail, his ability to capture the beauty of nature, and his talent for blending decorative elements with symbolic imagery.</p><p>Mucha's work extended beyond illustrations and paintings. He also designed posters, advertisements, jewelry, furniture, and even theater sets, embracing a holistic approach to art and design. His designs often incorporated flowing lines, organic forms, and intricate patterns, creating a harmonious and visually captivating aesthetic.</p><p>Mucha's style was influenced by various artistic movements, including the Pre-Raphaelites, Japanese woodblock prints, and Byzantine art. He sought to create a sense of unity between art and life, combining aesthetics with a spiritual and philosophical approach.</p><p>Alphonse Mucha's legacy extends far beyond his own time. His artwork continues to be celebrated for its beauty, elegance, and influence on subsequent generations of artists and designers. Mucha's dedication to aesthetics, craftsmanship, and his ability to capture the essence of the Art Nouveau movement have firmly established him as one of the most iconic figures of this artistic era.</p>
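A minimal text-to-image sketch with `diffusers` (assumptions: the LoRA file in this repo is discoverable by `load_lora_weights`, otherwise pass its actual file name via `weight_name`; a CUDA GPU is assumed; the prompt is only an example built around the instance phrase):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# If the default weight name is not found, add weight_name="<file>.safetensors"
# with the LoRA file name used in this repository.
pipe.load_lora_weights("KappaNeuro/alphonse-mucha-style")

prompt = "Alphonse Mucha Style - art nouveau portrait of a woman with flowing hair and floral ornament"
image = pipe(prompt).images[0]
image.save("alphonse-mucha-style.png")
```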
## Image examples for the model:

> Alphonse Mucha Style - handsome thirty-something sleeping prince in the style of alphonse mucha

> Alphonse Mucha Style - Create vintage poster in the style of Alphonse Mucha, using 64K digital artwork of a regal Aphrodite full standing gracefully in a marble room with a series of tall columns. The statue should have a serene expression, while the gown should be draped around her, complementing the beauty of her figure. The lighting should be warm and natural, with highlights bringing out the detail of her skin, hair, and jewelry. Use a medium format camera with a high-resolution sensor to capture the intricate details of the columns and the soft texture of the statue's gown.

> Alphonse Mucha Style - intricate art nouveau red haired man in toga with beautiful vector patterns

> Alphonse Mucha Style - white background, art nouveau in style of Mucha, Thick Lines, high-contrast, black, white, no gray, high-resolution high definition in details and ink outlines, no shading, intricate detailed frame border drawn by Alphonse Mucha, Korea, white Elephant, Flowers, artistic, final fantasy, natural light, detailed, realistic forest, Coloring Page, Thick Lines, Low Detail, No Shading, Fantasy, final fantasy character, Realistic detail, Coloring book, thick lines, white background, 8k

> Alphonse Mucha Style - Generate an Art Nouveau pattern design in the style of Alphonse Mucha. The design should be vibrant, intricate, and incorporate elements such as flowing lines, floral motifs, and ornate details. The color palette should include rich, jewel tones with accents of gold and silver. The composition should be balanced and symmetrical, with a central focal point and organic shapes radiating outward. The overall aesthetic should evoke a sense of elegance, grace, and beauty.

> Alphonse Mucha Style - Draw inspiration from Alphonse Mucha's art style, characterized by flowing lines, intricate patterns, and decorative motifs. Include elements like delicate floral patterns, ethereal hair adorned with flowers or jewels, and ornate accessories to enhance the royal aesthetic.

> Alphonse Mucha Style - Clipart illustrated corner borders in the style of Alphonse Mucha with a man on the left and a woman on the right. They should be facing each other and have a white space between them for text

> Alphonse Mucha Style - Ultra HD, Stunning Laetitia Casta as flowers widow in art nouveau style, designed by patrick nagel + bill sienkiewicz inspired by Mucha.

> Alphonse Mucha Style - 1960's Hollywood girl's face on a poster in the early style of Alfons Mucha late 19th century. Intense Mucha style background
|
Weni/WeniGPT-Agents-Mistral-1.0.1-SFT-merged | Weni | "2024-04-11T03:46:06Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-10T19:38:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
isspek/xlnet-base-cased_covid_chatgpt_3_2e-5_16_undersampling_0.2 | isspek | "2024-12-26T13:48:05Z" | 120 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-26T13:47:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/OpenELM-450M-GGUF | mradermacher | "2025-03-01T15:17:19Z" | 167 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:apple/OpenELM-450M",
"base_model:quantized:apple/OpenELM-450M",
"license:apple-amlr",
"endpoints_compatible",
"region:us"
] | null | "2024-08-04T06:29:02Z" | ---
base_model: apple/OpenELM-450M
language:
- en
library_name: transformers
license: apple-amlr
license_link: LICENSE
license_name: apple-sample-code-license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/apple/OpenELM-450M
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
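As one concrete illustration (an addition, not part of the original card), a quant from the table below can be run from Python with `llama-cpp-python`; the Q4_K_M file is picked arbitrarily.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed in the table below.
gguf_path = hf_hub_download(
    repo_id="mradermacher/OpenELM-450M-GGUF",
    filename="OpenELM-450M.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```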
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.IQ3_XS.gguf) | IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.IQ3_S.gguf) | IQ3_S | 0.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.IQ3_M.gguf) | IQ3_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.Q6_K.gguf) | Q6_K | 0.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OpenELM-450M-GGUF/resolve/main/OpenELM-450M.f16.gguf) | f16 | 1.0 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
cardiffnlp/flan-t5-base-tweet-emotion | cardiffnlp | "2023-06-05T23:55:17Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-06-05T23:53:46Z" |
---
widget:
- text: "Whatever you decide to do make sure it makes you #happy."
example_title: example 1
- text: "Do you think humans have the sense for recognizing impending doom?"
example_title: example 2
- text: "@user but your pussy was weak from what I heard so stfu up to me bitch . You got to threaten him that your pregnant ."
example_title: example 3
---
# cardiffnlp/flan-t5-base-tweet-emotion
This is [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) fine-tuned on [cardiffnlp/super_tweeteval (tweet_emotion)](https://huggingface.co/datasets/cardiffnlp/super_tweeteval).
### Usage
```python
from transformers import pipeline
pipe = pipeline('text2text-generation', model="cardiffnlp/flan-t5-base-tweet-emotion")
output = pipe("Whatever you decide to do make sure it makes you #happy.")
```
|
mradermacher/Sailor2-1B-Chat-GGUF | mradermacher | "2025-02-13T15:13:35Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"multilingual",
"sea",
"sailor",
"sft",
"chat",
"instruction",
"en",
"zh",
"id",
"th",
"vi",
"ms",
"lo",
"my",
"jv",
"km",
"su",
"tl",
"base_model:sail/Sailor2-1B-Chat",
"base_model:quantized:sail/Sailor2-1B-Chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-13T15:06:26Z" | ---
base_model: sail/Sailor2-1B-Chat
language:
- en
- zh
- id
- th
- vi
- ms
- lo
- my
- jv
- km
- su
- tl
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sail/Sailor2-1B-Chat
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
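As an illustration (not part of the original card), the chat-tuned quants below can also be driven from Python with `llama-cpp-python`; this assumes the GGUF metadata carries the chat template, otherwise pass `chat_format` explicitly.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Sailor2-1B-Chat-GGUF",
    filename="Sailor2-1B-Chat.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Halo! Apa kabar?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```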
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Sailor2-1B-Chat-GGUF/resolve/main/Sailor2-1B-Chat.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor2-1B-Chat-GGUF/resolve/main/Sailor2-1B-Chat.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor2-1B-Chat-GGUF/resolve/main/Sailor2-1B-Chat.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor2-1B-Chat-GGUF/resolve/main/Sailor2-1B-Chat.Q3_K_M.gguf) | Q3_K_M | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Sailor2-1B-Chat-GGUF/resolve/main/Sailor2-1B-Chat.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor2-1B-Chat-GGUF/resolve/main/Sailor2-1B-Chat.Q4_K_S.gguf) | Q4_K_S | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sailor2-1B-Chat-GGUF/resolve/main/Sailor2-1B-Chat.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Sailor2-1B-Chat-GGUF/resolve/main/Sailor2-1B-Chat.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor2-1B-Chat-GGUF/resolve/main/Sailor2-1B-Chat.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Sailor2-1B-Chat-GGUF/resolve/main/Sailor2-1B-Chat.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Sailor2-1B-Chat-GGUF/resolve/main/Sailor2-1B-Chat.Q8_0.gguf) | Q8_0 | 1.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Sailor2-1B-Chat-GGUF/resolve/main/Sailor2-1B-Chat.f16.gguf) | f16 | 2.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
annabellehuether/partisan-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd | annabellehuether | "2023-12-04T00:23:05Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-12-03T23:19:51Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: partisan-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# partisan-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8492
- Accuracy: 0.6396
## Model description
More information needed
## Intended uses & limitations
More information needed
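Since this is a `text-classification` fine-tune of `bert-base-uncased`, it can presumably be queried with the standard pipeline API; a minimal sketch (the example sentence is illustrative, and the label names depend on how the checkpoint's `id2label` mapping was configured):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="annabellehuether/partisan-bert-base-uncased-supreme-court-32batch_5epoch_2e5lr_1wd",
)

# Illustrative input; real inputs would be Supreme Court opinion text.
print(clf("The judgment of the lower court is reversed and the case is remanded."))
```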
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6391 | 1.0 | 660 | 0.5539 | 0.6330 |
| 0.6032 | 2.0 | 1320 | 0.5506 | 0.6507 |
| 0.5625 | 3.0 | 1980 | 0.6238 | 0.6489 |
| 0.4003 | 4.0 | 2640 | 0.7708 | 0.6363 |
| 0.3281 | 5.0 | 3300 | 0.8492 | 0.6396 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
FounderOfHuggingface/gpt2_gen_lora_r16_dbpedia_14_t18_e75_member_shadow14 | FounderOfHuggingface | "2023-12-20T11:09:50Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-20T11:09:44Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
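In the absence of an official snippet, here is a minimal sketch of how a PEFT LoRA adapter like this one is typically attached to the `gpt2` base model named in the metadata (the prompt is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "gpt2"
adapter_id = "FounderOfHuggingface/gpt2_gen_lora_r16_dbpedia_14_t18_e75_member_shadow14"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter weights from this repository to the base model.
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("The company is best known for", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```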
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
yemen2016/nbbert_NCST | yemen2016 | "2024-07-05T13:39:12Z" | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:NbAiLab/nb-bert-base",
"base_model:finetune:NbAiLab/nb-bert-base",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-05T12:47:42Z" | ---
license: cc-by-4.0
base_model: NbAiLab/nb-bert-base
tags:
- generated_from_trainer
model-index:
- name: nbbert_NCST
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nbbert_NCST
This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4123
- F1-score: 0.8174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5078 | 1.0 | 535 | 0.4123 | 0.8174 |
| 0.417 | 2.0 | 1070 | 0.4540 | 0.8027 |
| 0.2917 | 3.0 | 1605 | 0.5587 | 0.8007 |
| 0.1978 | 4.0 | 2140 | 0.8186 | 0.7875 |
| 0.1333 | 5.0 | 2675 | 0.9502 | 0.7917 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
ethicalabs/Kurtis-E1.1-Qwen2.5-0.5B-Instruct-Zero | ethicalabs | "2025-03-28T16:23:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:ethicalabs/Kurtis-E1-SFT",
"dataset:ethicalabs/Kurtis-E1-Multilingual-01-SFT",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-28T16:00:25Z" | ---
license: mit
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
pipeline_tag: text-generation
datasets:
- ethicalabs/Kurtis-E1-SFT
- ethicalabs/Kurtis-E1-Multilingual-01-SFT
library_name: transformers
---
Pseudo-pretraining. Not suitable for production usage. |
sam1120/dropoff-utcustom-train-SF-RGB-b5_7 | sam1120 | "2024-02-12T14:57:46Z" | 146 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2024-02-12T14:26:14Z" | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: dropoff-utcustom-train-SF-RGB-b5_7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dropoff-utcustom-train-SF-RGB-b5_7
This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the sam1120/dropoff-utcustom-TRAIN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1841
- Mean Iou: 0.7025
- Mean Accuracy: 0.7532
- Overall Accuracy: 0.9721
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.5145
- Accuracy Undropoff: 0.9919
- Iou Unlabeled: nan
- Iou Dropoff: 0.4336
- Iou Undropoff: 0.9715
## Model description
More information needed
## Intended uses & limitations
More information needed
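As a SegFormer (`nvidia/mit-b5`) semantic-segmentation fine-tune, the checkpoint can presumably be used through the standard `image-segmentation` pipeline; a minimal sketch (the image path is illustrative):
```python
from transformers import pipeline
from PIL import Image

segmenter = pipeline(
    "image-segmentation",
    model="sam1120/dropoff-utcustom-train-SF-RGB-b5_7",
)

image = Image.open("example_frame.png")  # illustrative local RGB frame

for result in segmenter(image):
    # Each entry carries a class label and a PIL mask for that class.
    print(result["label"], result["mask"].size)
```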
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:----------------:|:------------------:|:-------------:|:-----------:|:-------------:|
| 0.8255 | 5.0 | 10 | 0.7949 | 0.4128 | 0.7856 | 0.9393 | nan | 0.6179 | 0.9533 | 0.0 | 0.3007 | 0.9377 |
| 0.4434 | 10.0 | 20 | 0.4247 | 0.4471 | 0.7066 | 0.9705 | nan | 0.4187 | 0.9944 | 0.0 | 0.3714 | 0.9700 |
| 0.2107 | 15.0 | 30 | 0.2726 | 0.6711 | 0.7003 | 0.9715 | nan | 0.4046 | 0.9961 | nan | 0.3713 | 0.9710 |
| 0.1678 | 20.0 | 40 | 0.2388 | 0.6801 | 0.7343 | 0.9691 | nan | 0.4782 | 0.9904 | nan | 0.3917 | 0.9685 |
| 0.0972 | 25.0 | 50 | 0.1849 | 0.6764 | 0.7096 | 0.9715 | nan | 0.4241 | 0.9952 | nan | 0.3818 | 0.9709 |
| 0.0604 | 30.0 | 60 | 0.2019 | 0.4644 | 0.7568 | 0.9704 | nan | 0.5239 | 0.9897 | 0.0 | 0.4236 | 0.9697 |
| 0.0497 | 35.0 | 70 | 0.1793 | 0.6838 | 0.7345 | 0.9700 | nan | 0.4775 | 0.9914 | nan | 0.3983 | 0.9694 |
| 0.0492 | 40.0 | 80 | 0.2000 | 0.4639 | 0.7567 | 0.9702 | nan | 0.5239 | 0.9896 | 0.0 | 0.4223 | 0.9695 |
| 0.0409 | 45.0 | 90 | 0.1893 | 0.7030 | 0.7778 | 0.9696 | nan | 0.5687 | 0.9869 | nan | 0.4372 | 0.9688 |
| 0.0328 | 50.0 | 100 | 0.1842 | 0.7040 | 0.7715 | 0.9704 | nan | 0.5545 | 0.9885 | nan | 0.4382 | 0.9697 |
| 0.0332 | 55.0 | 110 | 0.1781 | 0.7015 | 0.7563 | 0.9715 | nan | 0.5216 | 0.9910 | nan | 0.4322 | 0.9709 |
| 0.0314 | 60.0 | 120 | 0.1732 | 0.6890 | 0.7305 | 0.9717 | nan | 0.4675 | 0.9935 | nan | 0.4068 | 0.9711 |
| 0.0318 | 65.0 | 130 | 0.1786 | 0.6971 | 0.7477 | 0.9715 | nan | 0.5037 | 0.9918 | nan | 0.4233 | 0.9709 |
| 0.0291 | 70.0 | 140 | 0.1814 | 0.7119 | 0.7687 | 0.9725 | nan | 0.5466 | 0.9909 | nan | 0.4521 | 0.9718 |
| 0.0273 | 75.0 | 150 | 0.1755 | 0.7101 | 0.7677 | 0.9722 | nan | 0.5446 | 0.9907 | nan | 0.4487 | 0.9715 |
| 0.0274 | 80.0 | 160 | 0.1786 | 0.7006 | 0.7494 | 0.9720 | nan | 0.5066 | 0.9922 | nan | 0.4297 | 0.9714 |
| 0.0248 | 85.0 | 170 | 0.1741 | 0.7029 | 0.7526 | 0.9722 | nan | 0.5131 | 0.9921 | nan | 0.4341 | 0.9716 |
| 0.0248 | 90.0 | 180 | 0.1832 | 0.7050 | 0.7595 | 0.9719 | nan | 0.5278 | 0.9912 | nan | 0.4387 | 0.9713 |
| 0.0242 | 95.0 | 190 | 0.1808 | 0.7028 | 0.7539 | 0.9720 | nan | 0.5160 | 0.9918 | nan | 0.4341 | 0.9714 |
| 0.024 | 100.0 | 200 | 0.1796 | 0.7022 | 0.7501 | 0.9723 | nan | 0.5077 | 0.9925 | nan | 0.4327 | 0.9717 |
| 0.0231 | 105.0 | 210 | 0.1835 | 0.7137 | 0.7731 | 0.9724 | nan | 0.5557 | 0.9905 | nan | 0.4556 | 0.9717 |
| 0.0238 | 110.0 | 220 | 0.1823 | 0.7046 | 0.7565 | 0.9721 | nan | 0.5214 | 0.9917 | nan | 0.4376 | 0.9715 |
| 0.0228 | 115.0 | 230 | 0.1833 | 0.7009 | 0.7504 | 0.9720 | nan | 0.5088 | 0.9921 | nan | 0.4305 | 0.9714 |
| 0.0255 | 120.0 | 240 | 0.1841 | 0.7025 | 0.7532 | 0.9721 | nan | 0.5145 | 0.9919 | nan | 0.4336 | 0.9715 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
himmeow/Bon-deepseek-r1-v2 | himmeow | "2025-01-25T06:30:14Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-25T06:22:24Z" | ---
base_model: unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** himmeow
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Tomwho/RLHF-practice | Tomwho | "2025-02-24T08:48:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-24T08:38:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pranjal11/results | pranjal11 | "2024-01-01T08:32:41Z" | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | "2024-01-01T08:32:33Z" | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: ybelkada/falcon-7b-sharded-bf16
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 120
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
oliverdk/codegen-350M-mono-measurement_pred-diamonds-seed1 | oliverdk | "2024-12-19T21:55:38Z" | 69 | 0 | null | [
"tensorboard",
"safetensors",
"codegen_mp",
"generated_from_trainer",
"custom_code",
"base_model:Salesforce/codegen-350M-mono",
"base_model:finetune:Salesforce/codegen-350M-mono",
"license:bsd-3-clause",
"region:us"
] | null | "2024-12-16T20:17:04Z" | ---
license: bsd-3-clause
base_model: Salesforce/codegen-350M-mono
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: codegen-350M-mono-measurement_pred-diamonds-seed1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codegen-350M-mono-measurement_pred-diamonds-seed1
This model is a fine-tuned version of [Salesforce/codegen-350M-mono](https://huggingface.co/Salesforce/codegen-350M-mono) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4083
- Accuracy: 0.9134
- Accuracy Sensor 0: 0.9153
- Auroc Sensor 0: 0.9651
- Accuracy Sensor 1: 0.9094
- Auroc Sensor 1: 0.9502
- Accuracy Sensor 2: 0.9317
- Auroc Sensor 2: 0.9780
- Accuracy Aggregated: 0.8974
- Auroc Aggregated: 0.9672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 64
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Accuracy Sensor 0 | Auroc Sensor 0 | Accuracy Sensor 1 | Auroc Sensor 1 | Accuracy Sensor 2 | Auroc Sensor 2 | Accuracy Aggregated | Auroc Aggregated |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:|:--------------:|:-----------------:|:--------------:|:-----------------:|:--------------:|:-------------------:|:----------------:|
| 0.2812 | 0.9997 | 781 | 0.2931 | 0.8747 | 0.8785 | 0.9058 | 0.8806 | 0.9047 | 0.8897 | 0.9331 | 0.8499 | 0.9009 |
| 0.1938 | 1.9994 | 1562 | 0.2940 | 0.8844 | 0.8760 | 0.9330 | 0.9017 | 0.9300 | 0.9160 | 0.9574 | 0.8438 | 0.9252 |
| 0.1202 | 2.9990 | 2343 | 0.2551 | 0.9080 | 0.9055 | 0.9601 | 0.9119 | 0.9504 | 0.9235 | 0.9757 | 0.8910 | 0.9615 |
| 0.0779 | 4.0 | 3125 | 0.2902 | 0.9178 | 0.9194 | 0.9667 | 0.9164 | 0.9516 | 0.9309 | 0.9799 | 0.9044 | 0.9680 |
| 0.035 | 4.9984 | 3905 | 0.4083 | 0.9134 | 0.9153 | 0.9651 | 0.9094 | 0.9502 | 0.9317 | 0.9780 | 0.8974 | 0.9672 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF | mradermacher | "2024-12-06T02:36:25Z" | 8 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:DKYoon/bloomz-1b7-seahorse4dpo-v2",
"base_model:quantized:DKYoon/bloomz-1b7-seahorse4dpo-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-06T02:22:38Z" | ---
base_model: DKYoon/bloomz-1b7-seahorse4dpo-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/DKYoon/bloomz-1b7-seahorse4dpo-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.Q3_K_S.gguf) | Q3_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.Q3_K_M.gguf) | Q3_K_M | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.Q3_K_L.gguf) | Q3_K_L | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.IQ4_XS.gguf) | IQ4_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.Q4_K_S.gguf) | Q4_K_S | 1.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.Q4_K_M.gguf) | Q4_K_M | 1.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.Q5_K_S.gguf) | Q5_K_S | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.Q5_K_M.gguf) | Q5_K_M | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.Q6_K.gguf) | Q6_K | 1.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.Q8_0.gguf) | Q8_0 | 2.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/bloomz-1b7-seahorse4dpo-v2-GGUF/resolve/main/bloomz-1b7-seahorse4dpo-v2.f16.gguf) | f16 | 4.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
zurandmoro/b0c5843b314b | zurandmoro | "2025-03-28T09:19:51Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-28T08:57:56Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: b0c5843b314b
---
# B0C5843B314B
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `b0c5843b314b` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('zurandmoro/b0c5843b314b', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
aliciannz/TwinLlama-3.1-8B | aliciannz | "2025-03-17T19:23:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-17T19:10:20Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** aliciannz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
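A minimal sketch of loading the uploaded checkpoint with plain `transformers`, assuming merged weights (not just a LoRA adapter) were pushed to this repo; the prompt is illustrative:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aliciannz/TwinLlama-3.1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Write a short paragraph about open-source LLMs.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```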
|
Isotonic/phizzle | Isotonic | "2024-04-04T02:41:55Z" | 145 | 1 | transformers | [
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"rhysjones/phi-2-orange",
"cognitivecomputations/dolphin-2_6-phi-2",
"mrm8488/phi-2-coder",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-16T02:58:16Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- rhysjones/phi-2-orange
- cognitivecomputations/dolphin-2_6-phi-2
- mrm8488/phi-2-coder
---
🌟 Buying me coffee is a direct way to show support for this project.
<a href="https://www.buymeacoffee.com/isotonic"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
# Phizzle
Phizzle is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [rhysjones/phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)
* [cognitivecomputations/dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)
* [mrm8488/phi-2-coder](https://huggingface.co/mrm8488/phi-2-coder)
## 🧩 Configuration
```yaml
models:
- model: rhysjones/phi-2-orange
parameters:
density: 0.5
weight: 0.3
- model: cognitivecomputations/dolphin-2_6-phi-2
parameters:
density: 0.5
weight: 0.3
- model: mrm8488/phi-2-coder
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: rhysjones/phi-2-orange
parameters:
normalize: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate einops
from transformers import AutoTokenizer
import transformers
import torch
model = "Isotonic/phizzle"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Evaluations
Evaluations were done using mlabonne's useful [Colab notebook llm-autoeval](https://github.com/mlabonne/llm-autoeval).
Also check out the alternative leaderboard at [Yet_Another_LLM_Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard)
[phizzle](https://huggingface.co/Isotonic/phizzle) - Yet to be benchmarked
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|----------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)| **33.37**| 71.33| 49.87| **37.3**| **47.97**|
|[phi-2-dpo](https://huggingface.co/lxuechen/phi-2-dpo)| 30.39| **71.68**| **50.75**| 34.9| 46.93|
|[dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)| 33.12| 69.85| 47.39| 37.2| 46.89|
|[phi-2](https://huggingface.co/microsoft/phi-2)| 27.98| 70.8| 44.43| 35.21| 44.61|
|
lesso06/0354113d-9c2e-4e50-839c-7a91fc66aef0 | lesso06 | "2025-02-18T02:11:31Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"license:apache-2.0",
"region:us"
] | null | "2025-02-18T01:49:22Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0354113d-9c2e-4e50-839c-7a91fc66aef0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 0354113d-9c2e-4e50-839c-7a91fc66aef0
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000206
- train_batch_size: 4
- eval_batch_size: 4
- seed: 60
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | 4.8451 |
| 6.9366 | 0.0027 | 50 | 3.6627 |
| 6.8468 | 0.0055 | 100 | 3.9046 |
| 6.5612 | 0.0082 | 150 | 3.5650 |
| 6.953 | 0.0110 | 200 | 3.5307 |
| 6.9443 | 0.0137 | 250 | 3.4631 |
| 6.7133 | 0.0165 | 300 | 3.4163 |
| 6.9374 | 0.0192 | 350 | 3.3973 |
| 7.2634 | 0.0219 | 400 | 3.3735 |
| 6.9631 | 0.0247 | 450 | 3.3669 |
| 6.8995 | 0.0274 | 500 | 3.3678 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
alanahmet/vit-base-patch16-224-finetuned-lora-oxfordPets | alanahmet | "2023-12-13T18:06:09Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/vit-base-patch16-224",
"base_model:adapter:google/vit-base-patch16-224",
"region:us"
] | null | "2023-12-07T09:02:02Z" | ---
library_name: peft
base_model: google/vit-base-patch16-224
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
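In the absence of an official snippet, here is a minimal sketch of attaching the LoRA adapter to the `google/vit-base-patch16-224` base listed in the metadata, assuming the adapter was trained with a replaced 37-class head for Oxford-IIIT Pets (the image path is illustrative):
```python
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

base_id = "google/vit-base-patch16-224"
adapter_id = "alanahmet/vit-base-patch16-224-finetuned-lora-oxfordPets"

processor = AutoImageProcessor.from_pretrained(base_id)
# Oxford-IIIT Pets has 37 classes, so the 1000-class ImageNet head is re-initialized here.
base = AutoModelForImageClassification.from_pretrained(
    base_id, num_labels=37, ignore_mismatched_sizes=True
)
model = PeftModel.from_pretrained(base, adapter_id)

image = Image.open("pet.jpg")  # illustrative local image
inputs = processor(images=image, return_tensors="pt")
predicted_class = model(**inputs).logits.argmax(-1).item()
print(predicted_class)
```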
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
HyperbeeAI/nanotranslator-hf | HyperbeeAI | "2023-08-31T14:17:00Z" | 0 | 2 | null | [
"translation",
"en",
"es",
"dataset:news_commentary",
"license:apache-2.0",
"region:us"
] | translation | "2023-08-19T21:29:54Z" | ---
language:
- en
- es
datasets:
- news_commentary
tags:
- translation
license: apache-2.0
---
# NanoTranslator by HyperbeeAI
Copyright © 2023 Hyperbee.AI Inc. All rights reserved. [email protected]
This repository contains the Spanish-to-English translation utility by HyperbeeAI called NanoTranslator. **The model takes up less than 400 KB of RAM and provides accurate translation for casual conversations.**
To run the demo, see explanations in "demo.ipynb", which acts as the serial terminal to communicate with the ai85 from the host PC. Further explanations are provided below as well as in the notebooks.

### Contents:
- **.py files:** python modules used by the Jupyter notebooks. These files define a simulation environment for the MAX78000 CNN accelerator hardware + some peripheral tools that help evaluation. Note that the simulator only includes the chip features that are relevant to this project (e.g., pooling not implemented because this project does not need it).
- **evaluation.ipynb:** this Jupyter notebook provides an interface to try out different sentences from the test set on the model in the simulation environment, and compute the BLEU score of the model over the test set.
- **demo.ipynb:** this Jupyter notebook acts as the serial interface with the chip. A sentence in the source language is sent over to the chip for translation via the serial port, the implementation on the chip translates this and sends it back via the same serial port in the target language, and the result is displayed on the notebook cell. This needs to be run together with the "assets/demo.elf" program on the chip, which does the actual translation job on the ai85. There is a specific cell on the notebook that needs to be run before the ai85 demo.elf is started. Check the notebook for further info.
- **assets/demo.elf:** C program running the actual translation application. Run this together with the demo.ipynb notebook for the translation demo. See further explanations inside demo.ipynb.
### Extras/Notes:
- The demo C program does not require any extra modules/libraries; it can be run directly the same way as the Maxim SDK examples (i.e., using the arm gdb, defining the target as "remote localhost:3333", doing "load", etc.). However, note that the Jupyter notebook demo.ipynb needs to be run together with the C program for meaningful output. There is a specific cell on the notebook that needs to be run before the ai85 demo.elf is started. Check the notebook for further info.
- The demo.ipynb notebook needs to run on the same host PC that programs the ai85 since it uses the on-board (USB) serial port (that programs the ai85) to communicate with the chip while the translation application is running.
- Although the program should run on both the EVKit and the FeatherBoard without errors (since it uses common functionality), it was only explicitly tested with the FeatherBoard for now.
### Setup:
This demo has been tested with the following configuration:
Python 3.8.10.
datasets 1.8.0
huggingface-hub 0.0.10
ipykernel 5.5.3
ipython 7.22.0
notebook 6.3.0
numpy 1.20.2
pyserial 3.5
sacrebleu 1.5.1
tokenizers 0.10.3
torch 1.8.1
torchtext 0.9.1
tqdm 4.49.0
Note 1: torchtext might default to older versions (e.g., v0.8) on some containers (typically those provided by AWS, which use older versions of Python that don't align well with newer torchtext versions); in that case, the .legacy submodule path needs to be removed from the import directives in the .py files and Jupyter notebooks.
Note 2: there are multiple Python packages on pip that provide a serial port implementation, with conflicting function/object names. Although the package used here is imported with "import serial", it needs to be installed via "pip install pyserial", not "pip install serial". Make sure you get the correct version.
|
Factral/prob8 | Factral | "2025-01-28T13:19:21Z" | 34 | 0 | transformers.js | [
"transformers.js",
"onnx",
"yolos",
"base_model:hustvl/yolos-tiny",
"base_model:quantized:hustvl/yolos-tiny",
"region:us"
] | null | "2025-01-28T06:53:28Z" | ---
base_model: hustvl/yolos-tiny
library_name: transformers.js
---
https://huggingface.co/hustvl/yolos-tiny with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Perform object detection with `Xenova/yolos-tiny`.
```js
import { pipeline } from "@huggingface/transformers";
const detector = await pipeline("object-detection", "Xenova/yolos-tiny");
const image = "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cats.jpg";
const output = await detector(image, { threshold: 0.9 });
console.log(output);
```
<details>
<summary>Example output</summary>
```
[
{
score: 0.9921281933784485,
label: "remote",
box: { xmin: 32, ymin: 78, xmax: 185, ymax: 117 },
},
{
score: 0.9884883165359497,
label: "remote",
box: { xmin: 324, ymin: 82, xmax: 376, ymax: 191 },
},
{
score: 0.9197800159454346,
label: "cat",
box: { xmin: 5, ymin: 56, xmax: 321, ymax: 469 },
},
{
score: 0.9300552606582642,
label: "cat",
box: { xmin: 332, ymin: 25, xmax: 638, ymax: 369 },
},
]
```
</details>
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
flowers-team/TA_RIAC_SAC_chimpanzee_s10 | flowers-team | "2022-08-11T11:52:07Z" | 0 | 0 | null | [
"sac",
"deep-reinforcement-learning",
"reinforcement-learning",
"teach-my-agent-parkour",
"arxiv:2103.09815",
"model-index",
"region:us"
] | reinforcement-learning | "2022-08-11T11:51:44Z" | ---
tags:
- sac
- deep-reinforcement-learning
- reinforcement-learning
- teach-my-agent-parkour
model-index:
- name: RIAC_SAC_chimpanzee_s10
results:
- metrics:
- type: mean_reward
value: -59.06 +/- 4.59
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: teach-my-agent-parkour
type: teach-my-agent-parkour
---
# Deep RL Agent Playing TeachMyAgent's parkour.
You can find more info about TeachMyAgent [here](https://developmentalsystems.org/TeachMyAgent/).
Results of our benchmark can be found in our [paper](https://arxiv.org/pdf/2103.09815.pdf).
You can test this policy [here](https://huggingface.co/spaces/flowers-team/Interactive_DeepRL_Demo)
## Results
Percentage of mastered tasks (i.e. reward >= 230) after 20 million steps on the Parkour track.
Results shown are averages over 16 seeds along with the standard deviation for each morphology as well as the aggregation of the 48 seeds in the *Overall* column.
We highlight the best results in bold.
| Algorithm | BipedalWalker | Fish | Climber | Overall |
|---------------|----------------|---------------|--------------|---------------|
| Random | 27.25 (± 10.7) | 23.6 (± 21.3) | 0.0 (± 0.0) | 16.9 (± 18.3) |
| ADR | 14.7 (± 19.4) | 5.3 (± 20.6) | 0.0 (± 0.0) | 6.7 (± 17.4) |
| ALP-GMM | **42.7** (± 11.2) | 36.1 (± 28.5) | 0.4 (± 1.2) | **26.4** (± 25.7) |
| Covar-GMM | 35.7 (± 15.9) | 29.9 (± 27.9) | 0.5 (± 1.9) | 22.1 (± 24.2) |
| GoalGAN | 25.4 (± 24.7) | 34.7 (± 37.0) | 0.8 (± 2.7) | 20.3 (± 29.5) |
| RIAC | 31.2 (± 8.2) | **37.4** (± 25.4) | 0.4 (± 1.4) | 23.0 (± 22.4) |
| SPDL | 30.6 (± 22.8) | 9.0 (± 24.2) | **1.0** (± 3.4) | 13.5 (± 23.0) |
| Setter-Solver | 28.75 (± 20.7) | 5.1 (± 7.6) | 0.0 (± 0.0) | 11.3 (± 17.9) |
# Hyperparameters
```python
{'student': 'SAC',
'environment': 'parkour',
'training_steps': 20000000,
'n_evaluation_tasks': 100,
'teacher': 'RIAC',
'morphology': 'climbing_profile_chimpanzee'}
```
|
infinitejoy/wav2vec2-large-xls-r-300m-bulgarian | infinitejoy | "2022-03-24T11:47:30Z" | 445 | 2 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"bg",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language:
- bg
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- bg
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: XLS-R-300M - Bulgarian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: bg
metrics:
- name: Test WER
type: wer
value: 46.68
- name: Test CER
type: cer
value: 10.75
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: bg
metrics:
- name: Test WER
type: wer
value: 63.68
- name: Test CER
type: cer
value: 19.88
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: bg
metrics:
- name: Test WER
type: wer
value: 64.08
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bulgarian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4487
- Wer: 0.4674
## Model description
More information needed
## Intended uses & limitations
More information needed
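The checkpoint can presumably be used with the standard `automatic-speech-recognition` pipeline; a minimal sketch (the audio file is illustrative and should be 16 kHz mono Bulgarian speech):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-bulgarian",
)

# Illustrative local file; Common Voice clips work well after resampling to 16 kHz.
print(asr("bulgarian_sample.wav")["text"])
```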
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9774 | 6.33 | 500 | 2.9769 | 1.0 |
| 1.3453 | 12.66 | 1000 | 0.6523 | 0.6980 |
| 1.1658 | 18.99 | 1500 | 0.5636 | 0.6359 |
| 1.0797 | 25.32 | 2000 | 0.5004 | 0.5759 |
| 1.044 | 31.65 | 2500 | 0.4958 | 0.5569 |
| 0.9915 | 37.97 | 3000 | 0.4971 | 0.5350 |
| 0.9429 | 44.3 | 3500 | 0.4829 | 0.5229 |
| 0.9266 | 50.63 | 4000 | 0.4515 | 0.5074 |
| 0.8965 | 56.96 | 4500 | 0.4599 | 0.5039 |
| 0.878 | 63.29 | 5000 | 0.4735 | 0.4954 |
| 0.8494 | 69.62 | 5500 | 0.4460 | 0.4878 |
| 0.8343 | 75.95 | 6000 | 0.4510 | 0.4795 |
| 0.8236 | 82.28 | 6500 | 0.4538 | 0.4789 |
| 0.8069 | 88.61 | 7000 | 0.4526 | 0.4748 |
| 0.7958 | 94.94 | 7500 | 0.4496 | 0.4700 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
keitokei1994/Borea-Phi-3.5-mini-Instruct-Common-GGUF | keitokei1994 | "2024-08-21T09:01:13Z" | 28 | 1 | null | [
"gguf",
"ja",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-08-21T08:48:34Z" | ---
license: mit
language:
- ja
- en
---
# keitokei1994/Borea-Phi-3.5-mini-Instruct-Common-GGUF
This is the GGUF version of [HODACHI/Borea-Phi-3.5-mini-Instruct-Common](https://huggingface.co/HODACHI/Borea-Phi-3.5-mini-Instruct-Common), published by HODACHI. |
Seuto1/Seutobot | Seuto1 | "2025-02-19T07:23:59Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-02-19T07:23:59Z" | ---
license: apache-2.0
---
|
RajkNakka/distilhubert-finetuned-gtzan-2 | RajkNakka | "2023-08-11T22:00:51Z" | 161 | 0 | transformers | [
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2023-08-10T23:13:31Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan-2
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9149
- Accuracy: 0.83
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0823 | 1.0 | 113 | 2.0903 | 0.46 |
| 1.5111 | 2.0 | 226 | 1.5342 | 0.6 |
| 1.2342 | 3.0 | 339 | 1.1036 | 0.68 |
| 0.8352 | 4.0 | 452 | 0.9137 | 0.78 |
| 0.5727 | 5.0 | 565 | 0.6258 | 0.81 |
| 0.3957 | 6.0 | 678 | 0.5984 | 0.83 |
| 0.1851 | 7.0 | 791 | 0.6269 | 0.82 |
| 0.1607 | 8.0 | 904 | 0.6945 | 0.79 |
| 0.1426 | 9.0 | 1017 | 0.6103 | 0.86 |
| 0.0519 | 10.0 | 1130 | 0.7502 | 0.81 |
| 0.0097 | 11.0 | 1243 | 0.7101 | 0.85 |
| 0.006 | 12.0 | 1356 | 0.8174 | 0.82 |
| 0.0039 | 13.0 | 1469 | 0.8008 | 0.84 |
| 0.0032 | 14.0 | 1582 | 0.8438 | 0.81 |
| 0.0027 | 15.0 | 1695 | 0.8206 | 0.82 |
| 0.0024 | 16.0 | 1808 | 0.8563 | 0.82 |
| 0.002 | 17.0 | 1921 | 0.8884 | 0.82 |
| 0.0018 | 18.0 | 2034 | 0.9148 | 0.82 |
| 0.0018 | 19.0 | 2147 | 0.9017 | 0.83 |
| 0.0016 | 20.0 | 2260 | 0.9178 | 0.83 |
| 0.0015 | 21.0 | 2373 | 0.9070 | 0.83 |
| 0.0014 | 22.0 | 2486 | 0.9033 | 0.83 |
| 0.0014 | 23.0 | 2599 | 0.8975 | 0.84 |
| 0.0013 | 24.0 | 2712 | 0.9160 | 0.83 |
| 0.0013 | 25.0 | 2825 | 0.9149 | 0.83 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.2
|
sampras343/wav2vec2-base-ft-keyword-spotting | sampras343 | "2022-06-14T10:02:24Z" | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2022-06-14T08:00:35Z" | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ft-keyword-spotting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ft-keyword-spotting
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0824
- Accuracy: 0.9826
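A minimal sketch that runs the classifier on one clip from the keyword-spotting (`ks`) subset of SUPERB, i.e. the same data the model was fine-tuned on; exact dataset-loading details may vary with your `datasets` version.
```python
import torch
from datasets import load_dataset
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "sampras343/wav2vec2-base-ft-keyword-spotting"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)

# One test clip from the SUPERB keyword-spotting subset.
sample = load_dataset("superb", "ks", split="test[:1]")[0]["audio"]

inputs = feature_extractor(
    sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```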
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8972 | 1.0 | 399 | 0.7023 | 0.8174 |
| 0.3274 | 2.0 | 798 | 0.1634 | 0.9773 |
| 0.1993 | 3.0 | 1197 | 0.1048 | 0.9788 |
| 0.1777 | 4.0 | 1596 | 0.0824 | 0.9826 |
| 0.1527 | 5.0 | 1995 | 0.0812 | 0.9810 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
qingy2024/Qwen2.5-4B | qingy2024 | "2025-01-03T19:41:02Z" | 240 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-03T19:39:31Z" | ---
base_model:
- Qwen/Qwen2.5-3B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 6]
model: Qwen/Qwen2.5-3B
- sources:
- layer_range: [4, 12]
model: Qwen/Qwen2.5-3B
- sources:
- layer_range: [10, 18]
model: Qwen/Qwen2.5-3B
- sources:
- layer_range: [16, 24]
model: Qwen/Qwen2.5-3B
- sources:
- layer_range: [22, 30]
model: Qwen/Qwen2.5-3B
- sources:
- layer_range: [28, 36]
model: Qwen/Qwen2.5-3B
merge_method: passthrough
dtype: bfloat16
```
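The result is a regular `transformers` causal-LM checkpoint, so it can be loaded directly; a minimal sketch (the prompt is arbitrary):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qingy2024/Qwen2.5-4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("Layer-stacking merges are useful because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```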
|
bloc4488/SAD | bloc4488 | "2024-04-30T15:51:42Z" | 0 | 0 | null | [
"en",
"dataset:bloc4488/TMDB-all-movies",
"region:us"
] | null | "2024-04-30T15:32:11Z" | ---
datasets:
- bloc4488/TMDB-all-movies
language:
- en
metrics:
- accuracy
--- |
CUTD/qnAr | CUTD | "2024-03-11T23:28:28Z" | 90 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-03-11T20:43:11Z" | ---
tags:
- generated_from_trainer
model-index:
- name: qnAr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qnAr
This model is a fine-tuned version of [ZeyadAhmed/AraElectra-Arabic-SQuADv2-QA](https://huggingface.co/ZeyadAhmed/AraElectra-Arabic-SQuADv2-QA) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9324
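A minimal usage sketch with the `transformers` question-answering pipeline; the Arabic question/context pair is a placeholder example.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="CUTD/qnAr")

# Placeholder Arabic question and context.
result = qa(
    question="ما هي عاصمة فرنسا؟",
    context="باريس هي عاصمة فرنسا وأكبر مدنها.",
)
print(result["answer"], result["score"])
```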
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8797 | 1.0 | 1208 | 1.8843 |
| 1.7562 | 2.0 | 2417 | 1.8879 |
| 1.6659 | 3.0 | 3624 | 1.9324 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
|
mradermacher/Qwen-1_8B-Chat-i1-GGUF | mradermacher | "2025-03-11T02:50:38Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen",
"zh",
"en",
"base_model:Qwen/Qwen-1_8B-Chat",
"base_model:quantized:Qwen/Qwen-1_8B-Chat",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-03-11T02:16:45Z" | ---
base_model: Qwen/Qwen-1_8B-Chat
language:
- zh
- en
library_name: transformers
quantized_by: mradermacher
tags:
- qwen
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen-1_8B-Chat
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen-1_8B-Chat-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
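As a programmatic alternative, the files can be loaded with the `llama-cpp-python` bindings; a rough sketch, assuming you have downloaded one of the quantized files listed below:
```python
from llama_cpp import Llama

# Path to a downloaded quant from the table below, e.g. the Q4_K_M file.
llm = Llama(model_path="Qwen-1_8B-Chat.i1-Q4_K_M.gguf", n_ctx=2048)

output = llm("Write a one-sentence greeting.", max_tokens=64)
print(output["choices"][0]["text"])
```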
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-IQ1_S.gguf) | i1-IQ1_S | 0.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-IQ1_M.gguf) | i1-IQ1_M | 0.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-IQ2_S.gguf) | i1-IQ2_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-IQ2_M.gguf) | i1-IQ2_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-Q2_K.gguf) | i1-Q2_K | 0.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-IQ3_S.gguf) | i1-IQ3_S | 1.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-IQ3_M.gguf) | i1-IQ3_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-Q4_0.gguf) | i1-Q4_0 | 1.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-Q4_1.gguf) | i1-Q4_1 | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-1_8B-Chat-i1-GGUF/resolve/main/Qwen-1_8B-Chat.i1-Q6_K.gguf) | i1-Q6_K | 1.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
hgjhgj/online.viral.sex.video.xnxx.reddit.live | hgjhgj | "2025-02-20T10:04:59Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-20T10:04:05Z" | <p><a data-target="animated-image.originalLink" rel="nofollow" href="https://shlbd.net/leaked/video/"><img data-target="animated-image.originalImage" style="max-width: 100%; display: inline-block;" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a></p>
|
prxy5608/50f7f2b8-4ce1-49e7-978c-894064c16dc4 | prxy5608 | "2025-01-18T19:49:37Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-mistral",
"base_model:adapter:echarlaix/tiny-random-mistral",
"license:apache-2.0",
"region:us"
] | null | "2025-01-18T19:48:55Z" | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-mistral
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 50f7f2b8-4ce1-49e7-978c-894064c16dc4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-mistral
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- df2ada24c6bb2b0e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/df2ada24c6bb2b0e_train_data.json
type:
field_input: answer
field_instruction: problem
field_output: generated_solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: prxy5608/50f7f2b8-4ce1-49e7-978c-894064c16dc4
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/df2ada24c6bb2b0e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: c2294ea4-f15b-4e48-866d-856c7fe6510b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c2294ea4-f15b-4e48-866d-856c7fe6510b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 50f7f2b8-4ce1-49e7-978c-894064c16dc4
This model is a fine-tuned version of [echarlaix/tiny-random-mistral](https://huggingface.co/echarlaix/tiny-random-mistral) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 41.5304 | 0.0034 | 1 | 10.3790 |
| 41.1013 | 0.1718 | 50 | 10.2832 |
| 41.0338 | 0.3436 | 100 | 10.2672 |
| 40.9821 | 0.5155 | 150 | 10.2582 |
| 40.9956 | 0.6873 | 200 | 10.2566 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
xyla/Clinical-T5-Large | xyla | "2023-02-22T02:34:54Z" | 0 | 31 | null | [
"medical",
"arxiv:2302.08091",
"license:mit",
"region:us"
] | null | "2023-01-24T20:27:34Z" | ---
license: mit
tags:
- medical
---
# Clinical-T5 Models
We train four different T5 variants on the union of MIMIC-III and MIMIC-IV: (1) Initialized from T5-Base,
(2) Initialized from SciFive-Base, (3) T5-Base initialized from scratch, and (4) T5-Large initialized from scratch.
This particular model card describes the T5-Large model trained from scratch on MIMIC notes.
More information about the models and training procedure can also be found in the [preprint](https://arxiv.org/abs/2302.08091).
# Model Pretraining
In this section, we will describe the pretraining procedure.
### Pretraining Data
We train on the union of MIMIC-III and MIMIC-IV. MIMIC-III contains a wide variety of note types, whereas MIMIC-IV contains only radiology reports and discharge summaries. We remove duplicate notes. This results in ~1.2B words.
### Note Preprocessing
We make two important preprocessing steps:
* We replace all DEID tags with special tokens. For example, `"The patient, [**First Name 123**], has a history of high blood pressure"` is replaced with `"The patient, [NAME], has a history of high blood pressure"`. A small regex sketch of this step follows the list.
* We remove any duplicate notes based on edit times. There are roughly ~300M/800M words from MIMIC-III, which are repeats of the same note, with only a few words changed! This is due to the fact that a nurse might save a note, and then edit it 10 minutes later. Both would appear.
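A rough illustration of the DEID replacement above; the tag-to-token mapping here is an assumed example, not the exact table used for the released models:
```python
import re

# Assumed mapping from MIMIC-style DEID tags to coarse special tokens (illustration only).
DEID_PATTERNS = [
    (re.compile(r"\[\*\*(First|Last) Name[^\]]*\*\*\]"), "[NAME]"),
    (re.compile(r"\[\*\*Hospital[^\]]*\*\*\]"), "[HOSPITAL]"),
    (re.compile(r"\[\*\*[^\]]*\*\*\]"), "[DEID]"),  # catch-all for any remaining tags
]

def replace_deid_tags(note: str) -> str:
    for pattern, token in DEID_PATTERNS:
        note = pattern.sub(token, note)
    return note

print(replace_deid_tags("The patient, [**First Name 123**], has a history of high blood pressure"))
# -> The patient, [NAME], has a history of high blood pressure
```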
### Pretraining Procedures
We train the Clinical-T5-Large model from scratch using a cased vocab of 32,000. We train it for 780,000 steps, using a batch size of 12 per TPU pod (8 pods total) and a sequence length of 512.
This results in a batch size of 49,152 tokens per step. Accounting for the number of steps, this equates to 38B tokens. We were aiming for 40B, but our Google Cloud instance broke! We use the same LR schedule as the original T5 paper.
We train the Clinical-T5-Scratch model using an uncased vocab of 32,000. This model is trained for 28 epochs total, with a sequence length of 512 (~40B tokens total). We use the same LR schedule as the original T5 paper.
As mentioned previously, we also train two models initialized from T5-Base and SciFive. These are trained for ~13B tokens, using a batch size of 32 per GPU (8 GPUs) and a sequence length of 512. In an attempt to speed up training and help the models quickly adapt, we increase the warm-up steps from 10K to 40K. This helps the model initialized from T5-Base, but not SciFive.
# How to use the Model
You will first need credentialed PhysioNet access to use the model. Why? There is reasonable evidence that these models contain leakage, especially the larger ones. Releasing a model that leaks these notes would be a data-use agreement violation. To get PhysioNet access, you must pass the CITI training.
Once you have PhysioNet, access the model by doing the following:
```
wget -r -N -c -np --user "INSERT_USER" --ask-password https://physionet.org/files/clinical-t5/1.0.0/
```
Then, you can load the model + tokenizer:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_MODEL_FOLDER)
model = AutoModelForSeq2SeqLM.from_pretrained(PATH_TO_MODEL_FOLDER)
```
# Tips
Use the models initialized from scratch! Based on our preliminary results, we find that these are best.
# Questions?
If you have any questions about using the models, please email [email protected]. |
HPLT/hplt_bert_base_2_0_heb-Hebr | HPLT | "2025-03-19T12:45:18Z" | 13 | 0 | null | [
"pytorch",
"BERT",
"HPLT",
"encoder",
"custom_code",
"he",
"dataset:HPLT/HPLT2.0_cleaned",
"arxiv:2503.10267",
"license:apache-2.0",
"region:us"
] | null | "2025-02-22T22:49:33Z" | ---
language:
- he
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/HPLT2.0_cleaned
---
# HPLT v2.0 BERT for Hebrew
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a second release by the [HPLT project](https://hplt-project.org/).
It is a so called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
We present monolingual LTG-BERT models for more than 50 languages out of 191 total in the [HPLT v2.0 dataset](https://hplt-project.org/datasets/v2.0).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage (tested with `transformers==4.46.1` and `tokenizers==0.20.1`)
This model currently needs a custom wrapper from `modeling_ltgbert.py`, you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_2_0_heb-Hebr")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_heb-Hebr", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist(), clean_up_tokenization_spaces=True))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model at intervals of every 3125 training steps in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_heb-Hebr", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_2_0_heb-Hebr")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@misc{burchell2025expandedmassivemultilingualdataset,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
year={2025},
eprint={2503.10267},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10267},
}
```
|
nesemenpolkov/gemma-3-1b-it-bookMeta | nesemenpolkov | "2025-04-09T17:46:18Z" | 24 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"ru",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-02T11:56:55Z" | ---
library_name: transformers
license: mit
language:
- ru
base_model:
- google/gemma-3-1b-it
pipeline_tag: text-generation
---
# Model Card for `gemma-3-1b-bookMeta`
## Model Details
- **Model ID:** `nesemenpolkov/gemma-3-1b-it-bookMeta`
- **Model Type:** Causal Language Model
- **Library:** Transformers
- **Framework:** PyTorch
## Model Description
The `gemma-3-1b-bookMeta` model is designed to extract metadata from book descriptions. It can identify authors, titles, publishers, years, page counts, translators, and illustrators from a given text.
## Usage
Below is an example of how to use the `gemma-3-1b-bookMeta` model to extract metadata from a book description:
```python
from transformers import AutoTokenizer, Gemma3ForCausalLM
import torch
model_id = "nesemenpolkov/gemma-3-1b-it-bookMeta"
model = Gemma3ForCausalLM.from_pretrained(model_id).eval()
tokenizer = AutoTokenizer.from_pretrained(model_id)
text = "Летний сад / художник Успенский М. Н. 5 мая 1967 1 л."
prompt = "Вот текст:\n{text}\nВыдели в тексте (если есть):\n-авторы (один или несколько может быть, не пересекается с названием)\n-название (обычно в начале текста, не пересекается с авторами)\n-издательство (не может включать в себя год)\n-год (четырех значное число)\n-количество страниц\n-переводчик (обычно в тексте встречается как 'пер.' или синоним)\n-илюстратор (обычно в тексте встречается как 'ил.' или синоним)\n В случае отсутствия чего либо на этом месте должна быть пустая строка в ответе".format(text=text)
messages = [
[
{
"role": "user",
"content": [{"type": "text", "text": prompt},]
},
],
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device).to(torch.bfloat16)
with torch.inference_mode():
outputs = model.generate(**inputs, max_new_tokens=128)
outputs = tokenizer.batch_decode(outputs)
```
## Input/Output
- **Input:** A text string containing book metadata.
- **Output:** Extracted metadata fields such as authors, title, publisher, year, page count, translator, and illustrator.
## Limitations
- The model may struggle with non-standard or incomplete book descriptions.
- Accuracy may vary depending on the formatting and language of the input text.
## Ethical Considerations
- Ensure that the model is used responsibly and ethically, respecting privacy and copyright laws when processing book metadata.
## Contact
For more information or support, please contact the model maintainers.
|
Jonjew/JoyceDewitt | Jonjew | "2025-03-14T04:47:29Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | "2025-03-14T04:47:24Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: joyce-dewitt
output:
url: images/Flux_20241105123728_MSI_00002.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: joyce-dewitt
license: unknown
---
# Joyce Dewitt
<Gallery />
## Model description
FROM https://civitai.com/models/924759/joyce-dewitt-flux-actress?modelVersionId=1035106
Trigger joyce-dewitt
## Trigger words
You should use `joyce-dewitt` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/JoyceDewitt/tree/main) them in the Files & versions tab.
|
n-f/marketing-email-bloom-17b | n-f | "2023-11-03T13:53:41Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloom-1b7",
"base_model:adapter:bigscience/bloom-1b7",
"region:us"
] | null | "2023-11-03T13:53:39Z" | ---
library_name: peft
base_model: bigscience/bloom-1b7
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.7.0.dev0
|
mradermacher/Falcon2-8B-Spanish-GGUF | mradermacher | "2024-06-05T17:44:35Z" | 10 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"es",
"base_model:ssmits/Falcon2-8B-Spanish",
"base_model:quantized:ssmits/Falcon2-8B-Spanish",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-05T17:16:06Z" | ---
base_model: ssmits/Falcon2-8B-Spanish
language:
- es
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ssmits/Falcon2-8B-Spanish
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.IQ3_XS.gguf) | IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q5_K_M.gguf) | Q5_K_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.Q8_0.gguf) | Q8_0 | 8.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Spanish-GGUF/resolve/main/Falcon2-8B-Spanish.f16.gguf) | f16 | 16.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
timm/tinynet_a.in1k | timm | "2025-01-21T18:20:17Z" | 19,297 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"transformers",
"dataset:imagenet-1k",
"arxiv:2010.14819",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-13T00:21:58Z" | ---
tags:
- image-classification
- timm
- transformers
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tinynet_a.in1k
A TinyNet image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 6.2
- GMACs: 0.3
- Activations (M): 5.4
- Image size: 192 x 192
- **Papers:**
- Model rubik's cube: Twisting resolution, depth and width for tinynets: https://arxiv.org/abs/2010.14819v2
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tinynet_a.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tinynet_a.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 16, 96, 96])
# torch.Size([1, 24, 48, 48])
# torch.Size([1, 40, 24, 24])
# torch.Size([1, 112, 12, 12])
# torch.Size([1, 320, 6, 6])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tinynet_a.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 6, 6) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{han2020model,
title={Model rubik’s cube: Twisting resolution, depth and width for tinynets},
author={Han, Kai and Wang, Yunhe and Zhang, Qiulin and Zhang, Wei and Xu, Chunjing and Zhang, Tong},
journal={Advances in Neural Information Processing Systems},
volume={33},
pages={19353--19364},
year={2020}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
Abdo36/DistilBertLoRa | Abdo36 | "2024-04-26T13:39:46Z" | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-04-26T12:06:51Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
metrics:
- accuracy
model-index:
- name: DistilBertLoRa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBertLoRa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the IMDB Movie dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0234
- Accuracy: {'accuracy': 0.884}
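A minimal inference sketch, assuming the LoRA adapter was trained on a two-label `distilbert-base-uncased` sequence-classification head and that index 1 is the positive class (both assumptions, not stated in this card):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "distilbert-base-uncased"
adapter_id = "Abdo36/DistilBertLoRa"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)  # assumed 2 labels
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("This movie was a delight from start to finish.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("positive" if int(logits.argmax(-1)) == 1 else "negative")  # index 1 assumed positive
```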
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log | 1.0 | 250 | 0.4076 | {'accuracy': 0.876} |
| 0.429 | 2.0 | 500 | 0.3507 | {'accuracy': 0.863} |
| 0.429 | 3.0 | 750 | 0.5018 | {'accuracy': 0.881} |
| 0.2304 | 4.0 | 1000 | 0.7036 | {'accuracy': 0.864} |
| 0.2304 | 5.0 | 1250 | 0.8113 | {'accuracy': 0.862} |
| 0.1136 | 6.0 | 1500 | 0.8428 | {'accuracy': 0.882} |
| 0.1136 | 7.0 | 1750 | 0.9134 | {'accuracy': 0.89} |
| 0.0153 | 8.0 | 2000 | 0.9723 | {'accuracy': 0.884} |
| 0.0153 | 9.0 | 2250 | 1.0225 | {'accuracy': 0.884} |
| 0.0089 | 10.0 | 2500 | 1.0234 | {'accuracy': 0.884} |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 |
akshaykumarcp/distilbert-base-uncased-finetuned-squad | akshaykumarcp | "2024-03-14T08:05:13Z" | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-03-14T03:03:10Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4720
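A minimal extractive-QA sketch using the checkpoint directly; the question/context pair is an arbitrary example.
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "akshaykumarcp/distilbert-base-uncased-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Where do penguins live?"
context = "Penguins are flightless birds that live almost exclusively in the Southern Hemisphere."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Decode the span between the most likely start and end token positions.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```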
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6428 | 1.0 | 5533 | 1.2580 |
| 0.5309 | 2.0 | 11066 | 1.4201 |
| 0.4581 | 3.0 | 16599 | 1.4720 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.13.3
|
mitchyAI/garammchy | mitchyAI | "2024-02-17T01:39:26Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-02-17T01:37:35Z" | ---
license: creativeml-openrail-m
---
|
MayBashendy/ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k14_task1_organization | MayBashendy | "2024-12-22T11:05:42Z" | 163 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-22T10:50:27Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k14_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k14_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6528
- Qwk: 0.6806
- Mse: 0.6528
- Rmse: 0.8080
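A minimal scoring sketch; given the QWK/RMSE metrics above, the classification head is assumed to output an essay-organization score rather than discrete class labels, so the raw logits are printed (the Arabic input is a placeholder).
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "MayBashendy/ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k14_task1_organization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "هذا نص تجريبي لتقييم تنظيم المقال."  # placeholder Arabic input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # interpret according to the task's scoring scale
```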
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0253 | 2 | 5.2680 | -0.0016 | 5.2680 | 2.2952 |
| No log | 0.0506 | 4 | 3.1539 | 0.0597 | 3.1539 | 1.7759 |
| No log | 0.0759 | 6 | 1.8922 | 0.1210 | 1.8922 | 1.3756 |
| No log | 0.1013 | 8 | 1.3212 | 0.2191 | 1.3212 | 1.1494 |
| No log | 0.1266 | 10 | 1.0999 | 0.2428 | 1.0999 | 1.0488 |
| No log | 0.1519 | 12 | 1.1472 | 0.1753 | 1.1472 | 1.0711 |
| No log | 0.1772 | 14 | 1.2943 | 0.2225 | 1.2943 | 1.1377 |
| No log | 0.2025 | 16 | 1.4861 | 0.0025 | 1.4861 | 1.2191 |
| No log | 0.2278 | 18 | 1.4219 | 0.0825 | 1.4219 | 1.1924 |
| No log | 0.2532 | 20 | 1.3237 | 0.2252 | 1.3237 | 1.1505 |
| No log | 0.2785 | 22 | 1.1903 | 0.2260 | 1.1903 | 1.0910 |
| No log | 0.3038 | 24 | 1.1303 | 0.1863 | 1.1303 | 1.0632 |
| No log | 0.3291 | 26 | 1.1544 | 0.1649 | 1.1544 | 1.0744 |
| No log | 0.3544 | 28 | 1.1080 | 0.1863 | 1.1080 | 1.0526 |
| No log | 0.3797 | 30 | 1.2187 | 0.2392 | 1.2187 | 1.1039 |
| No log | 0.4051 | 32 | 1.4095 | 0.1164 | 1.4095 | 1.1872 |
| No log | 0.4304 | 34 | 1.5253 | 0.0513 | 1.5253 | 1.2350 |
| No log | 0.4557 | 36 | 1.4482 | 0.1309 | 1.4482 | 1.2034 |
| No log | 0.4810 | 38 | 1.1031 | 0.3565 | 1.1031 | 1.0503 |
| No log | 0.5063 | 40 | 0.8583 | 0.5191 | 0.8583 | 0.9265 |
| No log | 0.5316 | 42 | 0.8229 | 0.4856 | 0.8229 | 0.9072 |
| No log | 0.5570 | 44 | 0.8017 | 0.5630 | 0.8017 | 0.8954 |
| No log | 0.5823 | 46 | 1.0425 | 0.3333 | 1.0425 | 1.0210 |
| No log | 0.6076 | 48 | 1.3017 | 0.2675 | 1.3017 | 1.1409 |
| No log | 0.6329 | 50 | 1.3044 | 0.2823 | 1.3044 | 1.1421 |
| No log | 0.6582 | 52 | 1.1319 | 0.3643 | 1.1319 | 1.0639 |
| No log | 0.6835 | 54 | 1.0732 | 0.4185 | 1.0732 | 1.0359 |
| No log | 0.7089 | 56 | 0.8371 | 0.5972 | 0.8371 | 0.9149 |
| No log | 0.7342 | 58 | 0.6174 | 0.7162 | 0.6174 | 0.7858 |
| No log | 0.7595 | 60 | 0.6142 | 0.7192 | 0.6142 | 0.7837 |
| No log | 0.7848 | 62 | 0.7365 | 0.6832 | 0.7365 | 0.8582 |
| No log | 0.8101 | 64 | 1.0876 | 0.5336 | 1.0876 | 1.0429 |
| No log | 0.8354 | 66 | 1.1844 | 0.4905 | 1.1844 | 1.0883 |
| No log | 0.8608 | 68 | 0.8550 | 0.6178 | 0.8550 | 0.9247 |
| No log | 0.8861 | 70 | 0.5876 | 0.7536 | 0.5876 | 0.7666 |
| No log | 0.9114 | 72 | 0.5780 | 0.7379 | 0.5780 | 0.7603 |
| No log | 0.9367 | 74 | 0.5780 | 0.7156 | 0.5780 | 0.7602 |
| No log | 0.9620 | 76 | 0.5624 | 0.7288 | 0.5624 | 0.7499 |
| No log | 0.9873 | 78 | 0.5692 | 0.7184 | 0.5692 | 0.7545 |
| No log | 1.0127 | 80 | 0.5539 | 0.6848 | 0.5539 | 0.7443 |
| No log | 1.0380 | 82 | 0.6700 | 0.6822 | 0.6700 | 0.8185 |
| No log | 1.0633 | 84 | 0.7081 | 0.6732 | 0.7081 | 0.8415 |
| No log | 1.0886 | 86 | 0.6590 | 0.7210 | 0.6590 | 0.8118 |
| No log | 1.1139 | 88 | 0.5898 | 0.7346 | 0.5898 | 0.7680 |
| No log | 1.1392 | 90 | 0.5904 | 0.7607 | 0.5904 | 0.7684 |
| No log | 1.1646 | 92 | 0.5928 | 0.7817 | 0.5928 | 0.7700 |
| No log | 1.1899 | 94 | 0.5609 | 0.7791 | 0.5609 | 0.7489 |
| No log | 1.2152 | 96 | 0.5477 | 0.7490 | 0.5477 | 0.7401 |
| No log | 1.2405 | 98 | 0.6284 | 0.7203 | 0.6284 | 0.7927 |
| No log | 1.2658 | 100 | 0.8825 | 0.5691 | 0.8825 | 0.9394 |
| No log | 1.2911 | 102 | 0.7646 | 0.5770 | 0.7646 | 0.8744 |
| No log | 1.3165 | 104 | 0.5991 | 0.7036 | 0.5991 | 0.7740 |
| No log | 1.3418 | 106 | 0.6372 | 0.7032 | 0.6372 | 0.7982 |
| No log | 1.3671 | 108 | 0.7095 | 0.6927 | 0.7095 | 0.8423 |
| No log | 1.3924 | 110 | 0.6814 | 0.7267 | 0.6814 | 0.8254 |
| No log | 1.4177 | 112 | 0.5999 | 0.7229 | 0.5999 | 0.7745 |
| No log | 1.4430 | 114 | 0.9206 | 0.6235 | 0.9206 | 0.9595 |
| No log | 1.4684 | 116 | 0.9962 | 0.6093 | 0.9962 | 0.9981 |
| No log | 1.4937 | 118 | 0.6768 | 0.7081 | 0.6768 | 0.8227 |
| No log | 1.5190 | 120 | 0.6473 | 0.7379 | 0.6473 | 0.8045 |
| No log | 1.5443 | 122 | 0.6888 | 0.7312 | 0.6888 | 0.8299 |
| No log | 1.5696 | 124 | 0.6289 | 0.7417 | 0.6289 | 0.7930 |
| No log | 1.5949 | 126 | 0.6666 | 0.7455 | 0.6666 | 0.8164 |
| No log | 1.6203 | 128 | 0.6889 | 0.7474 | 0.6889 | 0.8300 |
| No log | 1.6456 | 130 | 0.7669 | 0.6985 | 0.7669 | 0.8757 |
| No log | 1.6709 | 132 | 0.8517 | 0.6420 | 0.8517 | 0.9229 |
| No log | 1.6962 | 134 | 0.7816 | 0.6839 | 0.7816 | 0.8841 |
| No log | 1.7215 | 136 | 0.5943 | 0.7204 | 0.5943 | 0.7709 |
| No log | 1.7468 | 138 | 0.6181 | 0.7591 | 0.6181 | 0.7862 |
| No log | 1.7722 | 140 | 0.6024 | 0.7856 | 0.6024 | 0.7761 |
| No log | 1.7975 | 142 | 0.5766 | 0.7449 | 0.5766 | 0.7593 |
| No log | 1.8228 | 144 | 0.7742 | 0.6616 | 0.7742 | 0.8799 |
| No log | 1.8481 | 146 | 0.7788 | 0.6557 | 0.7788 | 0.8825 |
| No log | 1.8734 | 148 | 0.6085 | 0.7276 | 0.6085 | 0.7801 |
| No log | 1.8987 | 150 | 0.6769 | 0.7384 | 0.6769 | 0.8227 |
| No log | 1.9241 | 152 | 1.0900 | 0.6565 | 1.0900 | 1.0440 |
| No log | 1.9494 | 154 | 1.1754 | 0.6397 | 1.1754 | 1.0842 |
| No log | 1.9747 | 156 | 1.0071 | 0.6956 | 1.0071 | 1.0035 |
| No log | 2.0 | 158 | 0.9085 | 0.6969 | 0.9085 | 0.9532 |
| No log | 2.0253 | 160 | 0.7333 | 0.7106 | 0.7333 | 0.8563 |
| No log | 2.0506 | 162 | 0.6466 | 0.7066 | 0.6466 | 0.8041 |
| No log | 2.0759 | 164 | 0.6347 | 0.7324 | 0.6347 | 0.7967 |
| No log | 2.1013 | 166 | 0.6856 | 0.7105 | 0.6856 | 0.8280 |
| No log | 2.1266 | 168 | 0.7494 | 0.6909 | 0.7494 | 0.8657 |
| No log | 2.1519 | 170 | 0.7192 | 0.6850 | 0.7192 | 0.8481 |
| No log | 2.1772 | 172 | 0.6278 | 0.7207 | 0.6278 | 0.7924 |
| No log | 2.2025 | 174 | 0.6158 | 0.7516 | 0.6158 | 0.7848 |
| No log | 2.2278 | 176 | 0.6699 | 0.7280 | 0.6699 | 0.8185 |
| No log | 2.2532 | 178 | 0.7579 | 0.7015 | 0.7579 | 0.8706 |
| No log | 2.2785 | 180 | 0.7193 | 0.6972 | 0.7193 | 0.8481 |
| No log | 2.3038 | 182 | 0.7078 | 0.6972 | 0.7078 | 0.8413 |
| No log | 2.3291 | 184 | 0.6373 | 0.7417 | 0.6373 | 0.7983 |
| No log | 2.3544 | 186 | 0.6110 | 0.7526 | 0.6110 | 0.7817 |
| No log | 2.3797 | 188 | 0.6187 | 0.7325 | 0.6187 | 0.7866 |
| No log | 2.4051 | 190 | 0.6181 | 0.7174 | 0.6181 | 0.7862 |
| No log | 2.4304 | 192 | 0.6168 | 0.7306 | 0.6168 | 0.7854 |
| No log | 2.4557 | 194 | 0.6742 | 0.7202 | 0.6742 | 0.8211 |
| No log | 2.4810 | 196 | 0.8614 | 0.6695 | 0.8614 | 0.9281 |
| No log | 2.5063 | 198 | 0.8452 | 0.6630 | 0.8452 | 0.9193 |
| No log | 2.5316 | 200 | 0.7037 | 0.7057 | 0.7037 | 0.8389 |
| No log | 2.5570 | 202 | 0.5667 | 0.7391 | 0.5667 | 0.7528 |
| No log | 2.5823 | 204 | 0.6743 | 0.6963 | 0.6743 | 0.8211 |
| No log | 2.6076 | 206 | 0.7003 | 0.6844 | 0.7003 | 0.8368 |
| No log | 2.6329 | 208 | 0.5908 | 0.7092 | 0.5908 | 0.7687 |
| No log | 2.6582 | 210 | 0.6098 | 0.7630 | 0.6098 | 0.7809 |
| No log | 2.6835 | 212 | 0.7875 | 0.7033 | 0.7875 | 0.8874 |
| No log | 2.7089 | 214 | 0.7970 | 0.7103 | 0.7970 | 0.8928 |
| No log | 2.7342 | 216 | 0.6803 | 0.7698 | 0.6803 | 0.8248 |
| No log | 2.7595 | 218 | 0.5804 | 0.7475 | 0.5804 | 0.7618 |
| No log | 2.7848 | 220 | 0.5665 | 0.7688 | 0.5665 | 0.7527 |
| No log | 2.8101 | 222 | 0.6106 | 0.7583 | 0.6106 | 0.7814 |
| No log | 2.8354 | 224 | 0.7010 | 0.7557 | 0.7010 | 0.8373 |
| No log | 2.8608 | 226 | 0.7048 | 0.7604 | 0.7048 | 0.8395 |
| No log | 2.8861 | 228 | 0.6155 | 0.7285 | 0.6155 | 0.7845 |
| No log | 2.9114 | 230 | 0.6024 | 0.7737 | 0.6024 | 0.7761 |
| No log | 2.9367 | 232 | 0.5903 | 0.7369 | 0.5903 | 0.7683 |
| No log | 2.9620 | 234 | 0.6284 | 0.7270 | 0.6284 | 0.7927 |
| No log | 2.9873 | 236 | 0.6903 | 0.7369 | 0.6903 | 0.8309 |
| No log | 3.0127 | 238 | 0.6826 | 0.7369 | 0.6826 | 0.8262 |
| No log | 3.0380 | 240 | 0.6365 | 0.7264 | 0.6365 | 0.7978 |
| No log | 3.0633 | 242 | 0.6146 | 0.7155 | 0.6146 | 0.7840 |
| No log | 3.0886 | 244 | 0.6094 | 0.7324 | 0.6094 | 0.7807 |
| No log | 3.1139 | 246 | 0.6150 | 0.7413 | 0.6150 | 0.7842 |
| No log | 3.1392 | 248 | 0.6349 | 0.7404 | 0.6349 | 0.7968 |
| No log | 3.1646 | 250 | 0.6068 | 0.7437 | 0.6068 | 0.7790 |
| No log | 3.1899 | 252 | 0.6049 | 0.7183 | 0.6049 | 0.7778 |
| No log | 3.2152 | 254 | 0.6025 | 0.7220 | 0.6025 | 0.7762 |
| No log | 3.2405 | 256 | 0.5937 | 0.7367 | 0.5937 | 0.7705 |
| No log | 3.2658 | 258 | 0.5993 | 0.7372 | 0.5993 | 0.7742 |
| No log | 3.2911 | 260 | 0.5995 | 0.7387 | 0.5995 | 0.7743 |
| No log | 3.3165 | 262 | 0.6031 | 0.7372 | 0.6031 | 0.7766 |
| No log | 3.3418 | 264 | 0.6090 | 0.7392 | 0.6090 | 0.7804 |
| No log | 3.3671 | 266 | 0.6283 | 0.7405 | 0.6283 | 0.7926 |
| No log | 3.3924 | 268 | 0.6173 | 0.7396 | 0.6173 | 0.7857 |
| No log | 3.4177 | 270 | 0.6123 | 0.7427 | 0.6123 | 0.7825 |
| No log | 3.4430 | 272 | 0.6135 | 0.7373 | 0.6135 | 0.7833 |
| No log | 3.4684 | 274 | 0.5925 | 0.7505 | 0.5925 | 0.7697 |
| No log | 3.4937 | 276 | 0.5647 | 0.7574 | 0.5647 | 0.7515 |
| No log | 3.5190 | 278 | 0.5871 | 0.7473 | 0.5871 | 0.7662 |
| No log | 3.5443 | 280 | 0.7426 | 0.6961 | 0.7426 | 0.8617 |
| No log | 3.5696 | 282 | 0.8941 | 0.6562 | 0.8941 | 0.9456 |
| No log | 3.5949 | 284 | 0.9038 | 0.6764 | 0.9038 | 0.9507 |
| No log | 3.6203 | 286 | 0.8249 | 0.6859 | 0.8249 | 0.9082 |
| No log | 3.6456 | 288 | 0.6966 | 0.7160 | 0.6966 | 0.8346 |
| No log | 3.6709 | 290 | 0.6370 | 0.7365 | 0.6370 | 0.7981 |
| No log | 3.6962 | 292 | 0.6251 | 0.7291 | 0.6251 | 0.7906 |
| No log | 3.7215 | 294 | 0.6488 | 0.7148 | 0.6488 | 0.8055 |
| No log | 3.7468 | 296 | 0.7116 | 0.6952 | 0.7116 | 0.8436 |
| No log | 3.7722 | 298 | 0.8192 | 0.6767 | 0.8192 | 0.9051 |
| No log | 3.7975 | 300 | 0.8336 | 0.6645 | 0.8336 | 0.9130 |
| No log | 3.8228 | 302 | 0.7128 | 0.6747 | 0.7128 | 0.8443 |
| No log | 3.8481 | 304 | 0.5910 | 0.7342 | 0.5910 | 0.7687 |
| No log | 3.8734 | 306 | 0.5710 | 0.7447 | 0.5710 | 0.7556 |
| No log | 3.8987 | 308 | 0.5911 | 0.7244 | 0.5911 | 0.7689 |
| No log | 3.9241 | 310 | 0.5791 | 0.7421 | 0.5791 | 0.7610 |
| No log | 3.9494 | 312 | 0.5915 | 0.7692 | 0.5915 | 0.7691 |
| No log | 3.9747 | 314 | 0.7386 | 0.7247 | 0.7386 | 0.8594 |
| No log | 4.0 | 316 | 0.8097 | 0.7183 | 0.8097 | 0.8999 |
| No log | 4.0253 | 318 | 0.7223 | 0.7306 | 0.7223 | 0.8499 |
| No log | 4.0506 | 320 | 0.6395 | 0.7149 | 0.6395 | 0.7997 |
| No log | 4.0759 | 322 | 0.5875 | 0.7485 | 0.5875 | 0.7665 |
| No log | 4.1013 | 324 | 0.5750 | 0.7487 | 0.5750 | 0.7583 |
| No log | 4.1266 | 326 | 0.5789 | 0.7451 | 0.5789 | 0.7608 |
| No log | 4.1519 | 328 | 0.6001 | 0.7296 | 0.6001 | 0.7747 |
| No log | 4.1772 | 330 | 0.6490 | 0.7210 | 0.6490 | 0.8056 |
| No log | 4.2025 | 332 | 0.6921 | 0.6921 | 0.6921 | 0.8319 |
| No log | 4.2278 | 334 | 0.6627 | 0.7164 | 0.6627 | 0.8141 |
| No log | 4.2532 | 336 | 0.6074 | 0.7103 | 0.6074 | 0.7793 |
| No log | 4.2785 | 338 | 0.5853 | 0.7332 | 0.5853 | 0.7650 |
| No log | 4.3038 | 340 | 0.5871 | 0.7449 | 0.5871 | 0.7662 |
| No log | 4.3291 | 342 | 0.5936 | 0.7473 | 0.5936 | 0.7705 |
| No log | 4.3544 | 344 | 0.6003 | 0.7438 | 0.6003 | 0.7748 |
| No log | 4.3797 | 346 | 0.6112 | 0.7545 | 0.6112 | 0.7818 |
| No log | 4.4051 | 348 | 0.5931 | 0.7590 | 0.5931 | 0.7701 |
| No log | 4.4304 | 350 | 0.5651 | 0.7566 | 0.5651 | 0.7517 |
| No log | 4.4557 | 352 | 0.5767 | 0.7251 | 0.5767 | 0.7594 |
| No log | 4.4810 | 354 | 0.5766 | 0.7314 | 0.5766 | 0.7594 |
| No log | 4.5063 | 356 | 0.5592 | 0.7470 | 0.5592 | 0.7478 |
| No log | 4.5316 | 358 | 0.5634 | 0.7777 | 0.5634 | 0.7506 |
| No log | 4.5570 | 360 | 0.6136 | 0.7398 | 0.6136 | 0.7833 |
| No log | 4.5823 | 362 | 0.6551 | 0.7046 | 0.6551 | 0.8094 |
| No log | 4.6076 | 364 | 0.6188 | 0.7389 | 0.6188 | 0.7866 |
| No log | 4.6329 | 366 | 0.5901 | 0.7394 | 0.5901 | 0.7682 |
| No log | 4.6582 | 368 | 0.5883 | 0.7422 | 0.5883 | 0.7670 |
| No log | 4.6835 | 370 | 0.6535 | 0.7057 | 0.6535 | 0.8084 |
| No log | 4.7089 | 372 | 0.7035 | 0.6859 | 0.7035 | 0.8388 |
| No log | 4.7342 | 374 | 0.7489 | 0.6774 | 0.7489 | 0.8654 |
| No log | 4.7595 | 376 | 0.6974 | 0.6844 | 0.6974 | 0.8351 |
| No log | 4.7848 | 378 | 0.6215 | 0.7207 | 0.6215 | 0.7883 |
| No log | 4.8101 | 380 | 0.6047 | 0.7402 | 0.6047 | 0.7776 |
| No log | 4.8354 | 382 | 0.6038 | 0.7490 | 0.6038 | 0.7770 |
| No log | 4.8608 | 384 | 0.6104 | 0.7227 | 0.6104 | 0.7813 |
| No log | 4.8861 | 386 | 0.6213 | 0.7056 | 0.6213 | 0.7882 |
| No log | 4.9114 | 388 | 0.6469 | 0.7083 | 0.6469 | 0.8043 |
| No log | 4.9367 | 390 | 0.7456 | 0.6491 | 0.7456 | 0.8635 |
| No log | 4.9620 | 392 | 0.7887 | 0.6550 | 0.7887 | 0.8881 |
| No log | 4.9873 | 394 | 0.7437 | 0.6679 | 0.7437 | 0.8624 |
| No log | 5.0127 | 396 | 0.6487 | 0.6959 | 0.6487 | 0.8054 |
| No log | 5.0380 | 398 | 0.5993 | 0.7486 | 0.5993 | 0.7742 |
| No log | 5.0633 | 400 | 0.5666 | 0.7607 | 0.5666 | 0.7527 |
| No log | 5.0886 | 402 | 0.5479 | 0.7693 | 0.5479 | 0.7402 |
| No log | 5.1139 | 404 | 0.5467 | 0.7513 | 0.5467 | 0.7394 |
| No log | 5.1392 | 406 | 0.5391 | 0.7768 | 0.5391 | 0.7343 |
| No log | 5.1646 | 408 | 0.5639 | 0.7179 | 0.5639 | 0.7509 |
| No log | 5.1899 | 410 | 0.5757 | 0.7125 | 0.5757 | 0.7587 |
| No log | 5.2152 | 412 | 0.5829 | 0.7269 | 0.5829 | 0.7634 |
| No log | 5.2405 | 414 | 0.5643 | 0.7179 | 0.5643 | 0.7512 |
| No log | 5.2658 | 416 | 0.5656 | 0.7323 | 0.5656 | 0.7521 |
| No log | 5.2911 | 418 | 0.5885 | 0.7380 | 0.5885 | 0.7671 |
| No log | 5.3165 | 420 | 0.5979 | 0.7440 | 0.5979 | 0.7733 |
| No log | 5.3418 | 422 | 0.6718 | 0.6852 | 0.6718 | 0.8196 |
| No log | 5.3671 | 424 | 0.7107 | 0.6992 | 0.7107 | 0.8431 |
| No log | 5.3924 | 426 | 0.6783 | 0.7238 | 0.6783 | 0.8236 |
| No log | 5.4177 | 428 | 0.6325 | 0.7308 | 0.6325 | 0.7953 |
| No log | 5.4430 | 430 | 0.5896 | 0.7467 | 0.5896 | 0.7679 |
| No log | 5.4684 | 432 | 0.5607 | 0.7717 | 0.5607 | 0.7488 |
| No log | 5.4937 | 434 | 0.5494 | 0.7854 | 0.5494 | 0.7412 |
| No log | 5.5190 | 436 | 0.5538 | 0.7733 | 0.5538 | 0.7441 |
| No log | 5.5443 | 438 | 0.5579 | 0.7714 | 0.5579 | 0.7469 |
| No log | 5.5696 | 440 | 0.5844 | 0.7486 | 0.5844 | 0.7644 |
| No log | 5.5949 | 442 | 0.6676 | 0.7090 | 0.6676 | 0.8170 |
| No log | 5.6203 | 444 | 0.7420 | 0.6876 | 0.7420 | 0.8614 |
| No log | 5.6456 | 446 | 0.7145 | 0.6795 | 0.7145 | 0.8453 |
| No log | 5.6709 | 448 | 0.6226 | 0.7362 | 0.6226 | 0.7890 |
| No log | 5.6962 | 450 | 0.5545 | 0.7832 | 0.5545 | 0.7446 |
| No log | 5.7215 | 452 | 0.5454 | 0.7730 | 0.5454 | 0.7385 |
| No log | 5.7468 | 454 | 0.5495 | 0.7738 | 0.5495 | 0.7413 |
| No log | 5.7722 | 456 | 0.5738 | 0.7462 | 0.5738 | 0.7575 |
| No log | 5.7975 | 458 | 0.6283 | 0.7144 | 0.6283 | 0.7927 |
| No log | 5.8228 | 460 | 0.6516 | 0.7134 | 0.6516 | 0.8072 |
| No log | 5.8481 | 462 | 0.6310 | 0.7240 | 0.6310 | 0.7943 |
| No log | 5.8734 | 464 | 0.5799 | 0.7364 | 0.5799 | 0.7615 |
| No log | 5.8987 | 466 | 0.5439 | 0.7376 | 0.5439 | 0.7375 |
| No log | 5.9241 | 468 | 0.5272 | 0.7465 | 0.5272 | 0.7261 |
| No log | 5.9494 | 470 | 0.5273 | 0.7357 | 0.5273 | 0.7262 |
| No log | 5.9747 | 472 | 0.5313 | 0.7421 | 0.5313 | 0.7289 |
| No log | 6.0 | 474 | 0.5454 | 0.7331 | 0.5454 | 0.7385 |
| No log | 6.0253 | 476 | 0.6052 | 0.7119 | 0.6052 | 0.7779 |
| No log | 6.0506 | 478 | 0.7053 | 0.6883 | 0.7053 | 0.8398 |
| No log | 6.0759 | 480 | 0.7513 | 0.6831 | 0.7513 | 0.8668 |
| No log | 6.1013 | 482 | 0.7323 | 0.6914 | 0.7323 | 0.8557 |
| No log | 6.1266 | 484 | 0.6984 | 0.7081 | 0.6984 | 0.8357 |
| No log | 6.1519 | 486 | 0.6593 | 0.7022 | 0.6593 | 0.8120 |
| No log | 6.1772 | 488 | 0.6473 | 0.6968 | 0.6473 | 0.8045 |
| No log | 6.2025 | 490 | 0.6367 | 0.7213 | 0.6367 | 0.7980 |
| No log | 6.2278 | 492 | 0.6592 | 0.6755 | 0.6592 | 0.8119 |
| No log | 6.2532 | 494 | 0.6944 | 0.6666 | 0.6944 | 0.8333 |
| No log | 6.2785 | 496 | 0.6938 | 0.6676 | 0.6938 | 0.8329 |
| No log | 6.3038 | 498 | 0.6725 | 0.6721 | 0.6725 | 0.8200 |
| 0.4573 | 6.3291 | 500 | 0.6377 | 0.7135 | 0.6377 | 0.7986 |
| 0.4573 | 6.3544 | 502 | 0.6052 | 0.7142 | 0.6052 | 0.7780 |
| 0.4573 | 6.3797 | 504 | 0.5961 | 0.7382 | 0.5961 | 0.7721 |
| 0.4573 | 6.4051 | 506 | 0.5985 | 0.7266 | 0.5985 | 0.7736 |
| 0.4573 | 6.4304 | 508 | 0.6190 | 0.7166 | 0.6190 | 0.7868 |
| 0.4573 | 6.4557 | 510 | 0.6709 | 0.6936 | 0.6709 | 0.8191 |
| 0.4573 | 6.4810 | 512 | 0.7422 | 0.6556 | 0.7422 | 0.8615 |
| 0.4573 | 6.5063 | 514 | 0.7478 | 0.6649 | 0.7478 | 0.8647 |
| 0.4573 | 6.5316 | 516 | 0.7015 | 0.6737 | 0.7015 | 0.8376 |
| 0.4573 | 6.5570 | 518 | 0.6185 | 0.7093 | 0.6185 | 0.7864 |
| 0.4573 | 6.5823 | 520 | 0.5791 | 0.7306 | 0.5791 | 0.7610 |
| 0.4573 | 6.6076 | 522 | 0.5724 | 0.7306 | 0.5724 | 0.7565 |
| 0.4573 | 6.6329 | 524 | 0.5782 | 0.7340 | 0.5782 | 0.7604 |
| 0.4573 | 6.6582 | 526 | 0.5767 | 0.7340 | 0.5767 | 0.7594 |
| 0.4573 | 6.6835 | 528 | 0.5858 | 0.7340 | 0.5858 | 0.7654 |
| 0.4573 | 6.7089 | 530 | 0.5948 | 0.7245 | 0.5948 | 0.7713 |
| 0.4573 | 6.7342 | 532 | 0.6002 | 0.7245 | 0.6002 | 0.7747 |
| 0.4573 | 6.7595 | 534 | 0.6121 | 0.7245 | 0.6121 | 0.7824 |
| 0.4573 | 6.7848 | 536 | 0.6174 | 0.7245 | 0.6174 | 0.7858 |
| 0.4573 | 6.8101 | 538 | 0.6013 | 0.7364 | 0.6013 | 0.7755 |
| 0.4573 | 6.8354 | 540 | 0.5715 | 0.7445 | 0.5715 | 0.7559 |
| 0.4573 | 6.8608 | 542 | 0.5583 | 0.7551 | 0.5583 | 0.7472 |
| 0.4573 | 6.8861 | 544 | 0.5615 | 0.7591 | 0.5615 | 0.7493 |
| 0.4573 | 6.9114 | 546 | 0.5760 | 0.7410 | 0.5760 | 0.7590 |
| 0.4573 | 6.9367 | 548 | 0.6236 | 0.7364 | 0.6236 | 0.7897 |
| 0.4573 | 6.9620 | 550 | 0.6979 | 0.6586 | 0.6979 | 0.8354 |
| 0.4573 | 6.9873 | 552 | 0.7575 | 0.6551 | 0.7575 | 0.8703 |
| 0.4573 | 7.0127 | 554 | 0.8077 | 0.6582 | 0.8077 | 0.8987 |
| 0.4573 | 7.0380 | 556 | 0.8073 | 0.6582 | 0.8073 | 0.8985 |
| 0.4573 | 7.0633 | 558 | 0.7632 | 0.6551 | 0.7632 | 0.8736 |
| 0.4573 | 7.0886 | 560 | 0.7080 | 0.6654 | 0.7080 | 0.8414 |
| 0.4573 | 7.1139 | 562 | 0.6837 | 0.6876 | 0.6837 | 0.8269 |
| 0.4573 | 7.1392 | 564 | 0.6346 | 0.7269 | 0.6346 | 0.7966 |
| 0.4573 | 7.1646 | 566 | 0.6069 | 0.7227 | 0.6069 | 0.7790 |
| 0.4573 | 7.1899 | 568 | 0.6046 | 0.7296 | 0.6046 | 0.7775 |
| 0.4573 | 7.2152 | 570 | 0.6033 | 0.7370 | 0.6033 | 0.7767 |
| 0.4573 | 7.2405 | 572 | 0.6126 | 0.7364 | 0.6126 | 0.7827 |
| 0.4573 | 7.2658 | 574 | 0.6271 | 0.7267 | 0.6271 | 0.7919 |
| 0.4573 | 7.2911 | 576 | 0.6377 | 0.7003 | 0.6377 | 0.7986 |
| 0.4573 | 7.3165 | 578 | 0.6667 | 0.7036 | 0.6667 | 0.8165 |
| 0.4573 | 7.3418 | 580 | 0.6827 | 0.6849 | 0.6827 | 0.8263 |
| 0.4573 | 7.3671 | 582 | 0.6757 | 0.6956 | 0.6757 | 0.8220 |
| 0.4573 | 7.3924 | 584 | 0.6682 | 0.6963 | 0.6682 | 0.8175 |
| 0.4573 | 7.4177 | 586 | 0.6685 | 0.6990 | 0.6685 | 0.8176 |
| 0.4573 | 7.4430 | 588 | 0.6521 | 0.7065 | 0.6521 | 0.8075 |
| 0.4573 | 7.4684 | 590 | 0.6330 | 0.7154 | 0.6330 | 0.7956 |
| 0.4573 | 7.4937 | 592 | 0.6256 | 0.7154 | 0.6256 | 0.7910 |
| 0.4573 | 7.5190 | 594 | 0.6126 | 0.7325 | 0.6126 | 0.7827 |
| 0.4573 | 7.5443 | 596 | 0.5941 | 0.7314 | 0.5941 | 0.7708 |
| 0.4573 | 7.5696 | 598 | 0.5846 | 0.7321 | 0.5846 | 0.7646 |
| 0.4573 | 7.5949 | 600 | 0.5819 | 0.7257 | 0.5819 | 0.7628 |
| 0.4573 | 7.6203 | 602 | 0.5869 | 0.7267 | 0.5869 | 0.7661 |
| 0.4573 | 7.6456 | 604 | 0.6027 | 0.7336 | 0.6027 | 0.7763 |
| 0.4573 | 7.6709 | 606 | 0.6198 | 0.7125 | 0.6198 | 0.7873 |
| 0.4573 | 7.6962 | 608 | 0.6399 | 0.6871 | 0.6399 | 0.8000 |
| 0.4573 | 7.7215 | 610 | 0.6665 | 0.6953 | 0.6665 | 0.8164 |
| 0.4573 | 7.7468 | 612 | 0.6694 | 0.6998 | 0.6694 | 0.8181 |
| 0.4573 | 7.7722 | 614 | 0.6637 | 0.6960 | 0.6637 | 0.8147 |
| 0.4573 | 7.7975 | 616 | 0.6534 | 0.6897 | 0.6534 | 0.8083 |
| 0.4573 | 7.8228 | 618 | 0.6415 | 0.6833 | 0.6415 | 0.8009 |
| 0.4573 | 7.8481 | 620 | 0.6507 | 0.6806 | 0.6507 | 0.8067 |
| 0.4573 | 7.8734 | 622 | 0.6753 | 0.6953 | 0.6753 | 0.8218 |
| 0.4573 | 7.8987 | 624 | 0.7133 | 0.6740 | 0.7133 | 0.8446 |
| 0.4573 | 7.9241 | 626 | 0.7482 | 0.6523 | 0.7482 | 0.8650 |
| 0.4573 | 7.9494 | 628 | 0.7381 | 0.6722 | 0.7381 | 0.8591 |
| 0.4573 | 7.9747 | 630 | 0.7062 | 0.6646 | 0.7062 | 0.8403 |
| 0.4573 | 8.0 | 632 | 0.6633 | 0.6688 | 0.6633 | 0.8144 |
| 0.4573 | 8.0253 | 634 | 0.6427 | 0.6916 | 0.6427 | 0.8017 |
| 0.4573 | 8.0506 | 636 | 0.6390 | 0.6950 | 0.6390 | 0.7994 |
| 0.4573 | 8.0759 | 638 | 0.6352 | 0.6950 | 0.6352 | 0.7970 |
| 0.4573 | 8.1013 | 640 | 0.6358 | 0.6950 | 0.6358 | 0.7974 |
| 0.4573 | 8.1266 | 642 | 0.6597 | 0.6833 | 0.6597 | 0.8122 |
| 0.4573 | 8.1519 | 644 | 0.6880 | 0.6947 | 0.6880 | 0.8295 |
| 0.4573 | 8.1772 | 646 | 0.6951 | 0.6933 | 0.6951 | 0.8337 |
| 0.4573 | 8.2025 | 648 | 0.6832 | 0.6947 | 0.6832 | 0.8266 |
| 0.4573 | 8.2278 | 650 | 0.6824 | 0.7173 | 0.6824 | 0.8260 |
| 0.4573 | 8.2532 | 652 | 0.6870 | 0.7000 | 0.6870 | 0.8289 |
| 0.4573 | 8.2785 | 654 | 0.6882 | 0.7074 | 0.6882 | 0.8296 |
| 0.4573 | 8.3038 | 656 | 0.6812 | 0.7014 | 0.6812 | 0.8254 |
| 0.4573 | 8.3291 | 658 | 0.6764 | 0.7020 | 0.6764 | 0.8224 |
| 0.4573 | 8.3544 | 660 | 0.6894 | 0.7031 | 0.6894 | 0.8303 |
| 0.4573 | 8.3797 | 662 | 0.7033 | 0.6982 | 0.7033 | 0.8386 |
| 0.4573 | 8.4051 | 664 | 0.7203 | 0.6975 | 0.7203 | 0.8487 |
| 0.4573 | 8.4304 | 666 | 0.7159 | 0.6975 | 0.7159 | 0.8461 |
| 0.4573 | 8.4557 | 668 | 0.6916 | 0.6927 | 0.6916 | 0.8316 |
| 0.4573 | 8.4810 | 670 | 0.6532 | 0.6719 | 0.6532 | 0.8082 |
| 0.4573 | 8.5063 | 672 | 0.6254 | 0.6970 | 0.6254 | 0.7908 |
| 0.4573 | 8.5316 | 674 | 0.6082 | 0.7251 | 0.6082 | 0.7798 |
| 0.4573 | 8.5570 | 676 | 0.6031 | 0.7251 | 0.6031 | 0.7766 |
| 0.4573 | 8.5823 | 678 | 0.5965 | 0.7325 | 0.5965 | 0.7723 |
| 0.4573 | 8.6076 | 680 | 0.5976 | 0.7308 | 0.5976 | 0.7731 |
| 0.4573 | 8.6329 | 682 | 0.6055 | 0.7122 | 0.6055 | 0.7782 |
| 0.4573 | 8.6582 | 684 | 0.6159 | 0.7162 | 0.6159 | 0.7848 |
| 0.4573 | 8.6835 | 686 | 0.6239 | 0.7215 | 0.6239 | 0.7899 |
| 0.4573 | 8.7089 | 688 | 0.6334 | 0.7126 | 0.6334 | 0.7958 |
| 0.4573 | 8.7342 | 690 | 0.6534 | 0.6904 | 0.6534 | 0.8083 |
| 0.4573 | 8.7595 | 692 | 0.6845 | 0.6915 | 0.6845 | 0.8273 |
| 0.4573 | 8.7848 | 694 | 0.7054 | 0.6939 | 0.7054 | 0.8399 |
| 0.4573 | 8.8101 | 696 | 0.7300 | 0.6902 | 0.7300 | 0.8544 |
| 0.4573 | 8.8354 | 698 | 0.7322 | 0.6902 | 0.7322 | 0.8557 |
| 0.4573 | 8.8608 | 700 | 0.7236 | 0.6902 | 0.7236 | 0.8506 |
| 0.4573 | 8.8861 | 702 | 0.7056 | 0.6982 | 0.7056 | 0.8400 |
| 0.4573 | 8.9114 | 704 | 0.6786 | 0.6947 | 0.6786 | 0.8238 |
| 0.4573 | 8.9367 | 706 | 0.6540 | 0.6739 | 0.6540 | 0.8087 |
| 0.4573 | 8.9620 | 708 | 0.6379 | 0.6739 | 0.6379 | 0.7987 |
| 0.4573 | 8.9873 | 710 | 0.6293 | 0.6810 | 0.6293 | 0.7933 |
| 0.4573 | 9.0127 | 712 | 0.6306 | 0.6810 | 0.6306 | 0.7941 |
| 0.4573 | 9.0380 | 714 | 0.6332 | 0.6646 | 0.6332 | 0.7958 |
| 0.4573 | 9.0633 | 716 | 0.6361 | 0.6739 | 0.6361 | 0.7975 |
| 0.4573 | 9.0886 | 718 | 0.6378 | 0.6739 | 0.6378 | 0.7986 |
| 0.4573 | 9.1139 | 720 | 0.6354 | 0.6646 | 0.6354 | 0.7971 |
| 0.4573 | 9.1392 | 722 | 0.6333 | 0.6646 | 0.6333 | 0.7958 |
| 0.4573 | 9.1646 | 724 | 0.6310 | 0.6737 | 0.6310 | 0.7944 |
| 0.4573 | 9.1899 | 726 | 0.6375 | 0.6664 | 0.6375 | 0.7985 |
| 0.4573 | 9.2152 | 728 | 0.6505 | 0.6741 | 0.6505 | 0.8066 |
| 0.4573 | 9.2405 | 730 | 0.6599 | 0.6806 | 0.6599 | 0.8124 |
| 0.4573 | 9.2658 | 732 | 0.6707 | 0.6871 | 0.6707 | 0.8190 |
| 0.4573 | 9.2911 | 734 | 0.6753 | 0.6935 | 0.6753 | 0.8218 |
| 0.4573 | 9.3165 | 736 | 0.6775 | 0.6833 | 0.6775 | 0.8231 |
| 0.4573 | 9.3418 | 738 | 0.6762 | 0.6833 | 0.6762 | 0.8223 |
| 0.4573 | 9.3671 | 740 | 0.6767 | 0.6833 | 0.6767 | 0.8226 |
| 0.4573 | 9.3924 | 742 | 0.6806 | 0.6833 | 0.6806 | 0.8250 |
| 0.4573 | 9.4177 | 744 | 0.6789 | 0.6833 | 0.6789 | 0.8240 |
| 0.4573 | 9.4430 | 746 | 0.6781 | 0.6833 | 0.6781 | 0.8235 |
| 0.4573 | 9.4684 | 748 | 0.6708 | 0.6806 | 0.6708 | 0.8190 |
| 0.4573 | 9.4937 | 750 | 0.6636 | 0.6806 | 0.6636 | 0.8146 |
| 0.4573 | 9.5190 | 752 | 0.6591 | 0.6806 | 0.6591 | 0.8118 |
| 0.4573 | 9.5443 | 754 | 0.6596 | 0.6806 | 0.6596 | 0.8122 |
| 0.4573 | 9.5696 | 756 | 0.6589 | 0.6806 | 0.6589 | 0.8117 |
| 0.4573 | 9.5949 | 758 | 0.6569 | 0.6806 | 0.6569 | 0.8105 |
| 0.4573 | 9.6203 | 760 | 0.6542 | 0.6806 | 0.6542 | 0.8088 |
| 0.4573 | 9.6456 | 762 | 0.6528 | 0.6806 | 0.6528 | 0.8079 |
| 0.4573 | 9.6709 | 764 | 0.6499 | 0.6806 | 0.6499 | 0.8062 |
| 0.4573 | 9.6962 | 766 | 0.6498 | 0.6806 | 0.6498 | 0.8061 |
| 0.4573 | 9.7215 | 768 | 0.6509 | 0.6806 | 0.6509 | 0.8068 |
| 0.4573 | 9.7468 | 770 | 0.6500 | 0.6806 | 0.6500 | 0.8062 |
| 0.4573 | 9.7722 | 772 | 0.6504 | 0.6806 | 0.6504 | 0.8065 |
| 0.4573 | 9.7975 | 774 | 0.6531 | 0.6806 | 0.6531 | 0.8082 |
| 0.4573 | 9.8228 | 776 | 0.6540 | 0.6806 | 0.6540 | 0.8087 |
| 0.4573 | 9.8481 | 778 | 0.6538 | 0.6806 | 0.6538 | 0.8086 |
| 0.4573 | 9.8734 | 780 | 0.6546 | 0.6806 | 0.6546 | 0.8091 |
| 0.4573 | 9.8987 | 782 | 0.6540 | 0.6806 | 0.6540 | 0.8087 |
| 0.4573 | 9.9241 | 784 | 0.6529 | 0.6806 | 0.6529 | 0.8080 |
| 0.4573 | 9.9494 | 786 | 0.6525 | 0.6806 | 0.6525 | 0.8078 |
| 0.4573 | 9.9747 | 788 | 0.6527 | 0.6806 | 0.6527 | 0.8079 |
| 0.4573 | 10.0 | 790 | 0.6528 | 0.6806 | 0.6528 | 0.8080 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
MaziyarPanahi/Mistral-7B-KNUT-v0.2-Mistral-7B-Instruct-v0.1 | MaziyarPanahi | "2024-01-17T09:54:56Z" | 20 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"Herry443/Mistral-7B-KNUT-v0.2",
"pytorch",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational",
"license:apache-2.0"
] | text-generation | "2024-01-17T09:50:10Z" | ---
license: apache-2.0
tags:
- Safetensors
- mistral
- text-generation-inference
- merge
- mistral
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- Herry443/Mistral-7B-KNUT-v0.2
- transformers
- pytorch
- mistral
- text-generation
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
---
# Mistral-7B-KNUT-v0.2-Mistral-7B-Instruct-v0.1
Mistral-7B-KNUT-v0.2-Mistral-7B-Instruct-v0.1 is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [Herry443/Mistral-7B-KNUT-v0.2](https://huggingface.co/Herry443/Mistral-7B-KNUT-v0.2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.1
layer_range: [0, 32]
- model: Herry443/Mistral-7B-KNUT-v0.2
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Mistral-7B-KNUT-v0.2-Mistral-7B-Instruct-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ALazcanoG/nominal-groups-recognition-bert-base-spanish-wwm-cased | ALazcanoG | "2023-07-10T21:09:53Z" | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"es",
"dataset:ALazcanoG/spanish_nominal_groups_conll2003",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-07-07T23:46:21Z" | ---
language:
- es
tags:
- generated_from_trainer
datasets:
- ALazcanoG/spanish_nominal_groups_conll2003
model-index:
- name: nominal-groups-recognition-bert-base-spanish-wwm-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nominal-groups-recognition-bert-base-spanish-wwm-cased
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the ALazcanoG/spanish_nominal_groups_conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3362
- Body Part Precision: 0.6830
- Body Part Recall: 0.7409
- Body Part F1: 0.7108
- Body Part Number: 413
- Disease Precision: 0.7439
- Disease Recall: 0.7446
- Disease F1: 0.7442
- Disease Number: 975
- Family Member Precision: 0.7941
- Family Member Recall: 0.9
- Family Member F1: 0.8438
- Family Member Number: 30
- Medication Precision: 0.8734
- Medication Recall: 0.7419
- Medication F1: 0.8023
- Medication Number: 93
- Procedure Precision: 0.6190
- Procedure Recall: 0.6270
- Procedure F1: 0.6230
- Procedure Number: 311
- Overall Precision: 0.7144
- Overall Recall: 0.7261
- Overall F1: 0.7202
- Overall Accuracy: 0.9175
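The card does not include a usage section; a minimal inference sketch with the 🤗 `pipeline` API is shown below (the example sentence is hypothetical, and the predicted span types follow the entity groups listed above):
```python
from transformers import pipeline
# Token-classification pipeline for Spanish nominal-group recognition
ner = pipeline(
"token-classification",
model="ALazcanoG/nominal-groups-recognition-bert-base-spanish-wwm-cased",
aggregation_strategy="simple", # merge sub-word tokens into entity spans
)
# Hypothetical clinical-style sentence
print(ner("El paciente presenta dolor en la rodilla derecha y toma ibuprofeno."))
```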
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Body Part Precision | Body Part Recall | Body Part F1 | Body Part Number | Disease Precision | Disease Recall | Disease F1 | Disease Number | Family Member Precision | Family Member Recall | Family Member F1 | Family Member Number | Medication Precision | Medication Recall | Medication F1 | Medication Number | Procedure Precision | Procedure Recall | Procedure F1 | Procedure Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4335 | 1.0 | 1004 | 0.3011 | 0.5944 | 0.7167 | 0.6498 | 413 | 0.7014 | 0.7036 | 0.7025 | 975 | 0.8 | 0.8 | 0.8000 | 30 | 0.7875 | 0.6774 | 0.7283 | 93 | 0.6007 | 0.5177 | 0.5561 | 311 | 0.6634 | 0.6751 | 0.6692 | 0.9063 |
| 0.2379 | 2.0 | 2008 | 0.2920 | 0.6995 | 0.7215 | 0.7104 | 413 | 0.7655 | 0.7097 | 0.7366 | 975 | 0.75 | 0.8 | 0.7742 | 30 | 0.7667 | 0.7419 | 0.7541 | 93 | 0.6094 | 0.6270 | 0.6181 | 311 | 0.7212 | 0.7014 | 0.7112 | 0.9140 |
| 0.1629 | 3.0 | 3012 | 0.3022 | 0.6674 | 0.7530 | 0.7076 | 413 | 0.7286 | 0.7241 | 0.7263 | 975 | 0.8571 | 0.8 | 0.8276 | 30 | 0.8519 | 0.7419 | 0.7931 | 93 | 0.5994 | 0.6495 | 0.6235 | 311 | 0.6975 | 0.7201 | 0.7086 | 0.9170 |
| 0.1143 | 4.0 | 4016 | 0.3362 | 0.6830 | 0.7409 | 0.7108 | 413 | 0.7439 | 0.7446 | 0.7442 | 975 | 0.7941 | 0.9 | 0.8438 | 30 | 0.8734 | 0.7419 | 0.8023 | 93 | 0.6190 | 0.6270 | 0.6230 | 311 | 0.7144 | 0.7261 | 0.7202 | 0.9175 |
| 0.0861 | 5.0 | 5020 | 0.3643 | 0.6806 | 0.7482 | 0.7128 | 413 | 0.7428 | 0.7436 | 0.7432 | 975 | 0.8182 | 0.9 | 0.8571 | 30 | 0.8831 | 0.7312 | 0.8000 | 93 | 0.5928 | 0.6367 | 0.6140 | 311 | 0.7081 | 0.7283 | 0.7181 | 0.9163 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dragonkue/snowflake-arctic-embed-l-v2.0-ko | dragonkue | "2025-03-19T03:03:43Z" | 1,779 | 23 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"loss:CachedGISTEmbedLoss",
"ko",
"en",
"arxiv:1908.10084",
"arxiv:2412.04506",
"arxiv:2407.18887",
"arxiv:2410.02525",
"base_model:Snowflake/snowflake-arctic-embed-l-v2.0",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l-v2.0",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-07T15:34:52Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- loss:CachedGISTEmbedLoss
base_model: Snowflake/snowflake-arctic-embed-l-v2.0
license: apache-2.0
language:
- ko
- en
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/642b0c2fecec03b4464a1d9b/IxcqY5qbGNuGpqDciIcOI.webp" width="600">
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l-v2.0
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) on the clustered datasets. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search.
The **Snowflake/snowflake-arctic-embed-l-v2.0** model has been further trained with Korean data to enhance its performance in **Korean retrieval tasks**. It is a powerful model that achieves **state-of-the-art (SOTA) performance across multiple retrieval benchmarks**.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) <!-- at revision 7f311bb640ad3babc0a4e3a8873240dcba44c9d2 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- AI Hub Dataset
- 행정 문서 대상 기계 독해
- 기계 독해
- 뉴스 기사 기계독해
- 도서 자료 기계독해
- 숫자 연산 기계독해
- 금융 법률 문서 기계독해
- **Language:** Korean, English
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
First, install the Sentence Transformers and xformers libraries:
```bash
pip install -U sentence-transformers
pip install xformers
```
Then you can load this model and run inference.
### Using Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
# Load the model
# Please use bf16 when inferring with half precision
model_name = 'dragonkue/snowflake-arctic-embed-l-v2.0-ko'
model = SentenceTransformer(model_name)
# Define the queries and documents
queries = ['대한민국의 수도는 어디인가?', '한글을 만든 사람은 누구인가?']
documents = ['대한민국의 수도는 서울이다.', '한글은 세종대왕이 창제하였다.']
# Compute embeddings: use `prompt_name="query"` to encode queries!
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
# Compute cosine similarity scores
scores = model.similarity(query_embeddings, document_embeddings)
# Output the results
for query, query_scores in zip(queries, scores):
doc_score_pairs = list(zip(documents, query_scores))
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
print("Query:", query)
for document, score in doc_score_pairs:
print(score, document)
```
### Using Huggingface Transformers
You can also use the transformers package to run this model, as shown below. For optimal retrieval quality, use the CLS token embedding for each text, and apply the query prefix below to queries only.
```python
import torch
from transformers import AutoModel, AutoTokenizer
# Load the model
# Please use bf16 when inferring with half precision
model_name = 'dragonkue/snowflake-arctic-embed-l-v2.0-ko'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, add_pooling_layer=False)
model.eval()
# Define the queries and documents
query_prefix = 'query: '
queries = ['대한민국의 수도는 어디인가?', '한글을 만든 사람은 누구인가?']
queries_with_prefix = ["{}{}".format(query_prefix, i) for i in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=8192)
documents = ['대한민국의 수도는 서울이다.', '한글은 세종대왕이 창제하였다.']
document_tokens = tokenizer(documents, padding=True, truncation=True, return_tensors='pt', max_length=8192)
# Compute token embeddings
with torch.no_grad():
query_embeddings = model(**query_tokens)[0][:, 0]
document_embeddings = model(**document_tokens)[0][:, 0]
# Normalize embeddings
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
document_embeddings = torch.nn.functional.normalize(document_embeddings, p=2, dim=1)
scores = torch.mm(query_embeddings, document_embeddings.transpose(0, 1))
for query, query_scores in zip(queries, scores):
doc_score_pairs = list(zip(documents, query_scores))
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
# Output passages & scores
print("Query:", query)
for document, score in doc_score_pairs:
print(score, document)
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
- This evaluation references the KURE GitHub repository (https://github.com/nlpai-lab/KURE).
- We conducted an evaluation on all **Korean Retrieval Benchmarks** registered in [MTEB](https://github.com/embeddings-benchmark/mteb).
### Korean Retrieval Benchmark
- [Ko-StrategyQA](https://huggingface.co/datasets/taeminlee/Ko-StrategyQA): A Korean **ODQA multi-hop retrieval dataset**, translated from StrategyQA.
- [AutoRAGRetrieval](https://huggingface.co/datasets/yjoonjang/markers_bm): A **Korean document retrieval dataset** constructed by parsing PDFs from five domains: **finance, public, medical, legal, and commerce**.
- [MIRACLRetrieval](https://huggingface.co/datasets/miracl/miracl): A **Korean document retrieval dataset** based on Wikipedia.
- [PublicHealthQA](https://huggingface.co/datasets/xhluca/publichealth-qa): A **retrieval dataset** focused on **medical and public health domains** in Korean.
- [BelebeleRetrieval](https://huggingface.co/datasets/facebook/belebele): A **Korean document retrieval dataset** based on FLORES-200.
- [MrTidyRetrieval](https://huggingface.co/datasets/mteb/mrtidy): A **Wikipedia-based Korean document retrieval dataset**.
- [MultiLongDocRetrieval](https://huggingface.co/datasets/Shitao/MLDR): A **long-document retrieval dataset** covering various domains in Korean.
- [XPQARetrieval](https://huggingface.co/datasets/jinaai/xpqa): A **cross-domain Korean document retrieval dataset**.
### Metrics
* Standard metric : NDCG@10
### Information Retrieval
- Achieves state-of-the-art (SOTA) performance across various benchmarks.
- For each benchmark, the **highest score** is highlighted in bold, and the _second-highest score_ is italicized.
| Model | MrTidyRetrieval | MIRACLRetrieval | XPQARetrieval | BelebeleRetrieval | PublicHealthQA | AutoRAGRetrieval | Ko-StrategyQA | Average |
|:-------------------------------------------------------------------------------------------------|:------------------|:------------------|:----------------|:--------------------|:-----------------|:-------------------|:----------------|:-------------|
| dragonkue/snowflake-arctic-embed-l-v2.0-ko | 0.57121 | 0.66846 | **0.4436** | **0.95177** | 0.83374 | **0.90927** | _0.80498_ | **0.740433** |
| dragonkue/BGE-m3-ko | 0.60992 | 0.68331 | 0.38131 | _0.95027_ | 0.81545 | _0.87379_ | 0.7959 | _0.729993_ |
| nlpai-lab/KURE-v1 | 0.59092 | 0.68157 | 0.38158 | 0.95019 | 0.81925 | 0.87076 | 0.7999 | 0.727739 |
| BAAI/bge-m3 | **0.64708** | _0.70146_ | 0.36075 | 0.93164 | 0.80412 | 0.83008 | 0.79405 | 0.724169 |
| Snowflake/snowflake-arctic-embed-l-v2.0 | 0.59071 | 0.66077 | _0.43018_ | 0.9271 | 0.81679 | 0.83863 | 0.80455 | 0.724104 |
| intfloat/multilingual-e5-large | _0.64211_ | 0.66486 | 0.3571 | 0.94499 | 0.82534 | 0.81337 | 0.80348 | 0.721607 |
| nlpai-lab/KoE5 | 0.58411 | 0.62347 | 0.35086 | 0.94251 | 0.83507 | 0.84339 | 0.80008 | 0.711356 |
| BAAI/bge-multilingual-gemma2 | 0.47521 | **0.70315** | 0.37446 | 0.95001 | _0.87102_ | 0.76535 | 0.79072 | 0.704274 |
| jinaai/jina-embeddings-v3 | 0.55759 | 0.63716 | 0.41272 | 0.91203 | 0.83059 | 0.76104 | 0.79807 | 0.701314 |
| intfloat/multilingual-e5-large-instruct | 0.52877 | 0.59914 | 0.39712 | 0.936 | 0.84967 | 0.77996 | 0.79793 | 0.69837 |
| nomic-ai/nomic-embed-text-v2-moe | 0.53766 | 0.65913 | 0.36871 | 0.93636 | 0.78448 | 0.80682 | 0.76325 | 0.693773 |
| intfloat/multilingual-e5-base | 0.58082 | 0.6227 | 0.3607 | 0.92868 | 0.77203 | 0.79752 | 0.76355 | 0.689429 |
| intfloat/e5-mistral-7b-instruct | 0.52444 | 0.58709 | 0.39159 | 0.92403 | **0.88733** | 0.67849 | 0.79317 | 0.683734 |
| Alibaba-NLP/gte-Qwen2-7B-instruct | 0.46571 | 0.53375 | 0.37866 | 0.94808 | 0.85844 | 0.76682 | **0.8108** | 0.680323 |
| Alibaba-NLP/gte-multilingual-base | 0.56464 | 0.62697 | 0.30702 | 0.8796 | 0.74584 | 0.77108 | 0.75121 | 0.663766 |
| openai/text-embedding-3-large | 0.44728 | 0.56248 | 0.37423 | 0.89451 | 0.85617 | 0.76466 | 0.73634 | 0.662239 |
| upskyy/bge-m3-korean | 0.55011 | 0.59892 | 0.31695 | 0.8731 | 0.77559 | 0.72946 | 0.75277 | 0.6567 |
| Salesforce/SFR-Embedding-2_R | 0.40347 | 0.55798 | 0.37371 | 0.91747 | 0.8605 | 0.70782 | 0.77042 | 0.65591 |
| ibm-granite/granite-embedding-278m-multilingual | nan | 0.59216 | 0.23058 | 0.83231 | 0.77668 | 0.70226 | 0.71762 | 0.641935 |
| jhgan/ko-sroberta-multitask | 0.29475 | 0.36698 | 0.27961 | 0.81636 | 0.69212 | 0.58332 | 0.65097 | 0.526301 |
#### Capabilities Beyond Benchmarks
This model is designed to handle various retrieval scenarios that are not directly measured in benchmarks:
1. Supports phrase-based queries in addition to full-sentence queries.
Example: "What products does Samsung sell?" or "Samsung's products"
2. Trained to handle diverse query formats, regardless of phrasing variations.
Example: "Tell me about Samsung.", "I'm curious about Samsung.", "What is Samsung?"
3. Optimized for Markdown table search, allowing retrieval of answers embedded within tables when present in documents.
4. Efficient clustering without hard negatives:
- Samples within the same batch are clustered together.
- Uses efficient embedding formation for clustering by truncating embeddings from the Snowflake/snowflake-arctic-embed-l-v2.0 model to 256 dimensions.
- The clustering approach is inspired by the findings in the following papers:
- *Embedding And Clustering Your Data Can Improve Contrastive Pretraining*
- *CONTEXTUAL DOCUMENT EMBEDDINGS*
5. Strong performance across different domains:
- The *Arctic-Embed 2.0: Multilingual Retrieval Without Compromise* paper states:
*"While models like mE5, mGTE, and BGE-M3 excel on MIRACL, their performance on CLEF is notably weaker compared to ours and closed-source offerings, suggesting the potential of overfitting to MIRACL or its Wikipedia-based domain."*
- Based on my own experience, **Snowflake/snowflake-arctic-embed-l-v2.0** has consistently outperformed **BGE-M3** in different domains, further validating this observation.
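As a concrete illustration of point 4 above, the sketch below truncates the base model's embeddings to 256 dimensions and re-normalizes them before clustering. The sentences are hypothetical, and the exact clustering pipeline used during training is not reproduced here:
```python
import torch
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l-v2.0")
# Hypothetical corpus snippets to be clustered
texts = ["삼성전자는 어떤 제품을 판매하나요?", "대한민국의 수도는 서울이다."]
# Full 1024-dim embeddings, truncated to the first 256 dimensions and re-normalized
emb = torch.tensor(model.encode(texts))
emb_256 = torch.nn.functional.normalize(emb[:, :256], p=2, dim=1)
# emb_256 can now be fed to any clustering routine (e.g. k-means)
```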
## Bias, Risks and Limitations
To prevent excessive GPU usage costs, the model was trained with a maximum sequence length of **1300** tokens. As a result, its performance may degrade on benchmarks like MultiLongDocRetrieval (MLDR).
The previous model, **BGE-m3-ko**, was trained with a token length of **1024**, which imposed limitations on its MLDR benchmark performance.
For **snowflake-arctic-embed-l-v2.0-ko**, if your documents exceed **1300** tokens (roughly **2500** characters), consider one of the following models instead.
| Model | MultiLongDocRetrieval |
|:-------------------------------------------------------------------------------------------------|------------------------:|
| Alibaba-NLP/gte-multilingual-base | **0.48402** |
| nlpai-lab/KURE-v1 | _0.47528_ |
| dragonkue/snowflake-arctic-embed-l-v2.0-ko | 0.4459 |
| BAAI/bge-m3 | 0.43011 |
| Snowflake/snowflake-arctic-embed-l-v2.0 | 0.40401 |
| dragonkue/BGE-m3-ko | 0.40135 |
| openai/text-embedding-3-large | 0.31108 |
| BAAI/bge-multilingual-gemma2 | 0.31021 |
| nlpai-lab/KoE5 | 0.30869 |
| jinaai/jina-embeddings-v3 | 0.30512 |
| Alibaba-NLP/gte-Qwen2-7B-instruct | 0.30313 |
| intfloat/multilingual-e5-large-instruct | 0.27973 |
| nomic-ai/nomic-embed-text-v2-moe | 0.27135 |
| intfloat/e5-mistral-7b-instruct | 0.2583 |
| intfloat/multilingual-e5-large | 0.24596 |
| Salesforce/SFR-Embedding-2_R | 0.24346 |
| intfloat/multilingual-e5-base | 0.23766 |
| upskyy/bge-m3-korean | 0.21968 |
| ibm-granite/granite-embedding-278m-multilingual | 0.20781 |
| jhgan/ko-sroberta-multitask | 0.20416 |
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
* Loss: [<code>CachedGISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedgistembedloss) with these parameters:
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 20000
- `per_device_eval_batch_size`: 4096
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `lr_scheduler_type`: warmup_stable_decay
- `lr_scheduler_kwargs`: {'num_decay_steps': 160}
- `warmup_ratio`: 0.05
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10000
- `per_device_eval_batch_size`: 4096
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: warmup_stable_decay
- `lr_scheduler_kwargs`: {'num_decay_steps': 160}
- `warmup_ratio`: 0.05
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084"
}
```
#### KURE
```bibtex
@misc{KURE,
publisher = {Youngjoon Jang, Junyoung Son, Taemin Lee},
year = {2024},
url = {https://github.com/nlpai-lab/KURE}
}
```
#### Arctic-Embed 2.0
```bibtex
@article{yu2024arcticembed,
title = "Arctic-Embed 2.0: Multilingual Retrieval Without Compromise",
author = "Puxuan Yu, Luke Merrick, Gaurav Nuti, Daniel Campos",
journal = "arXiv preprint arXiv:2412.04506",
year = "2024",
url = "https://arxiv.org/abs/2412.04506"
}
```
#### Embedding And Clustering Your Data Can Improve Contrastive Pretraining
```bibtex
@article{merrick2024embedding,
title = "Embedding And Clustering Your Data Can Improve Contrastive Pretraining",
author = "Luke Merrick",
journal = "arXiv preprint arXiv:2407.18887",
year = "2024",
url = "https://arxiv.org/abs/2407.18887"
}
```
#### Contextual Document Embeddings
```bibtex
@article{morris2024contextual,
title = "Contextual Document Embeddings",
author = "John X. Morris, Alexander M. Rush",
journal = "arXiv preprint arXiv:2410.02525",
year = "2024",
url = "https://arxiv.org/abs/2410.02525"
}
```
## License
Arctic is licensed under **Apache-2.0**. The released models can be used for commercial purposes free of charge.
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
hashirconfinlaity/Hashir_distilBERT_v1.1 | hashirconfinlaity | "2024-11-28T00:29:12Z" | 182 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-28T00:12:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gsn-codes/q-FrozenLake-v1-4x4-noSlippery | gsn-codes | "2023-06-03T04:14:01Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-03T04:13:59Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="gsn-codes/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
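The snippet above relies on a `load_from_hub` helper from the course materials. A self-contained alternative that downloads the pickle directly and rolls out the greedy policy is sketched below; it assumes the saved dictionary contains `"env_id"` and `"qtable"` entries (the course convention, not guaranteed for this repository) and uses gymnasium:
```python
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download
# Download and load the pickled Q-table dictionary
path = hf_hub_download(repo_id="gsn-codes/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)
# Assumes env_id is "FrozenLake-v1"; is_slippery=False matches the training setup
env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```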
|
marcolin/distilbert-base-uncased-finetuned-emotion | marcolin | "2023-03-04T15:50:09Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-04T15:29:11Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9279822791628913
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2142
- Accuracy: 0.928
- F1: 0.9280
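A minimal inference sketch with the 🤗 `pipeline` API is shown below (the input sentence is made up, and the returned label names depend on the configuration saved with the model):
```python
from transformers import pipeline
# Emotion classifier fine-tuned on the emotion dataset
classifier = pipeline(
    "text-classification",
    model="marcolin/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you this weekend!"))
```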
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8453 | 1.0 | 250 | 0.3091 | 0.9085 | 0.9057 |
| 0.2485 | 2.0 | 500 | 0.2142 | 0.928 | 0.9280 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.10.3
|
MaxArb/RotEtogoCasino | MaxArb | "2023-09-17T17:45:28Z" | 0 | 0 | null | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | "2023-09-14T18:26:02Z" | ---
license: cc-by-nc-nd-4.0
---
|
AdelZakirov/sd-class-butterflies-42 | AdelZakirov | "2022-11-28T15:53:06Z" | 35 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | "2022-11-28T15:52:36Z" | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("AdelZakirov/sd-class-butterflies-42")
image = pipeline().images[0]
image
```
|
rajeevsawant/Llama_3_2_3B_SAP_Lora_procure_1 | rajeevsawant | "2025-03-17T15:40:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-17T15:37:22Z" | ---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** rajeevsawant
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
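The card does not state whether this repository contains merged weights or only LoRA adapters. Assuming merged weights in standard transformers format, a minimal loading sketch would be (the prompt is a made-up placeholder):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "rajeevsawant/Llama_3_2_3B_SAP_Lora_procure_1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
# Hypothetical prompt; adjust to the fine-tuning task
inputs = tokenizer("Summarize the SAP procurement steps for a purchase order:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```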
|
nbninh/5eb34a9d-c8f1-44d1-a53b-3867996f6a14 | nbninh | "2025-01-09T02:36:40Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:adapter:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-09T02:25:45Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5eb34a9d-c8f1-44d1-a53b-3867996f6a14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fb03151dde95ae5b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fb03151dde95ae5b_train_data.json
type:
field_instruction: context
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nbninh/5eb34a9d-c8f1-44d1-a53b-3867996f6a14
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/fb03151dde95ae5b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5eb34a9d-c8f1-44d1-a53b-3867996f6a14
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5eb34a9d-c8f1-44d1-a53b-3867996f6a14
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5eb34a9d-c8f1-44d1-a53b-3867996f6a14
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0430
## Model description
More information needed
## Intended uses & limitations
More information needed
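Pending a fuller description, a minimal sketch for applying the LoRA adapter in this repo on top of the stated base model with PEFT (assuming the standard PEFT adapter layout) is shown below; the prompt is illustrative.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-0.5B"
adapter_id = "nbninh/5eb34a9d-c8f1-44d1-a53b-3867996f6a14"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Generate a question for this context: ...", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```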
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4102 | 0.0886 | 200 | 2.0430 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
frizner/dfrizner-model | frizner | "2023-08-27T21:06:48Z" | 19 | 1 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-27T20:52:42Z" |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
instance_prompt: a photo of dfrizner
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - frizner/dfrizner-model
This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of dfrizner using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
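No inference example is included; below is a minimal sketch with 🤗 Diffusers, assuming the pipeline weights in this repo load directly with `StableDiffusionPipeline`. The prompt extends the instance prompt used for training ("a photo of dfrizner").

```python
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "frizner/dfrizner-model", torch_dtype=torch.float16
).to("cuda")

# The instance prompt used for training was "a photo of dfrizner"
image = pipeline("a photo of dfrizner, portrait, studio lighting").images[0]
image.save("dfrizner.png")
```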
|
baby-dev/4dd93335-b75b-4cdb-b2a1-fb9f84332ed9 | baby-dev | "2025-02-05T07:52:22Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:rayonlabs/6259c3f5-19eb-4f1a-9530-69e345bdfc69",
"base_model:adapter:rayonlabs/6259c3f5-19eb-4f1a-9530-69e345bdfc69",
"region:us"
] | null | "2025-02-05T07:38:55Z" | ---
library_name: peft
base_model: rayonlabs/6259c3f5-19eb-4f1a-9530-69e345bdfc69
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4dd93335-b75b-4cdb-b2a1-fb9f84332ed9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 4dd93335-b75b-4cdb-b2a1-fb9f84332ed9
This model is a fine-tuned version of [rayonlabs/6259c3f5-19eb-4f1a-9530-69e345bdfc69](https://huggingface.co/rayonlabs/6259c3f5-19eb-4f1a-9530-69e345bdfc69) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kallilikhitha123/llama-Quantized-Model-8B_8lakh_06-03-2025 | kallilikhitha123 | "2025-03-06T13:09:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-03-06T12:24:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
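In the meantime, a hedged sketch based on the repo tags (`llama`, `4-bit`, `bitsandbytes`, `text-generation`) might look like the following; the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "kallilikhitha123/llama-Quantized-Model-8B_8lakh_06-03-2025"

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant_config, device_map="auto"
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```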
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kamelcharaf/GRPO-SFT-meta-Llama-3.2-3B-meta-Llama-3.2-3B-mrd3-s4-sum | kamelcharaf | "2025-03-25T10:09:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | "2025-03-24T13:58:54Z" | ---
library_name: transformers
model_name: GRPO-SFT-meta-Llama-3.2-3B-meta-Llama-3.2-3B-mrd3-sum-s4
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for GRPO-SFT-meta-Llama-3.2-3B-meta-Llama-3.2-3B-mrd3-sum-s4
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kamelcharaf/GRPO-SFT-meta-Llama-3.2-3B-meta-Llama-3.2-3B-mrd3-sum-s4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kamel-charaf-epfl/huggingface/runs/mo040dpd)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.0.1
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mlx-community/AIDC-AI_Marco-o1_MLX-8bit | mlx-community | "2024-11-27T11:37:18Z" | 84 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"base_model:AIDC-AI/Marco-o1",
"base_model:quantized:AIDC-AI/Marco-o1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | text-generation | "2024-11-27T11:15:34Z" | ---
license: apache-2.0
library_name: transformers
inference: false
tags:
- mlx
base_model: AIDC-AI/Marco-o1
---
# mlx-community/AIDC-AI_Marco-o1_MLX-8bit
The Model [mlx-community/AIDC-AI_Marco-o1_MLX-8bit](https://huggingface.co/mlx-community/AIDC-AI_Marco-o1_MLX-8bit) was
converted to MLX format from [AIDC-AI/Marco-o1](https://huggingface.co/AIDC-AI/Marco-o1)
using mlx-lm version **0.20.1**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/AIDC-AI_Marco-o1_MLX-8bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
QuantFactory/Qwen2.5-Lumen-14B-GGUF | QuantFactory | "2024-09-21T17:35:30Z" | 108 | 3 | transformers | [
"transformers",
"gguf",
"qwen",
"qwen2.5",
"finetune",
"dpo",
"orpo",
"qwen2",
"chat",
"conversational",
"instruct",
"storywriting",
"roleplay",
"text-generation",
"en",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:Qwen/Qwen2.5-14B-Instruct",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2403.19522",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-21T15:31:34Z" |
---
datasets:
- jondurbin/gutenberg-dpo-v0.1
- Qwen/Qwen2.5-14B-Instruct
- HuggingFaceH4/ultrafeedback_binarized
base_model:
- Qwen/Qwen2.5-14B-Instruct
- v000000/Qwen2.5-14B-Gutenberg-1e-Delta
- tanliboy/lambda-qwen2.5-14b-dpo-test
library_name: transformers
tags:
- qwen
- qwen2.5
- finetune
- dpo
- orpo
- qwen2
- chat
- conversational
- instruct
- storywriting
- roleplay
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
[](https://hf.co/QuantFactory)
# QuantFactory/Qwen2.5-Lumen-14B-GGUF
This is a quantized version of [v000000/Qwen2.5-Lumen-14B](https://huggingface.co/v000000/Qwen2.5-Lumen-14B) created using llama.cpp.
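A minimal sketch for running one of the GGUF files with `llama-cpp-python`; the filename below is a placeholder — substitute an actual quantization from this repo's file list.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder filename: pick a real GGUF file from the repository
gguf_path = hf_hub_download(
    repo_id="QuantFactory/Qwen2.5-Lumen-14B-GGUF",
    filename="Qwen2.5-Lumen-14B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short scene set in a rainy harbor town."}]
)
print(out["choices"][0]["message"]["content"])
```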
# Original Model Card
# Qwen2.5-Lumen-14B
* *Qwen direct preference optimization finetuned for ~3 epochs.*

<b>A qwen2.5 preference finetune, targeting prompt adherence, storywriting and roleplay.</b>
-------------------------------------------------------------------------------
## Training Notes
Trained [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) for 2 epochs on an NVIDIA A100 using the dataset [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1), saving different checkpoints along the way (completely different runs at varying epochs and learning rates).
[Tanliboy](https://huggingface.co/tanliboy) trained [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) for 1 epoch on [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) (credit to Tanliboy! *Check out the model [here](https://huggingface.co/tanliboy/lambda-qwen2.5-14b-dpo-test)*)
*The checkpoints were then mass-merged, based on Qwen2.5-14B-Instruct (the base model).*
## Merge
* Merged with a sophosympatheia's <b>SLERP</b> gradient *"Ultrafeedback-Binarized DPO"* and *"Gutenberg DPO"*
* Merged with a sophosympatheia's <b>SLERP</b> gradient *"Qwen2.5-14B-Instruct"* and *"Gutenberg DPO"*
* Merged all <b>DPO checkpoints</b> and <b>SLERP</b> variations with <b>MODEL_STOCK</b> to analyze geometric properties and get the most *performant* aspects of all runs/merges. *Model Stock* was chosen due to the similarity between the merged models.
* This approach was chosen because evaluation for *ORPO* is unclear, so it is hard to know which runs are the best.
## One-Attempt generated example:
* *Temp 1.3 [1], Min_P 0.012 [4], TFS 0.97 [3], Smooth_Factor 0.3 [2], Smoothing_Curve 1.1, Rep 1.1, Rep Range 1000*
<details>
<summary> >[CLICK ME] Write the opening chapter of a romance novel set in the late 1800s, where the protagonist, a war-weary music star returning to England after an injury in Afghanistan service...</summary>
**Title: A Harp for the Heart**
*Chapter One: Homebound*
England, Winter 1893—
As the steam-powered ferry made its way across the turbulent waters of the Channel, each tremor that reverberated through the vessel felt like a calloused reminder to Edgar Thornton. His mind was already conjuring visions of his beloved London—the sprawling city draped in an ever-present blanket of melancholic grey. Yet within him lingered more pressing memories: the whizzing sound of artillery shells as they streaked across Afghan deserts, leaving behind billowing black smoke and devastating wreckage.
Edgar's injuries were visible, the ghostly scar of a saber cut marring one cheek; less evident, however, was the toll that service took on his spirit. Each warbling note from an opera played by some amateur on the ferry sliced into Edgar's hollow heart, echoing through him with a bitter lament—the song was *Adagio* and it meant *Slow down*. Only now did he dare allow himself to slow—decades since music had flowed through the seams of his body in this way.
For years he had been sought out, from England to New York, as the country's foremost harpist, not just for his fingers nimble on a range of scales but also for how each piece conveyed an insurmountable melancholy—a reflection of his solitary life and unending loneliness. Each recital saw thousands of faces that remembered his smiles or silences; yet today he had no one to return them to.
The ferry's horn blared as they approached Dover, cutting through Edgar’s reverie much the same as cannonade did in Kabul. He thought about London with mixed emotions—longing for the grandeur and splendor of its concert halls, a city vibrant under lamplight but devoid of comfort or real life; a home where he was no less alien than anywhere else.
A soft murmur drew his gaze to a young woman standing near the window, her back to him. Her blonde hair cascaded in waves, shimmering under the lantern light as if lit from within. Despite the chill of the approaching winter night, she appeared oblivious—entirely entranced by something on the far horizon or lost in thought.
His breath caught at a sensation that had been uncommon for years but familiar to his boyhood—a light flutter tickling just behind his ribcage. And as her shoulders drew back and her chin tipped upwards, he saw a hint of the same unspoken story reflected in her eyes: dreams big enough to leave them scarred by distance and time.
And then she spoke without ever turning around—
"Do you hear that?" she asked into the gathering darkness outside their window. "Is that London?"[...]
</details>
-----------------------------------
<details>
<summary> >[CLICK ME] Write the opening chapter of a dark fantasy book during the darkest age, where the protagonist, a war-weary woman is introduced returning to stand against the orcs and fight in what they falsely believe is the last battle. Introduce us as she is riding into her army to talk to some friends before they strike... </summary>
---
The night was pitch black with only the thinnest sliver of a moon slicing across the sky, barely enough to paint the horizon a sickly shade of greyish-blue. Below it stretched an expanse of rolling hills and forests that swayed like shadows in an eternal nightmare. The wind carried with it not the scents of living earth but the taint of the bloodied battlegrounds below, where for too long had men, women, and beasts clashed and bled.
Ariath rode amidst this gloomy spectacle atop a pale stallion named Ebon, a beast as weary and war-weary as she was. Her dark cloak billowed around her like an inky cloud, contrasting starkly with the barren landscape through which they traveled. She had not chosen such darkness—nor any choice at all; tonight, every decision from waking to fighting lay in someone else's cold, indifferent hands.
Ebon’s hooves stirred the heavy mud of long trampled fields, sinking silently into the earth as he treaded towards her rallying army. It was said that this night they would fight, not for honor nor country—no, those noble phantoms had vanished many a moon past—but to stand once more between the realm and its approaching destruction.
Her chestnut mare, a beauty in years past before every bone seemed like it could break from its own weight, whinnied quietly under Ariath's calming voice. Her friends awaited not far ahead; they were an eclectic bunch—half-brother Raen, the steadfast knight who fought by her side for almost all their battles, Elaeril the mage known more as a wielder of shadow spells than life-giving magic, and finally Meryn the roguish, whose sword was as quick as his wit.[...]
</details>
-----------------------------------
* *Temp 1.3, Min_P 0.012, Rep 1.1*
<details>
<summary> >[CLICK ME (Disclaimer: highly developed prompt from dataset - best case scenario)] Summary of previous portion of the novel: In the previous chapter, Anna Pavlovna is overwhelmed by a deep sense of fear and dread after her partner, Count Vronsky, has left. Despite trying to reach out to him with a note pleading for his return, she is informed that the count had driven off, and her message did not reach him. Struggling with her thoughts and barely able to find solace, Anna decides to send a telegram and visit Darya Alexandrovna to avoid succumbing to her overwhelming feelings. Her distress is palpable as she attempts to take action to distract herself from her anxiety and looks for any form of support or distraction to keep her from unraveling. Write the next chapter of a story in novel format that focuses on the internal and external conflicts of a woman named Anna. Detail her poignant reflections on her past, her struggle with her current sense of self-worth, and her interactions with family and society. Include flashbacks to her younger days, comparisons between her past and present, and vivid descriptions of her environment and society's norms that have led to her feelings of alienation. This chapter should showcase Anna's complex emotional state as she contemplates her next steps in life, hinting at recent life-altering decisions and their repercussions. It should contain interactions with characters such as Dolly and Kitty, reflecting the tensions and societal judgments she is grappling with.</summary>
### Chapter Seven
It was early morning, but the heavy clouds of melancholy still clung tightly to Anna's heart like a shroud over her spirit. The atmosphere outside mirrored this burden; rain fell softly, soaking the flower beds as mournful streaks rolled lazily across the gray sky in muted tones of despair.
Anna could not have taken her steps outwards to face the world until she had come to some kind of resolution within herself, even if it were no more than acceptance. She reflected deeply over how far away everything now seemed from what was once close and familiar. In every leaf trembling on a wet branch, in the low sighs of a wind that wandered lost without a purpose across the gray fields, she felt echoes of her inner turmoil.
She remembered her younger days. Life had then been filled with smiles, warmth and acceptance — her friends called her Annaarina, the spirit they saw in her so vividly, bright-eyed and kind. Their little gathering at Kitty’s had been an annual celebration for all four families: Oblonskys’, Shtcherbats’ys’, Karenins’, and Count Vronsky’s parents — each as familiar and loved a presence in that room as Anna.
But where were their faces now? Where was her happiness from those times, which only seemed to taunt the present?
"Here we are again,” she murmured in self-accusation with a mocking bitterness. The first snow had fallen softly upon them last winter then — and today it reminded her of something too lovely and cruel; something not allowed now in the rigid confines that life had chosen as its own path.
Determined to avoid being weighed down further, Anna prepared herself mentally to face Dolly, her dearest friend amongst women who seemed like a beacon among dim lights. In moments such as these she believed fully that somehow they could ease or even rid this dark fog from her heart — yet Anna felt apprehension nevertheless. Would their kinship and bond stand unchanged in light of recent circumstances? Perhaps now it would reveal cracks that Dolly was oblivious to, since the old Annaarina had been seen so rarely even then.
---
The carriage jolted through wet cobblestones as Anna finally made her way to Darya Alexandrovna’s house. The house stood there as warmly and welcomingly familiar with smoke still rising from chimneys, signaling signs of comfort within. Anna pulled off a shivering hand — dampness and cold seeped in merciless through fine silks.
Sneezing lightly into lace-lined white gloves, Anna steadied herself before rapping at the door. There would be no more room for her to hide; any attempt at self-preservation now lay broken between her heart and truth.
Anna stepped inside, greeted by the sight of Dolly busying about an unswept floor. For a fleeting moment, those familiarly busy hands paused — brows furrowed gently in concern even before their full attention met Anna.
"Why so early? Is there something...?" began Dolly warmly yet tentatively as she approached to take the delicate gloved hand.
Anna bit her lip at Darya's solicitous face. Would she now have to divulge her secrets of how utterly unhinged everything had become, or simply hint enough for suspicion?
"I — I need your advice," Anna whispered shyly, for there was no courage left to handle the world straight-up that day.
Dolly hugged the fragile figure and replied earnestly, "Speak with me here freely. No matter what is troubling you..."
That simple reassurance gave Anna courage. The despairing weight lessened as though carried a little by this soft-hearted soul.
“I have changed more than you would ever understand...” began Anna softly, but the words broke as unshed tears finally welled.
"Your eyes tell their story alone," soothed Dolly, patting gently her friend’s hand. “Do not be ashamed.”
In the wake of support and open hearts, Anna continued her revelation, pouring out all fears and hesitations — she who had been so content to live life sheltered inside, was suddenly exposed in full force by necessity alone; fragile glass cast into a brutal world.
---
At intervals throughout that day, Anna could scarcely believe how swiftly and faithfully Darya’s love proved the eternal rock to stand against crumbling shores. Amidst shared tea and comfort that night though shadows still lingered darkly between them.
When asked about Count Vronsky's presence with the family at gatherings or personal times now, Anna met this inquiry with a breathy sorrowful silence which spoke volumes to both the depth of their new estrangement, as well as how truly beloved the name was yet by Anna still in spite all waning prospects for resolution.
Daria continued to encourage her friend steadfastly; even shared letters and small consolations to penultimate messages destined to Count Vronsky alone. Each offered another grain's worth of hope for peace - both with him, herself, and beyond her personal dilemmas within that cloaked society so mercilessly turning its back against any form of irregularities in married life.
By the end of their meeting late evening shadows were upon them once more, and Anna reluctantly made to depart — a newly found strength (but still sorely tested) rippling across the troubled soul. However, as Anna prepared herself for returning to her solitary chambers, Dolly rose swiftly off the settee with concern laced in her tone.
“But we have barely scratched the surface of what might truly be affecting your heart.” The sympathy etched upon her face almost spoke a plea for understanding.
Anna reached over and took hands of compassionate warmth — eyes wide with gratefulness.
"I think perhaps it is all right to stay here for the night… while I gather myself," she confided quietly into Darya’s caring ears.[...]
</details>
-----------------------------------
<b>*As you can see, the model has mostly adapted to the intended response style from the Gutenberg dataset.*</b>
## Recipe
```yaml
models:
- model: v000000/Qwen2.5-14B-Gutenberg-1e-Delta
- model: v000000/Qwen2.5-14B-Gutenberg-0.6e-Sequential
- model: v000000/Qwen2.5-14B-Gutenberg-0.25e-Early
- model: v000000/Qwen2.5-14B-Gutenberg-2e-Sequential
- model: v000000/Qwen2.5-14B-Gutenberg-0.37e-Early
- model: v000000/Qwen2.5-14B-Gutenberg-2e-Zeta
- model: v000000/Qwen2.5-14B-Gutenberg-1e-Theta
- model: tanliboy/lambda-qwen2.5-14b-dpo-test
- model: v000000/Qwen2.5-14B-Gutenberg-1e-Delta
- model: tanliboy/lambda-qwen2.5-14b-dpo-test
- model: v000000/Qwen2.5-14B-Gutenberg-UltraLambda-Slerpeno
- model: v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno
base_model: v000000/Qwen2.5-14B-Gutenberg-1e-Delta
merge_method: model_stock
dtype: bfloat16
```
### Finetune and merge
This is a merge and finetune of pre-trained language models.
### Models Merged
[Arxiv 2403.19522](https://arxiv.org/abs/2403.19522)
The following models were included in the merge:
* v000000/Qwen2.5-14B-Gutenberg-1e-Delta
* v000000/Qwen2.5-14B-Gutenberg-0.6e-Sequential
* v000000/Qwen2.5-14B-Gutenberg-0.25e-Early
* v000000/Qwen2.5-14B-Gutenberg-2e-Sequential
* v000000/Qwen2.5-14B-Gutenberg-0.37e-Early
* v000000/Qwen2.5-14B-Gutenberg-2e-Zeta
* v000000/Qwen2.5-14B-Gutenberg-1e-Theta
* v000000/Qwen2.5-14B-Gutenberg-UltraLambda-Slerpeno
* v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno
* tanliboy/lambda-qwen2.5-14b-dpo-test
-------------------------------------------------------------------------------
- Context Length: Full 131,072 tokens and generation 8192 tokens
- Qwen2(ChatML) Prompt format
|
pictgensupport/golfv2 | pictgensupport | "2025-03-29T19:25:04Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-29T19:25:01Z" | |
VishnuPJ/MalayaLLM-Paligemma-VQA-3B-4bitQuant | VishnuPJ | "2024-08-30T06:23:55Z" | 7 | 0 | null | [
"safetensors",
"paligemma",
"ml",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-08-23T05:48:13Z" | ---
license: mit
language:
- ml
---
# PaliGemma-3B-MalayaLLM
<img src="https://github.com/user-attachments/assets/8e8937a7-fd47-482c-acaf-48efc3c04597" alt="Baby MalayaLLM" width="300" height="auto">
# Introducing the Developer:
Discover the mind behind this model and stay updated on their contributions to the field
https://www.linkedin.com/in/vishnu-prasad-j/
# Model description
This is a PaliGemma-3B based model for Malayalam captioning and Visual Question Answering.
- **Model type:** A 3B PaliGemma-2 model fine-tuned on Malayalam captions and queries.
- **Language(s):** Malayalam and English
- **Datasets:**
* [VishnuPJ/SAM-LLAVA-20k-Malayalam-Caption-Pretrain](https://huggingface.co/datasets/VishnuPJ/SAM-LLAVA-20k-Malayalam-Caption-Pretrain)
* [VishnuPJ/laion-14k-GPT4V-LIVIS-Captions_Malayalam](https://huggingface.co/datasets/VishnuPJ/laion-14k-GPT4V-LIVIS-Captions_Malayalam)
* [VishnuPJ/Malayalam-VQA](https://huggingface.co/datasets/VishnuPJ/Malayalam-VQA)
- **Caption Model-Full Precision:** [VishnuPJ/MalayaLLM-Paligemma-Caption-3B-Full-Precision](https://huggingface.co/VishnuPJ/MalayaLLM-Paligemma-Caption-3B-Full-Precision)
- **Caption 4bit Quant:** [VishnuPJ/MalayaLLM-Paligemma-Caption-3B-4bitQuant](https://huggingface.co/VishnuPJ/MalayaLLM-Paligemma-Caption-3B-4bitQuant)
- **VQA Model-Full Precison:** [VishnuPJ/MalayaLLM-Paligemma-VQA-3B-Full-Precision](https://huggingface.co/VishnuPJ/MalayaLLM-Paligemma-VQA-3B-Full-Precision)
- **VQA 4bit Quant:** [VishnuPJ/MalayaLLM-Paligemma-VQA-3B-4bitQuant](https://huggingface.co/VishnuPJ/MalayaLLM-Paligemma-VQA-3B-4bitQuant)
- **VQA LORA Adapters:** [VishnuPJ/MalayaLLM-Paligemma-VQA-3B-Adapters](https://huggingface.co/VishnuPJ/MalayaLLM-Paligemma-VQA-3B-Adapters)
- **Training Precision:** `float16`,`4bit`
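No inference snippet is included; below is a minimal hedged sketch with the standard PaliGemma classes from 🤗 Transformers, assuming the quantized weights in this repo load directly via `from_pretrained`. The image path and the Malayalam question are illustrative.

```python
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "VishnuPJ/MalayaLLM-Paligemma-VQA-3B-4bitQuant"
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, device_map="auto")

image = Image.open("example.jpg")  # placeholder path — use your own image
prompt = "ഈ ചിത്രത്തിൽ എന്താണ് ഉള്ളത്?"  # "What is in this picture?"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(outputs[0], skip_special_tokens=True))
```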
# Dataset Creation
I have used [indictrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B) for translating English datasets to Malayalam.
# 🌟Happy coding💻🌟 |
thangla01/f4dd0fc1-a280-4505-8fb8-21db6e4b34a5 | thangla01 | "2025-01-11T17:12:38Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:adapter:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-11T17:03:17Z" | ---
library_name: peft
license: llama3
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f4dd0fc1-a280-4505-8fb8-21db6e4b34a5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 213dde068166ffaa_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/213dde068166ffaa_train_data.json
type:
field_input: ''
field_instruction: passage
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thangla01/f4dd0fc1-a280-4505-8fb8-21db6e4b34a5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/213dde068166ffaa_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63e7cc36-bee7-47a6-a7fe-22f372d0a5af
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63e7cc36-bee7-47a6-a7fe-22f372d0a5af
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f4dd0fc1-a280-4505-8fb8-21db6e4b34a5
This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 94
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1418 | 0.9973 | 93 | 1.4510 |
| 2.3897 | 1.0080 | 94 | 1.4568 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
swnnysingh801/instent-image-genrator | swnnysingh801 | "2025-02-27T13:07:45Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-02-27T13:07:45Z" | ---
license: apache-2.0
---
|
desarrolloasesoreslocales/SetFitPruebaRecorte | desarrolloasesoreslocales | "2023-10-10T19:45:23Z" | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-10-10T19:44:32Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# desarrolloasesoreslocales/SetFitPruebaRecorte
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("desarrolloasesoreslocales/SetFitPruebaRecorte")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
vishnuKC/bert-large-uncased-finetuned-spec5g-v2 | vishnuKC | "2023-11-17T17:01:37Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-large-uncased",
"base_model:finetune:google-bert/bert-large-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-11-17T16:57:22Z" | ---
license: apache-2.0
base_model: bert-large-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-finetuned-spec5g-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-spec5g-v2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4500
## Model description
More information needed
## Intended uses & limitations
More information needed
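Pending a fuller description, a minimal fill-mask sketch with the standard `transformers` pipeline is shown below; the 5G-spec sentence is illustrative.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="vishnuKC/bert-large-uncased-finetuned-spec5g-v2")

# Illustrative domain sentence
for pred in fill_mask("The gNB sends the RRC [MASK] message to the UE."):
    print(pred["token_str"], round(pred["score"], 3))
```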
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 41 | 2.3034 |
| No log | 2.0 | 82 | 2.3958 |
| No log | 3.0 | 123 | 2.2291 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
LoneStriker/Aetheria-L2-70B-2.65bpw-h6-exl2-2 | LoneStriker | "2023-12-10T11:33:18Z" | 8 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama 2",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-10T11:19:08Z" | ---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama 2
---
# Aetheria-L2-70B
This is a [Llama 2](https://huggingface.co/meta-llama/Llama-2-70b)-based model consisting of a merge between:
- [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B)
- [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b)
- [GOAT-AI/GOAT-70B-Storytelling](https://huggingface.co/GOAT-AI/GOAT-70B-Storytelling)
- [Doctor-Shotgun/limarpv3-llama2-70b-qlora](https://huggingface.co/Doctor-Shotgun/limarpv3-llama2-70b-qlora)
This model combines the excellent Euryale v1.3 base with the DPO training of the Tulu v2 model and creative prose training of the GOAT Storytelling model. The LimaRP v3 qlora was then added for further roleplaying capability and the ability to tune the length of the outputs.
The goal was to create a capable 70B model for collaborative storytelling and roleplay.
## Usage:
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input:
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the linked repositories of the merged models for details.
Kuongan/xlm-roberta-base-hin-noaug | Kuongan | "2025-02-26T01:50:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-26T01:40:57Z" | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xlm-roberta-base-hin-noaug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-hin-noaug
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1446
- F1: 0.8035
- Roc Auc: 0.8679
- Accuracy: 0.81
## Model description
More information needed
## Intended uses & limitations
More information needed
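Pending a fuller description, and assuming a multi-label setup (consistent with the F1/ROC-AUC metrics reported above), a hedged inference sketch applying a sigmoid with a 0.5 threshold might be:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Kuongan/xlm-roberta-base-hin-noaug"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "यह फिल्म बहुत अच्छी थी!"  # illustrative Hindi input
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label decoding: sigmoid + threshold
probs = torch.sigmoid(logits)[0]
labels = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(labels)
```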
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4307 | 1.0 | 80 | 0.3801 | 0.0 | 0.5 | 0.31 |
| 0.3343 | 2.0 | 160 | 0.2959 | 0.0702 | 0.5266 | 0.34 |
| 0.2741 | 3.0 | 240 | 0.2382 | 0.4597 | 0.6939 | 0.53 |
| 0.1988 | 4.0 | 320 | 0.1897 | 0.7500 | 0.8531 | 0.72 |
| 0.1601 | 5.0 | 400 | 0.1612 | 0.7855 | 0.8680 | 0.75 |
| 0.1128 | 6.0 | 480 | 0.1535 | 0.7955 | 0.8735 | 0.78 |
| 0.1096 | 7.0 | 560 | 0.1640 | 0.7641 | 0.8498 | 0.73 |
| 0.0856 | 8.0 | 640 | 0.1446 | 0.8035 | 0.8679 | 0.81 |
| 0.0744 | 9.0 | 720 | 0.1795 | 0.7636 | 0.8622 | 0.77 |
| 0.051 | 10.0 | 800 | 0.1638 | 0.7917 | 0.8876 | 0.78 |
| 0.0571 | 11.0 | 880 | 0.1799 | 0.7579 | 0.8597 | 0.73 |
| 0.0402 | 12.0 | 960 | 0.1648 | 0.7855 | 0.8684 | 0.77 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
the1ilya1g/dog | the1ilya1g | "2025-01-09T17:01:30Z" | 7 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2025-01-09T16:50:54Z" | ---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks dog
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - the1ilya1g/dog
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# Minimal inference sketch (the card left this as a TODO); the instance prompt was "a photo of sks dog"
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained("the1ilya1g/dog")
image = pipeline("a photo of sks dog in a bucket").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Helsinki-NLP/opus-mt-ko-sv | Helsinki-NLP | "2023-08-16T11:59:45Z" | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ko",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-03-02T23:29:04Z" | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ko-sv
* source languages: ko
* target languages: sv
* OPUS readme: [ko-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ko-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ko-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ko-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ko-sv/opus-2020-01-09.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ko.sv | 26.5 | 0.468 |
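A minimal translation sketch with the Marian classes in 🤗 Transformers; the Korean input sentence is illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ko-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["안녕하세요, 만나서 반갑습니다."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```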
|
rocca/dalle-mini-js | rocca | "2022-07-02T18:41:23Z" | 0 | 0 | null | [
"tflite",
"license:mit",
"region:us"
] | null | "2022-06-29T20:17:53Z" | ---
license: mit
---
# DALL-E Mini Running in the Browser (work in progress)
### Notes:
* Working tflite conversion in [this notebook](https://colab.research.google.com/gist/josephrocca/f427377f76c574f1c1e8e4d6d62c34b6/tflite-dalle-mini-conversion-separated-encoder-and-decoder.ipynb).
* Note that the encoder and decoder need to be converted separately for some reason. [More info on this bug](https://github.com/tensorflow/tensorflow/issues/56629#issuecomment-1172928973).
* But these models currently require TF Select operators due to bitwise operations that aren't supported in the default tflite opset. [This issue](https://github.com/tensorflow/tfjs/issues/5844) tracks the potential inclusion of TF Select ops in tfjs-tflite, so models like this can be used on the web.
* The encoder requires the TF Select `Erf` op. The decoder requires these TF Select ops: `Bitcast`, `BitwiseOr`, `BitwiseXor`, `ConcatV2`, `Erf`, `LeftShift`, `RightShift`, `ScatterNd`, `SelectV2`, `StridedSlice`
* Here are the two models:
* [dalle_bart_encoder_flax_model](https://huggingface.co/rocca/dalle-mini-js/blob/main/debug/dalle_bart_encoder_flax_model.tflite)
* [dalle_bart_decoder_flax_model](https://huggingface.co/rocca/dalle-mini-js/blob/main/debug/dalle_bart_decoder_flax_model.tflite)
* ONNX conversion is blocked on support for `Bitcast`, `BitwiseOr` and `BitwiseXor`: https://github.com/onnx/tensorflow-onnx/issues/1985 The need for the `BitwiseAnd` op can be avoided by pulling `jax.random.PRNGKey` outside of the encode-decode function, but I'm not sure where the other ops are coming from - netron doesn't seem to "expand" certain types of nodes, so I think they're hidden within some other nodes.
* TF.js conversion is blocked on lack of support for these ops: `BitwiseXor`, `Bitcast`, `LeftShift`, `BitwiseOr`, `RightShift` ([relevant issue](https://github.com/tensorflow/tfjs/issues/6599))
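For local debugging of the converted files, a minimal sketch that loads one of the .tflite models with the Python interpreter is given below; it assumes the full `tensorflow` pip package is installed, since that build bundles the Flex/TF Select delegate needed for the ops listed above.

```python
import tensorflow as tf

# Inspect the converted encoder; the same works for the decoder file
interpreter = tf.lite.Interpreter(model_path="dalle_bart_encoder_flax_model.tflite")
interpreter.allocate_tensors()

print(interpreter.get_input_details())
print(interpreter.get_output_details())
```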
|
winglian/falcon-7b-alibi | winglian | "2023-06-02T23:15:49Z" | 14 | 1 | transformers | [
"transformers",
"RefinedWebModel",
"text-generation",
"custom_code",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2101.00027",
"arxiv:2005.14165",
"arxiv:2104.09864",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | "2023-06-02T23:12:48Z" | ---
license: apache-2.0
---
# 🚀 Falcon-7B
**Falcon-7B is a 7B-parameter causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**
*Paper coming soon* 😊.
## Why use Falcon-7B?
* **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
# Model Card for Falcon-7B
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0.
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B to consider finetuning it for the specific set of tasks of interest, and for guardrails and appropriate precautions to be taken for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
# Build a text-generation pipeline; weights are loaded in bfloat16 and placed
# across available devices (device_map="auto" requires `accelerate`).
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
# Sample a continuation of the prompt with top-k sampling, up to 200 tokens.
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)).
| **Data source** | **Fraction** | **Tokens** | **Sources** |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 79% | 1,185B | massive web crawl |
| Books | 7% | 110B | |
| Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews |
| Code | 3% | 45B | |
| RefinedWeb-French | 3% | 45B | massive web crawl |
| Technical          | 2%           | 30B        | arXiv, PubMed, USPTO, etc.        |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
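For illustration, the shared tokenizer can be loaded on its own to inspect how text is split into tokens (the example sentence below is arbitrary):

```python
from transformers import AutoTokenizer

# The Falcon-7B and Falcon-40B checkpoints share the same 65,024-entry vocabulary.
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
ids = tokenizer("The falcon soars over the desert.")["input_ids"]
print(ids)
print(tokenizer.convert_ids_to_tokens(ids))
```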
### Training Procedure
Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO.
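As a rough illustration of what PP=2, DP=192 means for the 384 GPUs, here is a hypothetical rank layout; it is not the actual Gigatron implementation:

```python
# 384 ranks arranged as a 192 x 2 grid: 2 pipeline stages and 192 data-parallel
# replicas per stage; ZeRO shards optimizer state across the replicas.
WORLD_SIZE, PP_SIZE, DP_SIZE = 384, 2, 192
assert WORLD_SIZE == PP_SIZE * DP_SIZE

def rank_to_coords(rank: int) -> tuple[int, int]:
    """Map a global rank to (data-parallel replica, pipeline stage)."""
    dp_rank, pp_rank = divmod(rank, PP_SIZE)
    return dp_rank, pp_rank

print(rank_to_coords(0))    # (0, 0)  -> first replica, first pipeline stage
print(rank_to_coords(383))  # (191, 1) -> last replica, second pipeline stage
```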
#### Training Hyperparameters
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate      | 6e-4       | 4B tokens warm-up, cosine decay to 1.2e-5 (sketched below) |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | 2304 | 30B tokens ramp-up |
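The learning-rate row above can be read as a standard warm-up/cosine schedule. The sketch below assumes the decay spans the full 1,500B training tokens, which is not stated explicitly in this card:

```python
import math

PEAK_LR, MIN_LR = 6e-4, 1.2e-5
WARMUP_TOKENS = 4e9        # linear warm-up over the first 4B tokens
TOTAL_TOKENS = 1.5e12      # assumed decay horizon: all 1,500B tokens

def learning_rate(tokens_seen: float) -> float:
    """Linear warm-up followed by cosine decay from PEAK_LR to MIN_LR."""
    if tokens_seen < WARMUP_TOKENS:
        return PEAK_LR * tokens_seen / WARMUP_TOKENS
    progress = min((tokens_seen - WARMUP_TOKENS) / (TOTAL_TOKENS - WARMUP_TOKENS), 1.0)
    return MIN_LR + 0.5 * (PEAK_LR - MIN_LR) * (1.0 + math.cos(math.pi * progress))
```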
#### Speeds, Sizes, Times
Training happened in early March 2023 and took about two weeks.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Technical Specifications
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
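In `transformers`, this objective corresponds to passing the input ids as `labels`, which returns the mean next-token cross-entropy (a minimal sketch that loads the full checkpoint):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
inputs = tokenizer("Falcons are birds of prey.", return_tensors="pt").to(model.device)
# Labels are shifted internally, so each position is scored on the next token.
loss = model(**inputs, labels=inputs["input_ids"]).loss
print(f"Mean next-token cross-entropy: {loss.item():.2f}")
```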
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) (see the sketch after the table below);
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim`         | 64        | Reduced to optimize for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
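For intuition, here is a simplified multiquery attention block using the dimensions above (a 4544-dim model with 64-dim heads gives 71 query heads sharing a single key/value head). It omits rotary embeddings, the causal mask, and FlashAttention, so it is an illustrative sketch rather than the actual Falcon implementation:

```python
import torch
import torch.nn as nn

D_MODEL, HEAD_DIM = 4544, 64
N_HEADS = D_MODEL // HEAD_DIM  # 71 query heads

class MultiQueryAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.q = nn.Linear(D_MODEL, N_HEADS * HEAD_DIM, bias=False)
        self.kv = nn.Linear(D_MODEL, 2 * HEAD_DIM, bias=False)  # one shared K/V head
        self.out = nn.Linear(N_HEADS * HEAD_DIM, D_MODEL, bias=False)

    def forward(self, x):  # x: [batch, seq, d_model]
        b, t, _ = x.shape
        q = self.q(x).view(b, t, N_HEADS, HEAD_DIM).transpose(1, 2)  # [b, heads, t, hd]
        k, v = self.kv(x).split(HEAD_DIM, dim=-1)                    # shared across heads
        k, v = k.unsqueeze(1), v.unsqueeze(1)                        # [b, 1, t, hd]
        attn = (q @ k.transpose(-2, -1)) / HEAD_DIM**0.5             # broadcast over heads
        ctx = (attn.softmax(-1) @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out(ctx)
```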
### Compute Infrastructure
#### Hardware
Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B was trained using a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊.
## License
Falcon-7B is made available under the Apache 2.0 license.
## Contact
[email protected]