| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5–138 |
| author | string | length 2–42 |
| last_modified | date | 2020-02-15 11:33:14 – 2025-04-13 18:27:00 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 425 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | date | 2022-03-02 23:29:04 – 2025-04-13 18:24:29 |
| card | string | length 11 – 1.01M |

Each record below follows this column order, with fields separated by `|`.
ZeroWw/Phi-3.5-mini-instruct-GGUF | ZeroWw | "2024-08-21T13:43:25Z" | 8 | 0 | null | [
"gguf",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-08-21T13:35:35Z" |
---
license: mit
language:
- en
pipeline_tag: text-generation
---
My own (ZeroWw) quantizations.
The output and embedding tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.
Result:
both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization,
and they perform as well as the pure f16.
Updated on: Wed Aug 21, 13:35:36
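A recipe of this kind can be reproduced with llama.cpp's `llama-quantize` tool, which supports per-tensor type overrides in recent builds. A minimal sketch, not the author's exact command; the file names are placeholders:

```bash
# Hypothetical sketch: keep the output and token-embedding tensors at f16
# while quantizing the remaining tensors to q6_k (q5_k works the same way).
./llama-quantize \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  Phi-3.5-mini-instruct.f16.gguf \
  Phi-3.5-mini-instruct.f16.q6.gguf \
  q6_k
```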
|
mradermacher/Phi_medprob-biochemistry-GGUF | mradermacher | "2025-02-28T04:15:38Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:emilykang/Phi_medprob-biochemistry",
"base_model:quantized:emilykang/Phi_medprob-biochemistry",
"endpoints_compatible",
"region:us"
] | null | "2025-02-28T03:58:55Z" | ---
base_model: emilykang/Phi_medprob-biochemistry
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/emilykang/Phi_medprob-biochemistry
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Phi_medprob-biochemistry-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
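For the concatenation case specifically, older TheBloke-style split files (`-split-a`, `-split-b`) are joined with a plain byte-wise concatenation before loading; newer `gguf-split` shards should instead be loaded directly. A minimal sketch with hypothetical file names:

```bash
# Hypothetical example (Linux/macOS): join old-style split GGUF parts into
# one file, then remove the parts to reclaim disk space.
cat model.Q6_K.gguf-split-a model.Q6_K.gguf-split-b > model.Q6_K.gguf
rm model.Q6_K.gguf-split-*
```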
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Phi_medprob-biochemistry-GGUF/resolve/main/Phi_medprob-biochemistry.Q2_K.gguf) | Q2_K | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Phi_medprob-biochemistry-GGUF/resolve/main/Phi_medprob-biochemistry.Q3_K_S.gguf) | Q3_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Phi_medprob-biochemistry-GGUF/resolve/main/Phi_medprob-biochemistry.Q3_K_M.gguf) | Q3_K_M | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Phi_medprob-biochemistry-GGUF/resolve/main/Phi_medprob-biochemistry.IQ4_XS.gguf) | IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Phi_medprob-biochemistry-GGUF/resolve/main/Phi_medprob-biochemistry.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Phi_medprob-biochemistry-GGUF/resolve/main/Phi_medprob-biochemistry.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi_medprob-biochemistry-GGUF/resolve/main/Phi_medprob-biochemistry.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Phi_medprob-biochemistry-GGUF/resolve/main/Phi_medprob-biochemistry.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Phi_medprob-biochemistry-GGUF/resolve/main/Phi_medprob-biochemistry.Q5_K_M.gguf) | Q5_K_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Phi_medprob-biochemistry-GGUF/resolve/main/Phi_medprob-biochemistry.Q6_K.gguf) | Q6_K | 2.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Phi_medprob-biochemistry-GGUF/resolve/main/Phi_medprob-biochemistry.Q8_0.gguf) | Q8_0 | 3.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Phi_medprob-biochemistry-GGUF/resolve/main/Phi_medprob-biochemistry.f16.gguf) | f16 | 5.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
VonMakiseKurisu/Tobias_Forge | VonMakiseKurisu | "2023-08-27T19:08:58Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-08-27T18:54:40Z" | ---
license: creativeml-openrail-m
---
|
SicariusSicariiStuff/Redemption_Wind_24B_GPTQ | SicariusSicariiStuff | "2025-02-07T23:59:30Z" | 5 | 0 | null | [
"safetensors",
"mistral",
"license:apache-2.0",
"4-bit",
"gptq",
"region:us"
] | null | "2025-02-07T23:34:30Z" | ---
license: apache-2.0
---
|
RyyyT/ppo-Huggy | RyyyT | "2023-08-30T08:22:39Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-08-30T08:22:27Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents' official environments, go to https://huggingface.co/unity
2. Find your model_id: RyyyT/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
nzksidbk/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_dense_chicken | nzksidbk | "2025-04-06T10:18:04Z" | 1 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am aquatic dense chicken",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-01T18:19:30Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_dense_chicken
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am aquatic dense chicken
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_dense_chicken
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nzksidbk/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-aquatic_dense_chicken", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Ankudinov/ANKUDINOVIVAN01 | Ankudinov | "2025-03-18T19:40:12Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-18T18:28:56Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ANKUDINOVIVAN01
---
# Ankudinovivan01
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ANKUDINOVIVAN01` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Ankudinov/ANKUDINOVIVAN01', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
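As one example of the fusing mentioned above, diffusers exposes `fuse_lora` to bake the loaded adapter into the base weights; a minimal sketch continuing from the snippet above (the scale value is an arbitrary choice, not from this card):

```py
# Optionally fuse the loaded LoRA into the base weights for faster inference.
# lora_scale controls how strongly the adapter is applied (assumed value).
pipeline.fuse_lora(lora_scale=1.0)
image = pipeline('ANKUDINOVIVAN01 portrait photo').images[0]

# Revert to the unfused base weights if needed.
pipeline.unfuse_lora()
```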
|
Sashkanik13/safetensors_rugpt3small | Sashkanik13 | "2023-10-03T16:33:26Z" | 232 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"PyTorch",
"Transformers",
"ru",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-03T07:59:02Z" | ---
language:
- ru
tags:
- PyTorch
- Transformers
thumbnail: "https://github.com/sberbank-ai/ru-gpts"
---
# rugpt3small\_based\_on\_gpt2 safetensors variant
The model was trained with a sequence length of 1024, using transformers, by the [SberDevices](https://sberdevices.ru/) team on 80B tokens for around 3 epochs. After that, the model was finetuned with a 2048-token context.
Total training time took around one week on 32 GPUs.
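Since this is a standard GPT-2 checkpoint in safetensors format, it should load with the usual transformers text-generation pipeline. A minimal, untested sketch:

```python
from transformers import pipeline

# Load the safetensors variant like any other GPT-2 checkpoint.
generator = pipeline("text-generation", model="Sashkanik13/safetensors_rugpt3small")

# Russian prompt: "Once upon a time"
print(generator("Жили-были", max_new_tokens=50)[0]["generated_text"])
```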
# Authors
+ NLP core team RnD [Telegram channel](https://t.me/nlpcoreteam):
+ Dmitry Zmitrovich
+ Safetensors variant by [Sashkanik13](https://huggingface.co/Sashkanik13)
|
CShorten/mistral-schemaSplit-500-steps | CShorten | "2023-11-09T21:41:07Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | "2023-11-09T21:40:53Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
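For reference, the same configuration can be expressed with transformers' `BitsAndBytesConfig` when reloading the base model; a minimal sketch (the base-model id comes from this card's metadata, the rest mirrors the list above):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
)
```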
### Framework versions
- PEFT 0.6.2.dev0
|
AlexYu90/new-rubric-Llama3.2-3B | AlexYu90 | "2025-04-03T23:30:07Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-03T23:20:23Z" | ---
base_model: unsloth/llama-3.2-3b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AlexYu90
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DBangshu/V5_Base_GPT2_e5_4_7 | DBangshu | "2024-12-05T10:54:15Z" | 149 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-05T10:53:59Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LangAGI-Lab/Meta-Llama-3.1-8B-Instruct-WM-acctree-16k-50-adapter-new | LangAGI-Lab | "2024-09-29T19:02:55Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | "2024-09-29T19:01:52Z" | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
Sophie-Rain-SpiderMan-video-updatessss/Sophie.Rain.Spiderman.Video.Leaked.Tutorial.Viral.Full.Video.official | Sophie-Rain-SpiderMan-video-updatessss | "2025-02-26T09:46:53Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-26T09:46:44Z" | 45 seconds ago
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">►►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤️</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">🔴►𝐂𝐋𝐈𝐂𝐊 𝐇𝐄𝐑𝐄 🌐==►► 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐍𝐨𝐰⬇️⬇️</a></p>
|
djuna-test-lab/TEST-L3.2-ReWish-3B | djuna-test-lab | "2024-10-30T22:28:48Z" | 19 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:SicariusSicariiStuff/Impish_LLAMA_3B",
"base_model:merge:SicariusSicariiStuff/Impish_LLAMA_3B",
"base_model:djuna/ReWiz-Llama-3.2-3B-fix-config",
"base_model:merge:djuna/ReWiz-Llama-3.2-3B-fix-config",
"base_model:unsloth/Llama-3.2-3B",
"base_model:merge:unsloth/Llama-3.2-3B",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-23T12:21:12Z" | ---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- djuna/ReWiz-Llama-3.2-3B-fix-config
- SicariusSicariiStuff/Impish_LLAMA_3B
- unsloth/Llama-3.2-3B
model-index:
- name: TEST-L3.2-ReWish-3B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 63.68
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna-test-lab/TEST-L3.2-ReWish-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 22.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna-test-lab/TEST-L3.2-ReWish-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 12.92
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna-test-lab/TEST-L3.2-ReWish-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.47
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna-test-lab/TEST-L3.2-ReWish-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 7.92
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna-test-lab/TEST-L3.2-ReWish-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 23.62
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=djuna-test-lab/TEST-L3.2-ReWish-3B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method using [unsloth/Llama-3.2-3B](https://huggingface.co/unsloth/Llama-3.2-3B) as a base.
### Models Merged
The following models were included in the merge:
* [djuna/ReWiz-Llama-3.2-3B-fix-config](https://huggingface.co/djuna/ReWiz-Llama-3.2-3B-fix-config)
* [SicariusSicariiStuff/Impish_LLAMA_3B](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_3B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: unsloth/Llama-3.2-3B
- model: SicariusSicariiStuff/Impish_LLAMA_3B
parameters:
weight: 1
- model: djuna/ReWiz-Llama-3.2-3B-fix-config
parameters:
weight: 1
merge_method: dare_linear
base_model: unsloth/Llama-3.2-3B
tokenizer_source: djuna/ReWiz-Llama-3.2-3B-fix-config
parameters:
normalize: true
int8_mask: true
dtype: float32
name: rewish
```
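A config like this is typically executed with mergekit's command-line entry point; a minimal sketch, assuming mergekit is installed and the YAML above is saved as `config.yml` (a placeholder name):

```bash
# Hypothetical invocation: write the YAML above to config.yml, then produce
# the merged model in ./rewish (--cuda merges on GPU if one is available).
pip install mergekit
mergekit-yaml config.yml ./rewish --cuda
```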
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_djuna-test-lab__TEST-L3.2-ReWish-3B)
| Metric |Value|
|-------------------|----:|
|Avg. |22.45|
|IFEval (0-Shot) |63.68|
|BBH (3-Shot) |22.07|
|MATH Lvl 5 (4-Shot)|12.92|
|GPQA (0-shot) | 4.47|
|MuSR (0-shot) | 7.92|
|MMLU-PRO (5-shot) |23.62|
|
rizki-syazali/tapasid_finetuned_sqa_to_itqa | rizki-syazali | "2024-10-24T02:43:29Z" | 71 | 0 | transformers | [
"transformers",
"safetensors",
"tapas",
"table-question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | table-question-answering | "2024-10-23T19:20:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
asiansoul/U-GO-GIRL-Llama-3-KoEn-8B | asiansoul | "2024-05-29T04:11:11Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:Locutusque/llama-3-neural-chat-v2.2-8B",
"base_model:merge:Locutusque/llama-3-neural-chat-v2.2-8B",
"base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:merge:MLP-KTLim/llama-3-Korean-Bllossom-8B",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:merge:NousResearch/Meta-Llama-3-8B",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:merge:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"base_model:merge:aaditya/Llama3-OpenBioLLM-8B",
"base_model:asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B",
"base_model:merge:asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B",
"base_model:beomi/Llama-3-Open-Ko-8B",
"base_model:merge:beomi/Llama-3-Open-Ko-8B",
"base_model:maum-ai/Llama-3-MAAL-8B-Instruct-v0.1",
"base_model:merge:maum-ai/Llama-3-MAAL-8B-Instruct-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-28T11:07:08Z" | ---
base_model:
- beomi/Llama-3-Open-Ko-8B
- aaditya/Llama3-OpenBioLLM-8B
- MLP-KTLim/llama-3-Korean-Bllossom-8B
- maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
- NousResearch/Meta-Llama-3-8B
- NousResearch/Meta-Llama-3-8B-Instruct
- Locutusque/llama-3-neural-chat-v2.2-8B
- asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B
library_name: transformers
tags:
- mergekit
- merge
---
# "U-GO_GIRL"-Llama-3-KoEn-8B
<a href="https://ibb.co/cr8X8zd"><img src="https://i.ibb.co/Tg0q0z5/ugoo.png" alt="ugoo" border="0"></a>
**Is "U-GO_GIRL" the top-tier AI magic you’ve been craving?**
Experience the pinnacle of artificial intelligence with U-GO_GIRL.
Keep in mind that accuracy on the questions you care about may vary with this merge.
When evaluating an LLM, don't just trust others; verify the facts yourself.
Buy me a cup of coffee if you'd like me to do more work for you.
You ready Hey girl
[Toonation Donation](https://toon.at/donate/asiansoul)
ETH/USDT(ERC20) Donation : 0x8BB117dD4Cc0E19E5536ab211070c0dE039a85c0
## Use
Korean, English, Medical, Writing, Coding, and ETC.
### Models Mixed
The following models were included in the mixtape:
* [asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B](https://huggingface.co/asiansoul/Solo-Llama-3-MAAL-MLP-KoEn-8B)
* [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)
* [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B)
* [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B)
* [maum-ai/Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [Locutusque/llama-3-neural-chat-v2.2-8B](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8B)
## Citation
**Language Mix Model**
```text
@misc{U-GO_GIRL,
author = {JayLee aka "asiansoul"},
title = {U-GO_GIRL Mix Model},
year = {2024},
},
}
```
|
IDEA-CCNL/Erlangshen-Ubert-330M-Chinese | IDEA-CCNL | "2023-05-26T04:13:50Z" | 121 | 9 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"NLU",
"Sentiment",
"Chinese",
"zh",
"arxiv:2206.12094",
"arxiv:2209.02970",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | fill-mask | "2022-06-27T11:01:33Z" | ---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- Sentiment
- Chinese
inference: false
---
# Erlangshen-Ubert-330M-Chinese
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## 简介 Brief Introduction
采用统一的框架处理多种抽取任务,AIWIN2022的冠军方案,3.3亿参数量的中文UBERT-Large。
Adopting a unified framework to handle multiple information extraction tasks, AIWIN2022's champion solution, Chinese UBERT-Large (330M).
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | UBERT | 330M | 中文 Chinese |
## 模型信息 Model Information
参考论文:[Unified BERT for Few-shot Natural Language Understanding](https://arxiv.org/abs/2206.12094)
UBERT是[2022年AIWIN世界人工智能创新大赛:中文保险小样本多任务竞赛](http://ailab.aiwin.org.cn/competitions/68#results)的冠军解决方案。我们开发了一个基于类似BERT的骨干的多任务、多目标、统一的抽取任务框架。我们的UBERT在比赛A榜和B榜上均取得了第一名。因为比赛中的数据集在比赛结束后不再可用,我们开源的UBERT从多个任务中收集了70多个数据集(共1,065,069个样本)来进行预训练,并且我们选择了[MacBERT-Large](https://huggingface.co/hfl/chinese-macbert-large)作为骨干网络。除了支持开箱即用之外,我们的UBERT还可以用于各种场景,如NLI、实体识别和阅读理解。示例代码可以在[Github](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert)中找到。
UBERT was the winner solution in the [2022 AIWIN ARTIFICIAL INTELLIGENCE WORLD INNOVATIONS: Chinese Insurance Small Sample Multi-Task](http://ailab.aiwin.org.cn/competitions/68#results). We developed a unified framework based on BERT-like backbone for multiple tasks and objectives. Our UBERT owns first place, as described in leaderboards A and B. In addition to the unavailable datasets in the challenge, we carefully collect over 70 datasets (1,065,069 samples in total) from a variety of tasks for open-source UBERT. Moreover, we apply [MacBERT-Large](https://huggingface.co/hfl/chinese-macbert-large) as the backbone. Besides out-of-the-box functionality, our UBERT can be employed in various scenarios such as NLI, entity recognition, and reading comprehension. The example codes can be found in [Github](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert).
## 使用 Usage
Pip install fengshen
```bash
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
cd Fengshenbang-LM
pip install --editable ./
```
Run the code
```python
import argparse
from fengshen import UbertPipelines
total_parser = argparse.ArgumentParser("TASK NAME")
total_parser = UbertPipelines.pipelines_args(total_parser)
args = total_parser.parse_args()
args.pretrained_model_path = "IDEA-CCNL/Erlangshen-Ubert-330M-Chinese"
test_data=[
{
"task_type": "抽取任务",
"subtask_type": "实体识别",
"text": "这也让很多业主据此认为,雅清苑是政府公务员挤对了国家的经适房政策。",
"choices": [
{"entity_type": "小区名字"},
{"entity_type": "岗位职责"}
],
"id": 0}
]
model = UbertPipelines(args)
result = model.predict(test_data)
for line in result:
print(line)
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的对该模型的论文:
If you are using the resource for your work, please cite the our paper for this model:
```text
@article{fengshenbang/ubert,
author = {JunYu Lu and
Ping Yang and
Jiaxing Zhang and
Ruyi Gan and
Jing Yang},
title = {Unified {BERT} for Few-shot Natural Language Understanding},
journal = {CoRR},
volume = {abs/2206.12094},
year = {2022}
}
```
如果您在您的工作中使用了我们的模型,也可以引用我们的[总论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [overview paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
Kalray/ssd-resnet34-mlperf | Kalray | "2025-04-07T15:11:41Z" | 0 | 0 | null | [
"onnx",
"object-detection",
"dataset:detection-datasets/coco",
"arxiv:1611.10012",
"license:apache-2.0",
"region:us"
] | object-detection | "2024-06-25T20:52:57Z" | ---
license: apache-2.0
datasets:
- detection-datasets/coco
pipeline_tag: object-detection
---
# Introduction
This repository stores the model for SSD-Resnet34 from MLPerf, compatible with Kalray's neural network API. <br/>
Please see www.github.com/kalray/kann-models-zoo for details and proper usage. <br/>
# Contents
- ONNX: ssd-resnet34.optimized.onnx (converted from tensorflow)
- Tensorflow: ssd-resnet34.pb
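Deployment on Kalray hardware goes through the kann-models-zoo repository linked above; as a quick sanity check on an ordinary machine, the ONNX file can be opened with onnxruntime. A minimal sketch, assuming the file has been downloaded locally and takes float32 input (an assumption, not stated in this card):

```python
import numpy as np
import onnxruntime as ort

# Open the converted model and inspect its input signature.
session = ort.InferenceSession("ssd-resnet34.optimized.onnx")
inp = session.get_inputs()[0]
print(inp.name, inp.shape, inp.type)

# Run one inference with random data matching the input shape
# (dynamic dimensions are replaced by 1 here).
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)  # assumes float32 input
outputs = session.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```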
# Lecture note reference
- Speed/accuracy trade-offs for modern convolutional object detectors, https://arxiv.org/pdf/1611.10012
# Repository or links references
- [Link to download](https://zenodo.org/record/3345892/files/tf_ssd_resnet34_22.1.zip?download=1)
- https://github.com/mlcommons/inference/tree/r0.5/v0.5/classification_and_detection
Authors:
+ [email protected] |
marksverdhei/modern-norbert3-large | marksverdhei | "2025-03-07T10:09:47Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"BERT",
"NorBERT",
"Norwegian",
"encoder",
"no",
"nb",
"nn",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | fill-mask | "2025-03-07T09:36:02Z" | ---
library_name: transformers
language:
- 'no'
- nb
- nn
inference: false
tags:
- BERT
- NorBERT
- Norwegian
- encoder
license: apache-2.0
---
# ModernNorBERT 3
This repository contains a version of [LTG](https://huggingface.co/ltg)'s NorBERT 3 that is converted to the ModernBERT architecture.
[Check out the original NorBERT 3 Repository here](https://huggingface.co/ltg/norbert3-large)
⚠️ This model is a cold, direct weight mapping and will not work without further training or fine-tuning. |
Pin-Tzu/falcon-7b-sharded-bf16-english-quote-qlora-ITRI | Pin-Tzu | "2023-08-26T05:53:59Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-26T05:42:45Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
HawkClaws/Llama-3-youko-8b-instruct-chatvector-gguf | HawkClaws | "2024-05-14T09:56:26Z" | 4 | 3 | null | [
"gguf",
"en",
"ja",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-05-14T09:43:50Z" | ---
license: llama3
language:
- en
- ja
---
# Llama-3-youko-8b-instruct-chatvector-gguf
This is a GGUF-format conversion of [Llama-3-youko-8b-instruct-chatvector, published by aixsatoshi](https://huggingface.co/aixsatoshi/Llama-3-youko-8b-instruct-chatvector).
## Usage
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m ../llama-3-youko-8b-instruct-chatvector-q8_0.gguf -p 'こんちゃっす' -n 128
``` |
popV/tabula_muris_Thymus_10x | popV | "2025-03-08T14:30:12Z" | 0 | 0 | popV | [
"popV",
"joblib",
"biology",
"genomics",
"single-cell",
"anndata_version:0.11.3",
"python_version:3.11.11",
"tissue: thymus",
"license:cc-by-4.0",
"region:us"
] | null | "2025-03-08T14:30:02Z" | ---
library_name: popV
license: cc-by-4.0
tags:
- biology
- genomics
- single-cell
- anndata_version:0.11.3
- python_version:3.11.11
- popV
- 'tissue: thymus'
---
Popular Vote (popV) model for automated cell type annotation of single-cell RNA-seq data. We provide here pretrained models
for plug-in use in your own analysis.
Follow our [tutorial](https://github.com/YosefLab/popV/blob/main/tabula_sapiens_tutorial.ipynb) to learn how to use the model for cell type annotation.
# Model description
Ageing is characterized by a progressive loss of physiological integrity, leading to impaired function and increased vulnerability to death. Despite rapid advances over recent years, many of the molecular and cellular processes that underlie the progressive loss of healthy physiology are poorly understood. To gain a better insight into these processes, here we generate a single-cell transcriptomic atlas across the lifespan of Mus musculus that includes data from 23 tissues and organs. We found cell-specific changes occurring across multiple cell types and organs, as well as age-related changes in the cellular composition of different organs. Using single-cell transcriptomic data, we assessed cell-type-specific manifestations of different hallmarks of ageing—such as senescence, genomic instability and changes in the immune system. This transcriptomic atlas—which we denote Tabula Muris Senis, or ‘Mouse Ageing Cell Atlas’—provides molecular information about how the most important hallmarks of ageing are reflected in a broad range of tissues and cell types.
**Link to CELLxGENE**:
Link to the [data](https://cellxgene.cziscience.com/e/6e4f871d-fd7c-4909-8c14-e4c9957c2e8f.cxg/) in the CELLxGENE browser for interactive exploration of the data and download of the source data.
**Training Code URL**:
Not provided by uploader.
# Metrics
We provide here accuracies for each of the experts and the ensemble model. The validation set accuracies are
computed on a 10% random subset of the data that was not used for training.
| Cell Type | N cells | celltypist | knn bbknn | knn harmony | knn on scvi | onclass | scanvi | svm | xgboost | Consensus Prediction |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| professional antigen presenting cell | 259 | 0.99 | 1.00 | 0.99 | 0.99 | 0.00 | 1.00 | 1.00 | 0.99 | 1.00 |
| DN4 thymocyte | 196 | 0.93 | 0.96 | 0.96 | 0.92 | 0.00 | 0.93 | 0.96 | 0.95 | 0.96 |
| thymocyte | 201 | 0.99 | 1.00 | 0.98 | 0.99 | 0.00 | 0.99 | 0.99 | 0.99 | 1.00 |
| immature T cell | 161 | 0.94 | 0.98 | 0.97 | 0.96 | 0.00 | 0.94 | 0.96 | 0.94 | 0.97 |
| double negative thymocyte | 98 | 0.83 | 0.92 | 0.88 | 0.81 | 0.00 | 0.84 | 0.90 | 0.90 | 0.90 |
| DN3 thymocyte | 13 | 0.96 | 1.00 | 1.00 | 0.83 | 0.00 | 0.93 | 0.96 | 0.96 | 1.00 |
The train accuracies are computed on the training data.
| Cell Type | N cells | celltypist | knn bbknn | knn harmony | knn on scvi | onclass | scanvi | svm | xgboost | Consensus Prediction |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| professional antigen presenting cell | 2223 | 0.99 | 0.99 | 0.99 | 0.99 | 0.00 | 0.99 | 0.99 | 0.99 | 0.99 |
| DN4 thymocyte | 1804 | 0.94 | 0.96 | 0.98 | 0.94 | 0.00 | 0.95 | 0.97 | 0.97 | 0.98 |
| thymocyte | 1691 | 0.99 | 0.99 | 0.99 | 0.99 | 0.00 | 0.98 | 0.99 | 0.98 | 0.99 |
| immature T cell | 1623 | 0.96 | 0.97 | 0.98 | 0.96 | 0.00 | 0.95 | 0.97 | 0.97 | 0.98 |
| double negative thymocyte | 873 | 0.85 | 0.92 | 0.94 | 0.85 | 0.00 | 0.88 | 0.93 | 0.93 | 0.95 |
| DN3 thymocyte | 133 | 0.91 | 0.96 | 0.97 | 0.87 | 0.00 | 0.93 | 1.00 | 0.97 | 0.98 |
# References
A single-cell transcriptomic atlas characterizes ageing tissues in the mouse, The Tabula Muris Consortium, Nicole Almanzar, Jane Antony, Ankit S. Baghel, Isaac Bakerman, Ishita Bansal, Ben A. Barres, Philip A. Beachy, Daniela Berdnik, Biter Bilen, Douglas Brownfield, Corey Cain, Charles K. F. Chan, Michelle B. Chen, Michael F. Clarke, Stephanie D. Conley, Spyros Darmanis, Aaron Demers, Kubilay Demir, Antoine de Morree, Tessa Divita, Haley du Bois, Hamid Ebadi, F. Hernán Espinoza, Matt Fish, Qiang Gan, Benson M. George, Astrid Gillich, Rafael Gòmez-Sjöberg, Foad Green, Geraldine Genetiano, Xueying Gu, Gunsagar S. Gulati, Oliver Hahn, Michael Seamus Haney, Yan Hang, Lincoln Harris, Mu He, Shayan Hosseinzadeh, Albin Huang, Kerwyn Casey Huang, Tal Iram, Taichi Isobe, Feather Ives, Robert C. Jones, Kevin S. Kao, Jim Karkanias, Guruswamy Karnam, Andreas Keller, Aaron M. Kershner, Nathalie Khoury, Seung K. Kim, Bernhard M. Kiss, William Kong, Mark A. Krasnow, Maya E. Kumar, Christin S. Kuo, Jonathan Lam, Davis P. Lee, Song E. Lee, Benoit Lehallier, Olivia Leventhal, Guang Li, Qingyun Li, Ling Liu, Annie Lo, Wan-Jin Lu, Maria F. Lugo-Fagundo, Anoop Manjunath, Andrew P. May, Ashley Maynard, Aaron McGeever, Marina McKay, M. Windy McNerney, Bryan Merrill, Ross J. Metzger, Marco Mignardi, Dullei Min, Ahmad N. Nabhan, Norma F. Neff, Katharine M. Ng, Patricia K. Nguyen, Joseph Noh, Roel Nusse, Róbert Pálovics, Rasika Patkar, Weng Chuan Peng, Lolita Penland, Angela Oliveira Pisco, Katherine Pollard, Robert Puccinelli, Zhen Qi, Stephen R. Quake, Thomas A. Rando, Eric J. Rulifson, Nicholas Schaum, Joe M. Segal, Shaheen S. Sikandar, Rahul Sinha, Rene V. Sit, Justin Sonnenburg, Daniel Staehli, Krzysztof Szade, Michelle Tan, Weilun Tan, Cristina Tato, Krissie Tellez, Laughing Bear Torrez Dulgeroff, Kyle J. Travaglini, Carolina Tropini, Margaret Tsui, Lucas Waldburger, Bruce M. Wang, Linda J. van Weele, Kenneth Weinberg, Irving L. Weissman, Michael N. Wosczyna, Sean M. Wu, Tony Wyss-Coray, Jinyi Xiang, Soso Xue, Kevin A. Yamauchi, Andrew C. Yang, Lakshmi P. Yerra, Justin Youngyunpipatkul, Brian Yu, Fabio Zanini, Macy E. Zardeneta, Alexander Zee, Chunyu Zhao, Fan Zhang, Hui Zhang, Martin Jinye Zhang, Lu Zhou, James Zou; Nature, doi: https://doi.org/10.1038/s41586-020-2496-1
|
huggingtweets/nyshra_ | huggingtweets | "2021-05-22T17:07:53Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1133336385711214592/CVletvRA_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Nyshra 🤖 AI Bot </div>
<div style="font-size: 15px">@nyshra_ bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@nyshra_'s tweets](https://twitter.com/nyshra_).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 2405 |
| Retweets | 1968 |
| Short tweets | 134 |
| Tweets kept | 303 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/39zp1zix/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nyshra_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/128qa3vv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/128qa3vv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nyshra_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Niggendar/ObroModern_ | Niggendar | "2024-08-23T10:42:00Z" | 80 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-08-23T10:29:15Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
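As a starting point, here is a minimal sketch assuming the standard SDXL text-to-image pipeline indicated by this repo's `StableDiffusionXLPipeline` tag (the prompt and precision settings are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load this checkpoint with the standard SDXL text-to-image pipeline (fp16 to save memory).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/ObroModern_", torch_dtype=torch.float16
)
pipe.to("cuda")

# Generate one image from a text prompt.
image = pipe("a modern building at dusk, detailed, photorealistic").images[0]
image.save("sample.png")
```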
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
seglinglin/Historical-Illustration-Extraction | seglinglin | "2025-02-25T15:31:55Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-09-20T12:15:10Z" | ---
license: mit
---
# Model for visual element extraction in historical documents |
DevQuasar/Nexusflow.NexusRaven-V2-13B-GGUF | DevQuasar | "2025-02-21T21:46:14Z" | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:Nexusflow/NexusRaven-V2-13B",
"base_model:quantized:Nexusflow/NexusRaven-V2-13B",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-21T19:32:11Z" | ---
base_model:
- Nexusflow/NexusRaven-V2-13B
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [Nexusflow/NexusRaven-V2-13B](https://huggingface.co/Nexusflow/NexusRaven-V2-13B)
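A minimal sketch for running one of these GGUF files locally with the `llama-cpp-python` bindings (the quant filename is illustrative; use whichever file you downloaded from this repo):

```python
from llama_cpp import Llama

# The quant filename is an assumption; point model_path at the GGUF you downloaded.
llm = Llama(model_path="Nexusflow.NexusRaven-V2-13B.Q4_K_M.gguf", n_ctx=4096)

out = llm("Define a function signature for fetching current weather.", max_tokens=128)
print(out["choices"][0]["text"])
```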
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
Sakalti/SJT-4B | Sakalti | "2025-01-16T06:50:51Z" | 64 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:yahma/alpaca-cleaned",
"dataset:HachiML/Hachi-Alpaca",
"base_model:Sakalti/Tara-3.8B-v1.1",
"base_model:finetune:Sakalti/Tara-3.8B-v1.1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-12T02:33:21Z" | ---
base_model: Sakalti/Tara-3.8B-v1.1
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: mit
inference: true
language:
- en
library_name: transformers
datasets:
- yahma/alpaca-cleaned
- HachiML/Hachi-Alpaca
widget:
- messages:
- role: user
content: こんにちは!
- messages:
- role: user
content: ドラゴンフルーツは何科ですか?
- messages:
- role: user
content: hello!
---
This model was trained using the "yahma/alpaca-cleaned" dataset, which is licensed under CC BY-SA 4.0.
Last Update : 2023-05-25
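For a quick check, here is a minimal sketch using the `transformers` chat pipeline; the message mirrors one of the widget examples above, and the generation length is illustrative:

```python
from transformers import pipeline

chat = pipeline("text-generation", model="Sakalti/SJT-4B")

# One of the widget prompts from the card metadata.
messages = [{"role": "user", "content": "こんにちは!"}]
print(chat(messages, max_new_tokens=64)[0]["generated_text"])
```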
# Uploaded model
- **Developed by:** Sakalti
- **License:** apache-2.0
- **Finetuned from model :** Sakalti/Tara-3.8B-v1.1
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
mradermacher/LuminRP-13B-128k-i1-GGUF | mradermacher | "2024-05-11T18:48:55Z" | 58 | 0 | transformers | [
"transformers",
"gguf",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:Ppoyaa/LuminRP-13B-128k",
"base_model:quantized:Ppoyaa/LuminRP-13B-128k",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-05-11T03:04:10Z" | ---
base_model: Ppoyaa/LuminRP-13B-128k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- frankenmoe
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/Ppoyaa/LuminRP-13B-128k
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/LuminRP-13B-128k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-IQ1_S.gguf) | i1-IQ1_S | 2.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-IQ1_M.gguf) | i1-IQ1_M | 3.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-IQ2_S.gguf) | i1-IQ2_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-IQ2_M.gguf) | i1-IQ2_M | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-Q4_0.gguf) | i1-Q4_0 | 7.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.0 | |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.2 | |
| [GGUF](https://huggingface.co/mradermacher/LuminRP-13B-128k-i1-GGUF/resolve/main/LuminRP-13B-128k.i1-Q6_K.gguf) | i1-Q6_K | 10.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
balakhonoff/Enlighten_Instruct | balakhonoff | "2024-03-24T15:43:42Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | "2024-03-22T19:10:02Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
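As a starting point, here is a minimal sketch that attaches this LoRA adapter to its base model with `peft` (the prompt and generation settings are illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"          # base model from the card metadata
adapter_id = "balakhonoff/Enlighten_Instruct"  # this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA weights

inputs = tokenizer("Explain gradient descent in one sentence.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```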
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
Rungalad/falcon-7b-lora-text2sql_100ep | Rungalad | "2023-09-27T10:08:41Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-26T15:56:09Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the equivalent `BitsAndBytesConfig` sketch after the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
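For reference, here is the same configuration expressed as a `transformers.BitsAndBytesConfig` object, a sketch that mirrors the fields listed above:

```python
import torch
from transformers import BitsAndBytesConfig

# The training-time quantization settings above, expressed as a config object.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```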
### Framework versions
- PEFT 0.6.0.dev0
|
baby-dev/2a0ebe96-2a3a-4a4e-9ff6-85048b36302b | baby-dev | "2025-02-05T08:31:03Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-02-05T08:26:12Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2a0ebe96-2a3a-4a4e-9ff6-85048b36302b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# 2a0ebe96-2a3a-4a4e-9ff6-85048b36302b
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
rim0/quadruped_mechas | rim0 | "2023-03-01T13:48:10Z" | 0 | 6 | null | [
"Stable Diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-02-28T17:16:33Z" | ---
license: creativeml-openrail-m
language:
- en
tags:
- Stable Diffusion
- text-to-image
---
You can use this model to create realistic or sci-fi non-humanoid robots (mainly quadruped mechas).
For the sampler, DPM++ SDE Karras is recommended.
Try this prompt:
masterpiece,best quality,hignity 8k wallpaper,illustration,mecha \\\(flegs\\\), no humans, robot, mecha, realistic, smoke, science fiction, building, weapon, city, blurry, gun, damaged, outdoors, cannon, blurry background, non-humanoid robot, military
Regarding this model, I have only tried it on my own merge model, but I believe it will also perform well on other models, such as AbyssOrangeMix2. Or you can try using [dreamboxmix-M](https://huggingface.co/rim0/dreamboxmix-M).
<img src=https://huggingface.co/rim0/quadruped_mechas/resolve/main/images/1%20(26).png>
<img src=https://huggingface.co/rim0/quadruped_mechas/resolve/main/images/1%20(13).png>
<img src=https://huggingface.co/rim0/quadruped_mechas/resolve/main/images/1%20(14).png>
<img src=https://huggingface.co/rim0/quadruped_mechas/resolve/main/images/1%20(29).png>
<img src=https://huggingface.co/rim0/quadruped_mechas/resolve/main/images/1%20(21).png>
<img src=https://huggingface.co/rim0/quadruped_mechas/resolve/main/images/1%20(22).png>
<img src=https://huggingface.co/rim0/quadruped_mechas/resolve/main/images/1%20(23).png>
<img src=https://huggingface.co/rim0/quadruped_mechas/resolve/main/images/1%20(15).png>
<img src=https://huggingface.co/rim0/quadruped_mechas/resolve/main/images/1%20(9).png>
<img src=https://huggingface.co/rim0/quadruped_mechas/resolve/main/images/1%20(12).png>
<img src=https://huggingface.co/rim0/quadruped_mechas/resolve/main/images/1%20(27).png>
<img src=https://huggingface.co/rim0/quadruped_mechas/resolve/main/images/1%20(28).png>
If you like this model, please consider following me on [twitter](https://twitter.com/rimgirlO) and [pixiv](https://www.pixiv.net/users/17103368), or supporting me on [ko-fi](https://ko-fi.com/rimg0) (only if you think it's worth it). |
mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF | mradermacher | "2024-09-26T04:17:07Z" | 132 | 0 | transformers | [
"transformers",
"gguf",
"dpo",
"rlhf",
"trl",
"en",
"base_model:yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO",
"base_model:quantized:yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-09-26T01:56:38Z" | ---
base_model: yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- dpo
- rlhf
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-8B-SuperNova-Spectrum-Hermes-DPO-i1-GGUF/resolve/main/Llama3-8B-SuperNova-Spectrum-Hermes-DPO.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
huihui-ai/Dria-Agent-a-7B-abliterated | huihui-ai | "2025-01-20T13:21:45Z" | 62 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"chat",
"qwen",
"qwen-coder",
"agent",
"abliterated",
"uncensored",
"conversational",
"en",
"base_model:driaforall/Dria-Agent-a-7B",
"base_model:finetune:driaforall/Dria-Agent-a-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-20T12:55:57Z" | ---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Dria-Agent-a-7B-abliterated/blob/main/LICENSE
language:
- en
base_model:
- driaforall/Dria-Agent-a-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- chat
- qwen
- qwen-coder
- agent
- abliterated
- uncensored
---
# huihui-ai/Dria-Agent-a-7B-abliterated
This is an uncensored version of [driaforall/Dria-Agent-a-7B](https://huggingface.co/driaforall/Dria-Agent-a-7B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it).
This is a crude, proof-of-concept implementation to remove refusals from an LLM without using TransformerLens.
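A minimal `transformers` sketch for chatting with the abliterated model (the prompt and generation length are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Dria-Agent-a-7B-abliterated"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```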
|
paisanx/Reinforce-Cartpole-V1 | paisanx | "2023-12-28T16:27:46Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-28T16:27:42Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-V1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 145.80 +/- 6.84
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
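Since this is a custom course implementation, the checkpoint format is not documented here. The sketch below is an illustrative evaluation loop with `gymnasium`; the checkpoint filename and the assumption that the policy returns action probabilities are both hypothetical:

```python
import gymnasium as gym
import torch

# Hypothetical checkpoint name and policy format; adapt loading to however
# you saved your policy network in the course notebook.
policy = torch.load("reinforce_cartpole.pt")
policy.eval()

env = gym.make("CartPole-v1")
obs, _ = env.reset(seed=42)
done, episode_return = False, 0.0
while not done:
    with torch.no_grad():
        probs = policy(torch.as_tensor(obs, dtype=torch.float32))  # assumed action probabilities
    action = int(torch.argmax(probs))
    obs, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print("episode return:", episode_return)
```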
|
seohuibae/sft-full-llama3.1-sci | seohuibae | "2025-04-12T15:49:12Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-25T07:00:40Z" | ---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft-full-llama3.1-sci
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-full-llama3.1-sci
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the eto_sciworld_sft dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.2.0a0+81ea7a4
- Datasets 3.3.2
- Tokenizers 0.21.0
|
LoneStriker/Mixtral-8x7B-v0.1-3.0bpw-h6-exl2-2 | LoneStriker | "2023-12-17T16:18:03Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-17T15:30:41Z" | ---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
---
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers will load the model in full precision. Therefore you might be interested in further reducing the memory requirements for running the model through the optimizations offered in the HF ecosystem:
### In half-precision
Note `float16` precision only works on GPU devices
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision using (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Notice
Mixtral-8x7B is a pretrained base model and therefore does not have any moderation mechanisms.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
groonga/bge-m3-Q4_K_M-GGUF | groonga | "2025-01-15T02:22:09Z" | 29 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:BAAI/bge-m3",
"base_model:quantized:BAAI/bge-m3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-01-15T02:22:04Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- llama-cpp
- gguf-my-repo
license: mit
base_model: BAAI/bge-m3
---
# ktou/bge-m3-Q4_K_M-GGUF
This model was converted to GGUF format from [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BAAI/bge-m3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ktou/bge-m3-Q4_K_M-GGUF --hf-file bge-m3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ktou/bge-m3-Q4_K_M-GGUF --hf-file bge-m3-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ktou/bge-m3-Q4_K_M-GGUF --hf-file bge-m3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ktou/bge-m3-Q4_K_M-GGUF --hf-file bge-m3-q4_k_m.gguf -c 2048
```
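Because bge-m3 is an embedding model, you will usually want embedding vectors rather than generated text. Here is a minimal sketch with the `llama-cpp-python` bindings (the filename is illustrative):

```python
from llama_cpp import Llama

# embedding=True enables embedding mode; point model_path at the GGUF from this repo.
llm = Llama(model_path="bge-m3-q4_k_m.gguf", embedding=True)

res = llm.create_embedding("The meaning to life and the universe is")
vector = res["data"][0]["embedding"]
print(len(vector))  # embedding dimensionality (1024 for bge-m3)
```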
|
pfh1976/missionGenPFH-dataset | pfh1976 | "2023-11-08T02:01:35Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"en",
"arxiv:1910.09700",
"base_model:bigscience/bloom-1b7",
"base_model:adapter:bigscience/bloom-1b7",
"license:bigscience-openrail-m",
"region:us"
] | null | "2023-11-08T00:52:54Z" | ---
library_name: peft
base_model: bigscience/bloom-1b7
license: bigscience-openrail-m
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.7.0.dev0 |
nrbhole/layoutxlm-finetuned-xfund-fr | nrbhole | "2024-03-06T20:34:59Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:xfund-custom",
"base_model:microsoft/layoutxlm-base",
"base_model:finetune:microsoft/layoutxlm-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-03-06T19:10:35Z" | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- xfund-custom
base_model: microsoft/layoutxlm-base
model-index:
- name: layoutxlm-finetuned-xfund-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-finetuned-xfund-fr
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfund-custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
bay-llm/gemma-9b-SFT-1020-large-16bit | bay-llm | "2024-12-25T23:40:40Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"ja",
"dataset:kanhatakeyama/wizardlm8x22b-logical-math-coding-sft_additional-ja",
"dataset:kanhatakeyama/AutoMultiTurnByCalm3-22B",
"dataset:kanhatakeyama/ramdom-to-fixed-multiturn-Calm3",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-16T21:00:59Z" | ---
base_model:
- google/gemma-2-9b
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: gemma
language:
- en
- ja
datasets:
- kanhatakeyama/wizardlm8x22b-logical-math-coding-sft_additional-ja
- kanhatakeyama/AutoMultiTurnByCalm3-22B
- kanhatakeyama/ramdom-to-fixed-multiturn-Calm3
---
# Model Card for Model ID
Instruction tuning
The model has been fine-tuned on the instruction datasets listed in the metadata above.
Usage:
```python
!pip install vllm==0.6.4.post1 --force-reinstall
import time
import torch
import transformers
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
)
import vllm  ### must pin packaging==24.1, otherwise this errors out!! ###
print(vllm.__version__)
MAX_LENGTH = 1000
MODEL_NAME = "bay-llm/gemma-9b-SFT-1020-large-16bit" # replace with the model you want to submit for the competition
llm = vllm.LLM(
model=MODEL_NAME,
tensor_parallel_size=1,
gpu_memory_utilization=0.95,
trust_remote_code=True,
max_model_len=1024,
)
tokenizer = llm.get_tokenizer()
# Load ELYZA-tasks-100-TV. Upload the file in advance.
# Load the dataset.
# In the omnicampus development environment, drag and drop the task jsonl onto the left pane, then run.
import json
datasets = []
with open("../elyza-tasks-100-TV_0.jsonl", "r") as f:
item = ""
for line in f:
line = line.strip()
item += line
if item.endswith("}"):
datasets.append(json.loads(item))
item = ""
print(datasets[0])
messages_list = [
[{"role": "user", "content": datasets[i]["input"]}] for i in range(len(datasets))
]
prompts = [line[0]["content"] for line in messages_list]
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]
sampling_params = vllm.SamplingParams(
temperature=0.5,
max_tokens=512,
)
outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)
for prompt, response in zip(prompts, outputs):
print("prompt:", prompt)
print("output:", response.outputs[0].text.strip())
print("-"*80)
import json
data = [{
"task_id": i,
"input": prompts[i],
"output": outputs[i].outputs[0].text.strip()
} for i in range(len(datasets))]
file_path = 'submmit.jsonl'
with open(file_path, 'w', encoding='utf-8') as file:
for entry in data:
json.dump(entry, file, ensure_ascii=False)
file.write('\n')
```
# Uploaded model
- **Developed by:** bay-llm
- **License:** gemma
- **Finetuned from model :** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
ABHISHEKMONU2001/llama_finetunning_9_April | ABHISHEKMONU2001 | "2024-04-11T07:22:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-11T07:22:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
klei1/bleta-logjike-27b-lora | klei1 | "2025-03-22T18:36:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-27b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-22T18:35:47Z" | ---
base_model: unsloth/gemma-3-27b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** klei1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-27b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ManthanKulakarni/Text2JQLBuilder_v2 | ManthanKulakarni | "2023-06-22T18:40:32Z" | 10 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Jira",
"Atlasian",
"T5",
"Flan-T5",
"JQL",
"Query",
"en",
"dataset:ManthanKulakarni/Text2JQL_v2",
"license:bsd",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-06-21T11:14:52Z" | ---
license: bsd
datasets:
- ManthanKulakarni/Text2JQL_v2
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
tags:
- Jira
- Atlasian
- T5
- Flan-T5
- JQL
- Query
---
## Model in Action 🚀
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ManthanKulakarni/Text2JQLBuilder_v2")
model = AutoModelWithLMHead.from_pretrained("ManthanKulakarni/Text2JQLBuilder_v2")
def gen_sentence(words, max_length=64):
    input_text = words
    features = tokenizer([input_text], return_tensors='pt')
    output = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'],
                            max_length=max_length)
    return tokenizer.decode(output[0], skip_special_tokens=True)
words = "JQL: all story under project ACM"
gen_sentence(words)
# output: 'JQL: project = ACM'
``` |
Niggendar/cyberrealisticPony_v20a | Niggendar | "2024-06-12T19:34:55Z" | 84 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-12T19:28:02Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Best000/c6ac3ff4-99fc-4d6b-9062-b7b2a6d4ea29 | Best000 | "2025-02-08T19:14:57Z" | 11 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:jhflow/mistral7b-lora-multi-turn-v2",
"base_model:adapter:jhflow/mistral7b-lora-multi-turn-v2",
"region:us"
] | null | "2025-02-08T18:56:38Z" | ---
library_name: peft
base_model: jhflow/mistral7b-lora-multi-turn-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c6ac3ff4-99fc-4d6b-9062-b7b2a6d4ea29
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# c6ac3ff4-99fc-4d6b-9062-b7b2a6d4ea29
This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0080
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
b13nb3n/solid_snake_04 | b13nb3n | "2025-02-25T17:21:37Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-17T09:54:12Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# solid_snake_04
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.
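For intuition, here is a toy sketch of what a linear merge does: a normalized weighted average of matching tensors. It assumes two checkpoints with identical state-dict keys and normalizes the weights (which mergekit's linear method does by default); this is illustrative only, not the actual mergekit code.

```python
# Toy linear merge: a normalized weighted average of matching tensors.
# Assumes both state dicts share the same keys; not the actual mergekit code.
import torch

def linear_merge(state_a, state_b, w_a=0.48, w_b=0.46):
    total = w_a + w_b  # normalize, as mergekit's linear method does by default
    return {
        key: (w_a * state_a[key] + w_b * state_b[key]) / total
        for key in state_a
    }
```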
### Models Merged
The following models were included in the merge:
* ./models/model7
* ./models/model6
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: "./models/model7"
parameters:
weight: 0.48
# density: 0.49
- model: "./models/model6"
parameters:
weight: 0.46
# density: 0.46
merge_method: linear
dtype: float16
hardware:
device: "cpu"
optimize:
enabled: true
techniques:
- "bettertransformer"
- "quantize_int4"
output:
precision: "fp16"
options:
lazy_unpickle: true
allow_crimes: true
```
|
mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF | mradermacher | "2025-01-04T13:07:03Z" | 149 | 0 | transformers | [
"transformers",
"gguf",
"open-source",
"code",
"math",
"chemistry",
"biology",
"text-generation",
"question-answering",
"en",
"dataset:Open-Orca/SlimOrca",
"dataset:glaiveai/glaive-code-assistant",
"dataset:camel-ai/physics",
"dataset:camel-ai/math",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:grimulkan/theory-of-mind",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:m-a-p/Code-Feedback",
"dataset:Locutusque/arc-cot",
"dataset:jondurbin/airoboros-2.1",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"base_model:Locutusque/OpenCerebrum-1.0-7b-SFT",
"base_model:quantized:Locutusque/OpenCerebrum-1.0-7b-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | question-answering | "2025-01-04T12:20:44Z" | ---
base_model: Locutusque/OpenCerebrum-1.0-7b-SFT
datasets:
- Open-Orca/SlimOrca
- glaiveai/glaive-code-assistant
- camel-ai/physics
- camel-ai/math
- camel-ai/chemistry
- camel-ai/biology
- WizardLM/WizardLM_evol_instruct_V2_196k
- microsoft/orca-math-word-problems-200k
- grimulkan/theory-of-mind
- Vezora/Tested-22k-Python-Alpaca
- m-a-p/Code-Feedback
- Locutusque/arc-cot
- jondurbin/airoboros-2.1
- WizardLM/WizardLM_evol_instruct_70k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- open-source
- code
- math
- chemistry
- biology
- text-generation
- question-answering
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Locutusque/OpenCerebrum-1.0-7b-SFT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCerebrum-1.0-7b-SFT-i1-GGUF/resolve/main/OpenCerebrum-1.0-7b-SFT.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jjqsdq/RWKV-Claude | jjqsdq | "2023-10-10T07:17:56Z" | 0 | 0 | null | [
"license:cc0-1.0",
"region:us"
] | null | "2023-10-10T07:10:31Z" | ---
license: cc0-1.0
---
Converted From [LocalNSFW/RWKV-Claude](https://huggingface.co/LocalNSFW/RWKV-Claude)
The RWKV fine-tuning branch of the ShareClaude project.
The project's founding goal is to break free from the control of large companies and build a local large language model that everyone can use for NSFW roleplay.
Currently the best results come from the fully fine-tuned 7B (15 GB) model, which comprehensively surpasses the Slack version of Claude and is slightly behind Claude 2.
This project is supported entirely by chat data from its many contributors, and quite a few anonymous participants have also provided professional GPUs for fine-tuning.
We thank them all for their contributions.
We have largely finished collecting Claude-Slack conversations; the next phase will focus on collecting Claude 2 conversations from Tavern (SillyTavern) sessions.
If you would like to help build this project, you are welcome to join the ShareClaude chat group: 839206500.
All contributors get priority access to the model and can receive unreleased beta builds ahead of time.
|
magnifi/parser_user_v14b_epoch_7_lr_0.002 | magnifi | "2024-07-18T16:29:51Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-18T16:23:47Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Gxg/Op_Bert_Merge | Gxg | "2022-10-14T10:15:41Z" | 102 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"bert",
"feature-extraction",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-10-14T06:40:10Z" | ---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model variations
BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking has replaced subpiece masking in a following work, with the release of two models.
Another 24 smaller models were released afterwards.
The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on github.
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (see the sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
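As a rough illustration, here is a minimal Python sketch of that 80/10/10 rule; the plain token-list interface and the `vocab` argument are assumptions made for the example, not the original implementation:

```python
# Minimal sketch of the masking rule above; not the original TensorFlow code.
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    labels = [None] * len(tokens)          # None = token not selected for prediction
    for i, token in enumerate(tokens):
        if random.random() < mask_prob:    # 15% of tokens are selected
            labels[i] = token              # the model must recover the original token
            roll = random.random()
            if roll < 0.8:                 # 80% of selected tokens become [MASK]
                tokens[i] = "[MASK]"
            elif roll < 0.9:               # 10% become a random vocabulary token
                tokens[i] = random.choice(vocab)
            # remaining 10%: the token is left as is
    return tokens, labels
```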
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
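Expressed with today's PyTorch/transformers APIs, that schedule looks roughly like the following; AdamW stands in for Adam with decoupled weight decay, and this is an approximation for illustration rather than the original TPU training code:

```python
# Approximate modern equivalent of the pretraining schedule described above.
import torch
from transformers import BertConfig, BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining(BertConfig())   # randomly initialized BERT-base
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
```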
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
qwp4w3hyb/Nous-Hermes-2-SOLAR-10.7B-iMat-GGUF | qwp4w3hyb | "2024-03-25T10:25:26Z" | 117 | 0 | null | [
"gguf",
"llama",
"SOLAR",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"base_model:quantized:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-03-24T18:34:58Z" | ---
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
tags:
- llama
- SOLAR
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
model-index:
- name: Nous-Hermes-2-SOLAR-10.7B-iMat-GGUF
  results: []
license: apache-2.0
language:
- en
datasets:
- teknium/OpenHermes-2.5
---
# NousResearch/Nous-Hermes-2-SOLAR-10.7B
Source Model: [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)
Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [46acb3676718b983157058aecf729a2064fc7d34](https://github.com/ggerganov/llama.cpp/commit/46acb3676718b983157058aecf729a2064fc7d34)
Imatrix was generated from the f16 gguf via this command:
```
./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```
Using the dataset from [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
|
raffenmb/my_awesome_qa_model | raffenmb | "2024-07-24T20:36:12Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-07-24T20:20:59Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
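For reference, here is a minimal sketch of the same settings expressed as transformers `TrainingArguments`; the output directory name is illustrative:

```python
# The hyperparameters above, mapped onto TrainingArguments (illustrative).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my_awesome_qa_model",   # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```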
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.3854 |
| 2.6982 | 2.0 | 500 | 1.7241 |
| 2.6982 | 3.0 | 750 | 1.6524 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
RichardErkhov/KR-X-AI_-_krx-qwen2.5-7B-0.0.0.3-4bits | RichardErkhov | "2025-04-06T09:52:33Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"arxiv:1910.09700",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-04-06T09:47:55Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
krx-qwen2.5-7B-0.0.0.3 - bnb 4bits
- Model creator: https://huggingface.co/KR-X-AI/
- Original model: https://huggingface.co/KR-X-AI/krx-qwen2.5-7B-0.0.0.3/
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
llmixer/Meta-Llama-3-70B-Instruct-8.0bpw-h8-exl2 | llmixer | "2024-05-05T13:15:18Z" | 10 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | "2024-04-18T19:50:31Z" | ---
pipeline_tag: text-generation
license: llama3
---
8.1 bpw exl2 quant of [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) |
robiulawaldev/455b3b18-aa41-467f-a225-fa8749e45fec | robiulawaldev | "2025-03-11T15:31:07Z" | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:princeton-nlp/Sheared-LLaMA-1.3B",
"base_model:adapter:princeton-nlp/Sheared-LLaMA-1.3B",
"region:us"
] | null | "2025-03-11T15:30:50Z" | ---
library_name: peft
tags:
- generated_from_trainer
base_model: princeton-nlp/Sheared-LLaMA-1.3B
model-index:
- name: robiulawaldev/455b3b18-aa41-467f-a225-fa8749e45fec
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robiulawaldev/455b3b18-aa41-467f-a225-fa8749e45fec
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jinx2321/nllb-jeju-araea-tagged | jinx2321 | "2025-03-28T10:04:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/nllb-200-distilled-600M",
"base_model:finetune:facebook/nllb-200-distilled-600M",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-03-28T09:05:09Z" | ---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/nllb-200-distilled-600M
tags:
- generated_from_trainer
model-index:
- name: nllb-jeju-araea-tagged
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb-jeju-araea-tagged
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-seeded | tomaarsen | "2025-03-20T12:26:30Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"generated_from_trainer",
"dataset_size:78704",
"loss:PListMLELoss",
"text-ranking",
"en",
"dataset:microsoft/ms_marco",
"arxiv:1908.10084",
"base_model:microsoft/MiniLM-L12-H384-uncased",
"base_model:finetune:microsoft/MiniLM-L12-H384-uncased",
"model-index",
"co2_eq_emissions",
"region:us"
] | text-ranking | "2025-03-20T12:26:26Z" | ---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:78704
- loss:PListMLELoss
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- microsoft/ms_marco
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
co2_eq_emissions:
  emissions: 93.08788204215189
  energy_consumed: 0.23948392867068316
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.972
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
  results:
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoMSMARCO R100
      type: NanoMSMARCO_R100
    metrics:
    - type: map
      value: 0.49
      name: Map
    - type: mrr@10
      value: 0.4792
      name: Mrr@10
    - type: ndcg@10
      value: 0.5526
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNFCorpus R100
      type: NanoNFCorpus_R100
    metrics:
    - type: map
      value: 0.3317
      name: Map
    - type: mrr@10
      value: 0.5575
      name: Mrr@10
    - type: ndcg@10
      value: 0.3642
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNQ R100
      type: NanoNQ_R100
    metrics:
    - type: map
      value: 0.5829
      name: Map
    - type: mrr@10
      value: 0.5914
      name: Mrr@10
    - type: ndcg@10
      value: 0.6488
      name: Ndcg@10
  - task:
      type: cross-encoder-nano-beir
      name: Cross Encoder Nano BEIR
    dataset:
      name: NanoBEIR R100 mean
      type: NanoBEIR_R100_mean
    metrics:
    - type: map
      value: 0.4682
      name: Map
    - type: mrr@10
      value: 0.5427
      name: Mrr@10
    - type: ndcg@10
      value: 0.5219
      name: Ndcg@10
---
# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-seeded")
# Get scores for pairs of texts
pairs = [
['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'How many calories in an egg',
[
'There are on average between 55 and 80 calories in an egg depending on its size.',
'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
'Most of the calories in an egg come from the yellow yolk in the center.',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
|:------------|:---------------------|:---------------------|:---------------------|
| map | 0.4900 (+0.0004) | 0.3317 (+0.0707) | 0.5829 (+0.1632) |
| mrr@10 | 0.4792 (+0.0017) | 0.5575 (+0.0577) | 0.5914 (+0.1647) |
| **ndcg@10** | **0.5526 (+0.0122)** | **0.3642 (+0.0391)** | **0.6488 (+0.1481)** |
#### Cross Encoder Nano BEIR
* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
],
"rerank_k": 100,
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | Value |
|:------------|:---------------------|
| map | 0.4682 (+0.0781) |
| mrr@10 | 0.5427 (+0.0747) |
| **ndcg@10** | **0.5219 (+0.0665)** |
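A minimal sketch of re-running this evaluation, assuming the evaluator constructor mirrors the parameters listed above and that `model` is the CrossEncoder loaded in the usage section:

```python
# Hypothetical re-run of the NanoBEIR evaluation summarized above.
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
print(evaluator(model))   # `model` is the CrossEncoder from the usage example
```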
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### ms_marco
* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 78,704 training samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
| | query | docs | labels |
|:--------|:-----------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| type | string | list | list |
| details | <ul><li>min: 11 characters</li><li>mean: 33.61 characters</li><li>max: 85 characters</li></ul> | <ul><li>min: 2 elements</li><li>mean: 6.00 elements</li><li>max: 10 elements</li></ul> | <ul><li>min: 2 elements</li><li>mean: 6.00 elements</li><li>max: 10 elements</li></ul> |
* Samples:
| query | docs | labels |
|:------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
| <code>what does syllables mean</code> | <code>['A syllable is a unit of organization for a sequence of speech sounds. For example, the word water is composed of two syllables: wa and ter. A syllable is typically made up of a syllable nucleus (most often a vowel) with optional initial and final margins (typically, consonants). Syllables are often considered the phonological building blocks of words. They can influence the rhythm of a language, its prosody, its poetic meter and its stress patterns. The first syllable of a word is the initial syllable and the last syllable is the final syllable. In languages accented on one of the last three syllables, the last syllable is called the ultima, the next-to-last is called the penult, and the third syllable from the end is called the antepenult.', '1 A unit of pronunciation having one vowel sound, with or without surrounding consonants, forming the whole or a part of a word; for example, there are two syllables in water and three in inferno. Example sentences. 1 The vowels of the stresse...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
| <code>how long does it take to become a child psychiatrist</code> | <code>["The Path to Becoming a Psychologist. First, you will need a bachelor's degree (4 to 5 years), which teaches the fundamentals of psychology. After that, you will need a master's degree (2 to 3 years), which can qualify you to practice in the field as a case manager, employment specialist, or social worker.", 'For example, becoming a school psychologist can take a little as two years of graduate-level education, and only requires a master’s degree. On the other hand, if you want to become a child psychologist you will need to earn a doctorate degree, which can require up to seven additional years of psychologist schooling.', '1 During the first four years of medical school you take classes, do lab work, and learn about medical ethics. 2 You may not have the opportunity to do hands-on psychiatry work at this stage, but earning your medical degree is a requirement in the path to becoming a psychiatrist, so stick with it.', '1 Clinical Psychologist: Doctorate Degree in Psychology (4 to 7...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
| <code>how do great horned owls defend themselves</code> | <code>["Owls can't always successfully defend themselves from other animals, particularly their prey. Great horned owls, for example, are often found either dead or injured as a result of would-be prey like skunks and porcupines fighting back. Feet and Beak. Like other birds in the raptor group, owls of all species use their beaks and talons to defend themselves. An owl's feet are equipped with particularly long, sharp and curved claws, which he can dig into an adversary and use like hooks to tear and rip at flesh.", "Tom Brakefield/Stockbyte/Getty Images. Owls are raptors, birds of prey. They provide sustenance and defend themselves with strong, sharp breaks and talons. The owl's ability to avoid detection is perhaps the most important weapon in his defensive arsenal, since it allows him to avoid confrontation in the first place. Feet and Beak. Like other birds in the raptor group, owls of all species use their beaks and talons to defend themselves. An owl's feet are equipped with particula...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>PListMLELoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#plistmleloss) with these parameters:
```json
{
"lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight",
"activation_fct": "torch.nn.modules.linear.Identity",
"mini_batch_size": 16,
"respect_input_order": true
}
```
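  A minimal sketch of constructing this loss for training, assuming the constructor accepts the parameters listed above (per the linked sentence-transformers documentation):

  ```python
  # Hypothetical setup of the listwise loss used for this model.
  from sentence_transformers import CrossEncoder
  from sentence_transformers.cross_encoder.losses import PListMLELoss

  model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)
  loss = PListMLELoss(model, mini_batch_size=16, respect_input_order=True)
  ```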
### Evaluation Dataset
#### ms_marco
* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 1,000 evaluation samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
| | query | docs | labels |
|:--------|:-----------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
| type | string | list | list |
| details | <ul><li>min: 12 characters</li><li>mean: 33.62 characters</li><li>max: 99 characters</li></ul> | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul> | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul> |
* Samples:
| query | docs | labels |
|:------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
| <code>what age do kids fly free?</code> | <code>["If you're taking a domestic flight with your infant, your airline will likely allow the baby to fly at no cost -- provided you hold him on your lap throughout the flight. Generally, American Airlines allows children younger than two years of age to fly for free with a parent or another adult over the age of 18. You'll save cash, though you'll likely be uncomfortable after a short time unless you're traveling with a partner or other adult who can take turns holding the baby. ", "Unaccompanied Minor Program. The Unaccompanied Minor Program is required for all children 5-14 years old when not traveling in the same compartment with an adult who is at least 18 years old or the child's parent/legal guardian. The program is optional for children 15-17 years old. ", 'Most airlines let under 2 fly for free (not under 3).If flying internationally,taxes or a small service fee usually 10% of adult fare will have to be paid. ANOTHER ANSWER I totally agree with answer #2. Whether you have a newbor...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
| <code>extensor muscles of the hand that are innervated by radial nerve</code> | <code>['Extrinsic muscles of the hand innervated by the radial nerve. extensor digitorum communis (EDC), extensor digiti minimi (EDM), extensor indicis, extensor pollicis longus (EPL), extensor pollicis brevis (EPB), abductor pollicis longus (APL).', 'The radial nerve contributed 1 to 3 branches to the brachialis in 10 of 20 specimens. In all specimens, the radial nerve innervated all of the extensor fore-arm muscles. In 2 of 20 specimens, there was an extensor medius proprius (EMP) muscle.', 'The thenar muscles are three short muscles located at the base of the thumb. The muscle bellies produce a bulge, known as the thenar eminence. They are responsible for the fine movements of the thumb. The median nerve innervates all the thenar muscles.', 'A total of 27 bones constitute the basic skeleton of the wrist and hand. The hand is innervated by 3 nerves — the median, ulnar, and radial nerves — each of which has sensory and motor components. The muscles of the hand are divided into intrinsic and...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
| <code>what does domestic limited liability company mean</code> | <code>['2. Domestic limited liability company means an entity that is an unincorporated association having one or more members and that is organized under ORS chapter 63. 4. Look beforeyou eat. Portland-area restaurant health scores. 1. Domestic limited liability company means an entity that is an unincorporated association having one or more members and that is organized under ORS chapter 63.', 'To register a Domestic Limited Liability Company in Hawaii, you must file the Articles of Organization for Limited Liability Company Form LLC-1 with the appropriate filing fee(s) . Use the links above to register and pay online or to access our fillable PDF forms which you can print and mail in with your payment. ', "I was talking to someone the other day who has a limited liability company (LLC). She is doing business in several states and she said she was told she must register as a foreign LLC in each state. She wondered why it was called a foreign LLC, since she wasn't doing business outside t...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
* Loss: [<code>PListMLELoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#plistmleloss) with these parameters:
```json
{
"lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight",
"activation_fct": "torch.nn.modules.linear.Identity",
"mini_batch_size": 16,
"respect_input_order": true
}
```
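For readers who want to reproduce this setup, the configuration above maps onto the loss constructor roughly as follows. This is a minimal sketch added for illustration, not part of the original card; the base checkpoint name is a placeholder — substitute the base model named at the top of this card.

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses.PListMLELoss import (
    PListMLELoss,
    PListMLELambdaWeight,
)

# Placeholder base checkpoint; use the base model this card was trained from.
model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)

# Mirrors the JSON above: position-aware lambda weighting, the default
# Identity activation, mini-batches of 16, labels taken in input order.
loss = PListMLELoss(
    model,
    lambda_weight=PListMLELambdaWeight(),
    mini_batch_size=16,
    respect_input_order=True,
)
```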
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:------------------------:|:-------------------------:|:--------------------:|:--------------------------:|
| -1 | -1 | - | - | 0.0300 (-0.5104) | 0.2528 (-0.0723) | 0.0168 (-0.4839) | 0.0999 (-0.3555) |
| 0.0002 | 1 | 2.2023 | - | - | - | - | - |
| 0.0508 | 250 | 2.1003 | - | - | - | - | - |
| 0.1016 | 500 | 1.9606 | 1.9318 | 0.2069 (-0.3335) | 0.2496 (-0.0755) | 0.2308 (-0.2699) | 0.2291 (-0.2263) |
| 0.1525 | 750 | 1.8932 | - | - | - | - | - |
| 0.2033 | 1000 | 1.8711 | 1.8656 | 0.4275 (-0.1129) | 0.2878 (-0.0372) | 0.4897 (-0.0109) | 0.4017 (-0.0537) |
| 0.2541 | 1250 | 1.8597 | - | - | - | - | - |
| 0.3049 | 1500 | 1.8486 | 1.8518 | 0.5873 (+0.0469) | 0.3577 (+0.0327) | 0.5874 (+0.0868) | 0.5108 (+0.0555) |
| 0.3558 | 1750 | 1.8415 | - | - | - | - | - |
| 0.4066 | 2000 | 1.8338 | 1.8441 | 0.5467 (+0.0062) | 0.3619 (+0.0368) | 0.5936 (+0.0929) | 0.5007 (+0.0453) |
| 0.4574 | 2250 | 1.8189 | - | - | - | - | - |
| 0.5082 | 2500 | 1.8338 | 1.8293 | 0.5523 (+0.0119) | 0.3676 (+0.0426) | 0.6452 (+0.1446) | 0.5217 (+0.0664) |
| 0.5591 | 2750 | 1.8109 | - | - | - | - | - |
| 0.6099 | 3000 | 1.8291 | 1.8306 | 0.5489 (+0.0085) | 0.3649 (+0.0398) | 0.6360 (+0.1353) | 0.5166 (+0.0612) |
| 0.6607 | 3250 | 1.8124 | - | - | - | - | - |
| **0.7115** | **3500** | **1.8205** | **1.8301** | **0.5526 (+0.0122)** | **0.3642 (+0.0391)** | **0.6488 (+0.1481)** | **0.5219 (+0.0665)** |
| 0.7624 | 3750 | 1.8166 | - | - | - | - | - |
| 0.8132 | 4000 | 1.8223 | 1.8205 | 0.5512 (+0.0108) | 0.3578 (+0.0328) | 0.6173 (+0.1167) | 0.5088 (+0.0534) |
| 0.8640 | 4250 | 1.8129 | - | - | - | - | - |
| 0.9148 | 4500 | 1.8132 | 1.8214 | 0.5364 (-0.0040) | 0.3603 (+0.0353) | 0.6257 (+0.1251) | 0.5075 (+0.0521) |
| 0.9656 | 4750 | 1.8188 | - | - | - | - | - |
| -1 | -1 | - | - | 0.5526 (+0.0122) | 0.3642 (+0.0391) | 0.6488 (+0.1481) | 0.5219 (+0.0665) |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.239 kWh
- **Carbon Emitted**: 0.093 kg of CO2
- **Hours Used**: 0.972 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### PListMLELoss
```bibtex
@inproceedings{lan2014position,
title={Position-Aware ListMLE: A Sequential Learning Process for Ranking.},
author={Lan, Yanyan and Zhu, Yadong and Guo, Jiafeng and Niu, Shuzi and Cheng, Xueqi},
booktitle={UAI},
volume={14},
pages={449--458},
year={2014}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF | mradermacher | "2025-01-02T05:58:32Z" | 13 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Kaoeiri/MS-MagpantheonselRP-22B-14.1-Recalculated",
"base_model:quantized:Kaoeiri/MS-MagpantheonselRP-22B-14.1-Recalculated",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-02T05:20:57Z" | ---
base_model: Kaoeiri/MS-MagpantheonselRP-22B-14.1-Recalculated
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Kaoeiri/MS-MagpantheonselRP-22B-14.1-Recalculated
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
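As a concrete illustration (an addition, not part of the original card), here is one way to fetch a single quant and run it locally. The filename matches the Q4_K_S entry in the table below, and the last line assumes a locally built llama.cpp `llama-cli` binary.

```bash
pip install -U "huggingface_hub[cli]"
# Download just the Q4_K_S quant (see the table below for other options).
huggingface-cli download mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF \
  MS-MagpantheonselRP-22B-14.1-Recalculated.Q4_K_S.gguf --local-dir .
# Run it with llama.cpp.
./llama-cli -m MS-MagpantheonselRP-22B-14.1-Recalculated.Q4_K_S.gguf -p "Hello," -n 64
```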
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF/resolve/main/MS-MagpantheonselRP-22B-14.1-Recalculated.Q2_K.gguf) | Q2_K | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF/resolve/main/MS-MagpantheonselRP-22B-14.1-Recalculated.Q3_K_S.gguf) | Q3_K_S | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF/resolve/main/MS-MagpantheonselRP-22B-14.1-Recalculated.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF/resolve/main/MS-MagpantheonselRP-22B-14.1-Recalculated.Q3_K_L.gguf) | Q3_K_L | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF/resolve/main/MS-MagpantheonselRP-22B-14.1-Recalculated.IQ4_XS.gguf) | IQ4_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF/resolve/main/MS-MagpantheonselRP-22B-14.1-Recalculated.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF/resolve/main/MS-MagpantheonselRP-22B-14.1-Recalculated.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF/resolve/main/MS-MagpantheonselRP-22B-14.1-Recalculated.Q5_K_S.gguf) | Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF/resolve/main/MS-MagpantheonselRP-22B-14.1-Recalculated.Q5_K_M.gguf) | Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF/resolve/main/MS-MagpantheonselRP-22B-14.1-Recalculated.Q6_K.gguf) | Q6_K | 18.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MS-MagpantheonselRP-22B-14.1-Recalculated-GGUF/resolve/main/MS-MagpantheonselRP-22B-14.1-Recalculated.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
shyp/Hoshi_model | shyp | "2024-05-30T11:37:54Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-30T11:16:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ggallipoli/t5-base_for2inf_family | ggallipoli | "2024-12-03T00:01:37Z" | 113 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"style-transfer",
"formality-transfer",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-12-01T10:19:29Z" | ---
language:
- en
pipeline_tag: text2text-generation
library_name: transformers
tags:
- style-transfer
- formality-transfer
---
# Text Style Transfer using CycleGANs
This repository contains the models from the paper "Self-supervised Text Style Transfer using Cycle-Consistent Adversarial Networks" (ACM TIST 2024).\
The work introduces a novel approach to Text Style Transfer using CycleGANs with sequence-level supervision and Transformer architectures.
## Available Models
### Formality transfer
#### GYAFC dataset (Family & Relationships)
| model | checkpoint |
|:----------:|:------------------------------------------------------:|
| BART base | [informal-to-formal](https://huggingface.co/ggallipoli/bart-base_inf2for_family), [formal-to-informal](https://huggingface.co/ggallipoli/bart-base_for2inf_family) |
| BART large | [informal-to-formal](https://huggingface.co/ggallipoli/bart-large_inf2for_family), [formal-to-informal](https://huggingface.co/ggallipoli/bart-large_for2inf_family) |
| T5 small | [informal-to-formal](https://huggingface.co/ggallipoli/t5-small_inf2for_family), [formal-to-informal](https://huggingface.co/ggallipoli/t5-small_for2inf_family) |
| T5 base | [informal-to-formal](https://huggingface.co/ggallipoli/t5-base_inf2for_family), [formal-to-informal](https://huggingface.co/ggallipoli/t5-base_for2inf_family) |
| T5 large | [informal-to-formal](https://huggingface.co/ggallipoli/t5-large_inf2for_family), [formal-to-informal](https://huggingface.co/ggallipoli/t5-large_for2inf_family) |
| BERT base | [style classifier](https://huggingface.co/ggallipoli/formality_classifier_gyafc_family) |
#### GYAFC dataset (Entertainment & Music)
| model | checkpoint |
|:----------:|:------------------------------------------------------:|
| BART base | [informal-to-formal](https://huggingface.co/ggallipoli/bart-base_inf2for_music), [formal-to-informal](https://huggingface.co/ggallipoli/bart-base_for2inf_music) |
| BART large | [informal-to-formal](https://huggingface.co/ggallipoli/bart-large_inf2for_music), [formal-to-informal](https://huggingface.co/ggallipoli/bart-large_for2inf_music) |
| T5 small | [informal-to-formal](https://huggingface.co/ggallipoli/t5-small_inf2for_music), [formal-to-informal](https://huggingface.co/ggallipoli/t5-small_for2inf_music) |
| T5 base | [informal-to-formal](https://huggingface.co/ggallipoli/t5-base_inf2for_music), [formal-to-informal](https://huggingface.co/ggallipoli/t5-base_for2inf_music) |
| T5 large | [informal-to-formal](https://huggingface.co/ggallipoli/t5-large_inf2for_music), [formal-to-informal](https://huggingface.co/ggallipoli/t5-large_for2inf_music) |
| BERT base | [style classifier](https://huggingface.co/ggallipoli/formality_classifier_gyafc_music) |
### Sentiment transfer
#### Yelp dataset
| model | checkpoint |
|:----------:|:------------------------------------------------------:|
| BART base | [negative-to-positive](https://huggingface.co/ggallipoli/bart-base_neg2pos), [positive-to-negative](https://huggingface.co/ggallipoli/bart-base_pos2neg) |
| BART large | [negative-to-positive](https://huggingface.co/ggallipoli/bart-large_neg2pos), [positive-to-negative](https://huggingface.co/ggallipoli/bart-large_pos2neg) |
| T5 small | [negative-to-positive](https://huggingface.co/ggallipoli/t5-small_neg2pos), [positive-to-negative](https://huggingface.co/ggallipoli/t5-small_pos2neg) |
| T5 base | [negative-to-positive](https://huggingface.co/ggallipoli/t5-base_neg2pos), [positive-to-negative](https://huggingface.co/ggallipoli/t5-base_pos2neg) |
| T5 large | [negative-to-positive](https://huggingface.co/ggallipoli/t5-large_neg2pos), [positive-to-negative](https://huggingface.co/ggallipoli/t5-large_pos2neg) |
| BERT base | [style classifier](https://huggingface.co/ggallipoli/sentiment_classifier_yelp) |
## Model Description
The models implement a CycleGAN architecture for Text Style Transfer that:
- Applies self-supervision directly at sequence level
- Maintains content while transferring style attributes
- Employs pre-trained style classifiers to guide generation
- Uses Transformer-based generators and discriminators
The models achieve state-of-the-art results on both formality and sentiment transfer tasks.
## Usage
Both generators and style classifiers can be used with the Hugging Face 🤗 transformers library:
Each generator model can be loaded as:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("[GENERATOR_MODEL]")
tokenizer = AutoTokenizer.from_pretrained("[GENERATOR_MODEL]")
```
The style classifiers can be loaded as:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
classifier = AutoModelForSequenceClassification.from_pretrained("[CLASSIFIER_MODEL]")
tokenizer = AutoTokenizer.from_pretrained("[CLASSIFIER_MODEL]")
```
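A short end-to-end sketch using one of this repository's generators follows; the raw-sentence input format and the generation settings are assumptions, as the card does not specify them.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "ggallipoli/t5-base_for2inf_family"  # formal -> informal generator
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "I would be delighted to attend the meeting."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```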
## Citation
For more details, you can refer to the [paper](https://dl.acm.org/doi/10.1145/3678179).
```bibtex
@article{10.1145/3678179,
author = {La Quatra, Moreno and Gallipoli, Giuseppe and Cagliero, Luca},
title = {Self-supervised Text Style Transfer Using Cycle-Consistent Adversarial Networks},
year = {2024},
issue_date = {October 2024},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {15},
number = {5},
issn = {2157-6904},
url = {https://doi.org/10.1145/3678179},
doi = {10.1145/3678179},
journal = {ACM Trans. Intell. Syst. Technol.},
month = nov,
articleno = {110},
numpages = {38},
keywords = {Text Style Transfer, Sentiment transfer, Formality transfer, Cycle-consistent Generative Adversarial Networks, Transformers}
}
```
## Code
The full implementation is available at: https://github.com/gallipoligiuseppe/TST-CycleGAN.
## License
This work is licensed under the <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. |
therealcyberlord/fake-news-classification-distilbert | therealcyberlord | "2023-04-06T17:48:25Z" | 2,227 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-08-03T21:45:25Z" | ---
license: mit
widget:
- text: "Health and Human Services Secretary Xavier Becerra declared the monkeypox outbreak a public health emergency on Thursday in an effort to galvanize awareness and unlock additional flexibility and funding to fight the virus’s spread."
---
# Fake News Classification Distilbert 🤗
This model was trained on 32,326 news articles from CLÉMENT BISAILLON's dataset on Kaggle. The goal is to distinguish fake news from real news.
0 : Fake News, 1 : Real News
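A minimal usage sketch (an addition, not from the original card), using the label mapping above:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="therealcyberlord/fake-news-classification-distilbert",
)
# Returns the predicted label (0 = fake, 1 = real per the mapping above) with a score.
print(clf("Health officials declared the outbreak a public health emergency."))
```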
# Sources
Dataset used: https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset
Base Distilbert: https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english
|
henryscheible/rte_roberta-base_144 | henryscheible | "2023-01-18T20:16:19Z" | 0 | 0 | null | [
"pytorch",
"generated_from_trainer",
"en",
"dataset:glue",
"license:mit",
"model-index",
"region:us"
] | null | "2023-01-18T20:03:49Z" | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: rte_roberta-base_144
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.7256317689530686
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rte_roberta-base_144
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6194
- Accuracy: 0.7256
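A hedged inference sketch (an addition, not in the original card): RTE is a sentence-pair task, so inputs are passed as a text/text_pair dict, and the emitted label names depend on the checkpoint's config.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="henryscheible/rte_roberta-base_144")
result = clf({
    "text": "A new vaccine was approved by regulators last week.",
    "text_pair": "Regulators approved a vaccine.",
})
print(result)  # label names come from the checkpoint's config (e.g. LABEL_0/LABEL_1)
```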
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Liogl/RL-Course | Liogl | "2023-12-03T18:48:43Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-03T18:48:06Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO-MLP
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.46 +/- 32.22
name: mean_reward
verified: false
---
# **PPO-MLP** Agent playing **LunarLander-v2**
This is a trained model of a **PPO-MLP** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
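The card above leaves usage as a TODO; a minimal sketch, assuming the standard `huggingface_sb3` loader, might look like the following (the checkpoint filename is a guess — check the repository's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename; verify against the files in this repo.
checkpoint = load_from_hub(repo_id="Liogl/RL-Course",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```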
|
emegona/finetuning-pysentimiento-war-tweets | emegona | "2022-07-11T03:33:41Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-06-30T13:03:38Z" | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-pysentimiento-war-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-pysentimiento-war-tweets
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) on a dataset of 1500 tweets from Peruvian accounts. It achieves the following results on the evaluation set:
- Loss: 1.7689
- Accuracy: 0.7378
- F1: 0.7456
## Model description
This model is a fine-tuned version of [finiteautomata/beto-sentiment-analysis](https://huggingface.co/finiteautomata/beto-sentiment-analysis) using five labels: **pro_russia**, **against_ukraine**, **neutral**, **against_russia**, **pro_ukraine**.
## Intended uses & limitations
This model is intended to classify text (more specifically, Spanish tweets) according to the position it expresses on the Russo-Ukrainian war.
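A brief usage sketch (an addition, not part of the original card):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="emegona/finetuning-pysentimiento-war-tweets",
)
# The five classes are listed above; the emitted label names depend on the
# checkpoint's config.
print(clf("Rusia debe retirarse de Ucrania inmediatamente."))
```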
## Training and evaluation data
We used an 80/20 training/test split on the aforementioned dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mlfoundations-dev/hp_ablations_gemma_adambeta2_0.995_dcftv1.2 | mlfoundations-dev | "2024-12-07T12:18:40Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-06T09:36:45Z" | ---
library_name: transformers
license: gemma
base_model: google/gemma-2-9b
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: hp_ablations_gemma_adambeta2_0.995_dcftv1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hp_ablations_gemma_adambeta2_0.995_dcftv1.2
This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b) on the mlfoundations-dev/oh-dcft-v1.2_no-curation_gpt-4o-mini dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- total_eval_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.995) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 1738
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6082 | 0.9998 | 334 | 0.6192 |
| 0.5586 | 1.9996 | 668 | 0.6157 |
| 0.5039 | 2.9994 | 1002 | 0.6349 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.0.2
- Tokenizers 0.20.3
|
anup-zessta/Chambal_table_singapore_chunk | anup-zessta | "2025-03-05T16:16:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-11B-Vision-Instruct",
"base_model:finetune:unsloth/Llama-3.2-11B-Vision-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-03-05T14:57:12Z" | ---
base_model: unsloth/Llama-3.2-11B-Vision-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** anup-zessta
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-11B-Vision-Instruct
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AbstractPhil/clips | AbstractPhil | "2025-04-09T19:05:20Z" | 41 | 5 | null | [
"gguf",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2025-03-22T13:47:12Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
codxsolutions/t5-small-correction-fp32 | codxsolutions | "2025-03-13T21:24:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-03-13T21:24:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ugaoo/model_vql3enxg | ugaoo | "2025-03-04T00:02:26Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"generated_from_trainer",
"dataset:ugaoo/shortinstruction_input_output_calculator_data",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-03T23:54:30Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- generated_from_trainer
datasets:
- ugaoo/shortinstruction_input_output_calculator_data
model-index:
- name: out/Qwen_Qwen2.5_7B_Instruct_ugaoo_shortinstruction_input_output_calculator_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
base_model: Qwen/Qwen2.5-7B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ugaoo/shortinstruction_input_output_calculator_data
type: alpaca
val_set_size: 0
output_dir: ./out/Qwen_Qwen2.5_7B_Instruct_ugaoo_shortinstruction_input_output_calculator_data
sequence_len: 4000
sample_packing: true
pad_to_sequence_len: true
adapter: qlora
lora_r: 256
lora_alpha: 512
lora_dropout: 0.05
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- up_proj
- down_proj
- gate_proj
wandb_project: cosmosearch
wandb_entity:
wandb_watch:
wandb_name: Qwen_Qwen2.5_7B_Instruct_ugaoo_shortinstruction_input_output_calculator_data
wandb_log_model:
gradient_accumulation_steps: 10
micro_batch_size: 4
num_epochs: 6
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 5e-6
train_on_inputs: false
group_by_length: false
bf16: auto
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
evals_per_epoch: 6
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
save_total_limit: 6
```
</details><br>
# out/Qwen_Qwen2.5_7B_Instruct_ugaoo_shortinstruction_input_output_calculator_data
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the ugaoo/shortinstruction_input_output_calculator_data dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 10
- total_train_batch_size: 40
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 6.0
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0 |
vocabtrimmer/mt5-small-frquad-qg-trimmed-fr-120000 | vocabtrimmer | "2023-04-28T14:12:37Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-03-15T11:21:17Z" | # Vocabulary Trimmed [lmqg/mt5-small-frquad-qg](https://huggingface.co/lmqg/mt5-small-frquad-qg): `vocabtrimmer/mt5-small-frquad-qg-trimmed-fr-120000`
This model is a trimmed version of [lmqg/mt5-small-frquad-qg](https://huggingface.co/lmqg/mt5-small-frquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to reduce model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-frquad-qg | vocabtrimmer/mt5-small-frquad-qg-trimmed-fr-120000 |
|:---------------------------|:---------------------------|:-----------------------------------------------------|
| parameter_size_full | 300,165,504 | 166,944,128 |
| parameter_size_embedding | 256,103,424 | 122,882,048 |
| vocab_size | 250,101 | 120,002 |
| compression_rate_full | 100.0 | 55.62 |
| compression_rate_embedding | 100.0 | 47.98 |
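As an aside (not part of the original card), the trimmed checkpoint loads like any mT5 seq2seq model. The highlight-style question-generation prompt below is an assumption based on typical lmqg-trained models, not something this card documents.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "vocabtrimmer/mt5-small-frquad-qg-trimmed-fr-120000"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# Assumed lmqg-style input: the answer span is wrapped in <hl> markers.
text = "generate question: <hl> Paris <hl> est la capitale de la France."
ids = model.generate(**tokenizer(text, return_tensors="pt"), max_new_tokens=32)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```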
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| fr | vocabtrimmer/mc4_validation | text | fr | validation | 120000 | 2 | |
mradermacher/rombos_Llama-3-13B-GGUF | mradermacher | "2024-12-10T04:48:27Z" | 17 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:rombodawg/rombos_Llama-3-13B",
"base_model:quantized:rombodawg/rombos_Llama-3-13B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2024-10-07T07:31:26Z" | ---
base_model: rombodawg/rombos_Llama-3-13B
language:
- en
library_name: transformers
license: other
license_link: https://llama.meta.com/llama3/license/
license_name: llama-3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/rombodawg/rombos_Llama-3-13B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/rombos_Llama-3-13B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.Q2_K.gguf) | Q2_K | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.IQ3_XS.gguf) | IQ3_XS | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.IQ3_M.gguf) | IQ3_M | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.Q3_K_L.gguf) | Q3_K_L | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.IQ4_XS.gguf) | IQ4_XS | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.Q4_K_S.gguf) | Q4_K_S | 7.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.Q5_K_S.gguf) | Q5_K_S | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.Q6_K.gguf) | Q6_K | 11.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/rombos_Llama-3-13B-GGUF/resolve/main/rombos_Llama-3-13B.Q8_0.gguf) | Q8_0 | 14.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Bodolaz/Unit-3-final | Bodolaz | "2023-06-26T14:53:14Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-26T14:52:45Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 257.00 +/- 38.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bodolaz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bodolaz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Bodolaz
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.1),
('learning_starts', 100000),
('n_timesteps', 500000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
e22vvb/EN_mt5-small_5_wikiSQL | e22vvb | "2024-01-23T09:03:21Z" | 92 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:wikisql",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-01-22T11:58:51Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikisql
model-index:
- name: EN_mt5-small_5_wikiSQL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EN_mt5-small_5_wikiSQL
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wikisql dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1353
- Rouge2 Precision: 0.8272
- Rouge2 Recall: 0.7504
- Rouge2 Fmeasure: 0.7811
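For inference, the checkpoint can be used like any seq2seq model; a minimal sketch (the plain-question input below is an assumption, since the exact wikiSQL preprocessing used during fine-tuning is not documented here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "e22vvb/EN_mt5-small_5_wikiSQL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical wikiSQL-style input; adjust to the format used during fine-tuning.
question = "What is the average age of players from Spain?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```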
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.286 | 1.0 | 4049 | 0.1954 | 0.7917 | 0.7111 | 0.7421 |
| 0.2149 | 2.0 | 8098 | 0.1560 | 0.815 | 0.7378 | 0.768 |
| 0.1892 | 3.0 | 12147 | 0.1436 | 0.8229 | 0.7452 | 0.7762 |
| 0.1792 | 4.0 | 16196 | 0.1383 | 0.826 | 0.7494 | 0.7799 |
| 0.1744 | 5.0 | 20245 | 0.1353 | 0.8272 | 0.7504 | 0.7811 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.7.dev0
- Tokenizers 0.13.3
|
HazemHM/rl_course_vizdoom_health_gathering_supreme | HazemHM | "2024-01-31T21:44:41Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-31T21:44:30Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.30 +/- 4.38
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r HazemHM/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Jonjew/LucyLiu | Jonjew | "2025-03-05T22:33:26Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | "2025-03-05T22:33:17Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
High res waist up portrait photo of a woman with glittery eye-shadow and
clear glossy lip-gloss. She is looking at the viewer seductively. She is
wearing a thin string-like black choker and hoop earrings. In the background
is a nightclub scene out of focus.,
,<lora:lucyliu_1990s_local_flux_1_standard-000039:1>
output:
url: images/00213-755798452.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: unknown
---
# Lucy Liu (1990s) (FLUX)
<Gallery />
## Model description
FROM https://civitai.com/models/795071/lucy-liu-1990s-flux?modelVersionId=889048
Strength 0.8-1.2. No keywords needed.
FLUX v1.0:
Trained on FLUX dev with 40 photos of Lucy Liu in the 1990s with detailed captions. Tested on FLUX.1 dev (full), FLUX fp8, and FLUX nf4! Use around strength 0.8-1.2. No keywords needed! Distilled CFG around 1-4 and CFG 1.0 (without a negative prompt). Clip skip 1. It can be used, for example, as follows:
Positive : {Artstyle, Character and scene description in usual FLUX fashion}, , <lora:lucyliu_1990s_local_flux_1_standard-000039:1>
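With the `diffusers` library, the LoRA can be applied on top of FLUX.1-dev; a minimal sketch (the prompt and inference settings are illustrative, and it assumes the safetensors file in this repo is picked up automatically by `load_lora_weights`):
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Jonjew/LucyLiu")

image = pipe(
    "portrait photo of a woman in a nightclub, film grain",
    guidance_scale=3.5,  # distilled CFG in the 1-4 range suggested above
    num_inference_steps=28,
).images[0]
image.save("lucyliu.png")
```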
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/LucyLiu/tree/main) them in the Files & versions tab.
|
Brhnglc/ppo-Huggy | Brhnglc | "2023-01-12T23:22:08Z" | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-01-12T23:21:59Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: Brhnglc/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
RichardErkhov/siddhant7876_-_phi35_dpo_bf16_final-gguf | RichardErkhov | "2025-04-03T02:24:36Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-03T01:10:13Z" | |
karim155/swin-tiny-patch4-window7-224-finetuned | karim155 | "2024-09-13T13:40:38Z" | 223 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-08-18T23:41:22Z" | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: swin-tiny-patch4-window7-224-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9034
- Accuracy: 0.6660
- Precision: 0.6546
- Recall: 0.6660
- F1: 0.6519
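For inference, a minimal sketch using the `transformers` pipeline (the class names come from the repo's config, since the training dataset is not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="karim155/swin-tiny-patch4-window7-224-finetuned",
)
# "example.jpg" is a placeholder; any local path, URL, or PIL image works.
print(classifier("example.jpg"))
```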
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0809 | 0.9846 | 32 | 1.0485 | 0.5833 | 0.5506 | 0.5833 | 0.5627 |
| 1.0052 | 2.0 | 65 | 1.0600 | 0.5727 | 0.5941 | 0.5727 | 0.5170 |
| 0.9429 | 2.9846 | 97 | 0.9755 | 0.6160 | 0.5878 | 0.6160 | 0.5837 |
| 0.9497 | 4.0 | 130 | 0.9318 | 0.6497 | 0.6458 | 0.6497 | 0.6313 |
| 0.8807 | 4.9846 | 162 | 0.9541 | 0.6304 | 0.6321 | 0.6304 | 0.6200 |
| 0.8089 | 6.0 | 195 | 0.9556 | 0.6266 | 0.6270 | 0.6266 | 0.6150 |
| 0.801 | 6.9846 | 227 | 0.9050 | 0.6603 | 0.6512 | 0.6603 | 0.6472 |
| 0.7753 | 8.0 | 260 | 0.9134 | 0.6506 | 0.6440 | 0.6506 | 0.6440 |
| 0.6986 | 8.9846 | 292 | 0.9138 | 0.6554 | 0.6468 | 0.6554 | 0.6436 |
| 0.7107 | 9.8462 | 320 | 0.9034 | 0.6660 | 0.6546 | 0.6660 | 0.6519 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Onutoa/1_9e-3_5_0.1 | Onutoa | "2023-09-07T17:44:57Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-07T14:46:32Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: 1_9e-3_5_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 1_9e-3_5_0.1
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9096
- Accuracy: 0.7495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.009
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6689 | 1.0 | 590 | 1.8930 | 0.3792 |
| 1.4177 | 2.0 | 1180 | 1.1713 | 0.6217 |
| 1.4671 | 3.0 | 1770 | 0.9910 | 0.4239 |
| 1.2704 | 4.0 | 2360 | 1.0000 | 0.4969 |
| 1.1101 | 5.0 | 2950 | 0.8316 | 0.6459 |
| 1.0767 | 6.0 | 3540 | 0.9325 | 0.6428 |
| 1.0047 | 7.0 | 4130 | 1.4778 | 0.4725 |
| 0.9251 | 8.0 | 4720 | 0.7582 | 0.6801 |
| 0.8846 | 9.0 | 5310 | 0.8984 | 0.6737 |
| 0.8439 | 10.0 | 5900 | 0.8034 | 0.7018 |
| 0.8068 | 11.0 | 6490 | 0.8305 | 0.6624 |
| 0.7643 | 12.0 | 7080 | 1.0910 | 0.5859 |
| 0.7306 | 13.0 | 7670 | 0.7682 | 0.6908 |
| 0.6488 | 14.0 | 8260 | 0.7171 | 0.7226 |
| 0.6521 | 15.0 | 8850 | 0.6864 | 0.7202 |
| 0.6048 | 16.0 | 9440 | 0.7442 | 0.7260 |
| 0.5536 | 17.0 | 10030 | 1.0092 | 0.6532 |
| 0.5654 | 18.0 | 10620 | 0.7884 | 0.7052 |
| 0.5349 | 19.0 | 11210 | 0.7640 | 0.7073 |
| 0.4958 | 20.0 | 11800 | 0.7724 | 0.7343 |
| 0.4706 | 21.0 | 12390 | 0.7728 | 0.7183 |
| 0.459 | 22.0 | 12980 | 0.7394 | 0.7254 |
| 0.4362 | 23.0 | 13570 | 0.7550 | 0.7196 |
| 0.4176 | 24.0 | 14160 | 0.7744 | 0.7248 |
| 0.4012 | 25.0 | 14750 | 0.8998 | 0.7364 |
| 0.388 | 26.0 | 15340 | 0.9046 | 0.7104 |
| 0.3852 | 27.0 | 15930 | 0.7894 | 0.7278 |
| 0.3737 | 28.0 | 16520 | 0.8274 | 0.7391 |
| 0.3456 | 29.0 | 17110 | 0.7725 | 0.7471 |
| 0.34 | 30.0 | 17700 | 0.9009 | 0.7260 |
| 0.3247 | 31.0 | 18290 | 0.7733 | 0.7398 |
| 0.3197 | 32.0 | 18880 | 0.8370 | 0.7385 |
| 0.3109 | 33.0 | 19470 | 0.8705 | 0.7269 |
| 0.3047 | 34.0 | 20060 | 0.8475 | 0.7373 |
| 0.2815 | 35.0 | 20650 | 0.9676 | 0.7407 |
| 0.2782 | 36.0 | 21240 | 0.8183 | 0.7450 |
| 0.2808 | 37.0 | 21830 | 0.8551 | 0.7394 |
| 0.2639 | 38.0 | 22420 | 0.9552 | 0.7440 |
| 0.2599 | 39.0 | 23010 | 0.8785 | 0.7422 |
| 0.2563 | 40.0 | 23600 | 1.0538 | 0.7364 |
| 0.2471 | 41.0 | 24190 | 0.9479 | 0.7502 |
| 0.2524 | 42.0 | 24780 | 0.9348 | 0.7398 |
| 0.2419 | 43.0 | 25370 | 0.9101 | 0.7401 |
| 0.2338 | 44.0 | 25960 | 0.8726 | 0.7394 |
| 0.2218 | 45.0 | 26550 | 0.8953 | 0.7416 |
| 0.2115 | 46.0 | 27140 | 0.8966 | 0.7291 |
| 0.2234 | 47.0 | 27730 | 0.9359 | 0.7416 |
| 0.2047 | 48.0 | 28320 | 0.9434 | 0.7284 |
| 0.2218 | 49.0 | 28910 | 0.9202 | 0.7465 |
| 0.2075 | 50.0 | 29500 | 0.8866 | 0.7394 |
| 0.1982 | 51.0 | 30090 | 0.9081 | 0.7358 |
| 0.2064 | 52.0 | 30680 | 0.9691 | 0.7321 |
| 0.1955 | 53.0 | 31270 | 0.9527 | 0.7275 |
| 0.2006 | 54.0 | 31860 | 0.8744 | 0.7456 |
| 0.2021 | 55.0 | 32450 | 0.9529 | 0.7419 |
| 0.1932 | 56.0 | 33040 | 0.9040 | 0.7391 |
| 0.1823 | 57.0 | 33630 | 0.9188 | 0.7382 |
| 0.1726 | 58.0 | 34220 | 0.8715 | 0.7385 |
| 0.1867 | 59.0 | 34810 | 0.9165 | 0.7410 |
| 0.1831 | 60.0 | 35400 | 0.9393 | 0.7431 |
| 0.1741 | 61.0 | 35990 | 0.9843 | 0.7502 |
| 0.1687 | 62.0 | 36580 | 0.9161 | 0.7419 |
| 0.1712 | 63.0 | 37170 | 0.9630 | 0.7431 |
| 0.1742 | 64.0 | 37760 | 0.9306 | 0.7443 |
| 0.1721 | 65.0 | 38350 | 0.9384 | 0.7446 |
| 0.1614 | 66.0 | 38940 | 0.9237 | 0.7401 |
| 0.1631 | 67.0 | 39530 | 0.9315 | 0.7404 |
| 0.1626 | 68.0 | 40120 | 0.8884 | 0.7434 |
| 0.1547 | 69.0 | 40710 | 0.9163 | 0.7483 |
| 0.1609 | 70.0 | 41300 | 0.9340 | 0.7422 |
| 0.1592 | 71.0 | 41890 | 0.9292 | 0.7352 |
| 0.1588 | 72.0 | 42480 | 0.8887 | 0.7495 |
| 0.1504 | 73.0 | 43070 | 0.9228 | 0.7480 |
| 0.1422 | 74.0 | 43660 | 0.9570 | 0.7361 |
| 0.1535 | 75.0 | 44250 | 0.9705 | 0.7446 |
| 0.1486 | 76.0 | 44840 | 0.9364 | 0.7477 |
| 0.146 | 77.0 | 45430 | 0.9385 | 0.7517 |
| 0.1519 | 78.0 | 46020 | 0.8991 | 0.7495 |
| 0.148 | 79.0 | 46610 | 0.9516 | 0.7483 |
| 0.1388 | 80.0 | 47200 | 0.9189 | 0.7462 |
| 0.1392 | 81.0 | 47790 | 0.8985 | 0.7474 |
| 0.1426 | 82.0 | 48380 | 0.9112 | 0.7459 |
| 0.1388 | 83.0 | 48970 | 0.9468 | 0.7456 |
| 0.1396 | 84.0 | 49560 | 0.9185 | 0.7474 |
| 0.1316 | 85.0 | 50150 | 0.9230 | 0.7434 |
| 0.1332 | 86.0 | 50740 | 0.9365 | 0.7388 |
| 0.1245 | 87.0 | 51330 | 0.9405 | 0.7502 |
| 0.1283 | 88.0 | 51920 | 0.9384 | 0.7453 |
| 0.1309 | 89.0 | 52510 | 0.9250 | 0.7483 |
| 0.127 | 90.0 | 53100 | 0.9176 | 0.7434 |
| 0.124 | 91.0 | 53690 | 0.9207 | 0.7446 |
| 0.1294 | 92.0 | 54280 | 0.8949 | 0.7489 |
| 0.1322 | 93.0 | 54870 | 0.9154 | 0.7495 |
| 0.1242 | 94.0 | 55460 | 0.9033 | 0.7508 |
| 0.1251 | 95.0 | 56050 | 0.9201 | 0.7502 |
| 0.1174 | 96.0 | 56640 | 0.9043 | 0.7480 |
| 0.1284 | 97.0 | 57230 | 0.9111 | 0.7489 |
| 0.1188 | 98.0 | 57820 | 0.9175 | 0.7489 |
| 0.1201 | 99.0 | 58410 | 0.9150 | 0.7498 |
| 0.1229 | 100.0 | 59000 | 0.9096 | 0.7495 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dbmdz/flair-historic-ner-lft | dbmdz | "2020-12-11T10:41:44Z" | 17 | 1 | flair | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"license:mit",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
inference: false
license: mit
---
# Towards Robust Named Entity Recognition for Historic German
Based on [our paper](https://www.aclweb.org/anthology/W19-4312/)
we release a new model trained on the LFT dataset.
**Note:** We use BPEmbeddings instead of the combination of
Wikipedia, Common Crawl and character embeddings (as used in the paper),
to save space and training/inference time.
# Results
| Dataset \ Run | Run 1 | Run 2 | Run 3† | Avg.
| ------------- | ----- | ----- | --------- | ------------
| Development | 76.32 | 76.13 | **76.36** | 76.27
| Test | 77.07 | 77.35 | 77.20 | 77.21
The paper reported an averaged F1-score of 77.51.
† denotes that this model is selected for upload.
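The tagger can be loaded directly from the Hub with Flair; a minimal sketch (the historic-German example sentence is illustrative):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("dbmdz/flair-historic-ner-lft")

# Illustrative historic German sentence.
sentence = Sentence("Theodor Fontane wurde in Neuruppin geboren .")
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```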
|
RichardErkhov/tianyil1_-_denas-llama2-gguf | RichardErkhov | "2024-09-02T04:38:01Z" | 6 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-09-02T01:41:14Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
denas-llama2 - GGUF
- Model creator: https://huggingface.co/tianyil1/
- Original model: https://huggingface.co/tianyil1/denas-llama2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [denas-llama2.Q2_K.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q2_K.gguf) | Q2_K | 2.36GB |
| [denas-llama2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [denas-llama2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [denas-llama2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [denas-llama2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [denas-llama2.Q3_K.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q3_K.gguf) | Q3_K | 3.07GB |
| [denas-llama2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [denas-llama2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [denas-llama2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [denas-llama2.Q4_0.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q4_0.gguf) | Q4_0 | 3.56GB |
| [denas-llama2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [denas-llama2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [denas-llama2.Q4_K.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q4_K.gguf) | Q4_K | 3.8GB |
| [denas-llama2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [denas-llama2.Q4_1.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q4_1.gguf) | Q4_1 | 3.95GB |
| [denas-llama2.Q5_0.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q5_0.gguf) | Q5_0 | 4.33GB |
| [denas-llama2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [denas-llama2.Q5_K.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q5_K.gguf) | Q5_K | 4.45GB |
| [denas-llama2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [denas-llama2.Q5_1.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q5_1.gguf) | Q5_1 | 4.72GB |
| [denas-llama2.Q6_K.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q6_K.gguf) | Q6_K | 5.15GB |
| [denas-llama2.Q8_0.gguf](https://huggingface.co/RichardErkhov/tianyil1_-_denas-llama2-gguf/blob/main/denas-llama2.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
license: llama2
---
# DENAS-LLAMA2
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Expert68/llama2_13b_instructed_version2 | Expert68 | "2023-10-15T10:06:39Z" | 1,535 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-14T02:27:16Z" | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
inference: false
license: apache-2.0
---
# Model Card
## Training Dataset
`llama2_13b_instructed` is trained on multiple datasets:
- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
- [GPT-4 Generated Data (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [UltraChat (en)](https://github.com/thunlp/UltraChat) |
mrhunghd/12799e43-d6d7-4b47-845a-715064fe3b80 | mrhunghd | "2025-01-21T00:34:02Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-21T00:07:48Z" | ---
library_name: peft
license: llama3.1
base_model: unsloth/Llama-3.1-Storm-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 12799e43-d6d7-4b47-845a-715064fe3b80
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.1-Storm-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ff96928aeca90664_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ff96928aeca90664_train_data.json
type:
field_input: tags
field_instruction: short description
field_output: LLM description
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: mrhunghd/12799e43-d6d7-4b47-845a-715064fe3b80
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ff96928aeca90664_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2a0104e5-6e0b-4f2e-9a39-647e3f6ae0fa
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 2a0104e5-6e0b-4f2e-9a39-647e3f6ae0fa
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 12799e43-d6d7-4b47-845a-715064fe3b80
This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4001
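The adapter can be applied to the base model with PEFT; a minimal sketch:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.1-Storm-8B")
model = PeftModel.from_pretrained(base, "mrhunghd/12799e43-d6d7-4b47-845a-715064fe3b80")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.1-Storm-8B")
```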
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.464 | 0.3413 | 200 | 0.4001 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nblinh63/02e94366-8ecb-4212-bf9d-67b9cd913802 | nblinh63 | "2025-01-22T20:21:58Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-22T19:33:54Z" | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 02e94366-8ecb-4212-bf9d-67b9cd913802
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Korabbit/llama-2-ko-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4eb645f2919c3020_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4eb645f2919c3020_train_data.json
type:
field_instruction: body
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nblinh63/02e94366-8ecb-4212-bf9d-67b9cd913802
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/4eb645f2919c3020_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5ba9a929-a389-4d0f-b704-240a5f0d1443
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5ba9a929-a389-4d0f-b704-240a5f0d1443
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 02e94366-8ecb-4212-bf9d-67b9cd913802
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8089 | 0.0171 | 200 | 1.9482 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Vampire-D/Llama-3.2-1B-HW | Vampire-D | "2025-03-11T13:55:28Z" | 0 | 0 | null | [
"safetensors",
"llama",
"trl",
"sft",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-11T12:58:56Z" | ---
license: apache-2.0
tags:
- trl
- sft
---
|
vocabtrimmer/mt5-small-jaquad-qg-trimmed-ja-120000 | vocabtrimmer | "2023-04-28T17:07:54Z" | 114 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-03-15T10:36:27Z" | # Vocabulary Trimmed [lmqg/mt5-small-jaquad-qg](https://huggingface.co/lmqg/mt5-small-jaquad-qg): `vocabtrimmer/mt5-small-jaquad-qg-trimmed-ja-120000`
This model is a trimmed version of [lmqg/mt5-small-jaquad-qg](https://huggingface.co/lmqg/mt5-small-jaquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-jaquad-qg | vocabtrimmer/mt5-small-jaquad-qg-trimmed-ja-120000 |
|:---------------------------|:---------------------------|:-----------------------------------------------------|
| parameter_size_full | 300,165,504 | 166,944,128 |
| parameter_size_embedding | 256,103,424 | 122,882,048 |
| vocab_size | 250,101 | 120,002 |
| compression_rate_full | 100.0 | 55.62 |
| compression_rate_embedding | 100.0 | 47.98 |
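The trimmed model can be used like the original question-generation checkpoint, e.g. via the `lmqg` package; a minimal sketch (assuming `lmqg`, which the original model was trained with, accepts the trimmed checkpoint; the Japanese context/answer pair is illustrative):
```python
from lmqg import TransformersQG

model = TransformersQG(model="vocabtrimmer/mt5-small-jaquad-qg-trimmed-ja-120000")
questions = model.generate_q(
    list_context=["フェルメールの作品では、17世紀のオランダの日常生活が描かれている。"],
    list_answer=["フェルメール"],
)
print(questions)
```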
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| ja | vocabtrimmer/mc4_validation | text | ja | validation | 120000 | 2 | |
AMR-KELEG/Sentence-ALDi-30 | AMR-KELEG | "2023-10-14T10:38:49Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"region:us"
] | text-classification | "2023-10-12T17:44:44Z" | ---
inference: false
---
# Model Card for Sentence-ALDi
[](https://github.com/AMR-KELEG/ALDi)
<!-- Provide a quick summary of what the model is/does. -->
A BERT-based model fine-tuned to estimate the Arabic Level of Dialectness of text.
### Model Description
<!-- Provide a longer summary of what this model is. -->
<!-- - **Developed by:** Amr Keleg -->
- **Model type:** Regression head on top of a BERT-based model fine-tuned for estimating the Arabic Level of Dialectness of text.
- **Language(s) (NLP):** Arabic.
<!--- **License:** [More Information Needed] -->
- **Finetuned from model:** [MarBERT](https://huggingface.co/UBC-NLP/MARBERT)
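A minimal usage sketch, assuming the regression head is exposed through `AutoModelForSequenceClassification` with a single output (the exact post-processing may differ from the official ALDi code):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "AMR-KELEG/Sentence-ALDi-30"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("كيف حالك اليوم؟", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Estimated level of dialectness: {score:.3f}")
```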
More information coming soon! |
jtatman/tinydolphin-2.8_1b-samantha-alpaca | jtatman | "2024-01-28T13:06:34Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-27T07:38:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
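In the absence of official instructions, a minimal sketch with the standard `transformers` API (the Alpaca-style prompt format is an assumption based on the model name):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jtatman/tinydolphin-2.8_1b-samantha-alpaca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Alpaca-style prompt is an assumption; adjust if the model expects another template.
prompt = "### Instruction:\nTell me about yourself.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```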
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
youralien/roberta-Structure-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current | youralien | "2025-03-13T06:49:17Z" | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-12T16:31:52Z" | ---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: roberta-Structure-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-Structure-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1962
- Accuracy: 0.9127
- Precision: 0.4457
- Recall: 0.7069
- F1: 0.5467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.253164784470222e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3038 | 1.0 | 167 | 0.2109 | 0.9089 | 0.3898 | 0.3966 | 0.3932 |
| 0.2729 | 2.0 | 334 | 0.2530 | 0.9012 | 0.4078 | 0.7241 | 0.5217 |
| 0.243 | 3.0 | 501 | 0.2277 | 0.9114 | 0.4409 | 0.7069 | 0.5430 |
| 0.2129 | 4.0 | 668 | 0.1612 | 0.9204 | 0.4767 | 0.7069 | 0.5694 |
| 0.1673 | 5.0 | 835 | 0.1962 | 0.9127 | 0.4457 | 0.7069 | 0.5467 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.21.0
|
hchcsuim/batch-size16_FFPP-raw_opencv-1FPS_faces-expand0-aligned_unaugmentation_seed-random_4_3060 | hchcsuim | "2024-07-07T07:08:01Z" | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-07-07T06:14:14Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_FFPP-raw_opencv-1FPS_faces-expand0-aligned_unaugmentation_seed-random_4_3060
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9845940917547761
- name: Precision
type: precision
value: 0.9849845112436898
- name: Recall
type: recall
value: 0.9954922309833024
- name: F1
type: f1
value: 0.9902104959630911
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_FFPP-raw_opencv-1FPS_faces-expand0-aligned_unaugmentation_seed-random_4_3060
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0425
- Accuracy: 0.9846
- Precision: 0.9850
- Recall: 0.9955
- F1: 0.9902
- Roc Auc: 0.9989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.0484 | 0.9996 | 1377 | 0.0425 | 0.9846 | 0.9850 | 0.9955 | 0.9902 | 0.9989 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
TnTerry/MEGL-BLIP-Baseline-Object | TnTerry | "2024-10-25T01:32:10Z" | 64 | 0 | transformers | [
"transformers",
"safetensors",
"blip",
"visual-question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | visual-question-answering | "2024-10-25T01:30:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
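In the absence of official instructions, a minimal VQA sketch with the standard BLIP classes (the image URL and question are illustrative, and it assumes the checkpoint uses the BLIP question-answering config indicated by the repo's pipeline tag):
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

model_id = "TnTerry/MEGL-BLIP-Baseline-Object"
processor = BlipProcessor.from_pretrained(model_id)
model = BlipForQuestionAnswering.from_pretrained(model_id)

# Illustrative inputs.
image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True
).raw)
inputs = processor(image, "How many cats are there?", return_tensors="pt")
outputs = model.generate(**inputs)
print(processor.decode(outputs[0], skip_special_tokens=True))
```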
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sail-rvc/Doki | sail-rvc | "2023-07-14T07:21:29Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:21:16Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Doki
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:21:29
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
ucmp137538/distilbert-base-uncased-finetuned-sst2 | ucmp137538 | "2024-03-27T04:01:42Z" | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-13T21:21:04Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2810
- Accuracy: 0.9048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
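For reference, a minimal 🤗 Trainer sketch that mirrors this configuration might look as follows. The SST-2 data source and the preprocessing are illustrative assumptions (the card itself only says "an unknown dataset"); the hyperparameters match those listed above, and the Adam betas/epsilon are the Trainer defaults.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Assumed data source: GLUE SST-2 (binary sentiment); tokenize the raw sentences.
dataset = load_dataset("glue", "sst2")
encoded = dataset.map(lambda b: tokenizer(b["sentence"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-sst2",
    learning_rate=2e-5,              # as listed above
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",
)

# With a tokenizer supplied, Trainer pads batches dynamically via DataCollatorWithPadding.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```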
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1828 | 1.0 | 2105 | 0.2810 | 0.9048 |
| 0.1126 | 2.0 | 4210 | 0.3361 | 0.8945 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Niggendar/hadrianDelicexlPony_v26j | Niggendar | "2024-08-13T17:32:03Z" | 86 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-08-13T17:18:12Z" | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
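Pending details from the model authors, a minimal 🧨 diffusers sketch is given below; the pipeline class comes from the repository tags (`StableDiffusionXLPipeline`), while the prompt and dtype are illustrative assumptions.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load this checkpoint as an SDXL pipeline (class per the repo tags).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/hadrianDelicexlPony_v26j", torch_dtype=torch.float16
)
pipe.to("cuda")

# Illustrative prompt; adjust to your use case.
image = pipe("a detailed illustration of a castle at sunset").images[0]
image.save("output.png")
```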
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
enesuio/ennurv9 | enesuio | "2024-12-14T00:25:52Z" | 8 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-12-14T00:25:48Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ennurv9
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# ennurv9
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `ennurv9` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
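For diffusers users, a minimal sketch is shown below; the base model comes from the card metadata, while the prompt and the LoRA file layout are assumptions (pass `weight_name=...` to `load_lora_weights` if the repository holds several safetensors files).
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model, then attach this LoRA.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("enesuio/ennurv9")
pipe.to("cuda")

# Include the trigger word `ennurv9` in the prompt.
image = pipe(
    "ennurv9, portrait photo, natural light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("ennurv9.png")
```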
|
Gendalfblack115/lgit | Gendalfblack115 | "2024-06-10T21:43:28Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-10T21:43:28Z" | ---
license: apache-2.0
---
|
omarelshehy/Arabic-STS-Matryoshka | omarelshehy | "2024-10-13T01:20:29Z" | 172 | 2 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"mteb",
"ar",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-10-12T03:10:56Z" | ---
base_model: FacebookAI/xlm-roberta-large
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- mteb
model-index:
- name: omarelshehy/Arabic-STS-Matryoshka
results:
- dataset:
config: ar-ar
name: MTEB STS17 (ar-ar)
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
split: test
type: mteb/sts17-crosslingual-sts
metrics:
- type: cosine_pearson
value: 81.88865368687937
- type: cosine_spearman
value: 82.90236782891859
- type: euclidean_pearson
value: 81.21254869664341
- type: euclidean_spearman
value: 82.28002933909444
- type: main_score
value: 82.90236782891859
- type: manhattan_pearson
value: 81.26482951395201
- type: manhattan_spearman
value: 82.36146806563059
- type: pearson
value: 81.88865526924
- type: spearman
value: 82.89304993265725
task:
type: STS
license: apache-2.0
language:
- ar
---
# SentenceTransformer based on FacebookAI/xlm-roberta-large
This is an **Arabic-only** [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
The model is trained using MatryoshkaLoss, producing usable embeddings of size 1024, 786, 512, 128, and 64 so that storage can be optimized (see [Evaluation](https://huggingface.co/omarelshehy/Arabic-STS-Matryoshka#evaluation)).
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) <!-- at revision c23d21b0620b635a76227c604d44e43a9f0ee389 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
matryoshka_dim = 786
model = SentenceTransformer("omarelshehy/Arabic-STS-Matryoshka", truncate_dim=matryoshka_dim)
# Run inference
sentences = [
'أحب قراءة الكتب في أوقات فراغي.',
'أستمتع بقراءة القصص في المساء قبل النوم.',
'القراءة تعزز معرفتي وتفتح أمامي آفاق جديدة.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 786) since truncate_dim=786 was set above
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
# Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8256 |
| **spearman_cosine** | **0.8275** |
| pearson_manhattan | 0.8228 |
| spearman_manhattan | 0.8284 |
| pearson_euclidean | 0.8232 |
| spearman_euclidean | 0.8289 |
| pearson_dot | 0.8017 |
| spearman_dot | 0.8004 |
| pearson_max | 0.8256 |
| spearman_max | 0.8289 |
#### Embedding Size and Performance
This plot shows the slight degradation in performance with smaller embedding sizes (worth investigating for your use case, since the storage savings are large compared to the slight loss in performance).

## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Dichitha/struc_pruned-bert-mrpc | Dichitha | "2024-07-17T15:04:54Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-17T13:46:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
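Pending an official snippet, a minimal sketch follows, inferred from the repository tags (BERT, text classification) and the MRPC suffix, which suggests sentence-pair paraphrase input; the example sentences and label semantics are assumptions, so check the checkpoint's `id2label` config for the actual mapping.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Dichitha/struc_pruned-bert-mrpc")

# MRPC-style sentence pair (paraphrase detection).
result = classifier({
    "text": "The company posted record profits this quarter.",
    "text_pair": "Profits at the firm hit an all-time high this quarter.",
})
print(result)
```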
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
intanm/sa10-clm-20230403-001-3 | intanm | "2023-04-03T07:25:26Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-04-03T07:19:47Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sa10-clm-20230403-001-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa10-clm-20230403-001-3
This model is a fine-tuned version of [intanm/clm-20230403-001-3](https://huggingface.co/intanm/clm-20230403-001-3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6258
- Accuracy: 0.7692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 11 | 0.7291 | 0.7143 |
| No log | 2.0 | 22 | 0.6258 | 0.7692 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.1+cu116
- Datasets 2.11.0
- Tokenizers 0.13.2
|