modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
luckycanucky/droogs-x31 | luckycanucky | 2025-06-01T01:34:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T01:05:58Z | ---
base_model: unsloth/llama-3.2-3b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** luckycanucky
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF | Triangle104 | 2025-06-01T01:29:02Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-06-01T01:01:39Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B) using llama.cpp, via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B) for more details on the model.
---
Qwen3-30B-A3B has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 tokens natively and 131,072 tokens with YaRN.
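The gap between 30.5B total and 3.3B activated parameters follows from the top-8-of-128 expert routing plus the dense weights (attention, embeddings, router) that every token uses. A back-of-envelope sketch — the 1.5B/29B split between shared and expert parameters below is an assumed illustration, not an official figure:

```python
def active_params(shared_b: float, expert_total_b: float,
                  n_experts: int, n_active: int) -> float:
    """Estimate activated parameters (in billions) for a top-k MoE model.

    shared_b: parameters every token uses (attention, embeddings, router) -- assumed
    expert_total_b: parameters spread across all experts -- assumed
    Only n_active of n_experts experts run per token.
    """
    return shared_b + expert_total_b * n_active / n_experts

# Hypothetical split: ~1.5B shared + ~29B in experts reproduces the
# reported ~3.3B activated out of 30.5B total.
print(active_params(1.5, 29.0, 128, 8))  # → 3.3125
```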
---
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q4_K_M-GGUF --hf-file qwen3-30b-a3b-q4_k_m.gguf -c 2048
```
|
AmberYifan/Qwen2.5-7B-sft-SPIN-gpt4o-ORPO | AmberYifan | 2025-06-01T01:21:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"orpo",
"conversational",
"arxiv:2403.07691",
"base_model:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T01:06:30Z | ---
base_model: AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Qwen2.5-7B-sft-SPIN-gpt4o-ORPO
tags:
- generated_from_trainer
- trl
- orpo
licence: license
---
# Model Card for Qwen2.5-7B-sft-SPIN-gpt4o-ORPO
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-sft-SPIN-gpt4o-ORPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/y32d2xjm)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
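For intuition, the odds-ratio penalty at the heart of ORPO can be sketched as a scalar function. In TRL the term is computed from token-level log-probabilities, averaged over a batch, and added to the standard NLL term with a weighting λ; the sequence probabilities below are illustrative stand-ins, not the trainer's actual inputs:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def odds(p: float) -> float:
    # odds of generating a sequence whose probability under the policy is p
    return p / (1.0 - p)

def orpo_penalty(p_chosen: float, p_rejected: float) -> float:
    """Odds-ratio term from the ORPO paper: -log sigmoid(log odds-ratio)."""
    log_odds_ratio = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    return -math.log(sigmoid(log_odds_ratio))

# The penalty shrinks as the chosen response becomes more likely than
# the rejected one, and equals log(2) when the two are tied.
print(orpo_penalty(0.9, 0.1) < orpo_penalty(0.5, 0.5))  # → True
```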
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mci29/sn29_s2m4_flnh | mci29 | 2025-06-01T01:16:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T01:13:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AmberYifan/Qwen2.5-7B-sft-dpo-10k-ORPO | AmberYifan | 2025-06-01T01:04:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"orpo",
"conversational",
"arxiv:2403.07691",
"base_model:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T00:49:27Z | ---
base_model: AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Qwen2.5-7B-sft-dpo-10k-ORPO
tags:
- generated_from_trainer
- trl
- orpo
licence: license
---
# Model Card for Qwen2.5-7B-sft-dpo-10k-ORPO
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-sft-dpo-10k-ORPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/tsolsinw)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
AmberYifan/Qwen2.5-7B-sft-spin-10k-KTO | AmberYifan | 2025-06-01T01:01:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"kto",
"conversational",
"arxiv:2402.01306",
"base_model:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T00:41:13Z | ---
base_model: AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Qwen2.5-7B-sft-spin-10k-KTO
tags:
- generated_from_trainer
- trl
- kto
licence: license
---
# Model Card for Qwen2.5-7B-sft-spin-10k-KTO
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-sft-spin-10k-KTO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/0kzcew0u)
This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).
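A scalar sketch of KTO's prospect-theoretic value function may help. In TRL the reward is the policy-vs-reference log-ratio and the reference point is a batch-level KL estimate; both are simplified to plain floats here, so this is an illustration of the loss shape rather than the trainer's implementation:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def kto_loss(reward: float, ref_point: float, desirable: bool,
             beta: float = 0.1, weight: float = 1.0) -> float:
    """Per-example KTO loss (simplified scalar form of the paper's objective).

    reward: log-ratio of policy to reference model (simplified to a float)
    ref_point: the KL-based reference point z0 from the paper
    desirable: whether this completion is labeled good or bad
    """
    if desirable:
        value = sigmoid(beta * (reward - ref_point))
    else:
        value = sigmoid(beta * (ref_point - reward))
    return weight * (1.0 - value)

# Raising the reward lowers the loss for desirable examples
# and raises it for undesirable ones.
print(kto_loss(2.0, 0.0, True) < kto_loss(0.0, 0.0, True))  # → True
```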
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite KTO as:
```bibtex
@article{ethayarajh2024kto,
title = {{KTO: Model Alignment as Prospect Theoretic Optimization}},
author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},
year = 2024,
eprint = {arXiv:2402.01306},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mci29/sn29_s2m3_hbui | mci29 | 2025-06-01T01:00:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T00:56:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
seekerdeep/task-10-microsoft-Phi-3.5-mini-instruct | seekerdeep | 2025-06-01T00:57:14Z | 28 | 0 | peft | [
"peft",
"safetensors",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:other",
"region:us"
] | null | 2025-05-29T12:43:53Z | ---
library_name: peft
license: other
base_model: microsoft/Phi-3.5-mini-instruct
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
rasyosef/roberta-amharic-text-embedding-base | rasyosef | 2025-06-01T00:51:37Z | 326 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:40237",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"am",
"dataset:rasyosef/amharic-passage-retrieval-dataset",
"base_model:rasyosef/roberta-base-amharic",
"base_model:finetune:rasyosef/roberta-base-amharic",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-01-04T15:57:00Z | ---
language:
- am
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:40237
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: rasyosef/roberta-base-amharic
widget:
- source_sentence: የሞዴል ጥቃቅንና አነስተኛ ኢንተርፕራይዞች ኤግዚቢሽንና ባዛር የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር
እንደሚፈጠር ተገለጸ
sentences:
- አዲስ አበባ ፣ ነሃሴ 22 ፣ 2012 (ኤፍ ቢ ሲ) ሰኔ 16 ቀን 2010 ዓ.ም በአዲስ አበባ መስቀል አደባባይ ለጠቅላይ ሚኒስትር
ዐቢይ አሕመድ በተካሄደ የድጋፍ ሰልፍ ላይ ቦምብ በመወርወር የሽብር ወንጀል የተከሰሱ አምስት ተከሳሾች የጥፋተኝነት ፍርድ ተፈረደባቸው።ተከሳሾቹ
ጌቱ ቶሎሳ፣ ብርሃኑ ጃፋር፣ ጥላሁን ጌታቸው፣ ደሳለኝ ተስፋዬ እና ባህሩ ቶላ ሲሆኑ የጥፋተኝነት ፍርዱን የፌደራሉ ከፍተኛ ፍርድ
ቤት 1ኛ የወንጀል ችሎት ነው ያስተላለፈው።የዐቃቤ ህግ ክስ እንደሚያመላክተው ተከሳሾቹ ወንጀሉን የፈጸሙት ሰኔ 16 ቀን 2010
ዓ.ም በአዲስ አባባ መስቀል አደባባይ ከረፋዱ አራት ሰአት ላይ በ40 ሜትር ርቀት አካባቢ ለጠቅላይ ሚኒስትር ዐቢይ አሕመድ
በተደረገው የድጋፍ ሰልፍ ላይ ቦንብ በመወርወር ነው።ተከሳሾቹ በ1996 ዓ.ም የወጣውን የኢፌዴሪ የወንጀል ህግ አንቀጽ 32/1ሀ
እንዲሁም አንቀጽ 38 እና የፀረ ሽብርተኝነት አዋጅ ቁጥር 652/2001 አንቀጽ 3 ስር የተመለከተውን በመተላለፍ፤ በሃገሪቱ
ያለውን ለውጥ ተከትሎ በጠቅላይ ሚኒስትር ዐቢይ የሚመራ መንግስት መኖር የለበትም በሚል የራሳቸውን አላማ ለማራመድ በማሰብ መንቀሳቀሳቸውን
ዐቃቤ ህግ በክሱ አመላክቷል።በዚህም ከ1ኛ እስከ 4ኛ ያሉ ተከሳሾች ከሱሉሉታ ከተማ መነሻቸውን በማድረግ በስልክ በመደዋወልና
በአካል በመገናኘት በድጋፍ ሰልፉ ላይ እንዴት ቦምብ መወርወር እንዳለባቸው ሲዘጋጁ ቆይተዋልም ነው ያለው ዐቃቤ ህግ፡፡በዚህ
መልኩ በ1ኛ ተከሳሽ ቤት ቡራዩ በማደር 2ኛ ተከሳሽ በሚያሽከረክረው ተሽከርካሪ 2ኛ ተከሳሽ ያዘጋጀውን ኤፍ1 ቦምብ በመያዝ
ከ3 እስከ 5ኛ ያሉ ተከሳሾች ጋር ከፒያሳ ወደ ቴድሮስ አደባባይ በመምጣትና የድጋፍ ቲሸርት ልብስ ገዝተው በመልበስ ተመሳስለው
መግባታቸው ተጠቅሷል።በድጋፍ ሰልፉ ላይ ጠቅላይ ሚኒስትር ዐቢይ ንግግር ካደረጉ በኋላ ተከሳሾቹ በ40 ሜትር ርቀት ላይ ቦምብ
የወረወሩ ሲሆን በዚህም የሁለት ሰዎች ህይወት ሲያልፍ ከ163 በላይ ሰዎች ላይ ደግሞ ከከባድ እስከ ቀላል የአካል ጉዳት እንደደረሰባቸውም
ዐቃቤ ህግ አስረድቷል፡፡የዐቃቤ ህግን የሰነድና የሰው ምስክር እንዲሁም የተከሳሾችን መከላከያ የመረመረው ፍርድ ቤቱ ተከሳሾቹን
በተከሰሱበት ወንጀል ጥፋተኛ ብሏቸዋል።በተከሳሾቹ ላይ የቅጣት ውሳኔ ለመስጠትም ለጥቅምት 17 ቀን 2013 ዓ.ም ተለዋጭ ቀጠሮ
ሰጥቷል።እስከ ጥቅምት 17 ድረስ ግን የቅጣት ማቅለያዎችን ማቅረብ እንደሚቻል ትዕዛዝ ሰጥቷል።በታሪክ አዱኛ
- 'አዲሱ ገረመው አዲስ አበባ፡- የ2013 በጀት ዓመት የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር እንደሚፈጥር የፌዴራል የከተሞች
የስራ ዕድል ፈጠራና የምግብ ዋስትና ኤጀንሲ አስታወቀ። ከተሳታፊዎች ውስጥ 50 በመቶዎቹ ሴቶች መሆናቸው ተጠቆመ ። ኤጀንሲው
ለአዲስ ዘመን
ጋዜጣ በላከው መግለጫ
እንዳስታወቀው፤ በ2013 በጀት
አመት አንደኛው ዙር
የሞዴል ጥቃቅንና አነስተኛ
ኢንተርፕራይዞች ሀገር አቀፍ
ኤግዚቢሽንና ባዛር ‹‹ዘላቂነት
ያለው የገበያ ትስስር
ለስራ ዕድል ፈጠራና
ለኢንተርፕራይዞች ልማት መሰረት
ነው ›› በሚል
መሪ ቃል ከታህሳስ
22 እስከ ታህሳስ 28 ቀን
2013 ዓ.ም በጀሞ አንድ አደባባይ ትራፊክ መብራት ፊትለፊት ለሰባት ተከታታይ ቀናት የሚካሄድ ይሆናል። የ4 ሚሊዮን ብር ሽያጭና
የገበያ ትስስር እንዲሚፈጥርም ይጠበቃል። በኤግዚቢሽንና ባዛሩ ላይ ከሁሉም ክልሎችና ከተሞች የተውጣጡ 202 የጥቃቅን እና አነስተኛ
ኢንተርፕራይዞች 10 አነስተኛና መካከለኛ ኢንዱስትሪዎች የሚሳተፉ ሲሆን፤ ሴቶች 50 በመቶ እና አካል ጉዳተኛ ሦስት በመቶ በማሳተፍ
ምርትና አገልግሎታቸው ከ20ሺ በላይ በሚሆን ተጠቃሚ የህብረተሰብ ክፍል እንዲጎበኝ ይደረጋል ብሏል ። ባዛሩ ከተለያዩ ክልሎችና
አካባቢዎች የተሰባሰቡና በልዩ ልዩ ዘርፎች የተሰማሩ ብቁና ተወዳዳሪ ኢንተርፕራይዞችንና አንቀሳቃሾችን የሚያሳትፍ ሲሆን፤ በአንድ
ማዕከል በማገናኘት በሚፈጠረው ትውውቅና የልምድ ልውውጥ በመካከላቸው ጤናማ የውድድር ስሜት ለማቀጣጠል እንደሚያስችልም “ኤጀንሲው
አመልክቷል ። ባህላዊና ዘመናዊ የጨርቃጨርቅና
አልባሳት ምርት ውጤቶች፣
ባህላዊና ዘመናዊ የቆዳ
አልባሳትና የቆዳ ምርት
ውጤቶች፣ ባህላዊ የዕደ-ጥበባትና
ቅርጻ-ቅርጽ ሥራዎችና
ውጤቶች፣ የብረታብረት፣ የእንጨት
ሥራና የኢንጅነሪንግ ስራዎችና
ውጤቶች፣ የአግሮ-ፕሮሰሲንግ
ምርቶች እና የከተማ
ግብርና ውጤቶች፣ የቴክኖሎጂ
ውጤቶችና የፈጠራ ስራዎች፣
ፈሳሽ ሳሙና፣አልኮል፣ሳኒታይዘር፣
የአፍና አፍንጫ መሸፈኛ
ጭንብል/ማስኮች/፣
እና ሌሎችም ምርቶች
በኤግዚቢሽንና ባዛሩ እንደሚቀርቡ
አስታውቋል። የአዲስ አበባ ነጋዴ ሴቶች ማህበር፣ የሴቶች ኢንተርፕርነርሺፕ ልማት ፕሮግራም፣ ኢንተርፕርነርሺፕ ልማት ማዕከል፣
ፋሽን ዲዛይን አሶሴሽን፣ የሴቶች ራስ አገዝ ድርጅት፣ የባህልና ቱሪዝም ሚኒስቴር በዕደ ጥበብ ዘርፍ የተሰማሩ ኢንተርፕራይዞችና
ሌሎችም ተሳታፊ ኢንተርፕራይዞች እንደሚሆኑ ጠቁሟል። ሁነቱ የተሞክሮ ልውውጥና
የንግድ ልማት ግንዛቤ
ከማዳበሩም ባሻገር፤ ኢንተርፕራይዞች
ከተጠቃሚው ህብረተሰብ ጋር
በሚያደርጉት ግንኙነት ዘላቂ
የገበያ ትስስር ለመፍጠር
የሚያስችል ምቹ አጋጣሚ
ይሆንላቸዋል። ምርቶቻቸውንና አገልግሎታቸውን
ለተጠቃሚዎች በቀጥታ በመሸጥም
ተጠቃሚ እንደሚሆኑም እጀንሲው
አስታውቋል ።አዲስ ዘመን ታህሳስ 22/2013'
- የአሜሪካው ሜሪየም ዌብስተር መዝገበ ቃላት እንደ ኦክስፎርድ መዝገበ ቃላት ሁሉ ታዋቂና ዓለም አቀፍ ተቀባይነት ያለው መዝገበ
ቃላት ነው።አንዲት ወጣት ጥቁር አሜሪካዊት ታዲያ ለዚህ መዝገበ ቃላት አሳታሚ በጻፈቸው ደብዳቤ ምክንያት መዝገበ ቃላቱ ዘረኝነት
ወይም (racism) ለሚለው የእንግሊዝኛ ቃል የትርጉም ፍቺ ማሻሻያ ለማድረግ ወስኗል።
- source_sentence: የደኢሕዴን ከፍተኛ አመራሮች በሐዋሳ እየመከሩ ነው
sentences:
- 'የሁለት ዞኖች ከፍተኛ አመራሮች ታግደዋል የደቡብ ኢትዮጵያ ሕዝቦች ዴሞክራሲያዊ ንቅናቄ (ደኢሕዴን) ከፍተኛ አመራሮች ከሐሙስ
ሐምሌ 18 እስከ 22 ቀን 2011 ዓ.ም. ድረስ በሐዋሳ እየመከሩ ነው፡፡ ከፍተኛ አመራሮቹ በክልሉ ውስጥ በተከሰተው ወቅታዊ
ችግርና በአገራዊ ጉዳዮች ላይ እንደሚወያዩ፣ በተለይ በድርጅቱ ህልውና ላይ እንደሚያተኩሩም ታውቋል፡፡ የደኢሕዴን ሊቀመንበር
ወ/ሮ ሙፈሪያት ካሚል በምክክሩ ላይ ባደረጉት ንግግር፣ በአገር ደረጃና በደቡብ ክልል የፖለቲካና የፀጥታ ጉዳዮች ላይ ወጥ አቋም
ያለው አመራር አስፈላጊነትን አውስተዋል፡፡ ከዚህ አንፃርም አመራሩ ራሱን በመፈተሽ ለለውጥ ዝግጁ መሆን እንዳለበት አስታውቀዋል፡፡
እንደ ወ/ሮ ሙፈሪያት ማብራሪያ የደኢሕዴን ህልውና መረጋገጥ የሚችለው፣ አመራሩ ከመቼውም ጊዜ በላይ መንቀሳቀስ ሲችል ብቻ እንደሆነ
ነው፡፡ አመራሩ ምንም ነገር እንደማይመጣ በመኩራራት ወይም በወቅታዊ ሁኔታዎች በመሥጋት የሚቀጥል ከሆነ ውጤት እንደማይኖር፣
በወቅቱ ተጨባጭ ሁኔታ ላይ በዝርዝር በመወያየት የድርጅቱ ህልውናን ማስቀጠል ላይ ትኩረት መስጠት እንደሚገባ አስረድተዋል፡፡
ይህ በዚህ እንዳለ ደኢሕዴን የሲዳማ ዞን፣ የሐዋሳ ከተማና የሃድያ ዞን ከፍተኛ አመራሮችን ማገዱንና ለወላይታና ለካፋ ዞኖች
አመራሮች ደግሞ ማስጠንቀቂያ መስጠቱን አስታውቋል፡፡ ከክልልነት ጥያቄ ጋር በተያያዘ በተለይ በሲዳማ ዞን ወረዳዎችና በሐዋሳ
ከተማ በተፈጸሙ ጥቃቶች የበርካቶች ሕይወት ማለፉን፣ የበርካቶች ቤት ንብረት መውደሙን ተከትሎ የደቡብ ክልል በፌዴራል መንግሥት
የፀጥታ አካላት ኮማንድ ፖስት ሥር እንዲተዳደሩ መወሰኑ የሚታወስ ሲሆን፣ በዚህም ምክንያት የደኢሕዴን ሥራ አስፈጻሚ ኮሚቴ በሐዋሳ
ከተማ ባደረገው ስብሰባ የአመራሮቹን የዕግድ ውሳኔ አሳልፏል፡፡ በዚህ ስብሰባው የክልሉን የፀጥታ ሁኔታ እንደገመገመ የገለጸው
የሥራ አስፈጻሚ ኮሚቴው፣ በተፈጠረ የፀጥታ ችግሮች ሳቢያ የሲዳማ ዞንና የሐዋሳ ከተማን፣ እንዲሁም የሃዲያ ዞን ‹‹የፊት አመራሮች››
እንዳገደ አስታውቋል፡፡ በተያያዘም በወላይታና በካፋ ዞኖች እየታዩ ያሉ ሁኔታዎች የሕግ ተጠያቂነትን የሚያስከትሉ ስለሆኑ፣ አመራሩ
የሕዝቡን ደኅንነት ለማስጠበቅ እንዲሠራ ሲል አስጠንቅቋል፡፡ በዚህም ሳቢያ የሲዳማ ዞን አስተዳዳሪ አቶ ቃሬ ጫዊቻና የሐዋሳ
ከተማ ከንቲባ አቶ ሱካሬ ሹዳ መታገዳቸውን ለማወቅ ተችሏል፡፡ የሥራ አስፈጻሚ ኮሚቴው በሐዋሳና በአካባቢው ሐምሌ 11 ቀን 2011
ዓ.ም. ክልልነትን እናውጃለን በሚል በተፈጸመ ጥቃት የተጎዱ ቤተሰቦችን መልሶ ለማቋቋም እንደሚሠራ በማስታወቅ፣ የጥፋቱ ተሳታፊዎችም
ሆኑ አስተባባሪዎች የሕግ ተጠያቂ እንዲሆኑ እሠራለሁ ብሏል፡፡ አሁን ለተከሰተው ጥፋትም ሆነ እየተስተዋለ በሚገኘው ሥርዓተ አልበኝነት
ውስጥ የአመራሩ ሚና ከፍተኛ መሆኑን ያመነው የሥራ አስፈጻሚ ኮሚቴው፣ ይኼንን ለማረም ከሥራ አስፈጻሚ እስከ ታችኛው የአመራር
ሥርዓት ድረስ ፈትሾ ዕርምጃ እንደሚወስድ ቃል ገብቷል፡፡ '
- 'አዲስ አበባ፣ ጥር 2፣ 2012 (ኤፍ.ቢ.ሲ) በፓኪስታን ደቡብ ምእራብ ኩዌታ ከተማ በመስጊድ ላይ በተፈፀመ የቦብም ጥቃት
የሞቱ ሰዎች ቁጥር 15 መድረሱን ፖሊስ አስታወቀ።በአርብ ፀሎት ላይ በነበሩ ሰዎች ላይ በተፈፀመው የቦምብ ጥቃቱ ከሞቱት ሰዎች
በተጨማሪም ከ20 በላይ ሰዎች ላይ የተለያየ መጠን ያለው ጉዳት መድረሱንም ነው የገለፀው።በመስጊድ ላይ ለተፈፀመው ጥቃትም በአካባቢው
የሚንቀሳቀሰው የአሸባሪው ኢስላሚክ ስቴት (አይ.ኤስ) ቡድን ኃላፊነት መውሰዱ ተነገሯል።በሽብር ጥቃቱ በአፍጋኒስታን የሚንቀሳቀሰው
የታሊባን ቡድን አመራሮች ተገድለዋል ቢባልም፤ ታሊባን ግን አመራሮቼ ላይ ጉዳት አልደረሰም ሲል አስተባብሏል።ምንጭ፦ '
- በኢትዮጵያ ፕሪምየር ሊግ ዘጠነኛ ሳምንት መቐለ 70 እንደርታ በሜዳው ሲዳማ ቡናን 3-1 ካሸነፈ በኋላ የሁለቱ ቡድኖች አሰልጣኞች
አስተያየታቸውን ሰጥተዋል። ” ሲዳማ ቡና በጥሩ ወቅታዊ አቋም የሚገኝ ቡድን በመሆኑ ጨዋታው ከባድ ነበር” – ገ/መድኅን ኃይሌ
– መቐለ 70 እንደርታስለ ጨዋታው” ጨዋታው ከባድ ነበር፤ ሲዳማ ቡና በጥሩ ወቅታዊ አቋም የሚገኝ ቡድን ነው ፤ የያዙት ነጥብም
ለዚህ ጨዋታ ጥሩ የስነልቦና ጥንካሬ አስገኝቶላቸዋል። በአንፃሩ እኛ አራት ጨዋታዎች ሳናሸንፍ ነው ወደ ጨዋታው የገባነው። በዚ
ምክንያት ጨዋታው አክብዶብን ነበር። በአጠቃላይ ጨዋታውን አሸንፈናል። በቀጣይ ጨዋታዎች ቀስ በቀሰ ወደ አሸናፊነት መጥተን ይህን
እናስቀጥላለን። ”“ዳኝነት ላይ ያየሁት ነገር ጥሩ አይደለም” ዘርዓይ ሙሉ – ሲዳማ ቡና ስለ ጨዋታው ” ከዕረፍት በፊት ከጨዋታ
ውጪ ኳሱ በኋላ ተጫዋቾቻችን መረጋጋት አልቻሉም። በጨዋታው አሳፋሪ ዳኝነት ነው ያየሁት። ስለ ጨዋታው ብጠይቀኝ አሳፋሪ እና
ሚዛናዊት የሌለው ዳኝነት ነው። የተቆጠርቡን ግቦች እኛ ላይ ጥፋት እየተፈፀሙ የተቆጠሩ ናቸው። ከጨዋታ ውጭ ሆኖም ግብ ይቆጠራል።
በቃ ይህንን ነው ያየሁት። ከዚ ውጭ ግን መቐለ ለማሸነፍ የነበረው ተነሳሽነት ጥሩ ነበር። እንደ ቡድን ተንቀሳቅሰዋል እኛም
የተሻለ ኳስ ተቆጣጥረን ተጫውተናል። እንዳያችሁት ኳሱን መስርተን ነው የወጣነው ግን በተለያዩ ስህተቶች ግብ ሲቆጠርብን የተጫዋቾቻችን
ብቃት አወረደው። የምንፈልገው እንቅስቃሴ ያላደረግነው በዳኞች ምክንያት ነው። ገና በሰባተኛ ደቂቃ ነው የተጀመረው ይሄ ነገር።
ጨዋታው ጥሩ ሆኖ ሳለ ሚዛኑ የጠበቀ ዳኝነት አላየንም። ዳኝነቱ ልክ ካልሆነ የጨዋታው እንቅስቃሴ እንዳለ ይበላሻል ይሄ ሁሉ
ደጋፊ የገባው ጥሩ ጨዋታ ለማየት ነው። ለምንድነው ተጫዋቾች ሮጠው ዳኛ ላይ የሚሄዱት። በተደጋጋሚ ስህተት ይሰራ ነበር። እኛ
ተጫዋቾቻችንን ብናረጋጋም የሚያደርጉት ስህተት ለሌላ ነገር የሚዳርግ ነበር። ዳኞቹ አቅም አንሷቸው ነው ብዬ አላስብም፤ ሆን
ተብሎ የተደረገ ነገር ነው። ዳኝነት ላይ ያየሁት ነገር ጥሩ አይደለም። መቐለን ግን እንደ ቡድን ጥሩ ነው እንኳን ደስ አላቹ
ማለት እፈልጋለው። ”ስለ ስታድየሙ ድባብ” ደጋፊው የሚደነቅ ደጋፊ ነው። በስርዓት ነው ቡድኑን የሚደግፈው። ምንም ነገር ቢፈጠር
ቡድኑን ነበር ሲደግፍ የነበረው። ”ዳኝነት ላይ ስለሰጠው አስተያየት” እኔ አዳላ አላልኩም። ግን ብቃት ማነስ ነው ብዬ አላስብም።
እነዚህ ሁሉ ግቦች እስኪቆጠሩ ብቃት ማነስ አይደለም። በአጠቃላይ ዳኝነቱ ሚዘናዊ አልነበረም። ሁሉም ግብ ላይ የዳኛ ተፅዕኖ
አለበት፤ በቃ ይሄን ነው የምለው። አንዱን ከጨዋታ ውጪ ብለህ አንዱን የምታፀድቅ ከሆነ ስህተት ነው። “
- source_sentence: የከምባታና ጠንባሮ አርሶአደሮች
sentences:
- በደሴ ማረሚያ ቤት በተደረገ የኮቪድ-19 ምርመራ 13 ሰዎች ቫይረሱ እንዳለባቸው ማረጋገጡን የከተማው ጤና መምሪያ አስታወቀ።የመምሪያው
ኃላፊ አቶ አብዱልሃሚድ ይመር በተለይ ለቢቢሲ እንዳስታወቁት 12ቱ የህግ ታራሚዎች ሲሆኑ ሌላኛው ደግሞ የማረሚያ ቤቱ ባልደረባ
ናቸው።እንደ አቶ አብዱልሃሚድ ገለጻ ከሆነ ከማረሚያ ቤቱ ጋር በመነጋገርም አዲስ የሚገቡ ታራሚዎች ለ14 ቀናት ለብቻቸው እንዲቆዩ
ከማድረግ በተጨማሪ በመጨረሻዎቹ ቀናት ላይ ምርመራ ሲደረግላቸው ቆይቷል።ከሐምሌ 20 በኋላ ማረሚያ ቤቱ የገቡ 46 ታራሚዎች
ላይ በተደረገ ምርመራ 10 ሰዎች ኮሮናቫይረስ እንዳለባቸው ለማረጋገጥ ተችሏል።“ታራሚዎቹ ከተለያዩ አካባቢዎች የመጡ ናቸው።
ከተለያዩ ከደቡብ ወሎ ወረዳዎች እና ከደሴ ከተማም የተገኙ ናቸው” ብለዋል።በሁለተኛ ዙር 60 ሰዎች ላይ በተደረገ ምርመራ ሦስቱ
ቫይረሱ እንዳለባቸው ተረጋግጧል።በሁለተኛው ዙር ቫይረሱ ከተገኘባቸው መካከል በመጀመሪያው ዙር እንዳለባቸው ከታወቁ ሰዎች ጋር
ንክኪ የነበራቸው እና አንድ ማረሚያ ቤቱ ባልደረባ ይገኙበታል።የማረሚያ ቤቱን የሕግ ታራሚዎች እና ባልደረባዎችን በሙሉ ለመመርመር
መቻሉንም አቶ አብዱልሃሚድ አስታውቀዋል።ቫይረሱ የተገኘባቸው ቦሩ ሜዳ መጀመሪያ ደረጃ ሆስፒታል የተላኩ ሲሆን፤ ተጓዳኝ ህመም
ያለበት አንድ ታራሚ ካሳየው የህመም ምልክት ውጭ ሁሉም በጥሩ ሁኔታ ላይ እንደሚገኙ ተናግረዋል።በማረሚያ ቤቱ የቫይረሱ ስርጭት
እንዳይስፋፋ አዲስ የሚገቡትን እና ነባር ታራሚዎችን ከመመርመር ባለፈ የግንዛቤ ማስጨበጫ ሥራ፣ የኬሚካል ርጭት፣ ርቀትን ማስጠበቅ
እና ንጽህና የማስጠበቅ ሥራ እየተከናወነ ነው ብለዋል።ባለፉት ወራት በአማራ ክልል በተደረገ የኮሮናቫይረስ ምርመራ 83 አሽከርካሪዎች
እና ረዳቶቻቸው ቫይረሱ ተገኝቶባቸዋል።በክልሉ ቫይረሱ ከተገኘባቸው ሰዎች መካካል 23 የህክምና ባለሙያዎች እንደሚገኙበትም ከአማራ
ህብረተሰብ ጤና ኢንስቲትዩት ያገኘነው መረጃ ያሳያል።በአጠቃላይ በኢትዮጵያ በኮቪድ-19 የተያዙ ሰዎች ቁጥር 25,118 የደረሱ
ሲሆን የሟቾች ቁጥር 463 ደርሷል። እንዲሁም አጠቃላይ ከበሽታው ያገገሙ ሰዎች 11,034 ደርሰዋል።
- 'በደቡብ ክልል ከፋ ዞን ዴቻ ወረዳ ከ20 ሺህ በላይ የከምባታና ጠምባሮ አርሶአደሮች በማንነታችን ጥቃት ደርሶብናል በማለት
እየተፈናቀሉ ናቸው፡፡አርሶአደሮቹ የተፈናቀሉት ከሶስት ሳምንት በፊት በወረዳው ከ30 በላይ ሲቪሎች በታጠቁ ግለሰቦች በአሰቃቂ
ሁኔታ መገደላቸውን ተከትሎ ነው ተብሏል፡፡ጉዳያችንን ለክልሉ መንግሥት ብናሳውቅም ችላ ተብለናል ሲሉ አርሶአደቹ ተናግረዋል።
አሁን ለችግር መጋለጣቸውንም ለቪኦኤ አስረድተዋል፡፡የከምባታ ጠንባሮ ዞን በበኩሉ የተፈናቀሉ ዜጎች በስቃይ ላይ መሆናቸውን ገልጦ
መፍትሔ እየተፈለገ መሆኑን አስታውቋል፡፡ '
- ባሕር ዳር፡ መስከረም 7/2012 ዓ.ም (አብመድ) በጣልያን ባሕር ዳርቻ ጠባቂዎች ሕይወታቸው የተረፉ 90 ስደተኞችን ማልታ
ለመቀበል ተስማማች፡፡በቀጣዩ ሳምንት ደግሞ በአዲስ የስደተኞች መከፋፈያ አሠራር ዘዴ ላይ የአውሮፓ ኅብረት ሊመክር ነው፡፡የማልታ
የሕይወት አድን ትብብር ማዕከል በጠየቀው መሠረት ትናንት የጣልያን ባሕር ዳርቻ ጠባቂ ቡድን ስደተኞቹን ታድጓል፡፡ ከሊቢያ የባሕር
ክልል ውጭ እየሰመጠች ከነበረች ጀልባ ነው ስደተኞቹን ማትረፍ የተቻለው፡፡ ማልታ በመጀመሪያ ስደተኞቹን ወደ ሀገሯ ለማስገባት
ፈቃደኛ አልሆነችም ነበር፡፡
- source_sentence: የአዲስ አበባ ከተማ አስተዳደር የጀመረው ኦዲት ወደ ባለ ኮከብ ሆቴሎችና ኢንዱስትሪዎች ተሸጋገረ
sentences:
- የኢትዮጵያ እግር ኳስ ፌዴሬሽን ከኢትዮጵያ ብሮድካስቲንግ ኮርፖሬሽን (EBC) ጋር በተፈራረመው የመግባቢያ ሰነድ ስምምነት ዙሪያ
ከፕሪሚየር ሊግ ክለቦች ጋር ነገ ከጠዋቱ 4፡00 ጀምሮ በኢንተርኮንትኔንታል ሆቴል ውይይት ያካሂዳል፡፡በውይይቱ ፌዴሬሽኑና EBC
የኢትዮጵያ ፕሪሚየር ሊግ ጨዋታዎችን በቀጥታ የተሌቭዥን ስርጭት አማካኝነት በመላ ኢትዮጵያ ተደራሽ ለማድረግ ነሃሴ 6/2007
ዓ.ም የተፈራረሙትን የመግባቢያ ሰነድ አስመልክቶ ስለ ስምምነቱ ፋይዳና ሂደት ገለፃ የሚደረግ ሲሆን ከፕሪሚየር ሊግ ክለቦች
ለሚነሱ ጥያቄዎች ማብራሪያ ይሰጣል፡፡ በክለቦች መብትና ተጠቃሚነት ዙሪያም ግልጽ ውይይት ይካሄዳል፡፡ስምምነቱ ይፋ መደረጉንና
መፈረሙን ተከትሎ ከተለያዩ በላድርሻ አከላት የተነሱት ጥያቄዎች በተለይም የኢትዮጵያ ቡና ስፖርት ክለብ በደብዳቤ አቋሙን የገለጸበት
አግባብ ተቀባይነት እንዳለው ታምኖበታል፡፡ ነገ ከጠዋቱ 4፡00 ጀምሮ የሚካሄደውና የፕሪሚየር ሊግ ክለቦች ፕሬዝዳንቶች እና
ስራ አስኪያጆች የሚሳተፉበት የውይይት መድረክ ስምምነቱን አስመልክቶ ሊነሱ የሚችሉትን ጥያቄዎች በመቀበል የማስተካካያ ርምጃ
ለመውሰድ የሚያስችል በመሆኑ ሁሉም ክለቦች የውይይቱ ተሳታፊ እንዲሆኑ ፌዴሬሽኑ ጥሪውን አስተላልፋል፡፡ፌዴሬሽኑና ኢቢሲ አለም
አቀፍና የሀገር ውስጥ ጨዋታዎችን በቴሌቭዥን የቀጥታ ስርጭት ለማስተላለፍ የተፈራረሙት የመግባቢያ ሰነድ ዓላማዎች በዋነኝነት
የወጣቱን ትውልድ የእግር ኳስ ስፖርት ተነሳሽነት ማሳደግ፣ የብሔራዊ እና አገር ውስጥ ውድድሮችን የቀጥታ ስርጭት ተደራሽነት
ማረጋገጥ እንዲሁም ለእግር ኳስ ስፖርት ዘላቂና አስተማማኝ እድገት አመቺ ሁኔታዎችን በመፍጠር ላይ እንደሚመሰረት መገለጹ ይታወሳል፡፡ማስታወሻ፡-
በውይይቱ የሚሳተፉት የፌዴሬሽኑ የስራ ሃላፊዎችና የክለቦች ተወካዮች ብቻ ናቸው፡፡
- ለመጀመርያ ጊዜ በተሟላ ደረጃ መሬትና መሬት ነክ ይዞታዎችን ኦዲት በማድረግ ላይ የሚገኘው የአዲስ አበባ ከተማ አስተዳደር፣
የኦዲት አድማሱን በማስፋት በባለ ኮከብ ሆቴሎችና በኢንዱስትሪዎች ላይ ቆጠራ ሊያካሂድ ነው፡፡ የአዲስ አበባ ከተማ አስተዳደር
ከ1995 ዓ.ም. ጀምሮ እስከ ኅዳር 2004 ዓ.ም. የከተማ ቦታ በሊዝ ስለመያዝ የሚደነግገው እስኪወጣበት ጊዜ ድረስ፣ ላለፉት
15 ዓመታት በኢንዱስትሪ ዞኖችና በተናጠል ለሚካሄዱ ፋብሪካዎች በርካታ ቦታዎችን ሰጥቷል፡፡ ከዚህ በተጨማሪ ለበርካታ ሆቴሎች
ግንባታ የሚሆን ሰፋፊ ቦታዎችንም እንዲሁ አቅርቧል፡፡ነገር ግን አስተዳደሩ በሰጣቸው ቦታዎች ላይ ስለተከናወነው ልማትም ሆነ፣
የተከናወኑት ግንባታዎች በውላቸው መሠረት ስለመካሄዳቸው በትክክል የተጠናቀረ መረጃ እንደሌለ ይገልጻል፡፡በከተማው ውስጥ የሚገኙ
አምራች ኢንዱስትሪዎችንና ባለ ኮከብ ሆቴሎችን ቁጥር ለማወቅ፣ በአግባቡ ሥራዎችን ባላካሄዱት ላይ ደግሞ የማስተካከያ ዕርምጃ
ለመውሰድ ኦዲት እንደሚከናወን ለማወቅ ተችሏል፡፡የአዲስ አበባ ከተማ አስተዳደር ምክትል ከንቲባ ታከለ ኡማ (ኢንጂነር) ለሪፖርተር፣
‹‹እስካሁን ግንባታ ሳይካሄድባቸው ለዓመታት ታጥረው የቆዩ ከአራት ሚሊዮን ካሬ ሜትር በላይ ቦታ መልሰን ወስደናል፤›› ብለዋል፡፡‹‹‹ይህ
ትልቅ ሥራ ነው፤›› በማለት ምክትል ከንቲባው ገልጸው፣ በቀጣይ ደግሞ በሆቴሎች፣ በኢንዱስትሪዎች፣ በድንጋይ ማምረቻ ካባዎች፣
እንዲሁም በመኖሪያ ቤቶች ላይ ኦዲት ተካሂዶ ዕርምጃ ይወሰዳል ሲሉ ገልጸዋል፡፡ ‹‹ሥራው ውስብስብ በመሆኑ የሚካሄደው ኦዲት
አንዴ ብቻ ሳይሆን ሦስት፣ አራት ጊዜ ይታያል፡፡ ካስፈለገም የማረጋገጡን ሥራ ማዕከላዊ ስታትስቲክስ ኤጀንሲ ሊያከናውን ይችላል፤››
በማለት ምክትል ከንቲባው አስረድተዋል፡፡በአዲስ አበባ ከተማ አምራች ኢንዱስትሪዎች፣ ሆቴሎች፣ ለድንጋይ ማውጪያ የተሰጡ ቦታዎች
ያሉበት ወቅታዊ ሁኔታ በትክክል አይታወቅም፡፡ ለእነዚህ ዘርፎች የቀረበው ቦታ ለታለመለት ዓላማ በትክክል ስለመዋሉ፣ ከዘርፉ
የሚመነጨው ኢኮኖሚም ሆነ የተፈጠረው የሥራ ዕድል ሽፋን እምብዛም አይታወቅም፡፡ይህንን ሥራ በተሻለ ደረጃ ለመሥራት የከተማው
ኢንዱስትሪ ቢሮ ከማዕከላዊ ስታትስቲክስ ኤጀንሲ ጋር በጋራ ለመሥራትም መስማማታቸው ታውቋል፡፡ የማዕከላዊ ስታትስቲክስ ኤጀንሲ
የቢዝነስ ስታትስቲክስ ዳይሬክተር አቶ ዘለዓለም ኃይለ ጊዮርጊስ፣ በሆቴሎችና በኢንዱስትሪዎች ላይ ቆጠራውን ለማካሄድ ሙሉ ዝግጅት
እየተደረገ መሆኑን ለሪፖርተር ገልጸው፣ በጉዳዩ ላይ ዝርዝር መረጃ ከመስጠት ተቆጥበዋል፡፡
- ጠቅላይ ሚኒስትር ዶክተር አብይ አህመድ ለተለያዩ የመንግስት የስራ ሀላፊዎች ሹመት መስጠታቸውን የጠቅላይ ሚኒስቴር ጽህፈት ቤት
አስታውቋል።በጠቅላይ ሚኒስትር ጽህፈት ቤት መግለጫ መሰረት፦ 1.ዶክተር አምባቸው መኮንን፦ የጠቅላይ ሚንስትሩ የመሰረተ ልማትና
የከተማ ልማት አማካሪ ሚንስትር 2.አቶ ገብረእግዚአብሔር አርአያ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት
ረዳት ተጠሪ 3.አቶ ጫኔ ሽመካ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ 4.አቶ ጫላ
ለሚ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ5.አቶ ተስፋሁን ጎበዛይ፦ የጠቅላይ ሚንስትሩ
የብሔራዊ ደህንነት ጉዳዮች አማካሪ ሚንስትር ዴኤታ6.ብርጋዴል ጄኔራል አህመድ ሀምዛ፦ የብረታ ብረት ኢንጂነሪንግ ኮርፖሬሽን
ዋና ዳይሬክተር7.አቶ ሞቱማ መቃሳ፦ የጠቅላይ ሚንስትሩ የብሔራዊ ደህንነት ጉዳዮች አማካሪ ሚንስትር ዴኤታ8.አቶ ከበደ ይማም፦
የአካባቢ ጥበቃ ደንና የአየር ንብረት ለውጥ ኮሚሽን ምክትል ኮሚሽነር9.አቶ አዘዘው ጫኔ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር10.አቶ
አወል አብዲ፦ የብረታ ብረት ኢንጂነሪንግ ኮርፖሬሽን ምክትል ዋና ዳይሬክተር11.አቶ ሙሉጌታ በየነ፦ የጉምሩክ ኮሚሽን ምክትል
ኮሚሽነር12. ዶክተር ፅጌረዳ ክፍሌ፦ የብሔራዊ ኤች. አይ. ቪ/ኤድስ መከላከያና መቆጣጠሪያ ጽ/ቤት ዋና ዳይሬክተር13.ወይዘሮ
ያምሮት አንዱዓለም፦ የአርማወር ሐሰን የምርምር ኢንስቲትዩት ምክትል ዋና ዳይሬክተር14.ዶክተር ሚዛን ኪሮስ፦ የኢትዮጵያ ጤና
መድህን ኤጀንሲ ዋና ዳይሬክተር15.አቶ ሀሚድ ከኒሶ፦ የሰነዶች ማረጋገጫና ምዝገባ ኤጀንሲ ምክትል ዋና ዳይሬክተር16.አቶ ከበደ
ጫኔ፦ የስደተኞችና ከስደት ተመላሾች ጉዳይ ኤጀንሲ ዋና ዳይሬክተር17.ወይዘሮ ምስራቅ ማሞ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር
ሆነው ተሹመዋል።
- source_sentence: በቁጥጥር ስር የዋሉ የህወሓት ታጣቂዎች ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ ከመሆን
እንዲቆጠቡ አስገነዘቡ
sentences:
- 'የፕሬዚዳንት ዶናልድ ትራምፕ ተቺዎች እንደሚሉት፤ ፕሬዚዳንቱ ለዘመናት የአሜሪካ ወዳጆች በሆኑት ኢትዮጵያ እና ግብፅ መካከል
ታላቁ የሕዳሴ ግድብን በተመለከተ ውጥረት ቀስቅሰዋል።ይህም በአሜሪካ እና በአፍሪካ የዲፕሎማሲ ታሪክ ትልቁ የትራምፕ ስህተት
ነው ይላሉ።ትራምፕ ከቀናት በፊት ግብፅ "ግድቡን ልታፈነዳው ትችላለች" ማለታቸው ይታወሳል። ጥር ላይ ፕሬዚዳንቱ "ስምምነት
መፍጠር ችያለሁ፤ ከባድ ጦርነትም አስቁሜያለሁ" ብለው የኖቤል የሰላም ሽልማት እንደሚገባቸው መናገራቸው ይታወሳል።ነገር ግን
ተሸላሚ የሆኑት ጠቅላይ ሚንስትር ዐብይ አሕመድ ነበሩ ።ትራምፕ የኖቤል የሰላም ሽልማት እንደሚገባቸው ሲናገሩ ጉዳዩን ግልፅ
ባያደርጉትም፤ በግብፁ ፕሬዘዳንት አብዱልፈታህ አል-ሲሲ ጥሪ መሠረት በኢትዮጵያ እና በግብፅ መካከል ጣልቃ ስለመግባታቸው እየተናገሩ
እንደነበረ ይታመናል።ትራምፕ በአንድ ወቅት አብዱልፈታህ አል-ሲሲን "የኔ ምርጡ አምባገነን" ማለታቸው አይዘነጋም።ግብፅ ታላቁ
ሕዳሴ ግድብ "ለደህንነቴ ያሰጋኛል" ትላለች። ሱዳንም የግብፅን ያህል ባይሆንም ስጋቱን ትጋራለች። በሌላ በኩል ኢትዮጵያ የኃይል
አመንጪውን ግድብ አስፈላጊነት አስረግጣ ትገልጻለች።ኬንያ የሚገኘው የአፍሪካ ቀንድ የጸጥታ ጉዳይ ተንታኝ ረሺድ አብዲ እንደሚለው፤
በግድቡ ዙሪያ ኢትዮጵያ እና ግብፅን ለማደራደር አሜሪካ ጣልቃ መግባቷ የሁለቱን አገሮች ውጥረት አባብሷል።"ኢትዮጵያ በግድቡ
አቅራቢያ የጸጥታ ኃይሏን እያጠናከረች ነው። ቤንሻንጉል ጉሙዝ ክልልን ከበረራ ውጪ ማድረጓ አንዱ ማሳያ ነው። በግድቡ ዙሪያ
በረራ የሚያግድ መሣሪያም ተገጥሟል። ግብፅ የወታደራዊ ቅኝት በረራ ልታደርግ እንደምትችል ከመስጋት የመነጨ ሊሆን ይችላል" ይላል።ተንታኙ
እንደሚናገረው፤ ትራምፕ ዓለም አቀፍ ዲፕሎማሲ እንዴት እንደሚሠራ የሚገነዘቡ አይመስልም።"በንግዱ ዓለም እንደሚደረገው ስምምነት
ላይ መድረስ ይቻላል የሚል የተዛባ አመለካከት አላቸው። የውጪ ጉዳይ መያዝ ያለበትን ጉዳይ ግምዣ ቤት ድርድሩን እንዲመራ ያደረጉትም
ለዚህ ነው። ከመነሻውም መጥፎ የነበረውን ሁኔታም አባብሶታል" ሲልም ረሺድ ያስረዳል።ኢትዮጵያ ከግብፅ እና ከሱዳን ጋር ያለው
ድርድር ሳይቋጭ ግድቡን ለመሙላት በመወሰኗ አሜሪካ የ100 ሚሊዮን ዶላር እርዳታ ማጠፏ ተዘግቧል።ረሺድ "ኢትዮጵያ አሜሪካ እንደከዳቻት
ይሰማታል። ብዙ ኢትዮጵያውያን ትራምፕን የጥላቻ ምልክት አድርገውታል" በማለት ሁኔታውን ይገልጻል።የዴሞክራት እጩው ጆ ባይደን
እንዲያሸንፉም የበርካታ ኢትዮጵያውያን ምኞት ነው።አሜሪካ የሚገኘው ሴንተር ፎር ግሎባል ዴቨሎፕመንት ውስጥ የፖሊሲ አጥኚ ደብሊው
ጉዬ ሙር እንደሚሉት፤ የትራምፕ አስተዳደር እስራኤልና የአረብ ሊግ አገራት መካከል ሰላም መፍጠር ስለሚፈልግ ከግብፅ ጎን መቆሙ
የሚጠበቅ ነው።ግብፅ ከእስራኤል ጋር ዘመናት ያስቆጠረ ዲፕሎማሲያዊ ትስስር አላት። ትራምፕ የአረብ ሊግ አገራት ለእስራኤል እውቅና
እንዲሰጡ ጥረት እያደረጉ ስለሆነ አብዱልፈታህ አል-ሲሲን ማስቀየም አይፈልጉም።ሙር እንደሚናገሩት፤ የትራምፕ አስተዳደር በግድቡ
ዙርያ ለግብፅ የወገነውም በዚህ ምክንያት ነው።ትራምፕ ሱዳንን በተመለከተ የደረሱበት ውሳኔ የአረቡን አገራት ከእስራኤል ጋር
ለማስስማት የሚያደርጉት ጥረት አንድ አካል ነው።ሱዳን ከእስራኤል ጋር ስምምነት ለማድረግ ወስናለች።በእርግጥ የአገሪቱ ተጠባባቂ
የውጪ ጉዳይ ሚንስትር ውሳኔው ገና በሕግ አውጪ መጽደቅ እንዳለበት ቢናገሩም፤ ሱዳን እንደ ጎርጎሮሳውያኑ 1967 ላይ የአረብ
ሊግ አገራት ውይይት ማስተናገዷ መዘንጋት የለበትም። በውይይቱ "ከእስራኤል ጋር መቼም ሰላም አይፈጠርም። መቼም ቢሆን ለእስራኤል
እውቅና አይሰጥም። ድርድርም አይካሄድም" ተብሎም ነበር።ሱዳን ከእስራኤል ጋር ለመስማማት በመፍቀዷ ትራምፕ ሽብርን ከሚድፉ አገሮች
ዝርዝር እንደሚያስወጧት ተናግረዋል። ይህም ለምጣኔ ሀብቷ ማገገም የሚረዳ ድጋፍ እንድታገኝ ያግዛታል።ትራምፕ በድጋሚ ከተመረጡ
ኢትዮጵያ ግድቡን በተመለከተ ሱዳን እና ግብፅ ላላቸው ስጋት አንዳች መልስ እንድትሰጥ ጫና እንደሚያደርጉ ይጠበቃል።አጥኚው እንደሚሉት፤
ሱዳን ሽብርን ከሚደግፉ አገሮች ዝርዝር ከወጣች የትራምፕ አስተዳደር በምላሹ የሚጠብቀው ነገር አለ።"ከእስራኤል ጋር ስምምነት
የመፍጠር ጉዳይ የሱዳን ማኅበረሰብን የከፋፈለ ነው። መንግሥት የራሱ የጸጥታ ጥያቄዎች እያሉበት ይህን ውሳኔ ማሳለፉ ችግር ሊያስከትል
ይችላል" ብለዋል። ትራምፕ አፍሪካን በተመለከተ የሚያራምዱት ፖሊሲ፤ በአሜሪካ እና በቻይና መካከል የሚካሄድ ''አዲሱ ቀዝቃዛ
ጦርነት'' ነው ሲል ረሺድ ይገልጸዋል።ለምሳሌ ቻይና ከግዛቷ ውጪ የመጀመሪያውን ወታደራዊ መቀመጫ የከፈተችው በጅቡቲ ነው። ማዕከሉ
የሚገኘው አሜሪካ የሶማሊያ ታጣቂዎች ላይ የአየር ጥቃት ለመሰንዘር ያቋቋመችው ማዕከል አቅራቢያ ነው።በቅርቡ የአሜሪካ ተዋጊ
ጀቶች ለማረፍ ሲሞክሩ፤ ቻይና የአሜሪካውያን ወታደሮችን እይታ የሚጋርድ መሣሪያ መሞከሯን ረሺድ ያጣቅሳል። "የትራምፕ አስተዳደር
ጸረ ቻይና ፖሊስ ያራምዳል" የሚለው ተንታኙ ሁኔታው ለአፍሪካ ቀንድ አስቸጋሪ መሆኑንም ያስረዳል።ቻይና አፍሪካ ውስጥ ያላትን
የንግድ የበላይነት ለመቀልበስ፤ የትራምፕ አስተዳደር ''ፕሮስፔሪቲ አፍሪካ ኢን 2018'' የተባለ ፖሊሲ ነድፏል።በአፍሪካ እና
በአሜሪካ መካከል የሚካሄደውን ንግድ በእጥፍ የማሳደግ እቅድ አለ። አምና የአሜሪካ መንግሥት የንግድ ተቋሞች አፍሪካ ውስጥ እንዲሠሩ
የገንዘብ ድጋፍ የሚሰጥበት አሠራር ዘርግቷል።ሙር እንደሚሉት፤ የአሜሪካ ድርጅቶች ከቻይና ተቋሞች ጋር መወዳደር አልቻልንም ብለው
ቅሬታ ስላሰሙ የገንዘብ ድጋፍ ለመስጠት ተወስኗል። "የአይቲ ዘርፍ እንደ ማሳያ ቢወሰድ፤ 70 በመቶ የአፍሪካ ኢንፎርሜሽን ቴክኖሎጂ
የተመሠረተው በቻይና ድርጅቶች ላይ ነው" ሲሉ ያብራራሉ። የትራምፕ አስተዳደር በ2025 የሚያበቃውን ከ30 በላይ የአፍሪካ አገሮች
ተጠቃሚ እንዲሆኑበት ታስቦ በአሜሪካ ለአፍሪካውያን የተሰጠው ከታሪፍና ከቀረጥ ነፃ የገበያ ዕድል (አፍሪካ ግሮዝ ኤንድ ኦፖርቹኒቲ
አክት-አጎዋ) የመሰረዝ እቅድ አለው። ለአፍሪካ ምርቶች የአሜሪካን ገበያ ክፍት የሚያደርገው ስምምነት የተፈረመው በቢል ክሊንተን
ነበር።አሜሪካ አሁን ላይ ትኩረቷ የሁለትዮሽ የንግድ ስምምነት እንደሆነ ሙር ይናገራሉ። ለምሳሌ ከኬንያ ጋር ንግግር እየተካሄደ
ነው።ኬንያ፤ የቻይና ''ቤልት ኤንድ ሮድ ኢኒሽየቲቭ'' አካል እንደሆነች ይታወቃል። ስምምነቱ ቻይናን ከአፍሪካ ጋር በንግድ
የሚያስተሳስርና የቻይና ዓለም አቀፍ ተደማጭነት የሚያጎላ እንደሆነ አሜሪካ ታምናለች።ትራምፕ ከኬንያ ጋር በቀጥታ ከተስማሙ በኋላ
ተመሳሳይ መንገድ ተጠቅመው ከሌሎች የአፍሪካ አገሮች ጋር የመሥራት ውጥን እንዳላቸው ሙር ይናገራሉ።ይህ የትራምፕ መንገድ፤ ከአፍሪካ
ሕብረት የንድግና ኢንዱስትሪ ኮሚሽነር አልበርት ሙቻንጋን ሐሳብ ጋር ይጣረሳል።እሳቸው የአፍሪካ አገራት በተናጠል ሳይሆን በአንድነት
ከአሜሪካ ጋር ስምምነት እንዲያደርጉ ይፈልጋሉ። ሙር እንደሚሉት፤ የአሜሪካ ውሳኔ የአፍሪካ ሕብረት የአህጉሪቱን ምጣኔ ሀብት
ለማጣመር ከሚያደርገው ጥረት ጋር ይጣረሳል።ሕብረቱ፤ አፍሪካን የዓለም ትልቋ ነጻ የንግድ ቀጠና የማድረግ አላማ አለው።ትራምፕ
ግን በጥምረት ከሚሠሩ ኃይሎች ጋር በጋራ ያለመደራደር አዝማሚያ ያሳያሉ ሲሉ አጥኚው ያክላሉ።የትራምፕ ተቀናቃኝ ጆ ባይደን ካሸነፉ
የአፍሪካ ፖሊሲያቸው ምን እንደሚሆን እስካሁን አልገለጹም።"የባይደን አስተዳደር በኦባማ ጊዜ ወደነበረው ሂደት ሊመለስ ይችላል"
ይላሉ ሙር። '
- አዲስ አበባ፣ ጥር 2፣ 2013(ኤፍ ቢ ሲ) የጋምቤላ ክልል ወጣት የሴራ ፖለቲካ አራማጆችን በዝምታ አይመለከቱም ሲል የክልሉ
ብልጽግና ፓርቲ ወጣቶች ሊግ ሰብሳቢ ወጣት ራች ጎች ገለጸ።የክልሉ የብልጽግና ፓርቲ ወጣቶች ሊግ የውይይት መድረክ ትናንት ተካሂዷል።ከአሁን
በፊት በነበረው የፖለቲካ ሴራ ወጣቱም ሆነ መላው የክልሉ ህዝብ ተጠቃሚ ሳይሆን ቆይቷል ያለው ሰብሳቢው ይህንን የህዝብ ጥቅም
የማያረጋግጥ የፖለቲካ ሴራ አካሄድ የክልሉ ወጣት እንደማይቀበለው ገልጿል።የክልሉ ህዝብ እኩል ተጠቃሚ የመሆን ዕድል ማግኘቱን
አስታውሶ፤ “በቀጣይ የሴራ ፖለቲካ አራማጆችን ወጣቱ በዝምታ አይመለከትም” ብሏል።የሊጉ ምክትል ሰብሳቢ ወጣት ኡጁሉ ቢሩ በበኩሉ
“ከአሁን በጎጥና በመንደር በመከፋፈል አንድነቱን ለመሸርሽር ሲሰራ ነበር” ብሏል።ህዝቡ ልዩነቶች እንዳማያስፈልጉ በመረዳቱ በክልሉ
ሰላም መረጋገጡን ጠቅሶ፤ “በቀጣይ በሚስማሙና በሚያግባቡ ጎዳዮች ዙሪያ እንሰራለን” ሲል ተናግሯል።የመድረኩ ተሳታፊ ወጣቶችም
ሀገርን ማልማትና ማሳደግ በሚያስችሉ ጉዳዮች ላይ ትኩረት ማድረግ እንደሚገባ በመግለጽ ሐሳብ አንስተዋል።ለዘንድሮ ምርጫ ሰላማዊ
ሂደትና ለተጀመረው የብልጽግና ጉዞ ስኬታማነት የበኩላቸውን አስተዋጽኦ ለማበርከት ዝግጁ መሆናቸውንም አረጋግጠዋል።ከጽንፈኝነትና
ከብሄርተኝነት አስተሳሰቦች በመውጣት መንግስት በጀመራቸው የሰላም፣ የዴምክራሲና የልማት ስራዎች በንቃት ለመሳተፍ ዝግጁ እንደሆኑ
መግለፃቸውን ኢዜአ ዘግቧል።የክልሉ ብልጽግና ፓርቲ ጽህፈት ቤት ኃላፊ አቶ ላክደር ላክባክ ፤ በሀገሪቱ እየተካሄደ ያለውን ሁለንተናዊ
ለውጥና የብልፅግና ጉዞ እውን ለማድረግ ወጣቱ ኃይል የማይተካ ሚና አለው ብለዋል።ከፌስቡክ ገፃችን በተጨማሪ ወቅታዊ፣ ትኩስ
እና የተሟሉ መረጃዎችን ለማግኘት፡-የፋና ድረ ገጽ ይጎብኙ፤ተንቀሳቃሽ ምስሎችን ለማግኘት የፋና ቴሌቪዥን የዩቲዩብ ቻናል ሰብስክራይብ
ያድርጉፈጣን መረጃዎችን ለማግኘት ትክክለኛውን የፋና ቴሌግራም ቻናል ይቀላቀሉከዚህ በተጨማሪም በትዊተር ገጻችን ይወዳጁንዘወትር
ከእኛ ጋር ስላሉ እናመሰግናለን!
- አዲስ አበባ ፣ ህዳር 1 ፣ 2013 (ኤፍ ቢ ሲ) ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ መሆን የለባቸውም
ሲሉ በቁጥጥር ስር የዋሉ የጽንፈኛው ህወሓት ቡድን ታጣቂዎች ገለጹ።ከአንድ ሳምንት በፊት በትግራይ ክልል በነበረው የመከላከያ
ሰራዊት ሰሜን ዕዝ ላይ በህወሓት ቡድን የተፈጸመውን ጥቃት ተከትሎ የሃገር መከላከያ ሠራዊት በጠቅላይ ሚኒስትር ዐቢይ አሕመድ
በተሰጠው ሃገርን የማዳን ተልዕኮ ሕግ ለማስከበር የዘመቻ ሥራዎችን እያከናወነ ይገኛል።የሠራዊቱ 5ኛ ሜካናይዝድ ክፍለ ጦር የህወሓትን
ታጣቂዎች በቁጥጥር ስር አውሏል።በቁጥጥር ስር የዋሉት ታጣቂዎች የትግራይ ልዩ ኃይልን የተቀላቀሉት ኑሯቸውን አሸንፈው ለማደግ
እንጂ ከሃገር መከላከያ ሠራዊት ጋር ለመዋጋት አለመሆኑን ገልጸዋል።ኑሮን ለማሸነፍ በሚል ወደ ልዩ ኃይሉ ቢገቡም የህወሓት የጥፋት
ቡድን እኩይ ዓላማ ማስፈጸሚያ ከመሆን ውጪ ያገኙት ነገር አለመኖሩን ነው የተናገሩት።ከሃገር መከላከያ ሠራዊት ጋር መጋጨት ማለት
ከኢትዮጵያ ጋር መጋጨት መሆኑንም ገልጸዋል።የትግራይ ልዩ ኃይል እና ወጣትም የህወሓት የጥፋት ቡድን ሰላባ እንዳይሆኑ ከሃገር
መከላከያ ሠራዊቱ ጎን መቆም እንዳለባቸው ተናግረዋል።ታጣቂዎቹ በቁጥጥር ስር ከዋሉ በኋላ በሃገር መከላከያ ሠራዊቱ የደረሰባቸው
ምንም አይነት ችግር እንደሌለና በአሁኑ ወቅት በጥሩ ሁኔታ ላይ እንደሚገኙም አስረድተዋል።የሃገር መከላከያ ሠራዊት እያከናወነ
ባለው ዘመቻ የትግራይ ልዩ ኃይልና ሚሊሻ አባላት በቁጥጥር ስር እየዋሉ መሆኑን ኢዜአ ዘግቧል።
datasets:
- rasyosef/amharic-passage-retrieval-dataset
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: RoBERTa Amharic Text Embedding Base
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.7023037351822858
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8335942742115858
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8711697606799373
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9129948557369716
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7023037351822858
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2778647580705286
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17423395213598747
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09129948557369715
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7023037351822858
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8335942742115858
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8711697606799373
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9129948557369716
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8082894138343916
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7746795930032351
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7776566707094859
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.6951464996645046
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8250950570342205
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8649071796018788
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9071796018787743
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6951464996645046
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27503168567807346
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17298143592037574
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09071796018787742
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6951464996645046
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8250950570342205
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8649071796018788
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9071796018787743
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8023571633917459
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7686455570821505
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7718664338092751
name: Cosine Map@100
---
# RoBERTa Amharic Text Embedding Base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [rasyosef/roberta-base-amharic](https://huggingface.co/rasyosef/roberta-base-amharic) on the [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/rasyosef/amharic-passage-retrieval-dataset) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Training Code
This model was trained as part of our **ACL 2025 Findings** paper: ***Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval***.
- **Models Collection:** https://huggingface.co/collections/rasyosef/amharic-text-embedding-models-679cb55eae1d498e3ac5bdc5
- **Code:** https://github.com/kidist-amde/amharic-ir-benchmarks
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [rasyosef/roberta-base-amharic](https://huggingface.co/rasyosef/roberta-base-amharic) <!-- at revision b1a3d2c267262e2b82c83be9d4e59db762a5e931 -->
- **Maximum Sequence Length:** 510 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/rasyosef/amharic-passage-retrieval-dataset)
- **Language:** am
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 510, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
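The pipeline above (transformer → mean pooling → L2 normalization) can be illustrated with a minimal plain-Python sketch. The token vectors and attention mask here are toy values, not real model outputs; the point is only to show what the `Pooling` and `Normalize` modules compute.

```python
import math

def mean_pool(token_embeddings, attention_mask):
    """Average the token vectors, counting only non-padding positions,
    as done by the Pooling module with pooling_mode_mean_tokens=True."""
    dim = len(token_embeddings[0])
    summed = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            for i, v in enumerate(vec):
                summed[i] += v
            count += 1
    return [s / count for s in summed]

def l2_normalize(vec):
    """Scale the vector to unit length, matching the Normalize() module."""
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

# Two real tokens plus one padding position (mask = 0)
tokens = [[1.0, 3.0], [3.0, 1.0], [9.0, 9.0]]
mask = [1, 1, 0]
pooled = mean_pool(tokens, mask)      # [2.0, 2.0] — padding is ignored
sentence_vec = l2_normalize(pooled)   # unit length, ready for cosine similarity
```

Because the output is unit-normalized, cosine similarity between two sentence vectors reduces to a plain dot product.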
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("rasyosef/roberta-amharic-text-embedding-base")
# Run inference
sentences = [
"የተደጋገመው የመሬት መንቀጥቀጥና የእሳተ ገሞራ ምልክት በአፋር ክልል",
"በአክሱም ከተማ የሚገኙ ሙስሊም ሴት ተማሪዎች ከሒጃብ መልበስ ጋር በተያያዘ ውዝግብ ከትምህርት ገበታ ውጭ ሆነው እንደሚገኙ የትግራይ እስልምና ጉዳዮች ምክርቤት ስታወቀ። ይህን ለመፍታት ከክልሉ ትምህርት ቢሮ ጋር ንግግር ላይ መሆኑም የክልሉ እስልምና ጉዳዮች ምክርቤት ለዶቼቬለ ገልጿል።",
"በማዕከላዊ ኢትዮጵያ ክልል ሃድያ ዞን ጊቤ ወረዳ በሚገኙ 12 ቀበሌዎች መሠረታዊ የመንግሥት አገልግሎት መስጫ ተቋማት በሙሉና በከፊል በመዘጋታቸው መቸገራቸውን ነዋሪዎች አመለከቱ። ከባለፈው ዓመት ጀምሮ የጤና፣ የትምህርት እና የግብር አሰባሰብ ሥራዎች በየአካባቢያቸው እየተከናወኑ አለመሆናቸውንም ለዶቼ ቬለ ተናግረዋል።",
"የሕዝብ ተወካዮች ምክር ቤት አባል እና የቋሚ ኮሚቴ ሰብሳቢ የነበሩት አቶ ክርስቲያን ታደለ እና የአማራ ክልል ምክር ቤት አባል የሆኑት አቶ ዮሐንስ ቧያለው ከቃሊቲ ወደ ቂሊንጦ ማረሚያ ቤት መዛወራቸውን ጠበቃቸው ተናገሩ።",
"ከተደጋጋሚ መሬት መንቀጥቀጥ በኋላ አፋር ክልል እሳት ከመሬት ውስጥ ሲፈላ ታይቷል፡፡ ከመሬት ውስጥ እሳትና ጭስ የሚተፋው እንፋሎቱ ዛሬ ማለዳውን 11 ሰዓት ግድም ከከባድ ፍንዳታ በኋላየተስተዋለ መሆኑን የአከባቢው ነዋሪዎች እና ባለስልጣናት ለዶቼ ቬለ ተናግረዋል፡፡ አለት የሚያፈናጥር እሳት ነው የተባለው እንፋሎቱ በክልሉ ጋቢረሱ (ዞን 03) ዱለቻ ወረዳ ሰጋንቶ ቀበሌ መከሰቱን የገለጹት የአከባቢው የአይን እማኞች ከዋናው ፍንዳታ በተጨማሪ በዙሪያው ተጨማሪ ፍንዳታዎች መታየት ቀጥሏል ባይ ናቸው፡፡"
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [5, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [5, 5]
```
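Since the model was trained with MatryoshkaLoss at dimensions 768 and 256, embeddings can also be truncated to their first 256 components and renormalized with little quality loss (Sentence Transformers exposes this via the `truncate_dim` argument of `SentenceTransformer`). The sketch below shows the truncate-and-renormalize step itself on toy 4-dimensional vectors standing in for the real 768-dimensional embeddings:

```python
import math

def truncate_and_renormalize(embedding, dim):
    """Keep the first `dim` components and rescale to unit length,
    mirroring what loading the model with truncate_dim=256 does."""
    head = embedding[:dim]
    norm = math.sqrt(sum(v * v for v in head))
    return [v / norm for v in head]

def cosine(a, b):
    # Vectors are unit length, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Toy "full" embeddings: they disagree only in the trailing dimensions
e1 = [0.5, 0.5, 0.5, 0.5]
e2 = [0.5, 0.5, -0.5, -0.5]
t1 = truncate_and_renormalize(e1, 2)
t2 = truncate_and_renormalize(e2, 2)
print(cosine(t1, t2))  # ≈ 1.0 — the leading dims carry the shared signal
```

Truncating to 256 dimensions cuts index memory and search cost by roughly 3x, at the small NDCG@10 drop shown in the dim_256 evaluation below.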
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
<details><summary>Click to expand</summary>
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7023 |
| cosine_accuracy@3 | 0.8336 |
| cosine_accuracy@5 | 0.8712 |
| cosine_accuracy@10 | 0.913 |
| cosine_precision@1 | 0.7023 |
| cosine_precision@3 | 0.2779 |
| cosine_precision@5 | 0.1742 |
| cosine_precision@10 | 0.0913 |
| cosine_recall@1 | 0.7023 |
| cosine_recall@3 | 0.8336 |
| cosine_recall@5 | 0.8712 |
| cosine_recall@10 | 0.913 |
| **cosine_ndcg@10** | **0.8083** |
| cosine_mrr@10 | 0.7747 |
| cosine_map@100 | 0.7777 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6951 |
| cosine_accuracy@3 | 0.8251 |
| cosine_accuracy@5 | 0.8649 |
| cosine_accuracy@10 | 0.9072 |
| cosine_precision@1 | 0.6951 |
| cosine_precision@3 | 0.275 |
| cosine_precision@5 | 0.173 |
| cosine_precision@10 | 0.0907 |
| cosine_recall@1 | 0.6951 |
| cosine_recall@3 | 0.8251 |
| cosine_recall@5 | 0.8649 |
| cosine_recall@10 | 0.9072 |
| **cosine_ndcg@10** | **0.8024** |
| cosine_mrr@10 | 0.7686 |
| cosine_map@100 | 0.7719 |
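The accuracy@k and MRR@10 figures above follow the standard definitions. A minimal sketch with hypothetical ranked result lists (not the actual evaluation data) shows how they are computed:

```python
def accuracy_at_k(ranked_ids, relevant_id, k):
    """1 if the relevant passage appears in the top-k results, else 0."""
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def reciprocal_rank(ranked_ids, relevant_id, k=10):
    """1/rank of the relevant passage within the top-k, else 0."""
    for i, doc_id in enumerate(ranked_ids[:k]):
        if doc_id == relevant_id:
            return 1.0 / (i + 1)
    return 0.0

# Three hypothetical queries: relevant doc ranked 1st, 3rd, and outside the top-10
runs = [(["d1", "d2", "d3"], "d1"),
        (["d4", "d5", "d6"], "d6"),
        ([f"d{i}" for i in range(20)], "d19")]

acc1 = sum(accuracy_at_k(r, rel, 1) for r, rel in runs) / len(runs)
mrr10 = sum(reciprocal_rank(r, rel) for r, rel in runs) / len(runs)
print(acc1, mrr10)  # ≈ 0.333  0.444
```

Because each query here has exactly one relevant passage, accuracy@k and recall@k coincide, which is why the two metric families report identical values in the tables above.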
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
</details>
## Training Details
<details><summary>Click to expand</summary>
### Training Dataset
#### amharic-passage-retrieval-dataset
* Dataset: [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/rasyosef/amharic-passage-retrieval-dataset) at [3ef7092](https://huggingface.co/datasets/rasyosef/amharic-passage-retrieval-dataset/tree/3ef7092c24f5494b79b09e0264b73528044c1c03)
* Size: 40,237 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 14.69 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 293.39 tokens</li><li>max: 510 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>ሚንስትር ዴኤታ ወይዘሮ አለም-ፀሀይ የአርባ ምንጭ ሆስፒታልና የኮቪድ-19 ሕክምና ማዕከልን ጎበኙ</code> | <code>አዲስ አበባ፣ መስከረም 13፣ 2013 (ኤፍ.ቢ.ሲ) የጤና ሚኒስቴር ሚንስትር ዴኤታ ወይዘሮ አለምፀሀይ ጳውሎስ በደቡብ ክልል ጋሞ ዞን የአርባ ምንጭ ከተማ ሆስፒታል እና ጤና ጣቢያ ጎብኙ፡፡እንዲሁም በኮቪድ-19 የህክምና ማዕከል ተገኝተው ያለውን የስራ እንቅስቃሴ መመልከታቸውም ተገልጸል፡፡ሚኒስትር ዴኤታዋ በጉብኝቱ ወቅት የህክምና ተቋማቱ ለአካባቢ ነዋሪዎች እየሰጡ ያለውን ዘርፈ ብዙ አገልግሎት እና ለኮቪድ 19 ወረርሽኝ የመከላከልና የመቆጣጠር ምላሽ አሠጣጥ የሚበረታታና ውጤታማ እንደሆነ ተናግረዋል፡፡በዚህም ለማዕከሉ ሰራተኞች ምስጋናቸውን አቅርበዋል፡፡የተቋማቱ ስራ ኃላፊዎችም ከሚኒስትር ዴኤታዋ ጋር መወያየታቸው ተሰምቷል፡፡ኃላፊዎቹ አገልግሎታቸውን በተሟላ መንገድ ለመስራት አያስችሉንም ያሏቸውን ጉድለቶች አንስተው ውይይት አድረገውባቸዋል፡፡የህክምና ተቋማቱ ያሉበት የስራ አፈጻጸም የሚበረታታ ቢሆንም ለተሻለ ስራ መነሳትና የጤና አገልግሎቱን ይበልጥ ማሻሻል ያስፈልጋል ሲሉ ሚኒስትር ዴኤታዋ ማሳሰባቸውን ከሚኒስቴሩ ያገኘነው መረጃ ያመለክታል፡፡</code> |
| <code>መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠየቁ</code> | <code>መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠይቀዋል፡፡የሰላም ሚኒስቴር ከሳይንስና ከፍተኛ ትምህርት ሚኒስቴርና የኢትዮጵያ መምህራን ማህበር ጋር በመተባበር ያዘጋጁት ሀገር አቀፍ መምህራን የሰላም ውይይት መድረክ በአዲስ አበባ እየተካሄደ ነው፡፡በዚህ የውይይት መድረክ ላይ የሰላም ሚኒስትሯ ወይዘሮ ሙፈሪያት ካሚልን ጨምሮ ሌሎች ባለድርሻ አካላት ተገኝተዋል፡፡ውይይቱ “ሰላምና ሀገር ወዳድ መምህራኖች ፤ ሰላምና ሀገር ወዳድ ተማሪዎችን ያፈራሉ” በሚል መሪ ቃል እየተካሄደ የሚገኝ ሲሆን መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠይቀዋል፡፡በውይይቱ ንግግር ያደረጉት የሰላም ሚኒስትር ወይዘሮ ሙፈሪያት ካሚል መምህራን ትውልድን መቅረጽ ካላቸው እድል አንፃር ሰላምን በመስበክ በኩል ከፍተኛ አስተዋጽኦ ሊያበርክቱ ይገባል ብለዋል፡፡ሀገራዊ ግንባታ ትምህርትና የተሟላ ስብዕና የሚጠይቅ በመሆኑም ለማህበረሰብ ስብዕናና የበለጸገ ትውልድን በመፍጠር ረገድ የመምህራን ሚና ክፍተኛ መሆኑንም ተናግረዋል።ትምህርት ቤቶች የሰላም ማዕድ ይሆኑ ዘንድም መምህራን እያከናዎኑት ያለውን ትውልድን የመቅረጽ ተግባር አጠናክረው መቀጠል እንዳለባቸውም ወይዘሮ ሙፈሪያት አሳስበዋል፡፡ በውይይቱ ላይ አስተያየት የሰጡት መምህራን በበኩላቸው ሰላም ሁሉንም የሚመለከት ጉዳይ በመሆኑ ሰላምን በመስበክና በማረጋገጥ ረገድ ከመንግስት ጋር በመሆን የሚጠበቅባቸውን ኃላፊነት እንደሚወጡ ገልጸዋል፡፡በተለይም የስነ ዜጋ፣ ስነ ምግባርና የታሪክ ትምህርት መምህራን ለተማሪዎች በሚያቀርቡት ትምህርት ላይ ሚዛናዊና ኃላፊነት በተሞላበት መንገድ ማቅረብ እንዳለባቸውም ጠቁመዋል፡፡ መምህሩ በስነ ምግባር አርዓያ በመሆን ሰላምና ግብ...</code> |
| <code>የኢትዮጵያ እና ማሊ ከ17 አመት በታች ብሄራዊ ቡድኖች ጨዋታ እሁድ ይካሄዳል</code> | <code>በአዲስ አበባ ስታድየም እየተዘጋጀ የሚገኘው ብሄራዊ ቡድኑ በዛሬው የልምምድ መርሃ ግብር በእሁዱ ጨዋታ ላይ ቋሚ ተሰላፊዎች ይሆናሉ ተብለው የሚገመቱትን በመለየት የቅንጅትና ከርቀት አክርሮ የመምታት ልምምዶችን አከናውኗል፡፡ባለፉት ሶስት ቀናት በመጠነኛ ጉዳት በልምምድ ወቅት አቋርጠው ሲወጡ የነበሩት ሳሙኤል ተስፋዬ እና አቡበከር ነስሩ በዛሬው ልምምድ ከቡድኑ ጋር ሙሉ ልምምድ የሰሩ ሲሆን ሁሉም ተጨዋቾች በሙሉ ጤንነት ላይ ይገኛሉ፡፡ከ17 አመት ቡድናችን እሁድ ዕለት ከአፍሮ ፅዮን ጋር ባደረጉት የአቋም መፈተሻ ጨዋታ ላይ ከአፍሮፅዮን በኩል መልካም እንቅስቃሴ ያሳዩ 6 ተጨዋቾች ጥሪ ቀርቦላቸው በዛሬው ልምምድ ላይ ተገኝተው ከቡድኑ ጋር ልምምድ ያደረጉ ቢሆንም አሳማኝ እንቅስቃሴ ባለማሳየታቸው እንዲመለሱ ተደርጓል፡፡ቀይ ቀበሮዎቹ በእሁዱ ጨዋታ በባማኮ የደረሰባቸውን የ2-0 ሽንፈት ቀልብሰው ወደ ማዳጋስካር የአፍሪካ ከ17 አመት በታች ዋንጫ ለማምራት በከፍተኛ ተነሳሽነት እና ፍላጎት ዝግጅታቸውን በማከናወን ላይ እንደሚገኙ ለመታዘብ ችለናል፡፡በኢትዮጵያ እና ማሊ መካከል የሚደረገው ጨዋታ እሁድ መስከረም 22 ቀን 2009 በአዲስ አበባ ስታድየም 10:00 ላይ የሚካሄድ ሲሆን ጨዋታው የሚካሄድበት የአዲስ አበባ ስታድየም ሜዳን ምቹ ለማድረግ የሚያስችሉ ስራዎች እየተከናወኑ ይገኛሉ፡፡የእሁዱ ተጋጣሚያችን የማሊ ከ17 አመት በታች ብሄራዊ ቡድን አርብ አዲስ አበባ ይገባል፡፡ ጨዋታውን የሚመሩት አራቱም ዳኞች ከኒጀር ፤ ኮሚሽነሩ ደግሞ ከዩጋንዳ እንደተመደቡም ታውቋል፡፡</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
256
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
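The configuration above sums a MultipleNegativesRankingLoss term computed at each truncation dimension, with equal weights. The following is a schematic plain-Python illustration of that idea on toy vectors; the real loss operates on batches of (anchor, positive) embeddings with temperature scaling, so treat this only as a sketch of the structure:

```python
import math

def mnrl(queries, passages):
    """In-batch negatives: each query's matching passage (same index)
    should score highest; returns mean softmax cross-entropy over the batch."""
    total = 0.0
    for qi, q in enumerate(queries):
        scores = [sum(x * y for x, y in zip(q, p)) for p in passages]
        log_denom = math.log(sum(math.exp(s) for s in scores))
        total += log_denom - scores[qi]
    return total / len(queries)

def matryoshka_loss(queries, passages, dims=(4, 2), weights=(1, 1)):
    """Sum the ranking loss over truncated views of the embeddings,
    mirroring matryoshka_dims=[768, 256] with matryoshka_weights=[1, 1]."""
    loss = 0.0
    for d, w in zip(dims, weights):
        loss += w * mnrl([q[:d] for q in queries], [p[:d] for p in passages])
    return loss

queries = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
passages = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
print(matryoshka_loss(queries, passages))
```

Training against both truncation lengths at once is what lets the published model be served at 256 dimensions without retraining.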
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 5
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_256_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:----------------------:|:----------------------:|
| -1 | -1 | - | 0.1013 | 0.0791 |
| 1.0 | 315 | 0.7981 | 0.7568 | 0.7435 |
| 2.0 | 630 | 0.1372 | 0.7744 | 0.7662 |
| 3.0 | 945 | 0.0596 | 0.7962 | 0.7874 |
| 4.0 | 1260 | 0.0369 | 0.8070 | 0.8012 |
| **5.0** | **1575** | **0.029** | **0.8083** | **0.8024** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
```
@inproceedings{mekonnen2025amharic,
title={Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval},
  author={Kidist Amde Mekonnen and Yosef Worku Alemneh and Maarten de Rijke},
booktitle={Findings of ACL},
year={2025}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
rasyosef/snowflake-arctic-embed-l-v2.0-finetuned-amharic | rasyosef | 2025-06-01T00:50:51Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:40237",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"am",
"dataset:rasyosef/amharic-passage-retrieval-dataset",
"base_model:Snowflake/snowflake-arctic-embed-l-v2.0",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l-v2.0",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-02-14T19:47:25Z | ---
language:
- en
- am
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:40237
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l-v2.0
widget:
- source_sentence: >-
የሞዴል ጥቃቅንና አነስተኛ ኢንተርፕራይዞች ኤግዚቢሽንና ባዛር የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር እንደሚፈጠር
ተገለጸ
sentences:
- >-
አዲስ አበባ ፣ ነሃሴ 22 ፣ 2012 (ኤፍ ቢ ሲ) ሰኔ 16 ቀን 2010 ዓ.ም በአዲስ አበባ መስቀል አደባባይ ለጠቅላይ
ሚኒስትር ዐቢይ አሕመድ በተካሄደ የድጋፍ ሰልፍ ላይ ቦምብ በመወርወር የሽብር ወንጀል የተከሰሱ አምስት ተከሳሾች
የጥፋተኝነት ፍርድ ተፈረደባቸው።ተከሳሾቹ ጌቱ ቶሎሳ፣ ብርሃኑ ጃፋር፣ ጥላሁን ጌታቸው፣ ደሳለኝ ተስፋዬ እና ባህሩ ቶላ
ሲሆኑ የጥፋተኝነት ፍርዱን የፌደራሉ ከፍተኛ ፍርድ ቤት 1ኛ የወንጀል ችሎት ነው ያስተላለፈው።የዐቃቤ ህግ ክስ
እንደሚያመላክተው ተከሳሾቹ ወንጀሉን የፈጸሙት ሰኔ 16 ቀን 2010 ዓ.ም በአዲስ አባባ መስቀል አደባባይ ከረፋዱ አራት
ሰአት ላይ በ40 ሜትር ርቀት አካባቢ ለጠቅላይ ሚኒስትር ዐቢይ አሕመድ በተደረገው የድጋፍ ሰልፍ ላይ ቦንብ በመወርወር
ነው።ተከሳሾቹ በ1996 ዓ.ም የወጣውን የኢፌዴሪ የወንጀል ህግ አንቀጽ 32/1ሀ እንዲሁም አንቀጽ 38 እና የፀረ
ሽብርተኝነት አዋጅ ቁጥር 652/2001 አንቀጽ 3 ስር የተመለከተውን በመተላለፍ፤ በሃገሪቱ ያለውን ለውጥ ተከትሎ
በጠቅላይ ሚኒስትር ዐቢይ የሚመራ መንግስት መኖር የለበትም በሚል የራሳቸውን አላማ ለማራመድ በማሰብ መንቀሳቀሳቸውን ዐቃቤ
ህግ በክሱ አመላክቷል።በዚህም ከ1ኛ እስከ 4ኛ ያሉ ተከሳሾች ከሱሉሉታ ከተማ መነሻቸውን በማድረግ በስልክ በመደዋወልና
በአካል በመገናኘት በድጋፍ ሰልፉ ላይ እንዴት ቦምብ መወርወር እንዳለባቸው ሲዘጋጁ ቆይተዋልም ነው ያለው ዐቃቤ
ህግ፡፡በዚህ መልኩ በ1ኛ ተከሳሽ ቤት ቡራዩ በማደር 2ኛ ተከሳሽ በሚያሽከረክረው ተሽከርካሪ 2ኛ ተከሳሽ ያዘጋጀውን ኤፍ1
ቦምብ በመያዝ ከ3 እስከ 5ኛ ያሉ ተከሳሾች ጋር ከፒያሳ ወደ ቴድሮስ አደባባይ በመምጣትና የድጋፍ ቲሸርት ልብስ ገዝተው
በመልበስ ተመሳስለው መግባታቸው ተጠቅሷል።በድጋፍ ሰልፉ ላይ ጠቅላይ ሚኒስትር ዐቢይ ንግግር ካደረጉ በኋላ ተከሳሾቹ በ40
ሜትር ርቀት ላይ ቦምብ የወረወሩ ሲሆን በዚህም የሁለት ሰዎች ህይወት ሲያልፍ ከ163 በላይ ሰዎች ላይ ደግሞ ከከባድ
እስከ ቀላል የአካል ጉዳት እንደደረሰባቸውም ዐቃቤ ህግ አስረድቷል፡፡የዐቃቤ ህግን የሰነድና የሰው ምስክር እንዲሁም
የተከሳሾችን መከላከያ የመረመረው ፍርድ ቤቱ ተከሳሾቹን በተከሰሱበት ወንጀል ጥፋተኛ ብሏቸዋል።በተከሳሾቹ ላይ የቅጣት
ውሳኔ ለመስጠትም ለጥቅምት 17 ቀን 2013 ዓ.ም ተለዋጭ ቀጠሮ ሰጥቷል።እስከ ጥቅምት 17 ድረስ ግን የቅጣት
ማቅለያዎችን ማቅረብ እንደሚቻል ትዕዛዝ ሰጥቷል።በታሪክ አዱኛ
- "አዲሱ ገረመው አዲስ አበባ፡- የ2013 በጀት ዓመት የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር እንደሚፈጥር የፌዴራል የከተሞች የስራ ዕድል ፈጠራና የምግብ ዋስትና ኤጀንሲ አስታወቀ። ከተሳታፊዎች ውስጥ 50 በመቶዎቹ ሴቶች መሆናቸው ተጠቆመ ። ኤጀንሲው ለአዲስ ዘመን\nጋዜጣ በላከው መግለጫ\nእንዳስታወቀው፤ በ2013 በጀት\nአመት አንደኛው ዙር\nየሞዴል ጥቃቅንና አነስተኛ\nኢንተርፕራይዞች ሀገር አቀፍ\nኤግዚቢሽንና ባዛር ‹‹ዘላቂነት\nያለው የገበያ ትስስር\nለስራ ዕድል ፈጠራና\nለኢንተርፕራይዞች ልማት መሰረት\nነው ›› በሚል\nመሪ ቃል ከታህሳስ\n22 እስከ ታህሳስ 28 ቀን\n2013 ዓ.ም\_በጀሞ አንድ አደባባይ ትራፊክ መብራት ፊትለፊት ለሰባት ተከታታይ ቀናት የሚካሄድ ይሆናል። የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር እንዲሚፈጥርም ይጠበቃል። በኤግዚቢሽንና ባዛሩ ላይ ከሁሉም ክልሎችና ከተሞች የተውጣጡ 202 የጥቃቅን እና አነስተኛ ኢንተርፕራይዞች 10 አነስተኛና መካከለኛ ኢንዱስትሪዎች የሚሳተፉ ሲሆን፤ ሴቶች 50 በመቶ እና አካል ጉዳተኛ ሦስት በመቶ በማሳተፍ ምርትና አገልግሎታቸው ከ20ሺ በላይ በሚሆን ተጠቃሚ የህብረተሰብ ክፍል እንዲጎበኝ ይደረጋል ብሏል ። ባዛሩ ከተለያዩ ክልሎችና አካባቢዎች የተሰባሰቡና በልዩ ልዩ ዘርፎች የተሰማሩ ብቁና ተወዳዳሪ ኢንተርፕራይዞችንና አንቀሳቃሾችን የሚያሳትፍ ሲሆን፤ በአንድ ማዕከል በማገናኘት በሚፈጠረው ትውውቅና የልምድ ልውውጥ በመካከላቸው ጤናማ የውድድር ስሜት ለማቀጣጠል እንደሚያስችልም “ኤጀንሲው አመልክቷል ። ባህላዊና ዘመናዊ የጨርቃጨርቅና\nአልባሳት ምርት ውጤቶች፣\nባህላዊና ዘመናዊ የቆዳ\nአልባሳትና የቆዳ ምርት\nውጤቶች፣ ባህላዊ የዕደ-ጥበባትና\nቅርጻ-ቅርጽ ሥራዎችና\nውጤቶች፣ የብረታብረት፣ የእንጨት\nሥራና የኢንጅነሪንግ ስራዎችና\nውጤቶች፣ የአግሮ-ፕሮሰሲንግ\nምርቶች እና የከተማ\nግብርና ውጤቶች፣ የቴክኖሎጂ\nውጤቶችና የፈጠራ ስራዎች፣\nፈሳሽ ሳሙና፣አልኮል፣ሳኒታይዘር፣\nየአፍና አፍንጫ መሸፈኛ\nጭንብል/ማስኮች/፣\nእና ሌሎችም ምርቶች\nበኤግዚቢሽንና ባዛሩ እንደሚቀርቡ\nአስታውቋል።\_የአዲስ አበባ ነጋዴ ሴቶች ማህበር፣ የሴቶች ኢንተርፕርነርሺፕ ልማት ፕሮግራም፣ ኢንተርፕርነርሺፕ ልማት ማዕከል፣ ፋሽን ዲዛይን አሶሴሽን፣ የሴቶች ራስ አገዝ ድርጅት፣ የባህልና ቱሪዝም ሚኒስቴር በዕደ ጥበብ ዘርፍ የተሰማሩ ኢንተርፕራይዞችና ሌሎችም ተሳታፊ ኢንተርፕራይዞች እንደሚሆኑ ጠቁሟል። ሁነቱ የተሞክሮ ልውውጥና\nየንግድ ልማት ግንዛቤ\nከማዳበሩም ባሻገር፤ ኢንተርፕራይዞች\nከተጠቃሚው ህብረተሰብ ጋር\nበሚያደርጉት ግንኙነት ዘላቂ\nየገበያ ትስስር ለመፍጠር\nየሚያስችል ምቹ አጋጣሚ\nይሆንላቸዋል። ምርቶቻቸውንና አገልግሎታቸውን\nለተጠቃሚዎች በቀጥታ በመሸጥም\nተጠቃሚ እንደሚሆኑም እጀንሲው\nአስታውቋል ።አዲስ ዘመን ታህሳስ 22/2013"
- >-
የአሜሪካው ሜሪየም ዌብስተር መዝገበ ቃላት እንደ ኦክስፎርድ መዝገበ ቃላት ሁሉ ታዋቂና ዓለም አቀፍ ተቀባይነት ያለው
መዝገበ ቃላት ነው።አንዲት ወጣት ጥቁር አሜሪካዊት ታዲያ ለዚህ መዝገበ ቃላት አሳታሚ በጻፈቸው ደብዳቤ ምክንያት መዝገበ
ቃላቱ ዘረኝነት ወይም (racism) ለሚለው የእንግሊዝኛ ቃል የትርጉም ፍቺ ማሻሻያ ለማድረግ ወስኗል።
- source_sentence: የደኢሕዴን ከፍተኛ አመራሮች በሐዋሳ እየመከሩ ነው
sentences:
- >-
የሁለት ዞኖች ከፍተኛ አመራሮች ታግደዋል የደቡብ ኢትዮጵያ ሕዝቦች ዴሞክራሲያዊ ንቅናቄ (ደኢሕዴን) ከፍተኛ አመራሮች
ከሐሙስ ሐምሌ 18 እስከ 22 ቀን 2011 ዓ.ም. ድረስ በሐዋሳ እየመከሩ ነው፡፡ ከፍተኛ አመራሮቹ በክልሉ ውስጥ
በተከሰተው ወቅታዊ ችግርና በአገራዊ ጉዳዮች ላይ እንደሚወያዩ፣ በተለይ በድርጅቱ ህልውና ላይ እንደሚያተኩሩም ታውቋል፡፡
የደኢሕዴን ሊቀመንበር ወ/ሮ ሙፈሪያት ካሚል በምክክሩ ላይ ባደረጉት ንግግር፣ በአገር ደረጃና በደቡብ ክልል የፖለቲካና
የፀጥታ ጉዳዮች ላይ ወጥ አቋም ያለው አመራር አስፈላጊነትን አውስተዋል፡፡ ከዚህ አንፃርም አመራሩ ራሱን በመፈተሽ ለለውጥ
ዝግጁ መሆን እንዳለበት አስታውቀዋል፡፡ እንደ ወ/ሮ ሙፈሪያት ማብራሪያ የደኢሕዴን ህልውና መረጋገጥ የሚችለው፣ አመራሩ
ከመቼውም ጊዜ በላይ መንቀሳቀስ ሲችል ብቻ እንደሆነ ነው፡፡ አመራሩ ምንም ነገር እንደማይመጣ በመኩራራት ወይም በወቅታዊ
ሁኔታዎች በመሥጋት የሚቀጥል ከሆነ ውጤት እንደማይኖር፣ በወቅቱ ተጨባጭ ሁኔታ ላይ በዝርዝር በመወያየት የድርጅቱ ህልውናን
ማስቀጠል ላይ ትኩረት መስጠት እንደሚገባ አስረድተዋል፡፡ ይህ በዚህ እንዳለ ደኢሕዴን የሲዳማ ዞን፣ የሐዋሳ ከተማና
የሃድያ ዞን ከፍተኛ አመራሮችን ማገዱንና ለወላይታና ለካፋ ዞኖች አመራሮች ደግሞ ማስጠንቀቂያ መስጠቱን አስታውቋል፡፡
ከክልልነት ጥያቄ ጋር በተያያዘ በተለይ በሲዳማ ዞን ወረዳዎችና በሐዋሳ ከተማ በተፈጸሙ ጥቃቶች የበርካቶች ሕይወት
ማለፉን፣ የበርካቶች ቤት ንብረት መውደሙን ተከትሎ የደቡብ ክልል በፌዴራል መንግሥት የፀጥታ አካላት ኮማንድ ፖስት ሥር
እንዲተዳደሩ መወሰኑ የሚታወስ ሲሆን፣ በዚህም ምክንያት የደኢሕዴን ሥራ አስፈጻሚ ኮሚቴ በሐዋሳ ከተማ ባደረገው ስብሰባ
የአመራሮቹን የዕግድ ውሳኔ አሳልፏል፡፡ በዚህ ስብሰባው የክልሉን የፀጥታ ሁኔታ እንደገመገመ የገለጸው የሥራ አስፈጻሚ
ኮሚቴው፣ በተፈጠረ የፀጥታ ችግሮች ሳቢያ የሲዳማ ዞንና የሐዋሳ ከተማን፣ እንዲሁም የሃዲያ ዞን ‹‹የፊት አመራሮች››
እንዳገደ አስታውቋል፡፡ በተያያዘም በወላይታና በካፋ ዞኖች እየታዩ ያሉ ሁኔታዎች የሕግ ተጠያቂነትን የሚያስከትሉ ስለሆኑ፣
አመራሩ የሕዝቡን ደኅንነት ለማስጠበቅ እንዲሠራ ሲል አስጠንቅቋል፡፡ በዚህም ሳቢያ የሲዳማ ዞን አስተዳዳሪ አቶ ቃሬ
ጫዊቻና የሐዋሳ ከተማ ከንቲባ አቶ ሱካሬ ሹዳ መታገዳቸውን ለማወቅ ተችሏል፡፡ የሥራ አስፈጻሚ ኮሚቴው በሐዋሳና በአካባቢው
ሐምሌ 11 ቀን 2011 ዓ.ም. ክልልነትን እናውጃለን በሚል በተፈጸመ ጥቃት የተጎዱ ቤተሰቦችን መልሶ ለማቋቋም እንደሚሠራ
በማስታወቅ፣ የጥፋቱ ተሳታፊዎችም ሆኑ አስተባባሪዎች የሕግ ተጠያቂ እንዲሆኑ እሠራለሁ ብሏል፡፡ አሁን ለተከሰተው ጥፋትም
ሆነ እየተስተዋለ በሚገኘው ሥርዓተ አልበኝነት ውስጥ የአመራሩ ሚና ከፍተኛ መሆኑን ያመነው የሥራ አስፈጻሚ ኮሚቴው፣
ይኼንን ለማረም ከሥራ አስፈጻሚ እስከ ታችኛው የአመራር ሥርዓት ድረስ ፈትሾ ዕርምጃ እንደሚወስድ ቃል ገብቷል፡፡
- >-
አዲስ አበባ፣ ጥር 2፣ 2012 (ኤፍ.ቢ.ሲ) በፓኪስታን ደቡብ ምእራብ ኩዌታ ከተማ በመስጊድ ላይ በተፈፀመ የቦብም ጥቃት
የሞቱ ሰዎች ቁጥር 15 መድረሱን ፖሊስ አስታወቀ።በአርብ ፀሎት ላይ በነበሩ ሰዎች ላይ በተፈፀመው የቦምብ ጥቃቱ ከሞቱት
ሰዎች በተጨማሪም ከ20 በላይ ሰዎች ላይ የተለያየ መጠን ያለው ጉዳት መድረሱንም ነው የገለፀው።በመስጊድ ላይ ለተፈፀመው
ጥቃትም በአካባቢው የሚንቀሳቀሰው የአሸባሪው ኢስላሚክ ስቴት (አይ.ኤስ) ቡድን ኃላፊነት መውሰዱ ተነገሯል።በሽብር ጥቃቱ
በአፍጋኒስታን የሚንቀሳቀሰው የታሊባን ቡድን አመራሮች ተገድለዋል ቢባልም፤ ታሊባን ግን አመራሮቼ ላይ ጉዳት አልደረሰም
ሲል አስተባብሏል።ምንጭ፦
- "በኢትዮጵያ ፕሪምየር ሊግ ዘጠነኛ ሳምንት መቐለ 70 እንደርታ በሜዳው ሲዳማ ቡናን 3-1 ካሸነፈ በኋላ የሁለቱ ቡድኖች አሰልጣኞች አስተያየታቸውን ሰጥተዋል።\_” ሲዳማ ቡና በጥሩ ወቅታዊ አቋም የሚገኝ ቡድን በመሆኑ ጨዋታው ከባድ ነበር” –\_ ገ/መድኅን ኃይሌ – መቐለ 70 እንደርታስለ ጨዋታው” ጨዋታው ከባድ ነበር፤ ሲዳማ ቡና በጥሩ ወቅታዊ አቋም የሚገኝ ቡድን ነው ፤ የያዙት ነጥብም ለዚህ ጨዋታ ጥሩ የስነልቦና ጥንካሬ አስገኝቶላቸዋል። በአንፃሩ እኛ አራት ጨዋታዎች ሳናሸንፍ ነው ወደ ጨዋታው የገባነው። በዚ ምክንያት ጨዋታው አክብዶብን ነበር። በአጠቃላይ ጨዋታውን አሸንፈናል። በቀጣይ ጨዋታዎች ቀስ በቀሰ ወደ አሸናፊነት መጥተን ይህን እናስቀጥላለን። ”“ዳኝነት ላይ ያየሁት ነገር ጥሩ አይደለም” ዘርዓይ ሙሉ – ሲዳማ ቡና\_ስለ ጨዋታው\_” ከዕረፍት በፊት ከጨዋታ ውጪ ኳሱ በኋላ ተጫዋቾቻችን መረጋጋት አልቻሉም። በጨዋታው አሳፋሪ ዳኝነት ነው ያየሁት። ስለ ጨዋታው ብጠይቀኝ አሳፋሪ እና ሚዛናዊት የሌለው ዳኝነት ነው። የተቆጠርቡን ግቦች እኛ ላይ ጥፋት እየተፈፀሙ የተቆጠሩ ናቸው። ከጨዋታ ውጭ ሆኖም ግብ ይቆጠራል። በቃ ይህንን ነው ያየሁት። ከዚ ውጭ ግን መቐለ ለማሸነፍ የነበረው ተነሳሽነት ጥሩ ነበር። እንደ ቡድን ተንቀሳቅሰዋል እኛም የተሻለ ኳስ ተቆጣጥረን ተጫውተናል። እንዳያችሁት ኳሱን መስርተን ነው የወጣነው ግን በተለያዩ ስህተቶች ግብ ሲቆጠርብን የተጫዋቾቻችን ብቃት አወረደው። የምንፈልገው እንቅስቃሴ ያላደረግነው በዳኞች ምክንያት ነው። ገና በሰባተኛ ደቂቃ ነው የተጀመረው ይሄ ነገር። ጨዋታው ጥሩ ሆኖ ሳለ ሚዛኑ የጠበቀ ዳኝነት አላየንም። ዳኝነቱ ልክ ካልሆነ የጨዋታው እንቅስቃሴ እንዳለ ይበላሻል ይሄ ሁሉ ደጋፊ የገባው ጥሩ ጨዋታ ለማየት ነው። ለምንድነው ተጫዋቾች ሮጠው ዳኛ ላይ የሚሄዱት። በተደጋጋሚ ስህተት ይሰራ ነበር። እኛ ተጫዋቾቻችንን ብናረጋጋም የሚያደርጉት ስህተት ለሌላ ነገር የሚዳርግ ነበር። ዳኞቹ አቅም አንሷቸው ነው ብዬ አላስብም፤ ሆን ተብሎ የተደረገ ነገር ነው። ዳኝነት ላይ ያየሁት ነገር ጥሩ አይደለም። መቐለን ግን እንደ ቡድን ጥሩ ነው እንኳን ደስ አላቹ ማለት እፈልጋለው። ”ስለ ስታድየሙ ድባብ” ደጋፊው የሚደነቅ ደጋፊ ነው። በስርዓት ነው ቡድኑን የሚደግፈው። ምንም ነገር ቢፈጠር ቡድኑን ነበር ሲደግፍ የነበረው። ”ዳኝነት ላይ ስለሰጠው አስተያየት” እኔ አዳላ አላልኩም። ግን ብቃት ማነስ ነው ብዬ አላስብም። እነዚህ ሁሉ ግቦች እስኪቆጠሩ ብቃት ማነስ አይደለም። በአጠቃላይ ዳኝነቱ ሚዘናዊ አልነበረም። ሁሉም ግብ ላይ የዳኛ ተፅዕኖ አለበት፤ በቃ ይሄን ነው የምለው። አንዱን ከጨዋታ ውጪ ብለህ አንዱን የምታፀድቅ ከሆነ ስህተት ነው። “"
- source_sentence: የከምባታና ጠንባሮ አርሶአደሮች
sentences:
- >-
በደሴ ማረሚያ ቤት በተደረገ የኮቪድ-19 ምርመራ 13 ሰዎች ቫይረሱ እንዳለባቸው ማረጋገጡን የከተማው ጤና መምሪያ
አስታወቀ።የመምሪያው ኃላፊ አቶ አብዱልሃሚድ ይመር በተለይ ለቢቢሲ እንዳስታወቁት 12ቱ የህግ ታራሚዎች ሲሆኑ ሌላኛው
ደግሞ የማረሚያ ቤቱ ባልደረባ ናቸው።እንደ አቶ አብዱልሃሚድ ገለጻ ከሆነ ከማረሚያ ቤቱ ጋር በመነጋገርም አዲስ የሚገቡ
ታራሚዎች ለ14 ቀናት ለብቻቸው እንዲቆዩ ከማድረግ በተጨማሪ በመጨረሻዎቹ ቀናት ላይ ምርመራ ሲደረግላቸው ቆይቷል።ከሐምሌ
20 በኋላ ማረሚያ ቤቱ የገቡ 46 ታራሚዎች ላይ በተደረገ ምርመራ 10 ሰዎች ኮሮናቫይረስ እንዳለባቸው ለማረጋገጥ
ተችሏል።“ታራሚዎቹ ከተለያዩ አካባቢዎች የመጡ ናቸው። ከተለያዩ ከደቡብ ወሎ ወረዳዎች እና ከደሴ ከተማም የተገኙ ናቸው”
ብለዋል።በሁለተኛ ዙር 60 ሰዎች ላይ በተደረገ ምርመራ ሦስቱ ቫይረሱ እንዳለባቸው ተረጋግጧል።በሁለተኛው ዙር ቫይረሱ
ከተገኘባቸው መካከል በመጀመሪያው ዙር እንዳለባቸው ከታወቁ ሰዎች ጋር ንክኪ የነበራቸው እና አንድ ማረሚያ ቤቱ ባልደረባ
ይገኙበታል።የማረሚያ ቤቱን የሕግ ታራሚዎች እና ባልደረባዎችን በሙሉ ለመመርመር መቻሉንም አቶ አብዱልሃሚድ
አስታውቀዋል።ቫይረሱ የተገኘባቸው ቦሩ ሜዳ መጀመሪያ ደረጃ ሆስፒታል የተላኩ ሲሆን፤ ተጓዳኝ ህመም ያለበት አንድ ታራሚ
ካሳየው የህመም ምልክት ውጭ ሁሉም በጥሩ ሁኔታ ላይ እንደሚገኙ ተናግረዋል።በማረሚያ ቤቱ የቫይረሱ ስርጭት እንዳይስፋፋ
አዲስ የሚገቡትን እና ነባር ታራሚዎችን ከመመርመር ባለፈ የግንዛቤ ማስጨበጫ ሥራ፣ የኬሚካል ርጭት፣ ርቀትን ማስጠበቅ እና
ንጽህና የማስጠበቅ ሥራ እየተከናወነ ነው ብለዋል።ባለፉት ወራት በአማራ ክልል በተደረገ የኮሮናቫይረስ ምርመራ 83
አሽከርካሪዎች እና ረዳቶቻቸው ቫይረሱ ተገኝቶባቸዋል።በክልሉ ቫይረሱ ከተገኘባቸው ሰዎች መካካል 23 የህክምና ባለሙያዎች
እንደሚገኙበትም ከአማራ ህብረተሰብ ጤና ኢንስቲትዩት ያገኘነው መረጃ ያሳያል።በአጠቃላይ በኢትዮጵያ በኮቪድ-19 የተያዙ
ሰዎች ቁጥር 25,118 የደረሱ ሲሆን የሟቾች ቁጥር 463 ደርሷል። እንዲሁም አጠቃላይ ከበሽታው ያገገሙ ሰዎች 11,034
ደርሰዋል።
- >-
በደቡብ ክልል ከፋ ዞን ዴቻ ወረዳ ከ20 ሺህ በላይ የከምባታና ጠምባሮ አርሶአደሮች በማንነታችን ጥቃት ደርሶብናል በማለት
እየተፈናቀሉ ናቸው፡፡አርሶአደሮቹ የተፈናቀሉት ከሶስት ሳምንት በፊት በወረዳው ከ30 በላይ ሲቪሎች በታጠቁ ግለሰቦች
በአሰቃቂ ሁኔታ መገደላቸውን ተከትሎ ነው ተብሏል፡፡ጉዳያችንን ለክልሉ መንግሥት ብናሳውቅም ችላ ተብለናል ሲሉ አርሶአደቹ
ተናግረዋል። አሁን ለችግር መጋለጣቸውንም ለቪኦኤ አስረድተዋል፡፡የከምባታ ጠንባሮ ዞን በበኩሉ የተፈናቀሉ ዜጎች በስቃይ
ላይ መሆናቸውን ገልጦ መፍትሔ እየተፈለገ መሆኑን አስታውቋል፡፡
- "\_ባሕር ዳር፡ መስከረም 7/2012 ዓ.ም (አብመድ) በጣልያን ባሕር ዳርቻ ጠባቂዎች ሕይወታቸው የተረፉ 90 ስደተኞችን ማልታ ለመቀበል ተስማማች፡፡በቀጣዩ ሳምንት ደግሞ በአዲስ የስደተኞች መከፋፈያ አሠራር ዘዴ ላይ የአውሮፓ ኅብረት ሊመክር ነው፡፡የማልታ የሕይወት አድን ትብብር ማዕከል በጠየቀው መሠረት ትናንት የጣልያን ባሕር ዳርቻ ጠባቂ ቡድን ስደተኞቹን ታድጓል፡፡ ከሊቢያ የባሕር ክልል ውጭ እየሰመጠች ከነበረች ጀልባ ነው ስደተኞቹን ማትረፍ የተቻለው፡፡ ማልታ በመጀመሪያ ስደተኞቹን ወደ ሀገሯ ለማስገባት ፈቃደኛ አልሆነችም ነበር፡፡"
- source_sentence: የአዲስ አበባ ከተማ አስተዳደር የጀመረው ኦዲት ወደ ባለ ኮከብ ሆቴሎችና ኢንዱስትሪዎች ተሸጋገረ
sentences:
- >-
የኢትዮጵያ እግር ኳስ ፌዴሬሽን ከኢትዮጵያ ብሮድካስቲንግ ኮርፖሬሽን (EBC) ጋር በተፈራረመው የመግባቢያ ሰነድ ስምምነት
ዙሪያ ከፕሪሚየር ሊግ ክለቦች ጋር ነገ ከጠዋቱ 4፡00 ጀምሮ በኢንተርኮንትኔንታል ሆቴል ውይይት ያካሂዳል፡፡በውይይቱ
ፌዴሬሽኑና EBC የኢትዮጵያ ፕሪሚየር ሊግ ጨዋታዎችን በቀጥታ የተሌቭዥን ስርጭት አማካኝነት በመላ ኢትዮጵያ ተደራሽ
ለማድረግ ነሃሴ 6/2007 ዓ.ም የተፈራረሙትን የመግባቢያ ሰነድ አስመልክቶ ስለ ስምምነቱ ፋይዳና ሂደት ገለፃ የሚደረግ
ሲሆን ከፕሪሚየር ሊግ ክለቦች ለሚነሱ ጥያቄዎች ማብራሪያ ይሰጣል፡፡ በክለቦች መብትና ተጠቃሚነት ዙሪያም ግልጽ ውይይት
ይካሄዳል፡፡ስምምነቱ ይፋ መደረጉንና መፈረሙን ተከትሎ ከተለያዩ በላድርሻ አከላት የተነሱት ጥያቄዎች በተለይም የኢትዮጵያ
ቡና ስፖርት ክለብ በደብዳቤ አቋሙን የገለጸበት አግባብ ተቀባይነት እንዳለው ታምኖበታል፡፡ ነገ ከጠዋቱ 4፡00 ጀምሮ
የሚካሄደውና የፕሪሚየር ሊግ ክለቦች ፕሬዝዳንቶች እና ስራ አስኪያጆች የሚሳተፉበት የውይይት መድረክ ስምምነቱን አስመልክቶ
ሊነሱ የሚችሉትን ጥያቄዎች በመቀበል የማስተካካያ ርምጃ ለመውሰድ የሚያስችል በመሆኑ ሁሉም ክለቦች የውይይቱ ተሳታፊ
እንዲሆኑ ፌዴሬሽኑ ጥሪውን አስተላልፋል፡፡ፌዴሬሽኑና ኢቢሲ አለም አቀፍና የሀገር ውስጥ ጨዋታዎችን በቴሌቭዥን የቀጥታ
ስርጭት ለማስተላለፍ የተፈራረሙት የመግባቢያ ሰነድ ዓላማዎች በዋነኝነት የወጣቱን ትውልድ የእግር ኳስ ስፖርት ተነሳሽነት
ማሳደግ፣ የብሔራዊ እና አገር ውስጥ ውድድሮችን የቀጥታ ስርጭት ተደራሽነት ማረጋገጥ እንዲሁም ለእግር ኳስ ስፖርት ዘላቂና
አስተማማኝ እድገት አመቺ ሁኔታዎችን በመፍጠር ላይ እንደሚመሰረት መገለጹ ይታወሳል፡፡ማስታወሻ፡- በውይይቱ የሚሳተፉት
የፌዴሬሽኑ የስራ ሃላፊዎችና የክለቦች ተወካዮች ብቻ ናቸው፡፡
- "ለመጀመርያ ጊዜ በተሟላ ደረጃ መሬትና መሬት ነክ ይዞታዎችን ኦዲት በማድረግ ላይ የሚገኘው የአዲስ አበባ ከተማ አስተዳደር፣ የኦዲት አድማሱን በማስፋት በባለ ኮከብ ሆቴሎችና በኢንዱስትሪዎች ላይ ቆጠራ ሊያካሂድ ነው፡፡ የአዲስ አበባ ከተማ አስተዳደር ከ1995 ዓ.ም. ጀምሮ እስከ ኅዳር 2004 ዓ.ም. የከተማ ቦታ በሊዝ ስለመያዝ የሚደነግገው እስኪወጣበት ጊዜ ድረስ፣ ላለፉት 15 ዓመታት በኢንዱስትሪ ዞኖችና በተናጠል ለሚካሄዱ ፋብሪካዎች በርካታ ቦታዎችን ሰጥቷል፡፡ ከዚህ በተጨማሪ ለበርካታ ሆቴሎች ግንባታ የሚሆን ሰፋፊ ቦታዎችንም እንዲሁ አቅርቧል፡፡ነገር ግን አስተዳደሩ በሰጣቸው ቦታዎች ላይ ስለተከናወነው ልማትም ሆነ፣ የተከናወኑት ግንባታዎች በውላቸው መሠረት ስለመካሄዳቸው በትክክል የተጠናቀረ መረጃ እንደሌለ ይገልጻል፡፡በከተማው ውስጥ የሚገኙ አምራች ኢንዱስትሪዎችንና ባለ ኮከብ ሆቴሎችን ቁጥር ለማወቅ፣ በአግባቡ ሥራዎችን ባላካሄዱት ላይ ደግሞ የማስተካከያ ዕርምጃ ለመውሰድ ኦዲት እንደሚከናወን ለማወቅ ተችሏል፡፡የአዲስ አበባ ከተማ አስተዳደር ምክትል ከንቲባ ታከለ ኡማ (ኢንጂነር) ለሪፖርተር፣ ‹‹እስካሁን ግንባታ ሳይካሄድባቸው ለዓመታት ታጥረው የቆዩ ከአራት ሚሊዮን ካሬ ሜትር በላይ ቦታ መልሰን ወስደናል፤›› ብለዋል፡፡‹‹‹ይህ ትልቅ ሥራ ነው፤›› በማለት ምክትል ከንቲባው ገልጸው፣ በቀጣይ ደግሞ በሆቴሎች፣ በኢንዱስትሪዎች፣ በድንጋይ ማምረቻ ካባዎች፣ እንዲሁም በመኖሪያ ቤቶች ላይ ኦዲት ተካሂዶ ዕርምጃ ይወሰዳል ሲሉ ገልጸዋል፡፡ ‹‹ሥራው ውስብስብ በመሆኑ የሚካሄደው ኦዲት አንዴ ብቻ ሳይሆን ሦስት፣ አራት ጊዜ ይታያል፡፡ ካስፈለገም የማረጋገጡን ሥራ ማዕከላዊ ስታትስቲክስ ኤጀንሲ ሊያከናውን ይችላል፤›› በማለት ምክትል ከንቲባው አስረድተዋል፡፡በአዲስ አበባ ከተማ አምራች ኢንዱስትሪዎች፣ ሆቴሎች፣ ለድንጋይ ማውጪያ የተሰጡ ቦታዎች ያሉበት ወቅታዊ ሁኔታ በትክክል አይታወቅም፡፡ ለእነዚህ ዘርፎች የቀረበው ቦታ ለታለመለት ዓላማ በትክክል ስለመዋሉ፣ ከዘርፉ የሚመነጨው ኢኮኖሚም ሆነ የተፈጠረው የሥራ ዕድል ሽፋን እምብዛም አይታወቅም፡፡ይህንን ሥራ በተሻለ ደረጃ ለመሥራት የከተማው ኢንዱስትሪ ቢሮ ከማዕከላዊ ስታትስቲክስ ኤጀንሲ ጋር በጋራ ለመሥራትም መስማማታቸው ታውቋል፡፡ የማዕከላዊ ስታትስቲክስ ኤጀንሲ የቢዝነስ ስታትስቲክስ ዳይሬክተር አቶ ዘለዓለም ኃይለ ጊዮርጊስ፣ በሆቴሎችና በኢንዱስትሪዎች ላይ ቆጠራውን ለማካሄድ ሙሉ ዝግጅት እየተደረገ መሆኑን ለሪፖርተር ገልጸው፣ በጉዳዩ ላይ ዝርዝር መረጃ ከመስጠት ተቆጥበዋል፡፡ \_"
- "ጠቅላይ ሚኒስትር ዶክተር አብይ አህመድ ለተለያዩ\_የመንግስት የስራ ሀላፊዎች ሹመት መስጠታቸውን የጠቅላይ ሚኒስቴር ጽህፈት ቤት አስታውቋል።በጠቅላይ ሚኒስትር ጽህፈት ቤት መግለጫ መሰረት፦\_1.ዶክተር አምባቸው መኮንን፦ የጠቅላይ ሚንስትሩ የመሰረተ ልማትና የከተማ ልማት አማካሪ ሚንስትር\_2.አቶ ገብረእግዚአብሔር አርአያ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ\_3.አቶ ጫኔ ሽመካ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ\_4.አቶ ጫላ ለሚ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ5.አቶ ተስፋሁን ጎበዛይ፦ የጠቅላይ ሚንስትሩ የብሔራዊ ደህንነት ጉዳዮች አማካሪ ሚንስትር ዴኤታ6.ብርጋዴል ጄኔራል አህመድ ሀምዛ፦ የብረታ ብረት ኢንጂነሪንግ ኮርፖሬሽን ዋና ዳይሬክተር7.አቶ ሞቱማ መቃሳ፦ የጠቅላይ ሚንስትሩ የብሔራዊ ደህንነት ጉዳዮች አማካሪ ሚንስትር ዴኤታ8.አቶ ከበደ ይማም፦ የአካባቢ ጥበቃ ደንና የአየር ንብረት ለውጥ ኮሚሽን ምክትል ኮሚሽነር9.አቶ አዘዘው ጫኔ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር10.አቶ አወል አብዲ፦ የብረታ ብረት ኢንጂነሪንግ ኮርፖሬሽን ምክትል ዋና ዳይሬክተር11.አቶ ሙሉጌታ በየነ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር12. ዶክተር ፅጌረዳ ክፍሌ፦ የብሔራዊ ኤች. አይ. ቪ/ኤድስ መከላከያና መቆጣጠሪያ ጽ/ቤት ዋና ዳይሬክተር13.ወይዘሮ ያምሮት አንዱዓለም፦ የአርማወር ሐሰን የምርምር ኢንስቲትዩት ምክትል ዋና ዳይሬክተር14.ዶክተር ሚዛን ኪሮስ፦ የኢትዮጵያ ጤና መድህን ኤጀንሲ ዋና ዳይሬክተር15.አቶ ሀሚድ ከኒሶ፦ የሰነዶች ማረጋገጫና ምዝገባ ኤጀንሲ ምክትል ዋና ዳይሬክተር16.አቶ ከበደ ጫኔ፦ የስደተኞችና ከስደት ተመላሾች ጉዳይ ኤጀንሲ ዋና ዳይሬክተር17.ወይዘሮ ምስራቅ ማሞ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር ሆነው ተሹመዋል።"
- source_sentence: >-
በቁጥጥር ስር የዋሉ የህወሓት ታጣቂዎች ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ ከመሆን እንዲቆጠቡ
አስገነዘቡ
sentences:
- >-
የፕሬዚዳንት ዶናልድ ትራምፕ ተቺዎች እንደሚሉት፤ ፕሬዚዳንቱ ለዘመናት የአሜሪካ ወዳጆች በሆኑት ኢትዮጵያ እና ግብፅ
መካከል ታላቁ የሕዳሴ ግድብን በተመለከተ ውጥረት ቀስቅሰዋል።ይህም በአሜሪካ እና በአፍሪካ የዲፕሎማሲ ታሪክ ትልቁ
የትራምፕ ስህተት ነው ይላሉ።ትራምፕ ከቀናት በፊት ግብፅ "ግድቡን ልታፈነዳው ትችላለች" ማለታቸው ይታወሳል። ጥር ላይ
ፕሬዚዳንቱ "ስምምነት መፍጠር ችያለሁ፤ ከባድ ጦርነትም አስቁሜያለሁ" ብለው የኖቤል የሰላም ሽልማት እንደሚገባቸው
መናገራቸው ይታወሳል።ነገር ግን ተሸላሚ የሆኑት ጠቅላይ ሚንስትር ዐብይ አሕመድ ነበሩ ።ትራምፕ የኖቤል የሰላም ሽልማት
እንደሚገባቸው ሲናገሩ ጉዳዩን ግልፅ ባያደርጉትም፤ በግብፁ ፕሬዘዳንት አብዱልፈታህ አል-ሲሲ ጥሪ መሠረት በኢትዮጵያ እና
በግብፅ መካከል ጣልቃ ስለመግባታቸው እየተናገሩ እንደነበረ ይታመናል።ትራምፕ በአንድ ወቅት አብዱልፈታህ አል-ሲሲን "የኔ
ምርጡ አምባገነን" ማለታቸው አይዘነጋም።ግብፅ ታላቁ ሕዳሴ ግድብ "ለደህንነቴ ያሰጋኛል" ትላለች። ሱዳንም የግብፅን ያህል
ባይሆንም ስጋቱን ትጋራለች። በሌላ በኩል ኢትዮጵያ የኃይል አመንጪውን ግድብ አስፈላጊነት አስረግጣ ትገልጻለች።ኬንያ
የሚገኘው የአፍሪካ ቀንድ የጸጥታ ጉዳይ ተንታኝ ረሺድ አብዲ እንደሚለው፤ በግድቡ ዙሪያ ኢትዮጵያ እና ግብፅን ለማደራደር
አሜሪካ ጣልቃ መግባቷ የሁለቱን አገሮች ውጥረት አባብሷል።"ኢትዮጵያ በግድቡ አቅራቢያ የጸጥታ ኃይሏን እያጠናከረች ነው።
ቤንሻንጉል ጉሙዝ ክልልን ከበረራ ውጪ ማድረጓ አንዱ ማሳያ ነው። በግድቡ ዙሪያ በረራ የሚያግድ መሣሪያም ተገጥሟል። ግብፅ
የወታደራዊ ቅኝት በረራ ልታደርግ እንደምትችል ከመስጋት የመነጨ ሊሆን ይችላል" ይላል።ተንታኙ እንደሚናገረው፤ ትራምፕ
ዓለም አቀፍ ዲፕሎማሲ እንዴት እንደሚሠራ የሚገነዘቡ አይመስልም።"በንግዱ ዓለም እንደሚደረገው ስምምነት ላይ መድረስ
ይቻላል የሚል የተዛባ አመለካከት አላቸው። የውጪ ጉዳይ መያዝ ያለበትን ጉዳይ ግምዣ ቤት ድርድሩን እንዲመራ ያደረጉትም
ለዚህ ነው። ከመነሻውም መጥፎ የነበረውን ሁኔታም አባብሶታል" ሲልም ረሺድ ያስረዳል።ኢትዮጵያ ከግብፅ እና ከሱዳን ጋር
ያለው ድርድር ሳይቋጭ ግድቡን ለመሙላት በመወሰኗ አሜሪካ የ100 ሚሊዮን ዶላር እርዳታ ማጠፏ ተዘግቧል።ረሺድ "ኢትዮጵያ
አሜሪካ እንደከዳቻት ይሰማታል። ብዙ ኢትዮጵያውያን ትራምፕን የጥላቻ ምልክት አድርገውታል" በማለት ሁኔታውን
ይገልጻል።የዴሞክራት እጩው ጆ ባይደን እንዲያሸንፉም የበርካታ ኢትዮጵያውያን ምኞት ነው።አሜሪካ የሚገኘው ሴንተር ፎር
ግሎባል ዴቨሎፕመንት ውስጥ የፖሊሲ አጥኚ ደብሊው ጉዬ ሙር እንደሚሉት፤ የትራምፕ አስተዳደር እስራኤልና የአረብ ሊግ
አገራት መካከል ሰላም መፍጠር ስለሚፈልግ ከግብፅ ጎን መቆሙ የሚጠበቅ ነው።ግብፅ ከእስራኤል ጋር ዘመናት ያስቆጠረ
ዲፕሎማሲያዊ ትስስር አላት። ትራምፕ የአረብ ሊግ አገራት ለእስራኤል እውቅና እንዲሰጡ ጥረት እያደረጉ ስለሆነ አብዱልፈታህ
አል-ሲሲን ማስቀየም አይፈልጉም።ሙር እንደሚናገሩት፤ የትራምፕ አስተዳደር በግድቡ ዙርያ ለግብፅ የወገነውም በዚህ ምክንያት
ነው።ትራምፕ ሱዳንን በተመለከተ የደረሱበት ውሳኔ የአረቡን አገራት ከእስራኤል ጋር ለማስስማት የሚያደርጉት ጥረት አንድ
አካል ነው።ሱዳን ከእስራኤል ጋር ስምምነት ለማድረግ ወስናለች።በእርግጥ የአገሪቱ ተጠባባቂ የውጪ ጉዳይ ሚንስትር ውሳኔው
ገና በሕግ አውጪ መጽደቅ እንዳለበት ቢናገሩም፤ ሱዳን እንደ ጎርጎሮሳውያኑ 1967 ላይ የአረብ ሊግ አገራት ውይይት
ማስተናገዷ መዘንጋት የለበትም። በውይይቱ "ከእስራኤል ጋር መቼም ሰላም አይፈጠርም። መቼም ቢሆን ለእስራኤል እውቅና
አይሰጥም። ድርድርም አይካሄድም" ተብሎም ነበር።ሱዳን ከእስራኤል ጋር ለመስማማት በመፍቀዷ ትራምፕ ሽብርን ከሚድፉ አገሮች
ዝርዝር እንደሚያስወጧት ተናግረዋል። ይህም ለምጣኔ ሀብቷ ማገገም የሚረዳ ድጋፍ እንድታገኝ ያግዛታል።ትራምፕ በድጋሚ
ከተመረጡ ኢትዮጵያ ግድቡን በተመለከተ ሱዳን እና ግብፅ ላላቸው ስጋት አንዳች መልስ እንድትሰጥ ጫና እንደሚያደርጉ
ይጠበቃል።አጥኚው እንደሚሉት፤ ሱዳን ሽብርን ከሚደግፉ አገሮች ዝርዝር ከወጣች የትራምፕ አስተዳደር በምላሹ የሚጠብቀው
ነገር አለ።"ከእስራኤል ጋር ስምምነት የመፍጠር ጉዳይ የሱዳን ማኅበረሰብን የከፋፈለ ነው። መንግሥት የራሱ የጸጥታ
ጥያቄዎች እያሉበት ይህን ውሳኔ ማሳለፉ ችግር ሊያስከትል ይችላል" ብለዋል። ትራምፕ አፍሪካን በተመለከተ የሚያራምዱት
ፖሊሲ፤ በአሜሪካ እና በቻይና መካከል የሚካሄድ 'አዲሱ ቀዝቃዛ ጦርነት' ነው ሲል ረሺድ ይገልጸዋል።ለምሳሌ ቻይና ከግዛቷ
ውጪ የመጀመሪያውን ወታደራዊ መቀመጫ የከፈተችው በጅቡቲ ነው። ማዕከሉ የሚገኘው አሜሪካ የሶማሊያ ታጣቂዎች ላይ የአየር
ጥቃት ለመሰንዘር ያቋቋመችው ማዕከል አቅራቢያ ነው።በቅርቡ የአሜሪካ ተዋጊ ጀቶች ለማረፍ ሲሞክሩ፤ ቻይና የአሜሪካውያን
ወታደሮችን እይታ የሚጋርድ መሣሪያ መሞከሯን ረሺድ ያጣቅሳል። "የትራምፕ አስተዳደር ጸረ ቻይና ፖሊስ ያራምዳል" የሚለው
ተንታኙ ሁኔታው ለአፍሪካ ቀንድ አስቸጋሪ መሆኑንም ያስረዳል።ቻይና አፍሪካ ውስጥ ያላትን የንግድ የበላይነት ለመቀልበስ፤
የትራምፕ አስተዳደር 'ፕሮስፔሪቲ አፍሪካ ኢን 2018' የተባለ ፖሊሲ ነድፏል።በአፍሪካ እና በአሜሪካ መካከል የሚካሄደውን
ንግድ በእጥፍ የማሳደግ እቅድ አለ። አምና የአሜሪካ መንግሥት የንግድ ተቋሞች አፍሪካ ውስጥ እንዲሠሩ የገንዘብ ድጋፍ
የሚሰጥበት አሠራር ዘርግቷል።ሙር እንደሚሉት፤ የአሜሪካ ድርጅቶች ከቻይና ተቋሞች ጋር መወዳደር አልቻልንም ብለው ቅሬታ
ስላሰሙ የገንዘብ ድጋፍ ለመስጠት ተወስኗል። "የአይቲ ዘርፍ እንደ ማሳያ ቢወሰድ፤ 70 በመቶ የአፍሪካ ኢንፎርሜሽን
ቴክኖሎጂ የተመሠረተው በቻይና ድርጅቶች ላይ ነው" ሲሉ ያብራራሉ። የትራምፕ አስተዳደር በ2025 የሚያበቃውን ከ30 በላይ
የአፍሪካ አገሮች ተጠቃሚ እንዲሆኑበት ታስቦ በአሜሪካ ለአፍሪካውያን የተሰጠው ከታሪፍና ከቀረጥ ነፃ የገበያ ዕድል
(አፍሪካ ግሮዝ ኤንድ ኦፖርቹኒቲ አክት-አጎዋ) የመሰረዝ እቅድ አለው። ለአፍሪካ ምርቶች የአሜሪካን ገበያ ክፍት
የሚያደርገው ስምምነት የተፈረመው በቢል ክሊንተን ነበር።አሜሪካ አሁን ላይ ትኩረቷ የሁለትዮሽ የንግድ ስምምነት እንደሆነ
ሙር ይናገራሉ። ለምሳሌ ከኬንያ ጋር ንግግር እየተካሄደ ነው።ኬንያ፤ የቻይና 'ቤልት ኤንድ ሮድ ኢኒሽየቲቭ' አካል
እንደሆነች ይታወቃል። ስምምነቱ ቻይናን ከአፍሪካ ጋር በንግድ የሚያስተሳስርና የቻይና ዓለም አቀፍ ተደማጭነት የሚያጎላ
እንደሆነ አሜሪካ ታምናለች።ትራምፕ ከኬንያ ጋር በቀጥታ ከተስማሙ በኋላ ተመሳሳይ መንገድ ተጠቅመው ከሌሎች የአፍሪካ
አገሮች ጋር የመሥራት ውጥን እንዳላቸው ሙር ይናገራሉ።ይህ የትራምፕ መንገድ፤ ከአፍሪካ ሕብረት የንድግና ኢንዱስትሪ
ኮሚሽነር አልበርት ሙቻንጋን ሐሳብ ጋር ይጣረሳል።እሳቸው የአፍሪካ አገራት በተናጠል ሳይሆን በአንድነት ከአሜሪካ ጋር
ስምምነት እንዲያደርጉ ይፈልጋሉ። ሙር እንደሚሉት፤ የአሜሪካ ውሳኔ የአፍሪካ ሕብረት የአህጉሪቱን ምጣኔ ሀብት ለማጣመር
ከሚያደርገው ጥረት ጋር ይጣረሳል።ሕብረቱ፤ አፍሪካን የዓለም ትልቋ ነጻ የንግድ ቀጠና የማድረግ አላማ አለው።ትራምፕ ግን
በጥምረት ከሚሠሩ ኃይሎች ጋር በጋራ ያለመደራደር አዝማሚያ ያሳያሉ ሲሉ አጥኚው ያክላሉ።የትራምፕ ተቀናቃኝ ጆ ባይደን
ካሸነፉ የአፍሪካ ፖሊሲያቸው ምን እንደሚሆን እስካሁን አልገለጹም።"የባይደን አስተዳደር በኦባማ ጊዜ ወደነበረው ሂደት
ሊመለስ ይችላል" ይላሉ ሙር።
- "አዲስ አበባ፣ ጥር 2፣ 2013(ኤፍ ቢ ሲ) የጋምቤላ ክልል ወጣት የሴራ ፖለቲካ አራማጆችን በዝምታ አይመለከቱም ሲል የክልሉ ብልጽግና ፓርቲ ወጣቶች ሊግ ሰብሳቢ ወጣት ራች ጎች ገለጸ።የክልሉ የብልጽግና ፓርቲ ወጣቶች ሊግ የውይይት መድረክ ትናንት ተካሂዷል።ከአሁን በፊት በነበረው የፖለቲካ ሴራ ወጣቱም ሆነ መላው የክልሉ ህዝብ ተጠቃሚ ሳይሆን ቆይቷል ያለው ሰብሳቢው ይህንን የህዝብ ጥቅም የማያረጋግጥ የፖለቲካ ሴራ አካሄድ የክልሉ ወጣት እንደማይቀበለው ገልጿል።የክልሉ ህዝብ እኩል ተጠቃሚ የመሆን ዕድል ማግኘቱን አስታውሶ፤ “በቀጣይ የሴራ ፖለቲካ አራማጆችን ወጣቱ በዝምታ አይመለከትም” ብሏል።የሊጉ ምክትል ሰብሳቢ ወጣት ኡጁሉ ቢሩ በበኩሉ “ከአሁን በጎጥና በመንደር በመከፋፈል አንድነቱን ለመሸርሽር ሲሰራ ነበር” ብሏል።ህዝቡ ልዩነቶች እንዳማያስፈልጉ በመረዳቱ በክልሉ ሰላም መረጋገጡን ጠቅሶ፤ “በቀጣይ በሚስማሙና በሚያግባቡ ጎዳዮች ዙሪያ እንሰራለን” ሲል ተናግሯል።የመድረኩ ተሳታፊ ወጣቶችም ሀገርን ማልማትና ማሳደግ በሚያስችሉ ጉዳዮች ላይ ትኩረት ማድረግ እንደሚገባ በመግለጽ ሐሳብ አንስተዋል።ለዘንድሮ ምርጫ ሰላማዊ ሂደትና ለተጀመረው የብልጽግና ጉዞ ስኬታማነት የበኩላቸውን አስተዋጽኦ ለማበርከት ዝግጁ መሆናቸውንም አረጋግጠዋል።ከጽንፈኝነትና ከብሄርተኝነት አስተሳሰቦች በመውጣት መንግስት በጀመራቸው የሰላም፣ የዴምክራሲና የልማት ስራዎች በንቃት ለመሳተፍ ዝግጁ እንደሆኑ መግለፃቸውን ኢዜአ ዘግቧል።የክልሉ ብልጽግና ፓርቲ ጽህፈት ቤት ኃላፊ አቶ ላክደር ላክባክ ፤ በሀገሪቱ እየተካሄደ ያለውን ሁለንተናዊ ለውጥና የብልፅግና ጉዞ እውን ለማድረግ ወጣቱ ኃይል የማይተካ\_ ሚና አለው ብለዋል።ከፌስቡክ ገፃችን በተጨማሪ ወቅታዊ፣ ትኩስ እና የተሟሉ መረጃዎችን ለማግኘት፡-የፋና ድረ ገጽ ይጎብኙ፤ተንቀሳቃሽ ምስሎችን ለማግኘት የፋና ቴሌቪዥን የዩቲዩብ ቻናል ሰብስክራይብ ያድርጉፈጣን መረጃዎችን ለማግኘት ትክክለኛውን የፋና ቴሌግራም ቻናል ይቀላቀሉከዚህ በተጨማሪም በትዊተር ገጻችን ይወዳጁንዘወትር ከእኛ ጋር ስላሉ እናመሰግናለን!"
- >-
አዲስ አበባ ፣ ህዳር 1 ፣ 2013 (ኤፍ ቢ ሲ) ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ መሆን
የለባቸውም ሲሉ በቁጥጥር ስር የዋሉ የጽንፈኛው ህወሓት ቡድን ታጣቂዎች ገለጹ።ከአንድ ሳምንት በፊት በትግራይ ክልል
በነበረው የመከላከያ ሰራዊት ሰሜን ዕዝ ላይ በህወሓት ቡድን የተፈጸመውን ጥቃት ተከትሎ የሃገር መከላከያ ሠራዊት በጠቅላይ
ሚኒስትር ዐቢይ አሕመድ በተሰጠው ሃገርን የማዳን ተልዕኮ ሕግ ለማስከበር የዘመቻ ሥራዎችን እያከናወነ ይገኛል።የሠራዊቱ
5ኛ ሜካናይዝድ ክፍለ ጦር የህወሓትን ታጣቂዎች በቁጥጥር ስር አውሏል።በቁጥጥር ስር የዋሉት ታጣቂዎች የትግራይ ልዩ
ኃይልን የተቀላቀሉት ኑሯቸውን አሸንፈው ለማደግ እንጂ ከሃገር መከላከያ ሠራዊት ጋር ለመዋጋት አለመሆኑን ገልጸዋል።ኑሮን
ለማሸነፍ በሚል ወደ ልዩ ኃይሉ ቢገቡም የህወሓት የጥፋት ቡድን እኩይ ዓላማ ማስፈጸሚያ ከመሆን ውጪ ያገኙት ነገር
አለመኖሩን ነው የተናገሩት።ከሃገር መከላከያ ሠራዊት ጋር መጋጨት ማለት ከኢትዮጵያ ጋር መጋጨት መሆኑንም
ገልጸዋል።የትግራይ ልዩ ኃይል እና ወጣትም የህወሓት የጥፋት ቡድን ሰላባ እንዳይሆኑ ከሃገር መከላከያ ሠራዊቱ ጎን መቆም
እንዳለባቸው ተናግረዋል።ታጣቂዎቹ በቁጥጥር ስር ከዋሉ በኋላ በሃገር መከላከያ ሠራዊቱ የደረሰባቸው ምንም አይነት ችግር
እንደሌለና በአሁኑ ወቅት በጥሩ ሁኔታ ላይ እንደሚገኙም አስረድተዋል።የሃገር መከላከያ ሠራዊት እያከናወነ ባለው ዘመቻ
የትግራይ ልዩ ኃይልና ሚሊሻ አባላት በቁጥጥር ስር እየዋሉ መሆኑን ኢዜአ ዘግቧል።
datasets:
- rasyosef/amharic-passage-retrieval-dataset
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: Snowflake Arctic Embed L Amharic
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 1024
type: dim_1024
metrics:
- type: cosine_accuracy@1
value: 0.7609036009841199
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8830239320062626
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9129948557369716
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9418474614180273
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7609036009841199
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2943413106687542
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18259897114739432
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09418474614180272
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7609036009841199
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8830239320062626
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9129948557369716
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9418474614180273
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.855137055526685
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8269398380391463
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8289477032031941
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.7539700290762693
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8776560053679267
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9085215835383583
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9391634980988594
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7539700290762693
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2925520017893089
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18170431670767165
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09391634980988592
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7539700290762693
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8776560053679267
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9085215835383583
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9391634980988594
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8503334638727912
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8214851973742615
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8235238738982227
name: Cosine Map@100
---
# Snowflake Arctic Embed L Amharic
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) on the [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/rasyosef/amharic-passage-retrieval-dataset) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Training Code
This model was trained as part of our **ACL 2025 Findings** paper: ***Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval***.
- **Models Collection:** https://huggingface.co/collections/rasyosef/amharic-text-embedding-models-679cb55eae1d498e3ac5bdc5
- **Code:** https://github.com/kidist-amde/amharic-ir-benchmarks
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) <!-- at revision dcf86e284785c825570c5fd512ddd682b386fa3d -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/rasyosef/amharic-passage-retrieval-dataset)
- **Language:** en, am
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
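The architecture above applies CLS-token pooling followed by L2 normalization. A minimal numpy sketch of those two post-transformer steps, using hypothetical token embeddings in place of real XLMRobertaModel output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transformer output for one sequence:
# (seq_len, hidden_dim) token embeddings; hidden_dim is 1024 in this model.
token_embeddings = rng.standard_normal((12, 1024)).astype(np.float32)

# (1) CLS pooling: the sentence embedding is the first token's vector
#     (pooling_mode_cls_token=True in the Pooling module above).
cls_embedding = token_embeddings[0]

# (2) Normalize(): L2-normalize so that a dot product equals cosine similarity.
normalized = cls_embedding / np.linalg.norm(cls_embedding)

print(normalized.shape)  # (1024,)
```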
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("rasyosef/snowflake-arctic-embed-l-v2.0-finetuned-amharic")
# Run inference
sentences = [
"የተደጋገመው የመሬት መንቀጥቀጥና የእሳተ ገሞራ ምልክት በአፋር ክልል",
"በአክሱም ከተማ የሚገኙ ሙስሊም ሴት ተማሪዎች ከሒጃብ መልበስ ጋር በተያያዘ ውዝግብ ከትምህርት ገበታ ውጭ ሆነው እንደሚገኙ የትግራይ እስልምና ጉዳዮች ምክርቤት ስታወቀ። ይህን ለመፍታት ከክልሉ ትምህርት ቢሮ ጋር ንግግር ላይ መሆኑም የክልሉ እስልምና ጉዳዮች ምክርቤት ለዶቼቬለ ገልጿል።",
"በማዕከላዊ ኢትዮጵያ ክልል ሃድያ ዞን ጊቤ ወረዳ በሚገኙ 12 ቀበሌዎች መሠረታዊ የመንግሥት አገልግሎት መስጫ ተቋማት በሙሉና በከፊል በመዘጋታቸው መቸገራቸውን ነዋሪዎች አመለከቱ። ከባለፈው ዓመት ጀምሮ የጤና፣ የትምህርት እና የግብር አሰባሰብ ሥራዎች በየአካባቢያቸው እየተከናወኑ አለመሆናቸውንም ለዶቼ ቬለ ተናግረዋል።",
"የሕዝብ ተወካዮች ምክር ቤት አባል እና የቋሚ ኮሚቴ ሰብሳቢ የነበሩት አቶ ክርስቲያን ታደለ እና የአማራ ክልል ምክር ቤት አባል የሆኑት አቶ ዮሐንስ ቧያለው ከቃሊቲ ወደ ቂሊንጦ ማረሚያ ቤት መዛወራቸውን ጠበቃቸው ተናገሩ።",
"ከተደጋጋሚ መሬት መንቀጥቀጥ በኋላ አፋር ክልል እሳት ከመሬት ውስጥ ሲፈላ ታይቷል፡፡ ከመሬት ውስጥ እሳትና ጭስ የሚተፋው እንፋሎቱ ዛሬ ማለዳውን 11 ሰዓት ግድም ከከባድ ፍንዳታ በኋላየተስተዋለ መሆኑን የአከባቢው ነዋሪዎች እና ባለስልጣናት ለዶቼ ቬለ ተናግረዋል፡፡ አለት የሚያፈናጥር እሳት ነው የተባለው እንፋሎቱ በክልሉ ጋቢረሱ (ዞን 03) ዱለቻ ወረዳ ሰጋንቶ ቀበሌ መከሰቱን የገለጹት የአከባቢው የአይን እማኞች ከዋናው ፍንዳታ በተጨማሪ በዙሪያው ተጨማሪ ፍንዳታዎች መታየት ቀጥሏል ባይ ናቸው፡፡"
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [5, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [5, 5]
```
## Evaluation
<details><summary>Click to expand</summary>
### Metrics
#### Information Retrieval
* Dataset: `dim_1024`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 1024
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7609 |
| cosine_accuracy@3 | 0.883 |
| cosine_accuracy@5 | 0.913 |
| cosine_accuracy@10 | 0.9418 |
| cosine_precision@1 | 0.7609 |
| cosine_precision@3 | 0.2943 |
| cosine_precision@5 | 0.1826 |
| cosine_precision@10 | 0.0942 |
| cosine_recall@1 | 0.7609 |
| cosine_recall@3 | 0.883 |
| cosine_recall@5 | 0.913 |
| cosine_recall@10 | 0.9418 |
| **cosine_ndcg@10** | **0.8551** |
| cosine_mrr@10 | 0.8269 |
| cosine_map@100 | 0.8289 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.754 |
| cosine_accuracy@3 | 0.8777 |
| cosine_accuracy@5 | 0.9085 |
| cosine_accuracy@10 | 0.9392 |
| cosine_precision@1 | 0.754 |
| cosine_precision@3 | 0.2926 |
| cosine_precision@5 | 0.1817 |
| cosine_precision@10 | 0.0939 |
| cosine_recall@1 | 0.754 |
| cosine_recall@3 | 0.8777 |
| cosine_recall@5 | 0.9085 |
| cosine_recall@10 | 0.9392 |
| **cosine_ndcg@10** | **0.8503** |
| cosine_mrr@10 | 0.8215 |
| cosine_map@100 | 0.8235 |
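Because the model was trained with MatryoshkaLoss at dimensions 1024 and 256, embeddings can be truncated to their first 256 components and re-normalized with only a small drop in retrieval quality, as the dim_256 table shows. A minimal numpy sketch of that truncation, using hypothetical pre-computed embeddings (with `sentence-transformers`, passing `truncate_dim=256` when loading the model has the same effect):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical batch of full-size (1024-dim) L2-normalized embeddings.
full = rng.standard_normal((5, 1024)).astype(np.float32)
full /= np.linalg.norm(full, axis=1, keepdims=True)

# Matryoshka truncation: keep the first 256 dimensions, then re-normalize
# so cosine similarity is again a simple dot product.
truncated = full[:, :256]
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

similarities = truncated @ truncated.T
print(truncated.shape)     # (5, 256)
print(similarities.shape)  # (5, 5)
```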
</details>
## Training Details
<details><summary>Click to expand</summary>
### Training Dataset
#### amharic-passage-retrieval-dataset
* Dataset: [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/rasyosef/amharic-passage-retrieval-dataset) at [3ef7092](https://huggingface.co/datasets/rasyosef/amharic-passage-retrieval-dataset/tree/3ef7092c24f5494b79b09e0264b73528044c1c03)
* Size: 40,237 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 23.09 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 507.11 tokens</li><li>max: 1024 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>ሚንስትር ዴኤታ ወይዘሮ አለም-ፀሀይ የአርባ ምንጭ ሆስፒታልና የኮቪድ-19 ሕክምና ማዕከልን ጎበኙ</code> | <code>አዲስ አበባ፣ መስከረም 13፣ 2013 (ኤፍ.ቢ.ሲ) የጤና ሚኒስቴር ሚንስትር ዴኤታ ወይዘሮ አለምፀሀይ ጳውሎስ በደቡብ ክልል ጋሞ ዞን የአርባ ምንጭ ከተማ ሆስፒታል እና ጤና ጣቢያ ጎብኙ፡፡እንዲሁም በኮቪድ-19 የህክምና ማዕከል ተገኝተው ያለውን የስራ እንቅስቃሴ መመልከታቸውም ተገልጸል፡፡ሚኒስትር ዴኤታዋ በጉብኝቱ ወቅት የህክምና ተቋማቱ ለአካባቢ ነዋሪዎች እየሰጡ ያለውን ዘርፈ ብዙ አገልግሎት እና ለኮቪድ 19 ወረርሽኝ የመከላከልና የመቆጣጠር ምላሽ አሠጣጥ የሚበረታታና ውጤታማ እንደሆነ ተናግረዋል፡፡በዚህም ለማዕከሉ ሰራተኞች ምስጋናቸውን አቅርበዋል፡፡የተቋማቱ ስራ ኃላፊዎችም ከሚኒስትር ዴኤታዋ ጋር መወያየታቸው ተሰምቷል፡፡ኃላፊዎቹ አገልግሎታቸውን በተሟላ መንገድ ለመስራት አያስችሉንም ያሏቸውን ጉድለቶች አንስተው ውይይት አድረገውባቸዋል፡፡የህክምና ተቋማቱ ያሉበት የስራ አፈጻጸም የሚበረታታ ቢሆንም ለተሻለ ስራ መነሳትና የጤና አገልግሎቱን ይበልጥ ማሻሻል ያስፈልጋል ሲሉ ሚኒስትር ዴኤታዋ ማሳሰባቸውን ከሚኒስቴሩ ያገኘነው መረጃ ያመለክታል፡፡</code> |
| <code>መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠየቁ</code> | <code>መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠይቀዋል፡፡የሰላም ሚኒስቴር ከሳይንስና ከፍተኛ ትምህርት ሚኒስቴርና የኢትዮጵያ መምህራን ማህበር ጋር በመተባበር ያዘጋጁት ሀገር አቀፍ መምህራን የሰላም ውይይት መድረክ በአዲስ አበባ እየተካሄደ ነው፡፡በዚህ የውይይት መድረክ ላይ የሰላም ሚኒስትሯ ወይዘሮ ሙፈሪያት ካሚልን ጨምሮ ሌሎች ባለድርሻ አካላት ተገኝተዋል፡፡ውይይቱ “ሰላምና ሀገር ወዳድ መምህራኖች ፤ ሰላምና ሀገር ወዳድ ተማሪዎችን ያፈራሉ” በሚል መሪ ቃል እየተካሄደ የሚገኝ ሲሆን መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠይቀዋል፡፡በውይይቱ ንግግር ያደረጉት የሰላም ሚኒስትር ወይዘሮ ሙፈሪያት ካሚል መምህራን ትውልድን መቅረጽ ካላቸው እድል አንፃር ሰላምን በመስበክ በኩል ከፍተኛ አስተዋጽኦ ሊያበርክቱ ይገባል ብለዋል፡፡ሀገራዊ ግንባታ ትምህርትና የተሟላ ስብዕና የሚጠይቅ በመሆኑም ለማህበረሰብ ስብዕናና የበለጸገ ትውልድን በመፍጠር ረገድ የመምህራን ሚና ክፍተኛ መሆኑንም ተናግረዋል።ትምህርት ቤቶች የሰላም ማዕድ ይሆኑ ዘንድም መምህራን እያከናዎኑት ያለውን ትውልድን የመቅረጽ ተግባር አጠናክረው መቀጠል እንዳለባቸውም ወይዘሮ ሙፈሪያት አሳስበዋል፡፡ በውይይቱ ላይ አስተያየት የሰጡት መምህራን በበኩላቸው ሰላም ሁሉንም የሚመለከት ጉዳይ በመሆኑ ሰላምን በመስበክና በማረጋገጥ ረገድ ከመንግስት ጋር በመሆን የሚጠበቅባቸውን ኃላፊነት እንደሚወጡ ገልጸዋል፡፡በተለይም የስነ ዜጋ፣ ስነ ምግባርና የታሪክ ትምህርት መምህራን ለተማሪዎች በሚያቀርቡት ትምህርት ላይ ሚዛናዊና ኃላፊነት በተሞላበት መንገድ ማቅረብ እንዳለባቸውም ጠቁመዋል፡፡ መምህሩ በስነ ምግባር አርዓያ በመሆን ሰላምና ግብ...</code> |
| <code>የኢትዮጵያ እና ማሊ ከ17 አመት በታች ብሄራዊ ቡድኖች ጨዋታ እሁድ ይካሄዳል</code> | <code>በአዲስ አበባ ስታድየም እየተዘጋጀ የሚገኘው ብሄራዊ ቡድኑ በዛሬው የልምምድ መርሃ ግብር በእሁዱ ጨዋታ ላይ ቋሚ ተሰላፊዎች ይሆናሉ ተብለው የሚገመቱትን በመለየት የቅንጅትና ከርቀት አክርሮ የመምታት ልምምዶችን አከናውኗል፡፡ባለፉት ሶስት ቀናት በመጠነኛ ጉዳት በልምምድ ወቅት አቋርጠው ሲወጡ የነበሩት ሳሙኤል ተስፋዬ እና አቡበከር ነስሩ በዛሬው ልምምድ ከቡድኑ ጋር ሙሉ ልምምድ የሰሩ ሲሆን ሁሉም ተጨዋቾች በሙሉ ጤንነት ላይ ይገኛሉ፡፡ከ17 አመት ቡድናችን እሁድ ዕለት ከአፍሮ ፅዮን ጋር ባደረጉት የአቋም መፈተሻ ጨዋታ ላይ ከአፍሮፅዮን በኩል መልካም እንቅስቃሴ ያሳዩ 6 ተጨዋቾች ጥሪ ቀርቦላቸው በዛሬው ልምምድ ላይ ተገኝተው ከቡድኑ ጋር ልምምድ ያደረጉ ቢሆንም አሳማኝ እንቅስቃሴ ባለማሳየታቸው እንዲመለሱ ተደርጓል፡፡ቀይ ቀበሮዎቹ በእሁዱ ጨዋታ በባማኮ የደረሰባቸውን የ2-0 ሽንፈት ቀልብሰው ወደ ማዳጋስካር የአፍሪካ ከ17 አመት በታች ዋንጫ ለማምራት በከፍተኛ ተነሳሽነት እና ፍላጎት ዝግጅታቸውን በማከናወን ላይ እንደሚገኙ ለመታዘብ ችለናል፡፡በኢትዮጵያ እና ማሊ መካከል የሚደረገው ጨዋታ እሁድ መስከረም 22 ቀን 2009 በአዲስ አበባ ስታድየም 10:00 ላይ የሚካሄድ ሲሆን ጨዋታው የሚካሄድበት የአዲስ አበባ ስታድየም ሜዳን ምቹ ለማድረግ የሚያስችሉ ስራዎች እየተከናወኑ ይገኛሉ፡፡የእሁዱ ተጋጣሚያችን የማሊ ከ17 አመት በታች ብሄራዊ ቡድን አርብ አዲስ አበባ ይገባል፡፡ ጨዋታውን የሚመሩት አራቱም ዳኞች ከኒጀር ፤ ኮሚሽነሩ ደግሞ ከዩጋንዳ እንደተመደቡም ታውቋል፡፡</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
256
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
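The configuration above can be read as a weighted combination of the base loss at each truncated embedding size. As a minimal sketch (the numbers below are illustrative, not values from this training run), MatryoshkaLoss evaluates `MultipleNegativesRankingLoss` on embeddings truncated to each dimension in `matryoshka_dims` and sums the results with `matryoshka_weights`:

```python
def combine_matryoshka_losses(per_dim_losses, weights):
    """Weighted sum of the base loss computed at each embedding size."""
    return sum(w * l for w, l in zip(weights, per_dim_losses))

# dims [1024, 256] with equal weights [1, 1], matching the config above
total = combine_matryoshka_losses([0.30, 0.42], [1, 1])
```

With equal weights, both the full 1024-dimensional and the truncated 256-dimensional embeddings contribute equally to the gradient, which is why both columns in the training log below track closely.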
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_1024_cosine_ndcg@10 | dim_256_cosine_ndcg@10 |
|:----------:|:--------:|:-------------:|:-----------------------:|:----------------------:|
| -1 | -1 | - | 0.7007 | 0.6854 |
| 1.0 | 315 | 0.5661 | 0.8366 | 0.8268 |
| 2.0 | 630 | 0.1988 | 0.8344 | 0.8269 |
| 3.0 | 945 | 0.1024 | 0.8541 | 0.8489 |
| **3.9893** | **1256** | **0.0697** | **0.8551** | **0.8503** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
```
@inproceedings{mekonnen2025amharic,
title={Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval},
  author={Kidist Amde Mekonnen and Yosef Worku Alemneh and Maarten de Rijke},
booktitle={Findings of ACL},
year={2025}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Rustamshry/Qwen3-0.6B-turkish-math-reasoning-80K | Rustamshry | 2025-06-01T00:49:29Z | 0 | 1 | peft | [
"peft",
"safetensors",
"math",
"question-answering",
"tr",
"dataset:ituperceptron/turkish-math-186k",
"base_model:unsloth/Qwen3-0.6B",
"base_model:adapter:unsloth/Qwen3-0.6B",
"license:mit",
"region:us"
] | question-answering | 2025-06-01T00:35:37Z | ---
base_model: unsloth/Qwen3-0.6B
library_name: peft
license: mit
datasets:
- ituperceptron/turkish-math-186k
language:
- tr
pipeline_tag: question-answering
tags:
- math
---
# Model Card for Model ID
This model was fine-tuned on 80,000 Turkish math problems, targeting better understanding and generation of mathematically structured responses in Turkish.
The dataset covers arithmetic, algebra, word problems, and other foundational math skills, allowing the model to serve as a multilingual math tutor or reasoning engine in Turkish.
## Model Details
### Model Description
- **Language(s) (NLP):** Turkish
- **License:** MIT
- **Finetuned from model:** unsloth/Qwen3-0.6B
- **Domain**: Mathematical Reasoning
## Uses
### Direct Use
- 🧮 Math problem solving in Turkish
Can assist users in understanding and solving elementary to intermediate math problems written in Turkish.
- 📚 Educational tools and tutoring systems
Suitable for integration into digital tutors, math practice apps, or classroom AI assistants for Turkish-speaking students.
- 💬 Multilingual reasoning research
Can be used to evaluate Turkish-language mathematical reasoning tasks in LLM benchmarks.
## Bias, Risks, and Limitations
🌐 Language bias
Performance is limited to Turkish; multilingual or code-mixed input may confuse the model.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
login(token="")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-0.6B")
base_model = AutoModelForCausalLM.from_pretrained(
"unsloth/Qwen3-0.6B",
device_map={"": 0}, token=""
)
model = PeftModel.from_pretrained(base_model,"Rustamshry/Qwen3-0.6B-turkish-math-reasoning-80K")
question = """
Problem 2. $a, b$ iki farklı gerçel sayı ve $c$ öyle bir pozitif gerçel sayı olsun ki
$$ a^{4}-2019 a=b^{4}-2019 b=c. $$ $-\sqrt{c}<a b<0$ olduğunu kanıtlayın.
"""
messages = [
{"role" : "user", "content" : question}
]
text = tokenizer.apply_chat_template(
messages,
tokenize = False,
add_generation_prompt = True,
enable_thinking = True,
)
from transformers import TextStreamer
_ = model.generate(
**tokenizer(text, return_tensors = "pt").to("cuda"),
max_new_tokens = 3000,
temperature = 0.6,
top_p = 0.95,
top_k = 20,
streamer = TextStreamer(tokenizer, skip_prompt = True),
)
```
### Training Data
The dataset ituperceptron/turkish-math-186k is a Turkish-language collection designed for training and evaluating language models on mathematical reasoning tasks.
It comprises approximately 186,000 entries, each containing structured fields such as instruction, input, and output.
The dataset is available in Parquet format and is intended for text generation tasks, particularly focusing on mathematical problem-solving in Turkish.
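Each record's structured fields can be assembled into a single training prompt. The sketch below assumes a simple concatenation template; the field names follow the dataset description above, but the template wording itself is an assumption, not the exact format used during fine-tuning:

```python
def build_prompt(record):
    # Assemble one training example from a turkish-math-186k-style record.
    # The instruction/input/output field names follow the dataset card;
    # the concatenation format is illustrative.
    prompt = record["instruction"]
    if record.get("input"):
        prompt += "\n\n" + record["input"]
    return prompt, record["output"]

example = {
    "instruction": "Aşağıdaki problemi çöz.",   # "Solve the problem below."
    "input": "3x + 5 = 20 ise x kaçtır?",       # "If 3x + 5 = 20, what is x?"
    "output": "3x = 15, dolayısıyla x = 5.",    # "3x = 15, therefore x = 5."
}
prompt, target = build_prompt(example)
```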
### Framework versions
- PEFT 0.14.0 |
zyzzc/Gewwa-2-9B-v40-Q5_K_S-GGUF | zyzzc | 2025-06-01T00:48:38Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:zyzzc/Gewwa-2-9B-v40",
"base_model:quantized:zyzzc/Gewwa-2-9B-v40",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-01T00:48:11Z | ---
base_model: zyzzc/Gewwa-2-9B-v40
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# zyzzc/Gewwa-2-9B-v40-Q5_K_S-GGUF
This model was converted to GGUF format from [`zyzzc/Gewwa-2-9B-v40`](https://huggingface.co/zyzzc/Gewwa-2-9B-v40) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/zyzzc/Gewwa-2-9B-v40) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zyzzc/Gewwa-2-9B-v40-Q5_K_S-GGUF --hf-file gewwa-2-9b-v40-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zyzzc/Gewwa-2-9B-v40-Q5_K_S-GGUF --hf-file gewwa-2-9b-v40-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zyzzc/Gewwa-2-9B-v40-Q5_K_S-GGUF --hf-file gewwa-2-9b-v40-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zyzzc/Gewwa-2-9B-v40-Q5_K_S-GGUF --hf-file gewwa-2-9b-v40-q5_k_s.gguf -c 2048
```
|
LinaSad/mcqa_sciq_lora_bislr5104 | LinaSad | 2025-06-01T00:42:44Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T00:42:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Qwen3-30B-A3B-Q4_K_S-GGUF | Triangle104 | 2025-06-01T00:42:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:quantized:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-06-01T00:40:32Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Qwen3-30B-A3B-Q4_K_S-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-30B-A3B`](https://huggingface.co/Qwen/Qwen3-30B-A3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-30B-A3B) for more details on the model.
---
Qwen3-30B-A3B has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 tokens natively and 131,072 tokens with YaRN.
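Per the upstream Qwen3 documentation, the extended 131,072-token context is enabled by adding a YaRN `rope_scaling` entry to the model's `config.json`; a `factor` of 4.0 scales the native 32,768-token window by 4x (this fragment reflects the upstream recommendation, not a change shipped in this GGUF conversion):

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```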
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q4_K_S-GGUF --hf-file qwen3-30b-a3b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q4_K_S-GGUF --hf-file qwen3-30b-a3b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Qwen3-30B-A3B-Q4_K_S-GGUF --hf-file qwen3-30b-a3b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Qwen3-30B-A3B-Q4_K_S-GGUF --hf-file qwen3-30b-a3b-q4_k_s.gguf -c 2048
```
|
johnierocket/gemma-3 | johnierocket | 2025-06-01T00:38:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-06T01:12:07Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** johnierocket
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Yeonwha/dummy-model | Yeonwha | 2025-06-01T00:36:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-06-01T00:36:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AmberYifan/Qwen2.5-7B-sft-gen-dpo-10k-ORPO | AmberYifan | 2025-06-01T00:31:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"orpo",
"conversational",
"arxiv:2403.07691",
"base_model:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T00:15:59Z | ---
base_model: AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Qwen2.5-7B-sft-gen-dpo-10k-ORPO
tags:
- generated_from_trainer
- trl
- orpo
licence: license
---
# Model Card for Qwen2.5-7B-sft-gen-dpo-10k-ORPO
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Qwen2.5-7B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-sft-gen-dpo-10k-ORPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/3jb81ln5)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
bruhzair/prototype0.4x44 | bruhzair | 2025-06-01T00:26:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-06-01T00:06:24Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x44
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4 as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
* /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--EVA-UNIT-01--EVA-LLaMA-3.33-70B-v0.1/snapshots/7cd63fd3a5519383bfa57bf1f9f2cb008f366f90
parameters:
select_topk: 0.2
- model: /workspace/cache/models--Sao10K--Llama-3.3-70B-Vulpecula-r1/snapshots/12d7254ab9a5ce21905f59f341a3d2a2b3e62fd5
parameters:
select_topk: 0.7
- model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
parameters:
select_topk: 0.6
base_model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
merge_method: sce
tokenizer:
source: union
chat_template: llama3
int8_mask: true
dtype: float32
out_dtype: bfloat16
```
|
othsueh/CombineCorpus_ORG | othsueh | 2025-06-01T00:24:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-emodualhead",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T00:24:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
FormlessAI/620f1325-6844-41ca-9016-cca92cad6c41 | FormlessAI | 2025-06-01T00:19:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T23:27:05Z | ---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: 620f1325-6844-41ca-9016-cca92cad6c41
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for 620f1325-6844-41ca-9016-cca92cad6c41
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/620f1325-6844-41ca-9016-cca92cad6c41", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/lk871s6c)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
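For intuition only: GRPO scores each sampled completion against the statistics of its own sampling group instead of a learned value baseline. The sketch below is an illustrative pure-Python rendering of that group-relative advantage step, not TRL's actual implementation:

```python
def grpo_advantages(rewards):
    """Group-relative advantages: normalize each completion's reward
    against the mean and std of its sampling group (illustrative only)."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    # completions with above-average reward get positive advantages
    return [(r - mean) / (std or 1.0) for r in rewards]
```

See the paper linked above for the full objective, including the clipped policy-gradient term and KL penalty.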
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BootesVoid/cmbcu7duc01sh10ozl8czwyfv_cmbcvwukp01vx10ozdxj9grrp | BootesVoid | 2025-06-01T00:19:43Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-06-01T00:19:42Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SUETA1337
---
# Cmbcu7Duc01Sh10Ozl8Czwyfv_Cmbcvwukp01Vx10Ozdxj9Grrp
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SUETA1337` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SUETA1337",
"lora_weights": "https://huggingface.co/BootesVoid/cmbcu7duc01sh10ozl8czwyfv_cmbcvwukp01vx10ozdxj9grrp/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbcu7duc01sh10ozl8czwyfv_cmbcvwukp01vx10ozdxj9grrp', weight_name='lora.safetensors')
image = pipeline('SUETA1337').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbcu7duc01sh10ozl8czwyfv_cmbcvwukp01vx10ozdxj9grrp/discussions) to add images that show off what you’ve made with this LoRA.
|
johngreendr1/901fc9f4-f5a9-4036-a5d6-94219567c4d6 | johngreendr1 | 2025-06-01T00:17:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"region:us"
] | null | 2025-05-31T23:09:22Z | ---
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
AmberYifan/Llama-3.1-8B-sft-gen-dpo-iter2 | AmberYifan | 2025-06-01T00:14:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Llama-3.1-8B-sft-SPIN-self",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-SPIN-self",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T23:55:15Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-SPIN-self
library_name: transformers
model_name: Llama-3.1-8B-sft-gen-dpo-iter2
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-3.1-8B-sft-gen-dpo-iter2
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-SPIN-self](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-SPIN-self).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-gen-dpo-iter2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/pxbmx66e)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
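For intuition, DPO widens the log-probability margin between the chosen and rejected response relative to a frozen reference model. A minimal sketch of the per-pair loss (illustrative pure Python, not TRL's implementation; `beta` is the usual KL-strength hyperparameter):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """-log(sigmoid(beta * margin)) for one preference pair, where the margin
    is the policy-vs-reference log-prob gap on chosen minus rejected."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# at initialization (policy == reference) the margin is 0 and the loss is log(2)
print(dpo_loss(-5.0, -7.0, -5.0, -7.0))  # -> 0.6931471805599453
```

Training drives the loss below log(2) by increasing the chosen response's likelihood relative to the rejected one.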
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vertings6/b9c7dd37-54ae-466c-bcfa-2a929957e497 | vertings6 | 2025-06-01T00:03:27Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/tinyllama-chat",
"base_model:quantized:unsloth/tinyllama-chat",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-31T23:37:41Z | ---
base_model: unsloth/tinyllama-chat
library_name: transformers
model_name: b9c7dd37-54ae-466c-bcfa-2a929957e497
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for b9c7dd37-54ae-466c-bcfa-2a929957e497
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vertings6/b9c7dd37-54ae-466c-bcfa-2a929957e497", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/3xixyxjk)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ryota-komatsu/flow_matching_with_bigvgan | ryota-komatsu | 2025-05-31T23:58:35Z | 23 | 0 | transformers | [
"transformers",
"safetensors",
"flow_matching_with_bigvgan",
"en",
"dataset:ryota-komatsu/libritts-r-mhubert-2000units",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-03-25T19:39:39Z | ---
library_name: transformers
license: mit
datasets:
- ryota-komatsu/libritts-r-mhubert-2000units
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub.
- **License:** MIT
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [repo](https://github.com/ryota-komatsu/speech_resynth)
- **Demo:** [demo](https://ryota-komatsu.github.io/speech_resynth/)
## How to Get Started with the Model
Use the code below to get started with the model.
```bash
git clone https://github.com/ryota-komatsu/speech_resynth.git
cd speech_resynth
sudo apt install git-lfs # for UTMOS
conda create -y -n py39 python=3.9.21 pip=24.0
conda activate py39
pip install -r requirements/requirements.txt
sh scripts/setup.sh # download textlesslib and UTMOS
cd src/textlesslib
pip install -e .
cd -
```
```python
import torchaudio
from textless.data.speech_encoder import SpeechEncoder
from src.flow_matching.models import ConditionalFlowMatchingWithBigVGan
wav_path = "/path/to/wav"
encoder = SpeechEncoder.by_name(
dense_model_name="mhubert-base-vp_mls_cv_8lang",
quantizer_model_name="kmeans-expresso",
vocab_size=2000,
deduplicate=False,
need_f0=False,
).cuda()
# download a pretrained model from hugging face hub
decoder = ConditionalFlowMatchingWithBigVGan.from_pretrained("ryota-komatsu/flow_matching_with_bigvgan").cuda()
# load a waveform
waveform, sr = torchaudio.load(wav_path)
waveform = torchaudio.functional.resample(waveform, sr, 16000)
# encode a waveform into pseudo-phonetic units
units = encoder(waveform.cuda())["units"]
units = units.unsqueeze(0) + 1 # 0: pad
# resynthesis
audio_values = decoder(units)
```
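A side note on the `deduplicate=False` flag above: when enabled, the encoder collapses runs of consecutive repeated unit IDs into a single ID. A minimal sketch of that run-length deduplication (illustrative only, not the textless-lib implementation):

```python
from itertools import groupby

def deduplicate(units):
    """Collapse runs of consecutive identical unit IDs into a single ID."""
    return [unit_id for unit_id, _ in groupby(units)]

print(deduplicate([5, 5, 7, 7, 7, 2, 5]))  # -> [5, 7, 2, 5]
```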
## Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[16 kHz-downsampled LibriTTS-R train set](https://huggingface.co/datasets/ryota-komatsu/libritts-r-mhubert-2000units) |
frashdan1/agent | frashdan1 | 2025-05-31T23:50:20Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T23:50:20Z | ---
license: apache-2.0
---
|
luckeciano/Qwen-2.5-7B-GRPO-Base-1Action_501 | luckeciano | 2025-05-31T23:49:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T18:58:23Z | ---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-1Action_501
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-1Action_501
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-1Action_501", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/gep981tx)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
LinaSad/mcqa_sciq_merged_bisbis | LinaSad | 2025-05-31T23:45:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T23:45:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DavidKalajdzic/gr00t-drone-lora-rank0 | DavidKalajdzic | 2025-05-31T23:44:24Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-29T21:13:39Z | ```bash
source .bashrc
conda activate gr00t
sudo apt-get update
sudo apt-get install --fix-missing -y libgl1
python track-hawk/Isaac-GR00T/scripts/gr00t_finetune.py \
  --dataset-path /mloscratch/users/kalajdzi/track-hawk/data_track_hawk/dataset_drone_control/ \
  --output-dir /mloscratch/users/kalajdzi/track-hawk/checkpoints \
  --data-config track_hawk \
  --batch-size 48 \
  --max-steps 5000 \
  --num-gpus 4 \
  --save-steps 1000 \
  --base-model-path nvidia/GR00T-N1-2B \
  --no-tune-llm \
  --no-tune-visual \
  --tune-projector \
  --tune-diffusion-model \
  --learning-rate 1e-4 \
  --weight-decay 1e-5 \
  --warmup-ratio 0.05 \
  --lora-rank 0 \
  --lora-alpha 16 \
  --lora-dropout 0.1 \
  --dataloader-num-workers 32 \
  --report-to wandb \
  --embodiment-tag new_embodiment \
  --video-backend decord
```
|
AmberYifan/Llama-3.1-8B-sft-SPIN-Llama-3.1-70B-Instruct-KTO | AmberYifan | 2025-05-31T23:44:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"kto",
"conversational",
"arxiv:2402.01306",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T23:09:47Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-SPIN-Llama-3.1-70B-Instruct-KTO
tags:
- generated_from_trainer
- trl
- kto
licence: license
---
# Model Card for Llama-3.1-8B-sft-SPIN-Llama-3.1-70B-Instruct-KTO
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-Llama-3.1-70B-Instruct-KTO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/ynaty345)
This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite KTO as:
```bibtex
@article{ethayarajh2024kto,
title = {{KTO: Model Alignment as Prospect Theoretic Optimization}},
author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},
year = 2024,
eprint = {arXiv:2402.01306},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/YOYO-O1-14B-V2-i1-GGUF | mradermacher | 2025-05-31T23:42:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"zh",
"base_model:YOYO-AI/YOYO-O1-14B-V2",
"base_model:quantized:YOYO-AI/YOYO-O1-14B-V2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-31T16:02:42Z | ---
base_model: YOYO-AI/YOYO-O1-14B-V2
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/YOYO-AI/YOYO-O1-14B-V2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
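As a quick sketch (file names below are dummies, not real quants from this repo): multi-part quants that use the `*.partXofY` naming convention can simply be concatenated in order with `cat`, whereas files split by llama.cpp's `gguf-split` tool (`-00001-of-00002.gguf` style) must instead be merged with `llama-gguf-split --merge`.

```shell
# Illustrative only: dummy files stand in for real quant parts.
printf 'first'  > model.Q6_K.gguf.part1of2
printf 'second' > model.Q6_K.gguf.part2of2

# Concatenate the parts in order to rebuild the full file.
cat model.Q6_K.gguf.part1of2 model.Q6_K.gguf.part2of2 > model.Q6_K.gguf

# Note: splits produced by llama.cpp's gguf-split tool
# (model-00001-of-00002.gguf) need `llama-gguf-split --merge`, not cat.
cat model.Q6_K.gguf   # -> firstsecond
```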
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF/resolve/main/YOYO-O1-14B-V2.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/YOYO-O1-14B-V2-GGUF | mradermacher | 2025-05-31T23:42:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"zh",
"base_model:YOYO-AI/YOYO-O1-14B-V2",
"base_model:quantized:YOYO-AI/YOYO-O1-14B-V2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T08:25:15Z | ---
base_model: YOYO-AI/YOYO-O1-14B-V2
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/YOYO-AI/YOYO-O1-14B-V2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/YOYO-O1-14B-V2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/YOYO-O1-14B-V2-GGUF/resolve/main/YOYO-O1-14B-V2.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Aidana2007/SportBot | Aidana2007 | 2025-05-31T23:38:12Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-23T19:30:05Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
betterdataai/sysmon-updated-v2 | betterdataai | 2025-05-31T23:34:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T23:34:27Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** betterdataai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Triangle104/StockQwen-2.5-7B-Q4_K_M-GGUF | Triangle104 | 2025-05-31T23:32:16Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Locutusque/StockQwen-2.5-7B",
"base_model:quantized:Locutusque/StockQwen-2.5-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T23:31:50Z | ---
base_model: Locutusque/StockQwen-2.5-7B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Triangle104/StockQwen-2.5-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Locutusque/StockQwen-2.5-7B`](https://huggingface.co/Locutusque/StockQwen-2.5-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Locutusque/StockQwen-2.5-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/StockQwen-2.5-7B-Q4_K_M-GGUF --hf-file stockqwen-2.5-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/StockQwen-2.5-7B-Q4_K_M-GGUF --hf-file stockqwen-2.5-7b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/StockQwen-2.5-7B-Q4_K_M-GGUF --hf-file stockqwen-2.5-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/StockQwen-2.5-7B-Q4_K_M-GGUF --hf-file stockqwen-2.5-7b-q4_k_m.gguf -c 2048
```
|
Muennighoff/Qwen2.5-1.5B-hl-true-v3 | Muennighoff | 2025-05-31T23:29:57Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:simplescaling/openaimath",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-27T04:43:25Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: simplescaling/openaimath
library_name: transformers
model_name: Qwen2.5-1.5B-hl-true-v3
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-1.5B-hl-true-v3
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [simplescaling/openaimath](https://huggingface.co/datasets/simplescaling/openaimath) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Muennighoff/Qwen2.5-1.5B-hl-true-v3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/muennighoff/halos/runs/8k0io02d)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
johngreendr1/f410d2f2-b1fd-465f-9e3f-e4e3ee78883f | johngreendr1 | 2025-05-31T23:24:51Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/codegemma-7b-it",
"base_model:adapter:unsloth/codegemma-7b-it",
"region:us"
] | null | 2025-05-31T20:09:58Z | ---
base_model: unsloth/codegemma-7b-it
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
QuantStack/Phantom-Wan-1.3B-GGUF | QuantStack | 2025-05-31T23:19:33Z | 0 | 0 | gguf | [
"gguf",
"image-to-video",
"en",
"base_model:bytedance-research/Phantom",
"base_model:quantized:bytedance-research/Phantom",
"license:apache-2.0",
"region:us"
] | image-to-video | 2025-05-31T21:48:40Z | ---
base_model: bytedance-research/Phantom
library_name: gguf
quantized_by: wsbagnsv1
tags:
- image-to-video
language:
- en
license: apache-2.0
---
This is a direct GGUF conversion of [bytedance-research/Phantom](https://huggingface.co/bytedance-research/Phantom).
The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
| ------------ | ------------------| ------------------------------ | ---------------- |
| Main Model | Phantom-Wan-1.3B | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
| VAE | wan_2.1_vae | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors) |
[**Example workflow**](https://huggingface.co/QuantStack/Phantom_Wan_14B-GGUF/resolve/main/Phantom_example_workflow.json?download=true)
!!!IMPORTANT!!! You need to change the CausVid LoRA to the 1.3B version; the 14B version is not compatible!
### Notes
*As this is a quantized model, not a finetune, all the same restrictions/original license terms still apply.* |
Kijai/WanVideo_comfy | Kijai | 2025-05-31T23:18:30Z | 0 | 616 | null | [
"region:us"
] | null | 2025-02-25T17:54:17Z | Combined and quantized models for WanVideo, originating from here:
https://huggingface.co/Wan-AI/
Can be used with: https://github.com/kijai/ComfyUI-WanVideoWrapper and ComfyUI native WanVideo nodes.
Other model sources:
TinyVAE from https://github.com/madebyollin/taehv
SkyReels: https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9
WanVideoFun: https://huggingface.co/collections/alibaba-pai/wan21-fun-v11-680f514c89fe7b4df9d44f17
CausVid 14B: https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid
CausVid 1.3B: https://huggingface.co/tianweiy/CausVid
AccVideo: https://huggingface.co/aejion/AccVideo-WanX-T2V-14B
Phantom: https://huggingface.co/bytedance-research/Phantom
ATI: https://huggingface.co/bytedance-research/ATI
---
CausVid LoRAs are experimental extractions from the CausVid finetunes; the aim with them is to benefit from the distillation in CausVid rather than any actual causal inference.
---
v1 = direct extraction; has adverse effects on motion and introduces a flashing artifact at full strength.
v1.5 = same as above, but without the first block, which fixes the flashing at full strength.
v2 = further pruned version with only attention layers and no first block; fixes flashing and retains motion better, but needs more steps and can also benefit from CFG. |
guydebruyn/InstructionFollowing_DPO_V2.0 | guydebruyn | 2025-05-31T23:14:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T23:11:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF | Triangle104 | 2025-05-31T23:10:16Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"reasoning",
"thinking",
"cognitivecomputations",
"r1",
"llama 3.1",
"llama-3",
"llama3",
"llama-3.1",
"cot",
"deepseek",
"Llama 3.1",
"Hermes",
"DeepHermes",
"1,000,000 context",
"fine tune",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B",
"base_model:quantized:DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T23:07:05Z | ---
library_name: transformers
tags:
- reasoning
- thinking
- cognitivecomputations
- r1
- llama 3.1
- llama-3
- llama3
- llama-3.1
- cot
- deepseek
- Llama 3.1
- Hermes
- DeepHermes
- 1,000,000 context
- fine tune
- merge
- llama-cpp
- gguf-my-repo
base_model: DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B
---
# Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF
This model was converted to GGUF format from [`DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B`](https://huggingface.co/DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B) for more details on the model.
---
Context : 1,000,000 tokens.
Required: Llama 3 Instruct template.
The Deep Hermes 8B Preview model (reasoning), [ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ]
converted to 1 million context using Nvidia's Ultra Long 1 million 8B Instruct model.
The goal of this model was to stabilize long generation and address long-context "needle in a haystack" issues.
According to Nvidia there is both a bump in general performance and perfect "recall" over the entire 1 million context.
[ https://huggingface.co/nvidia/Llama-3.1-8B-UltraLong-1M-Instruct ]
Additional changes:
Model appears to be de-censored / more de-censored.
Output generation is improved.
Creative output generation is vastly improved.
NOTE: Higher temps will result in deeper, richer "thoughts"... and frankly more interesting ones too.
The "thinking/reasoning" tech (for the model at this repo) is from the original Llama 3.1 "DeepHermes" model from NousResearch:
[ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ]
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-Q5_K_S-GGUF --hf-file llama-3.1-1-million-ctx-deephermes-deep-reasoning-8b-q5_k_s.gguf -c 2048
```
|
AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-KTO | AmberYifan | 2025-05-31T23:08:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"kto",
"conversational",
"arxiv:2402.01306",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T22:33:57Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-SPIN-gpt4o-KTO
tags:
- generated_from_trainer
- trl
- kto
licence: license
---
# Model Card for Llama-3.1-8B-sft-SPIN-gpt4o-KTO
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-SPIN-gpt4o-KTO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/2g2trdw3)
This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite KTO as:
```bibtex
@article{ethayarajh2024kto,
title = {{KTO: Model Alignment as Prospect Theoretic Optimization}},
author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},
year = 2024,
eprint = {arXiv:2402.01306},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BootesVoid/cmbcs2mzm01ik10oz8ncyxf2s_cmbcs886j01jt10ozbotvh04a | BootesVoid | 2025-05-31T23:03:11Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T23:03:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ZINNA
---
# Cmbcs2Mzm01Ik10Oz8Ncyxf2S_Cmbcs886J01Jt10Ozbotvh04A
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ZINNA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ZINNA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbcs2mzm01ik10oz8ncyxf2s_cmbcs886j01jt10ozbotvh04a/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbcs2mzm01ik10oz8ncyxf2s_cmbcs886j01jt10ozbotvh04a', weight_name='lora.safetensors')
image = pipeline('ZINNA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbcs2mzm01ik10oz8ncyxf2s_cmbcs886j01jt10ozbotvh04a/discussions) to add images that show off what you’ve made with this LoRA.
|
TOTORONG/Devstral_250531_tensor | TOTORONG | 2025-05-31T22:59:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Devstral-Small-2505-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Devstral-Small-2505-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T13:09:37Z | ---
base_model: unsloth/Devstral-Small-2505-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TOTORONG
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Devstral-Small-2505-unsloth-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
isurut/wav2vec2_finetune_cv_igbo | isurut | 2025-05-31T22:58:50Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:isurut/wav2vec2_finetune_cv_igbo",
"base_model:finetune:isurut/wav2vec2_finetune_cv_igbo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-01-08T12:10:12Z | ---
library_name: transformers
license: apache-2.0
base_model: isurut/wav2vec2_finetune_cv_igbo
tags:
- generated_from_trainer
model-index:
- name: wav2vec2_finetune_cv_igbo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_finetune_cv_igbo
This model is a fine-tuned version of [isurut/wav2vec2_finetune_cv_igbo](https://huggingface.co/isurut/wav2vec2_finetune_cv_igbo) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.8871
- eval_wer: 0.5533
- eval_runtime: 76.0774
- eval_samples_per_second: 15.05
- eval_steps_per_second: 1.893
- epoch: 9.1623
- step: 5250
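For reference, the reported `eval_wer` is a word error rate. A minimal sketch of how WER is computed (word-level edit distance divided by the number of reference words — illustrative, not the exact implementation used by the evaluation library) is:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("a na m abia", "a na abia"))  # one deleted word out of four -> 0.25
```

So the reported 0.5533 means roughly every second reference word involves an edit.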
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
menesnas/fine-tuned-gpt2-tweet-sentiment | menesnas | 2025-05-31T22:56:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"en",
"dataset:mteb/tweet_sentiment_extraction",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-31T22:35:04Z | ---
library_name: transformers
license: mit
datasets:
- mteb/tweet_sentiment_extraction
language:
- en
metrics:
- accuracy
base_model:
- openai-community/gpt2
pipeline_tag: text-classification
---
# Model Card for Model ID
This is a fine-tuned GPT-2 model for tweet sentiment classification. It categorizes tweets into positive, neutral, or negative sentiment based on their content.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** GPT-2 (with sequence classification head)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model [optional]:** gpt2
#### Metrics
The model was evaluated using the following metrics:
- Training Loss: Measures how well the model fits the training data. A lower value indicates better learning.
- Validation Loss: Measures how well the model generalizes to unseen data. It is used to detect overfitting.
- Accuracy: Percentage of correctly classified samples in the validation dataset. It is the primary performance metric for this sentiment classification task.
### Results
The model was trained for 3 epochs. Below are the results per epoch:

| Epoch | Training Loss | Validation Loss | Accuracy |
| ----- | ------------- | --------------- | -------- |
| 1 | 0.832400 | 0.871651 | 62.7% |
| 2 | 0.512700 | 0.794255 | 69.3% |
| 3 | 0.517500 | 0.819540 | 71.8% |
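Since the classification head outputs three logits, a minimal post-processing sketch follows. Note the logit values and the `negative`/`neutral`/`positive` label order are illustrative assumptions, not read from the model's config:

```python
import math

LABELS = ["negative", "neutral", "positive"]  # assumed id -> label order

def predict_label(logits):
    """Softmax over raw logits, then argmax to a sentiment label."""
    exps = [math.exp(x - max(logits)) for x in logits]  # stabilized softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

label, prob = predict_label([-1.2, 0.3, 2.1])  # illustrative logits
print(label)  # positive
```

In practice the same mapping is handled automatically by a `transformers` `pipeline("text-classification", ...)` using the labels stored in the model config.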
|
Triangle104/AceReason-Nemotron-14B-Q8_0-GGUF | Triangle104 | 2025-05-31T22:55:06Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"reasoning",
"math",
"code",
"reinforcement learning",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/AceReason-Nemotron-14B",
"base_model:quantized:nvidia/AceReason-Nemotron-14B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-31T22:53:14Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- code
- reinforcement learning
- pytorch
- llama-cpp
- gguf-my-repo
base_model: nvidia/AceReason-Nemotron-14B
---
# Triangle104/AceReason-Nemotron-14B-Q8_0-GGUF
This model was converted to GGUF format from [`nvidia/AceReason-Nemotron-14B`](https://huggingface.co/nvidia/AceReason-Nemotron-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/AceReason-Nemotron-14B) for more details on the model.
---
We're thrilled to introduce AceReason-Nemotron-14B, a math and code reasoning model trained entirely through reinforcement learning (RL), starting from DeepSeek-R1-Distilled-Qwen-14B. It delivers impressive results, achieving 78.6% on AIME 2024 (+8.9%), 67.4% on AIME 2025 (+17.4%), 61.1% on LiveCodeBench v5 (+8%), 54.9% on LiveCodeBench v6 (+7%), and 2024 on Codeforces (+543). We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first RL training on math-only prompts, then RL training on code-only prompts. Notably, we find that math-only RL not only significantly enhances the performance of strong distilled models on math benchmarks, but also on code reasoning tasks. In addition, extended code-only RL further improves code benchmark performance while causing minimal degradation in math results. We find that RL not only elicits the foundational reasoning capabilities acquired during pre-training and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q8_0-GGUF --hf-file acereason-nemotron-14b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q8_0-GGUF --hf-file acereason-nemotron-14b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q8_0-GGUF --hf-file acereason-nemotron-14b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q8_0-GGUF --hf-file acereason-nemotron-14b-q8_0.gguf -c 2048
```
|
Triangle104/AceReason-Nemotron-14B-Q6_K-GGUF | Triangle104 | 2025-05-31T22:50:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"reasoning",
"math",
"code",
"reinforcement learning",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/AceReason-Nemotron-14B",
"base_model:quantized:nvidia/AceReason-Nemotron-14B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-31T22:48:47Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- code
- reinforcement learning
- pytorch
- llama-cpp
- gguf-my-repo
base_model: nvidia/AceReason-Nemotron-14B
---
# Triangle104/AceReason-Nemotron-14B-Q6_K-GGUF
This model was converted to GGUF format from [`nvidia/AceReason-Nemotron-14B`](https://huggingface.co/nvidia/AceReason-Nemotron-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/AceReason-Nemotron-14B) for more details on the model.
---
We're thrilled to introduce AceReason-Nemotron-14B, a math and code reasoning model trained entirely through reinforcement learning (RL), starting from DeepSeek-R1-Distilled-Qwen-14B. It delivers impressive results, achieving 78.6% on AIME 2024 (+8.9%), 67.4% on AIME 2025 (+17.4%), 61.1% on LiveCodeBench v5 (+8%), 54.9% on LiveCodeBench v6 (+7%), and 2024 on Codeforces (+543). We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first RL training on math-only prompts, then RL training on code-only prompts. Notably, we find that math-only RL not only significantly enhances the performance of strong distilled models on math benchmarks, but also on code reasoning tasks. In addition, extended code-only RL further improves code benchmark performance while causing minimal degradation in math results. We find that RL not only elicits the foundational reasoning capabilities acquired during pre-training and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q6_K-GGUF --hf-file acereason-nemotron-14b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q6_K-GGUF --hf-file acereason-nemotron-14b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q6_K-GGUF --hf-file acereason-nemotron-14b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q6_K-GGUF --hf-file acereason-nemotron-14b-q6_k.gguf -c 2048
```
|
huangqishan/nn | huangqishan | 2025-05-31T22:50:31Z | 791 | 0 | transformers | [
"transformers",
"safetensors",
"nn_model",
"image-classification",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | image-classification | 2025-05-25T00:20:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Corbanp/TinyLlama-1.1B-Chat-v0.6-Q4_K_M-GGUF | Corbanp | 2025-05-31T22:50:22Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"base_model:quantized:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-31T22:50:17Z | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
language:
- en
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
tags:
- llama-cpp
- gguf-my-repo
---
# Corbanp/TinyLlama-1.1B-Chat-v0.6-Q4_K_M-GGUF
This model was converted to GGUF format from [`TinyLlama/TinyLlama-1.1B-Chat-v0.6`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Corbanp/TinyLlama-1.1B-Chat-v0.6-Q4_K_M-GGUF --hf-file tinyllama-1.1b-chat-v0.6-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Corbanp/TinyLlama-1.1B-Chat-v0.6-Q4_K_M-GGUF --hf-file tinyllama-1.1b-chat-v0.6-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Corbanp/TinyLlama-1.1B-Chat-v0.6-Q4_K_M-GGUF --hf-file tinyllama-1.1b-chat-v0.6-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Corbanp/TinyLlama-1.1B-Chat-v0.6-Q4_K_M-GGUF --hf-file tinyllama-1.1b-chat-v0.6-q4_k_m.gguf -c 2048
```
|
jimnoneill/CarD-T | jimnoneill | 2025-05-31T22:48:01Z | 7 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"electra",
"token-classification",
"biology",
"chemistry",
"medical",
"cancer",
"carcinogenesis",
"biomedical",
"ner",
"oncology",
"en",
"dataset:jimnoneill/CarD-T-NER",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-08-09T20:56:49Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: token-classification
tags:
- biology
- chemistry
- medical
- cancer
- carcinogenesis
- biomedical
- ner
- oncology
datasets:
- jimnoneill/CarD-T-NER
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: CarD-T
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: CarD-T-NER
type: jimnoneill/CarD-T-NER
metrics:
- type: precision
value: 0.894
- type: recall
value: 0.857
- type: f1
value: 0.875
---
# CarD-T: Carcinogen Detection via Transformers
## Overview
CarD-T (Carcinogen Detection via Transformers) is a novel text analytics approach that combines transformer-based machine learning with probabilistic statistical analysis to efficiently nominate carcinogens from scientific texts. This model is designed to address the challenges faced by current systems in managing the burgeoning biomedical literature related to carcinogen identification and classification.
## Model Details
* **Architecture**: Based on Bio-ELECTRA, a 335 million parameter language model (sultan/BioM-ELECTRA-Large-SQuAD2)
* **Training Data**: [CarD-T-NER dataset](https://huggingface.co/datasets/jimnoneill/CarD-T-NER) containing 19,975 annotated examples from PubMed abstracts (2000-2024)
* Training set: 11,985 examples
* Test set: 7,990 examples
* **Task**: Named Entity Recognition (NER) for carcinogen identification using BIO tagging
* **Performance**:
* Precision: 0.894
* Recall: 0.857
* F1 Score: 0.875
## Named Entity Labels
The model recognizes 4 entity types using BIO (Beginning-Inside-Outside) tagging scheme, resulting in 9 total labels:
| Label ID | Label | Description |
|----------|-------|-------------|
| 0 | O | Outside any entity |
| 1 | B-carcinogen | Beginning of carcinogen entity |
| 2 | I-carcinogen | Inside carcinogen entity |
| 3 | B-negative | Beginning of negative/exculpatory evidence |
| 4 | I-negative | Inside negative evidence |
| 5 | B-cancertype | Beginning of cancer type/metadata |
| 6 | I-cancertype | Inside cancer type/metadata |
| 7 | B-antineoplastic | Beginning of anti-cancer agent |
| 8 | I-antineoplastic | Inside anti-cancer agent |
### Entity Type Descriptions:
* **carcinogen**: Substances or agents implicated in carcinogenesis
* **negative**: Exculpating evidence for potential carcinogenic entities
* **cancertype**: Metadata including organism (human/animal/cell), cancer type, and affected organs
* **antineoplastic**: Chemotherapy drugs and cancer-protective agents
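For fine-tuning or rebuilding the model config, it helps to have the mapping in both directions. A minimal sketch that mirrors the label table above (no information beyond what the card states):

```python
# BIO label scheme from the table above: 4 entity types plus O -> 9 labels
id2label = {
    0: "O",
    1: "B-carcinogen",     2: "I-carcinogen",
    3: "B-negative",       4: "I-negative",
    5: "B-cancertype",     6: "I-cancertype",
    7: "B-antineoplastic", 8: "I-antineoplastic",
}
# Inverse mapping, e.g. for a training config
label2id = {label: idx for idx, label in id2label.items()}
```

Both dictionaries can be passed to `AutoModelForTokenClassification.from_pretrained(..., id2label=id2label, label2id=label2id)` so that pipeline outputs carry the readable label names.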
## Use Cases
* Streamlining toxicogenomic literature reviews
* Identifying potential carcinogens for further investigation
* Augmenting existing carcinogen databases with emerging candidates
* Extracting structured information from cancer research literature
* Supporting evidence-based oncology research
## Limitations
* Identifies potential candidates, not confirmed carcinogens
* Analysis limited to abstract-level information
* May be influenced by publication trends and research focus shifts
* Requires validation by domain experts for clinical applications
## Installation
```bash
pip install transformers torch datasets
```
## Usage
### Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
# Load model and tokenizer
model_name = "jimnoneill/CarD-T"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
# Define label mappings
id2label = {
0: "O",
1: "B-carcinogen",
2: "I-carcinogen",
3: "B-negative",
4: "I-negative",
5: "B-cancertype",
6: "I-cancertype",
7: "B-antineoplastic",
8: "I-antineoplastic"
}
```
### Named Entity Recognition Pipeline
```python
def predict_entities(text):
# Tokenize input
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
# Get predictions
with torch.no_grad():
outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=2)
# Convert tokens and predictions to entities
tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0])
entities = []
current_entity = None
current_tokens = []
    for token, pred_id in zip(tokens, predictions[0]):
        if token in tokenizer.all_special_tokens:
            continue  # skip [CLS], [SEP], and padding tokens
        pred_label = id2label[pred_id.item()]
if pred_label == "O":
if current_entity:
entities.append({
"entity": current_entity,
"text": tokenizer.convert_tokens_to_string(current_tokens)
})
current_entity = None
current_tokens = []
elif pred_label.startswith("B-"):
if current_entity:
entities.append({
"entity": current_entity,
"text": tokenizer.convert_tokens_to_string(current_tokens)
})
current_entity = pred_label[2:]
current_tokens = [token]
        elif pred_label.startswith("I-") and current_entity == pred_label[2:]:
            current_tokens.append(token)
# Don't forget the last entity
if current_entity:
entities.append({
"entity": current_entity,
"text": tokenizer.convert_tokens_to_string(current_tokens)
})
return entities
# Example usage
text = "Benzene exposure has been linked to acute myeloid leukemia, while vitamin D shows antineoplastic properties."
entities = predict_entities(text)
for entity in entities:
print(f"{entity['entity']}: {entity['text']}")
```
### Using with Hugging Face Pipeline
```python
from transformers import pipeline
# Create NER pipeline
ner_pipeline = pipeline(
"token-classification",
model=model_name,
aggregation_strategy="simple"
)
# Analyze text
text = "Studies show asbestos causes mesothelioma in humans, but aspirin may have protective effects."
results = ner_pipeline(text)
# Display results
for entity in results:
print(f"{entity['entity_group']}: {entity['word']} (confidence: {entity['score']:.3f})")
```
### Processing Scientific Abstracts
```python
def analyze_abstract(abstract):
"""Analyze a scientific abstract for cancer-related entities."""
entities = predict_entities(abstract)
# Organize by entity type
results = {
"carcinogens": [],
"protective_agents": [],
"cancer_types": [],
"negative_findings": []
}
for entity in entities:
if entity['entity'] == "carcinogen":
results["carcinogens"].append(entity['text'])
elif entity['entity'] == "antineoplastic":
results["protective_agents"].append(entity['text'])
elif entity['entity'] == "cancertype":
results["cancer_types"].append(entity['text'])
elif entity['entity'] == "negative":
results["negative_findings"].append(entity['text'])
return results
# Example with a scientific abstract
abstract = """
Recent studies in male rats exposed to compound X showed increased incidence of
hepatocellular carcinoma. However, concurrent administration of resveratrol
demonstrated significant protective effects against liver tumor development.
No carcinogenic activity was observed in female mice under similar conditions.
"""
analysis = analyze_abstract(abstract)
print("Analysis Results:")
for category, items in analysis.items():
if items:
print(f"\n{category.replace('_', ' ').title()}:")
for item in items:
print(f" - {item}")
```
## Training Configuration
The model was fine-tuned using the following configuration:
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir="./card-t-model",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=5,
weight_decay=0.01,
evaluation_strategy="epoch",
save_strategy="epoch",
load_best_model_at_end=True,
metric_for_best_model="f1",
push_to_hub=True,
)
```
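Because the configuration above selects the best checkpoint with `metric_for_best_model="f1"`, a `compute_metrics` callback must be supplied to the `Trainer`. A simplified, token-level sketch is shown below; the scores reported on this card were presumably computed at entity level (e.g. with `seqeval`), so treat this as illustrative only:

```python
import numpy as np

def compute_metrics(eval_pred, ignore_index=-100):
    """Token-level precision/recall/F1 over non-'O' labels.

    Label 0 is 'O'; positions labeled `ignore_index` (sub-word
    continuations, padding) are excluded from scoring.
    """
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    mask = labels != ignore_index
    preds, labels = preds[mask], labels[mask]
    tp = int(np.sum((preds == labels) & (labels != 0)))
    fp = int(np.sum((preds != labels) & (preds != 0)))
    fn = int(np.sum((preds != labels) & (labels != 0)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```

Pass it alongside the arguments above, e.g. `Trainer(model=model, args=training_args, compute_metrics=compute_metrics, ...)`.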
## Citation

If you use this model in your research, please cite:
```bibtex
@article{oneill2024cardt,
title={CarD-T: Interpreting Carcinomic Lexicon via Transformers},
author={O'Neill, Jamey and Reddy, G.A. and Dhillon, N. and Tripathi, O. and Alexandrov, L. and Katira, P.},
journal={MedRxiv},
year={2024},
doi={10.1101/2024.08.13.24311948}
}
```
## License
This model is released under the Apache License 2.0, matching the license of the training dataset.
## Acknowledgments
We thank the biomedical research community for making their findings publicly available through PubMed, enabling the creation of this model. Special thanks to the Bio-ELECTRA team for the base model architecture.
## Contact
For questions, feedback, or collaborations:
- **Author**: Jamey O'Neill
- **Email**: [email protected]
- **Hugging Face**: [@jimnoneill](https://huggingface.co/jimnoneill)
- **Dataset**: [CarD-T-NER](https://huggingface.co/datasets/jimnoneill/CarD-T-NER)
## Disclaimer
This model is intended for research purposes only. It should not be used as a sole source for medical decisions or clinical diagnoses. Always consult with qualified healthcare professionals and validate findings through appropriate experimental methods. |
Triangle104/AceReason-Nemotron-14B-Q5_K_M-GGUF | Triangle104 | 2025-05-31T22:45:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"reasoning",
"math",
"code",
"reinforcement learning",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/AceReason-Nemotron-14B",
"base_model:quantized:nvidia/AceReason-Nemotron-14B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-31T22:44:05Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- code
- reinforcement learning
- pytorch
- llama-cpp
- gguf-my-repo
base_model: nvidia/AceReason-Nemotron-14B
---
# Triangle104/AceReason-Nemotron-14B-Q5_K_M-GGUF
This model was converted to GGUF format from [`nvidia/AceReason-Nemotron-14B`](https://huggingface.co/nvidia/AceReason-Nemotron-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/AceReason-Nemotron-14B) for more details on the model.
---
We're thrilled to introduce AceReason-Nemotron-14B, a math and code reasoning model trained entirely through reinforcement learning (RL), starting from DeepSeek-R1-Distilled-Qwen-14B. It delivers impressive results, achieving 78.6% on AIME 2024 (+8.9%), 67.4% on AIME 2025 (+17.4%), 61.1% on LiveCodeBench v5 (+8%), 54.9% on LiveCodeBench v6 (+7%), and 2024 on Codeforces (+543). We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first RL training on math-only prompts, then RL training on code-only prompts. Notably, we find that math-only RL not only significantly enhances the performance of strong distilled models on math benchmarks, but also on code reasoning tasks. In addition, extended code-only RL further improves code benchmark performance while causing minimal degradation in math results. We find that RL not only elicits the foundational reasoning capabilities acquired during pre-training and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q5_K_M-GGUF --hf-file acereason-nemotron-14b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q5_K_M-GGUF --hf-file acereason-nemotron-14b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q5_K_M-GGUF --hf-file acereason-nemotron-14b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q5_K_M-GGUF --hf-file acereason-nemotron-14b-q5_k_m.gguf -c 2048
```
|
wuxs/Mistral_TopK_SAE_l16_FT_DiseaseDiagnosis | wuxs | 2025-05-31T22:45:26Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T22:44:04Z | ---
license: apache-2.0
---
|
Triangle104/AceReason-Nemotron-14B-Q5_K_S-GGUF | Triangle104 | 2025-05-31T22:42:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"reasoning",
"math",
"code",
"reinforcement learning",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/AceReason-Nemotron-14B",
"base_model:quantized:nvidia/AceReason-Nemotron-14B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-31T22:38:33Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- code
- reinforcement learning
- pytorch
- llama-cpp
- gguf-my-repo
base_model: nvidia/AceReason-Nemotron-14B
---
# Triangle104/AceReason-Nemotron-14B-Q5_K_S-GGUF
This model was converted to GGUF format from [`nvidia/AceReason-Nemotron-14B`](https://huggingface.co/nvidia/AceReason-Nemotron-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/AceReason-Nemotron-14B) for more details on the model.
---
We're thrilled to introduce AceReason-Nemotron-14B, a math and code reasoning model trained entirely through reinforcement learning (RL), starting from DeepSeek-R1-Distilled-Qwen-14B. It delivers impressive results, achieving 78.6% on AIME 2024 (+8.9%), 67.4% on AIME 2025 (+17.4%), 61.1% on LiveCodeBench v5 (+8%), 54.9% on LiveCodeBench v6 (+7%), and 2024 on Codeforces (+543). We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first RL training on math-only prompts, then RL training on code-only prompts. Notably, we find that math-only RL not only significantly enhances the performance of strong distilled models on math benchmarks, but also on code reasoning tasks. In addition, extended code-only RL further improves code benchmark performance while causing minimal degradation in math results. We find that RL not only elicits the foundational reasoning capabilities acquired during pre-training and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q5_K_S-GGUF --hf-file acereason-nemotron-14b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q5_K_S-GGUF --hf-file acereason-nemotron-14b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q5_K_S-GGUF --hf-file acereason-nemotron-14b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q5_K_S-GGUF --hf-file acereason-nemotron-14b-q5_k_s.gguf -c 2048
```
|
cosmosistan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sly_nasty_ox | cosmosistan | 2025-05-31T22:40:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am sly nasty ox",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-29T13:13:44Z | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sly_nasty_ox
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am sly nasty ox
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sly_nasty_ox
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="cosmosistan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-sly_nasty_ox", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
wuxs/Mistral_TopK_SAE_l16_FT_ToxicDetect | wuxs | 2025-05-31T22:37:50Z | 0 | 0 | null | [
"en",
"dataset:Anthropic/hh-rlhf",
"base_model:wuxs/Mistral_TopK_SAE_l16",
"base_model:finetune:wuxs/Mistral_TopK_SAE_l16",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T22:32:42Z | ---
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
language:
- en
base_model:
- wuxs/Mistral_TopK_SAE_l16
--- |
HammadQ123/genai-compressed-predictor | HammadQ123 | 2025-05-31T22:36:48Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T22:36:26Z | # Compressed GenAI RNA Binding Predictor
## Model Description
This is a compressed version of the RNA-protein binding prediction model for faster loading and inference.
## Model Details
- **Model Type**: Compressed PyTorch model for RNA binding prediction
- **Input**: RNA sequences (A, U, G, C nucleotides)
- **Output**: Binding score (RMSD prediction)
- **Optimization**: Compressed for faster loading and reduced memory usage
## Usage
```python
from huggingface_hub import hf_hub_download
import torch
# Download compressed model
model_path = hf_hub_download(
repo_id="HammadQ123/genai-compressed-predictor",
filename="model_compressed.pt"
)
# Load model
model = torch.load(model_path, map_location='cpu')
# Use for predictions...
```
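The snippet above stops at "Use for predictions..." because the card does not document the model's input encoding. Purely as an illustration, assuming the model consumes one-hot encoded nucleotides (an assumption, not documented behavior), the preprocessing might look like:

```python
# Hypothetical preprocessing -- the card does not specify the real input format.
NUC_TO_INDEX = {"A": 0, "U": 1, "G": 2, "C": 3}

def one_hot_rna(seq):
    """One-hot encode an RNA string into a (len(seq), 4) list of lists."""
    encoded = []
    for nucleotide in seq.upper():
        row = [0.0] * 4
        row[NUC_TO_INDEX[nucleotide]] = 1.0
        encoded.append(row)
    return encoded

features = one_hot_rna("AUGGC")
# Convert to a tensor and call the loaded model, e.g.:
# score = model(torch.tensor([features]))  # semantics depend on the architecture
```

Check the model's `forward` signature (or the original HammadQ123/genai-predictor repo) before relying on any particular encoding.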
## Performance
- Faster loading than the original model
- Reduced memory footprint
- Maintained prediction accuracy
## Related Repositories
- Original model: HammadQ123/genai-predictor
## License
[Add your license here]
|
rtl-llm/qwen2.5coder-7b-origen-vhdl-vhdl-verilog-chisel-gs16 | rtl-llm | 2025-05-31T22:32:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T22:28:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqri01ak10ozo5t0yksk | BootesVoid | 2025-05-31T22:31:10Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T22:31:08Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SEXY
---
# Cmbcesd72001R10Ozzqcm5Ddu_Cmbcqmqri01Ak10Ozo5T0Yksk
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SEXY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SEXY",
"lora_weights": "https://huggingface.co/BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqri01ak10ozo5t0yksk/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqri01ak10ozo5t0yksk', weight_name='lora.safetensors')
image = pipeline('SEXY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqri01ak10ozo5t0yksk/discussions) to add images that show off what you’ve made with this LoRA.
|
rtl-llm/qwen2.5coder-7b-origen-verilog-vhdl-vhdl-chisel-batch8 | rtl-llm | 2025-05-31T22:30:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T22:27:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rtl-llm/qwen2.5coder-7b-origen-vhdl-vhdl-verilog-gs16 | rtl-llm | 2025-05-31T22:30:23Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T22:27:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/AceReason-Nemotron-14B-Q4_K_M-GGUF | Triangle104 | 2025-05-31T22:28:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"reasoning",
"math",
"code",
"reinforcement learning",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/AceReason-Nemotron-14B",
"base_model:quantized:nvidia/AceReason-Nemotron-14B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-31T22:22:28Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- code
- reinforcement learning
- pytorch
- llama-cpp
- gguf-my-repo
base_model: nvidia/AceReason-Nemotron-14B
---
# Triangle104/AceReason-Nemotron-14B-Q4_K_M-GGUF
This model was converted to GGUF format from [`nvidia/AceReason-Nemotron-14B`](https://huggingface.co/nvidia/AceReason-Nemotron-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/AceReason-Nemotron-14B) for more details on the model.
---
We're thrilled to introduce AceReason-Nemotron-14B, a math and code reasoning model trained entirely through reinforcement learning (RL), starting from the DeepSeek-R1-Distilled-Qwen-14B. It delivers impressive results, achieving 78.6% on AIME 2024 (+8.9%), 67.4% on AIME 2025 (+17.4%), 61.1% on LiveCodeBench v5 (+8%), 54.9% on LiveCodeBench v6 (+7%), and 2024 on Codeforces (+543). We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first RL training on math-only prompts, then RL training on code-only prompts. Notably, we find that math-only RL not only significantly enhances the performance of strong distilled models on math benchmarks, but also on code reasoning tasks. In addition, extended code-only RL further improves code benchmark performance while causing minimal degradation in math results. We find that RL not only elicits the foundational reasoning capabilities acquired during pre-training and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q4_K_M-GGUF --hf-file acereason-nemotron-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q4_K_M-GGUF --hf-file acereason-nemotron-14b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q4_K_M-GGUF --hf-file acereason-nemotron-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q4_K_M-GGUF --hf-file acereason-nemotron-14b-q4_k_m.gguf -c 2048
```
|
rtl-llm/qwen2.5coder-7b-origen-verilog-vhdl-chisel | rtl-llm | 2025-05-31T22:26:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T12:46:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AmberYifan/Llama-3.1-8B-sft-spin-10k-ORPO | AmberYifan | 2025-05-31T22:24:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"orpo",
"conversational",
"arxiv:2403.07691",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T22:06:42Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-spin-10k-ORPO
tags:
- generated_from_trainer
- trl
- orpo
licence: license
---
# Model Card for Llama-3.1-8B-sft-spin-10k-ORPO
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-spin-10k-ORPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/821f591e)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Triangle104/AceReason-Nemotron-14B-Q4_K_S-GGUF | Triangle104 | 2025-05-31T22:18:48Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nvidia",
"reasoning",
"math",
"code",
"reinforcement learning",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/AceReason-Nemotron-14B",
"base_model:quantized:nvidia/AceReason-Nemotron-14B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-31T22:14:36Z | ---
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
language:
- en
tags:
- nvidia
- reasoning
- math
- code
- reinforcement learning
- pytorch
- llama-cpp
- gguf-my-repo
base_model: nvidia/AceReason-Nemotron-14B
---
# Triangle104/AceReason-Nemotron-14B-Q4_K_S-GGUF
This model was converted to GGUF format from [`nvidia/AceReason-Nemotron-14B`](https://huggingface.co/nvidia/AceReason-Nemotron-14B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/AceReason-Nemotron-14B) for more details on the model.
---
We're thrilled to introduce AceReason-Nemotron-14B, a math and code reasoning model trained entirely through reinforcement learning (RL), starting from the DeepSeek-R1-Distilled-Qwen-14B. It delivers impressive results, achieving 78.6% on AIME 2024 (+8.9%), 67.4% on AIME 2025 (+17.4%), 61.1% on LiveCodeBench v5 (+8%), 54.9% on LiveCodeBench v6 (+7%), and 2024 on Codeforces (+543). We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first RL training on math-only prompts, then RL training on code-only prompts. Notably, we find that math-only RL not only significantly enhances the performance of strong distilled models on math benchmarks, but also on code reasoning tasks. In addition, extended code-only RL further improves code benchmark performance while causing minimal degradation in math results. We find that RL not only elicits the foundational reasoning capabilities acquired during pre-training and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q4_K_S-GGUF --hf-file acereason-nemotron-14b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q4_K_S-GGUF --hf-file acereason-nemotron-14b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/AceReason-Nemotron-14B-Q4_K_S-GGUF --hf-file acereason-nemotron-14b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/AceReason-Nemotron-14B-Q4_K_S-GGUF --hf-file acereason-nemotron-14b-q4_k_s.gguf -c 2048
```
|
ReadyArt/Valtrya-24B-Settings | ReadyArt | 2025-05-31T22:12:43Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-30T18:04:26Z | ---
license: other
license_name: other
license_link: LICENSE
---
# These are settings for a specific scenario card. |
arnaultsta/MNLP_M2_rag_training_MCQA_whole_RAG_1 | arnaultsta | 2025-05-31T22:09:52Z | 0 | 0 | peft | [
"peft",
"safetensors",
"unsloth",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T15:20:36Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/qwen3-0.6b-base-unsloth-bnb-4bit
tags:
- unsloth
- generated_from_trainer
model-index:
- name: MNLP_M2_rag_training_MCQA_whole_RAG_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MNLP_M2_rag_training_MCQA_whole_RAG_1
This model is a fine-tuned version of [unsloth/qwen3-0.6b-base-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen3-0.6b-base-unsloth-bnb-4bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.0 |
AmberYifan/Llama-3.1-8B-sft-peers-pool-ORPO | AmberYifan | 2025-05-31T22:03:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"orpo",
"conversational",
"arxiv:2403.07691",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T21:45:27Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-peers-pool-ORPO
tags:
- generated_from_trainer
- trl
- orpo
licence: license
---
# Model Card for Llama-3.1-8B-sft-peers-pool-ORPO
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-peers-pool-ORPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/c265z94j)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
dltest12345/testmodel | dltest12345 | 2025-05-31T21:57:48Z | 0 | 0 | null | [
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2025-05-27T04:57:40Z | ---
license: apache-2.0
---
|
ASSERT-KTH/Qwen3-8B-sft | ASSERT-KTH | 2025-05-31T21:56:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T19:45:30Z | ---
base_model: Qwen/Qwen3-8B
library_name: transformers
model_name: Qwen3-8B-sft
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen3-8B-sft
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASSERT-KTH/Qwen3-8B-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/assert-kth/SWE-Gym-SFT/runs/p2ardtou)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.4
- Pytorch: 2.5.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
isaackan/m1a | isaackan | 2025-05-31T21:55:05Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T21:24:29Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: M1A
---
# M1A
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `M1A` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "M1A",
"lora_weights": "https://huggingface.co/isaackan/m1a/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('isaackan/m1a', weight_name='lora.safetensors')
image = pipeline('M1A').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/isaackan/m1a/discussions) to add images that show off what you’ve made with this LoRA.
|
naniltx/codonGPT | naniltx | 2025-05-31T21:46:32Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T21:31:52Z | ---
library_name: transformers
tags: []
---
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Nanil Therapeutics
- **Funded by [optional]:** Nanil Therapeutics
- **Shared by [optional]:** Nanil Therapeutics
- **Model type:** Transformer-based generative language model
- **Language(s) (NLP):** mRNA sequences (biological triplet code)
- **License:** Free for research use |
BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqxa01al10ozh7k8nv3e | BootesVoid | 2025-05-31T21:46:08Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T21:46:06Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: A
---
# Cmbcesd72001R10Ozzqcm5Ddu_Cmbcqmqxa01Al10Ozh7K8Nv3E
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `A` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "A",
"lora_weights": "https://huggingface.co/BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqxa01al10ozh7k8nv3e/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqxa01al10ozh7k8nv3e', weight_name='lora.safetensors')
image = pipeline('A').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbcesd72001r10ozzqcm5ddu_cmbcqmqxa01al10ozh7k8nv3e/discussions) to add images that show off what you’ve made with this LoRA.
|
anwksms/xakka | anwksms | 2025-05-31T21:45:40Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T21:29:56Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: xakka
---
# Xakka
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `xakka` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "xakka",
"lora_weights": "https://huggingface.co/anwksms/xakka/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('anwksms/xakka', weight_name='lora.safetensors')
image = pipeline('xakka').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/anwksms/xakka/discussions) to add images that show off what you’ve made with this LoRA.
|
BootesVoid/cmbcpd3yj015r10ozcvajzqnx_cmbcpsg6y016n10ozn3xhhlv2 | BootesVoid | 2025-05-31T21:43:57Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T21:43:55Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LLA
---
# Cmbcpd3Yj015R10Ozcvajzqnx_Cmbcpsg6Y016N10Ozn3Xhhlv2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LLA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LLA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbcpd3yj015r10ozcvajzqnx_cmbcpsg6y016n10ozn3xhhlv2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbcpd3yj015r10ozcvajzqnx_cmbcpsg6y016n10ozn3xhhlv2', weight_name='lora.safetensors')
image = pipeline('LLA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbcpd3yj015r10ozcvajzqnx_cmbcpsg6y016n10ozn3xhhlv2/discussions) to add images that show off what you’ve made with this LoRA.
|
GGUF-Factory/Requests | GGUF-Factory | 2025-05-31T21:39:43Z | 0 | 0 | null | [
"en",
"region:us"
] | null | 2025-05-31T21:32:00Z | ---
language:
- en
---
<!-- Modern HTML embed inside Markdown -->
<div style="
background-color: #1e1e1e;
color: #eee;
padding: 1rem 1.5rem;
border-radius: 8px;
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
font-size: 1.1rem;
max-width: 600px;
margin: 1rem auto;
box-shadow: 0 4px 12px rgba(0,0,0,0.6);
text-align: center;
">
  <strong>Custom-made READMEs.</strong> Model customization before <code>GGUF</code> conversion: just open a request naming the model you want quantized, along with any customizations you would like applied before we quantize it.
</div>
<div style="
background-color: #1e1e1e;
color: #eee;
padding: 1rem 1.5rem;
border-radius: 8px;
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
font-size: 1.1rem;
max-width: 600px;
margin: 1rem auto;
box-shadow: 0 4px 12px rgba(0,0,0,0.6);
text-align: center;
">
Open a discussion in the <strong>Community</strong> tab to request a <code>GGUF</code> model.
</div>
|
wuxs/Mistral_TopK_SAE_l16 | wuxs | 2025-05-31T21:35:24Z | 0 | 0 | null | [
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-05-31T21:31:09Z | ---
license: apache-2.0
language:
- en
--- |
JacobLinCool/taiko-conformer-7 | JacobLinCool | 2025-05-31T21:28:21Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-05-31T21:28:04Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
FULL-VIDEO-18-Katrina-Lim-Viral-Video/FULL.VIDEO.pinay.Katrina.Lim.Viral.Video.Official | FULL-VIDEO-18-Katrina-Lim-Viral-Video | 2025-05-31T21:26:00Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T21:25:42Z | <animated-image data-catalyst=""><a href="https://wtach.club/leakvideo/?h" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
lisabdunlap/Qwen3-8B-base-5e-cpt-big | lisabdunlap | 2025-05-31T21:24:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T21:23:40Z | ---
base_model: unsloth/Qwen3-8B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisabdunlap
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
guydebruyn/InstructionFollowing_SFT_V2.5 | guydebruyn | 2025-05-31T21:20:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T21:20:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AmberYifan/Llama-3.1-8B-sft-gen-dpo-10k-KTO | AmberYifan | 2025-05-31T21:20:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"kto",
"conversational",
"arxiv:2402.01306",
"base_model:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"base_model:finetune:AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-31T20:39:13Z | ---
base_model: AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF
library_name: transformers
model_name: Llama-3.1-8B-sft-gen-dpo-10k-KTO
tags:
- generated_from_trainer
- trl
- kto
licence: license
---
# Model Card for Llama-3.1-8B-sft-gen-dpo-10k-KTO
This model is a fine-tuned version of [AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF](https://huggingface.co/AmberYifan/Llama-3.1-8B-sft-ultrachat-safeRLHF).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Llama-3.1-8B-sft-gen-dpo-10k-KTO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/dczs6c0e)
This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).
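For readers unfamiliar with the method, the KTO objective can be sketched as follows (notation follows the paper linked above, not this repository's training code): each completion is scored individually against a reference model, with asymmetric weights for desirable and undesirable examples:

```latex
r_\theta(x, y) = \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
z_0 = \mathrm{KL}\!\left(\pi_\theta(y' \mid x) \,\|\, \pi_{\mathrm{ref}}(y' \mid x)\right)

v(x, y) =
\begin{cases}
\lambda_D \, \sigma\!\left(\beta \,(r_\theta(x, y) - z_0)\right) & \text{if } y \text{ is desirable} \\
\lambda_U \, \sigma\!\left(\beta \,(z_0 - r_\theta(x, y))\right) & \text{if } y \text{ is undesirable}
\end{cases}

\mathcal{L}_{\mathrm{KTO}}(\pi_\theta; \pi_{\mathrm{ref}}) = \mathbb{E}_{x, y \sim D}\left[\lambda_y - v(x, y)\right]
```

Here \(z_0\) acts as the prospect-theoretic reference point, and \(\lambda_D, \lambda_U\) let the trainer weight gains and losses asymmetrically.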
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite KTO as:
```bibtex
@article{ethayarajh2024kto,
title = {{KTO: Model Alignment as Prospect Theoretic Optimization}},
author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},
year = 2024,
eprint = {arXiv:2402.01306},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
GNUk/14b_lora_model_v2 | GNUk | 2025-05-31T21:19:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-14B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T21:18:53Z | ---
base_model: unsloth/Qwen3-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** GNUk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-14B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JeonMashup/Ella_Meovv_RVC2 | JeonMashup | 2025-05-31T21:17:16Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-11-18T18:53:29Z | ---
license: apache-2.0
---
|
2237architectural/cdgsf | 2237architectural | 2025-05-31T21:09:13Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T21:09:03Z |
```shell
# Install the Hugging Face CLI
pip install -U "huggingface_hub[cli]"

# Login with your Hugging Face credentials
huggingface-cli login

# Push your model files
huggingface-cli upload 2237architectural/cdgsf .
``` |
ConicCat/MS3.1-Ponente-V1-24B-SFT | ConicCat | 2025-05-31T21:06:40Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"mistral3",
"trl",
"en",
"base_model:unsloth/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:finetune:unsloth/Mistral-Small-3.1-24B-Instruct-2503",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T21:06:39Z | ---
base_model: unsloth/Mistral-Small-3.1-24B-Instruct-2503
tags:
- text-generation-inference
- transformers
- unsloth
- mistral3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ConicCat
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Mistral-Small-3.1-24B-Instruct-2503
This mistral3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Qwen2.5-Omni-3B-i1-GGUF | mradermacher | 2025-05-31T21:00:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"multimodal",
"en",
"base_model:Qwen/Qwen2.5-Omni-3B",
"base_model:quantized:Qwen/Qwen2.5-Omni-3B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-31T16:58:53Z | ---
base_model: Qwen/Qwen2.5-Omni-3B
language:
- en
library_name: transformers
license: other
license_link: LICENSE
license_name: qwen-research
quantized_by: mradermacher
tags:
- multimodal
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen2.5-Omni-3B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ3_S.gguf) | i1-IQ3_S | 1.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q4_0.gguf) | i1-Q4_0 | 2.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q4_1.gguf) | i1-Q4_1 | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Omni-3B-i1-GGUF/resolve/main/Qwen2.5-Omni-3B.i1-Q6_K.gguf) | i1-Q6_K | 2.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
vedant-2012/whisper-small-vedant-nlp | vedant-2012 | 2025-05-31T20:59:49Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-05-31T20:58:50Z | ---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: whisper-small-hi-vedant-kangralkar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-hi-vedant-kangralkar
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
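The linear schedule with warmup listed above is easy to state precisely. The following is an illustrative sketch (not code from this training run) of how the learning rate evolves over the 4000 training steps with 500 warmup steps:

```python
def linear_warmup_lr(step: int,
                     base_lr: float = 1e-5,
                     warmup_steps: int = 500,
                     total_steps: int = 4000) -> float:
    """Learning rate at a given step for linear warmup followed by linear decay."""
    if step < warmup_steps:
        # Ramp up linearly from 0 to base_lr over the warmup phase
        return base_lr * step / warmup_steps
    # Decay linearly from base_lr back to 0 over the remaining steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# Peak learning rate is reached right at the end of warmup
print(linear_warmup_lr(500))
```

This mirrors what `transformers`' linear scheduler does under the hood, with the peak rate equal to the configured `learning_rate`.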
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
50whispering/fsdgd | 50whispering | 2025-05-31T20:56:44Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-31T20:56:19Z |
```shell
# Install the Hugging Face CLI
pip install -U "huggingface_hub[cli]"

# Login with your Hugging Face credentials
huggingface-cli login

# Push your model files
huggingface-cli upload 50whispering/fsdgd .
``` |
domq/ft_cpi_token_pall | domq | 2025-05-31T20:51:14Z | 0 | 0 | transformers | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T20:51:12Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/scottsuk0306_-_zephyr-7b-math-case-6-ep1-8bits | RichardErkhov | 2025-05-31T20:49:12Z | 0 | 0 | null | [
"safetensors",
"mistral",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T20:45:47Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
zephyr-7b-math-case-6-ep1 - bnb 8bits
- Model creator: https://huggingface.co/scottsuk0306/
- Original model: https://huggingface.co/scottsuk0306/zephyr-7b-math-case-6-ep1/
Original model description:
---
library_name: transformers
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- EunsuKim/GSM8K
- EunsuKim/MATH
model-index:
- name: zephyr-7b-math-case-6-ep1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-math-case-6-ep1
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the EunsuKim/GSM8K and the EunsuKim/MATH datasets.
It achieves the following results on the evaluation set:
- Loss: 0.8035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0354 | 1.0 | 5 | 0.8035 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
RichardErkhov/hotdogs_-_openchat3.5_Mistral-7B-v0.1-4bits | RichardErkhov | 2025-05-31T20:47:02Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T20:43:47Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
openchat3.5_Mistral-7B-v0.1 - bnb 4bits
- Model creator: https://huggingface.co/hotdogs/
- Original model: https://huggingface.co/hotdogs/openchat3.5_Mistral-7B-v0.1/
Original model description:
---
base_model: []
tags:
- mergekit
- merge
---
# openchat3.5_Mistral-7B-v0.1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* openchat/openchat_3.5
* mistralai/Mistral-7B-v0.1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model: mistralai/Mistral-7B-v0.1
- layer_range: [0, 32]
model: openchat/openchat_3.5
tokenizer_source: union
```
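The SLERP method configured above interpolates each pair of weight tensors along the sphere rather than along a straight line, with the per-filter `t` values controlling how far toward the second model each layer moves. A minimal sketch of the underlying formula (the function name is ours; mergekit's real implementation additionally handles the per-layer `t` schedules and tensor shapes):

```python
import math

def slerp(t: float, v0: list[float], v1: list[float]) -> list[float]:
    """Spherical linear interpolation between two weight vectors."""
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    theta = math.acos(max(-1.0, min(1.0, dot)))  # angle between the vectors
    if theta < 1e-6:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # t=0 → first model's weights
print(slerp(1.0, [1.0, 0.0], [0.0, 1.0]))  # t=1 → second model's weights
```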
|
Varinder2110/22586dcd-5757-4dbf-9c05-8f64dea876db | Varinder2110 | 2025-05-31T20:45:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-05-31T20:37:28Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# 22586Dcd 5757 4Dbf 9C05 8F64Dea876Db
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Varinder2110/22586dcd-5757-4dbf-9c05-8f64dea876db/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Varinder2110/22586dcd-5757-4dbf-9c05-8f64dea876db', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 600
- Learning rate: 0.0004
- LoRA rank: 4
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Varinder2110/22586dcd-5757-4dbf-9c05-8f64dea876db/discussions) to add images that show off what you’ve made with this LoRA.
|
RichardErkhov/Smuggling1710_-_An4-7Bv2.4-4bits | RichardErkhov | 2025-05-31T20:44:53Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T20:41:37Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
An4-7Bv2.4 - bnb 4bits
- Model creator: https://huggingface.co/Smuggling1710/
- Original model: https://huggingface.co/Smuggling1710/An4-7Bv2.4/
Original model description:
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** Smuggling1710
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/shadowml_-_NeuralNeuBeagle-7B-4bits | RichardErkhov | 2025-05-31T20:44:16Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T20:42:01Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
NeuralNeuBeagle-7B - bnb 4bits
- Model creator: https://huggingface.co/shadowml/
- Original model: https://huggingface.co/shadowml/NeuralNeuBeagle-7B/
Original model description:
---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- mlabonne/NeuralOmniBeagle-7B
---
# NeuralNeuBeagle-7B
NeuralNeuBeagle-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralOmniBeagle-7B](https://huggingface.co/mlabonne/NeuralOmniBeagle-7B)
## 🧩 Configuration
```yaml
models:
- model: mlabonne/NeuBeagle-7B
# No parameters necessary for base model
- model: mlabonne/NeuralOmniBeagle-7B
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: mlabonne/NeuBeagle-7B
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
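The DARE-TIES method configured above merges a fine-tuned model into the base by randomly dropping entries of each delta (keeping roughly `density` of them), rescaling the survivors so the expected update is unchanged, and adding the weighted result back. A rough sketch of that drop-and-rescale step (names are ours; the real mergekit implementation also performs TIES-style sign election and applies the `int8_mask`):

```python
import random

def dare_delta(base, tuned, density=0.53, weight=0.6, seed=0):
    """Drop-and-rescale a task vector (tuned - base), then apply it to base."""
    rng = random.Random(seed)
    merged = []
    for b, t in zip(base, tuned):
        delta = t - b
        if rng.random() < density:
            # Kept entries are rescaled by 1/density so the expected delta is unchanged.
            merged.append(b + weight * delta / density)
        else:
            # Dropped entry: the base weight passes through untouched.
            merged.append(b)
    return merged
```

On average the merged weights move by `weight * delta`, matching the `weight: 0.6` in the YAML.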
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/NeuralNeuBeagle-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RichardErkhov/MaziyarPanahi_-_Optimus-7B-Mistral-7B-Instruct-v0.2-slerp-4bits | RichardErkhov | 2025-05-31T20:44:15Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T20:41:59Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Optimus-7B-Mistral-7B-Instruct-v0.2-slerp - bnb 4bits
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/Optimus-7B-Mistral-7B-Instruct-v0.2-slerp/
Original model description:
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- Q-bert/Optimus-7B
---
# Optimus-7B-Mistral-7B-Instruct-v0.2-slerp
Optimus-7B-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Q-bert/Optimus-7B](https://huggingface.co/Q-bert/Optimus-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: Q-bert/Optimus-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Optimus-7B-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RichardErkhov/MaziyarPanahi_-_NeuralsirkrishnaShadow_YamPastiche-4bits | RichardErkhov | 2025-05-31T20:43:59Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T20:41:42Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
NeuralsirkrishnaShadow_YamPastiche - bnb 4bits
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/NeuralsirkrishnaShadow_YamPastiche/
Original model description:
---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: NeuralsirkrishnaShadow_YamPastiche
base_model:
- automerger/NeuralsirkrishnaShadow-7B
- automerger/YamPastiche-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# NeuralsirkrishnaShadow_YamPastiche
NeuralsirkrishnaShadow_YamPastiche is a merge of the following models:
* [automerger/NeuralsirkrishnaShadow-7B](https://huggingface.co/automerger/NeuralsirkrishnaShadow-7B)
* [automerger/YamPastiche-7B](https://huggingface.co/automerger/YamPastiche-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/NeuralsirkrishnaShadow_YamPastiche"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RichardErkhov/Spanicin_-_Fulcrum_Aura5-4bits | RichardErkhov | 2025-05-31T20:43:52Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T20:41:34Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Fulcrum_Aura5 - bnb 4bits
- Model creator: https://huggingface.co/Spanicin/
- Original model: https://huggingface.co/Spanicin/Fulcrum_Aura5/
Original model description:
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-v0.1
- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
---
# Fulcrum_Aura5
Fulcrum_Aura5 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1)
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-v0.1
layer_range: [0, 32]
- model: samir-fama/SamirGPT-v1
layer_range: [0, 32]
parameters:
density: 0.53
weight: 0.4
- model: abacusai/Slerp-CM-mist-dpo
layer_range: [0, 32]
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Spanicin/Fulcrum_Aura5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RichardErkhov/nakodanei_-_Nomachi-7b-v1-4bits | RichardErkhov | 2025-05-31T20:42:34Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T20:40:14Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Nomachi-7b-v1 - bnb 4bits
- Model creator: https://huggingface.co/nakodanei/
- Original model: https://huggingface.co/nakodanei/Nomachi-7b-v1/
Original model description:
---
license: apache-2.0
---
**Nomachi-7b-v1**
Roleplaying-focused model based on Mistral-7b-v0.1. It aims to produce natural, human-like text, free from slop.
It uses a selection of LoRAs trained almost entirely on human text, merged together with Noromaid-7b-v0.2 back into the Mistral base model using LM Cocktail to maintain generalisation and intelligence.
## Prompt template:
```
### Input:
User: {prompt}
### Response:
Character:
```
Alpaca prompt template should work fine too.
Noromaid: https://huggingface.co/NeverSleep/Noromaid-7b-v0.2/
LM Cocktail: https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail
|
RichardErkhov/MaziyarPanahi_-_Experiment28M7_Experiment26Neuralarjuna-4bits | RichardErkhov | 2025-05-31T20:42:22Z | 0 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-31T20:39:56Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Experiment28M7_Experiment26Neuralarjuna - bnb 4bits
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/Experiment28M7_Experiment26Neuralarjuna/
Original model description:
---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
model_name: Experiment28M7_Experiment26Neuralarjuna
base_model:
- automerger/Experiment28M7-7B
- automerger/Experiment26Neuralarjuna-7B
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Experiment28M7_Experiment26Neuralarjuna
Experiment28M7_Experiment26Neuralarjuna is a merge of the following models:
* [automerger/Experiment28M7-7B](https://huggingface.co/automerger/Experiment28M7-7B)
* [automerger/Experiment26Neuralarjuna-7B](https://huggingface.co/automerger/Experiment26Neuralarjuna-7B)
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Experiment28M7_Experiment26Neuralarjuna"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|