modelId (string, lengths 5–139) | author (string, lengths 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-06-29 06:27:49) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 502 classes) | tags (sequence, lengths 1 – 4.05k) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-06-29 06:23:06) | card (string, lengths 11 – 1.01M)
---|---|---|---|---|---|---|---|---|---|
12Sanju21/llama-2-7b-4bit-raw-privacy | 12Sanju21 | 2025-05-05T06:25:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T05:40:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
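Pending details from the authors, the snippet below is a minimal, untested sketch: it assumes the checkpoint loads through the standard `transformers` causal-LM API, as the repo tags (`llama`, `text-generation`, `safetensors`) suggest.
```python
# Sketch only: assumes a standard transformers causal-LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "12Sanju21/llama-2-7b-4bit-raw-privacy"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```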
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hardlyworking/goldenworking-Q4_K_S-GGUF | hardlyworking | 2025-05-05T06:25:00Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:hardlyworking/goldenworking",
"base_model:quantized:hardlyworking/goldenworking",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T06:24:30Z | ---
base_model: hardlyworking/goldenworking
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# hardlyworking/goldenworking-Q4_K_S-GGUF
This model was converted to GGUF format from [`hardlyworking/goldenworking`](https://huggingface.co/hardlyworking/goldenworking) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/hardlyworking/goldenworking) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo hardlyworking/goldenworking-Q4_K_S-GGUF --hf-file goldenworking-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo hardlyworking/goldenworking-Q4_K_S-GGUF --hf-file goldenworking-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo hardlyworking/goldenworking-Q4_K_S-GGUF --hf-file goldenworking-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo hardlyworking/goldenworking-Q4_K_S-GGUF --hf-file goldenworking-q4_k_s.gguf -c 2048
```
|
darshannere/NLP_Initial_Trained | darshannere | 2025-05-05T06:24:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T06:23:36Z | ---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** darshannere
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Dwight02/Archer | Dwight02 | 2025-05-05T06:22:30Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-05T06:22:30Z | ---
license: bigscience-bloom-rail-1.0
---
|
Mamie03/Vicki | Mamie03 | 2025-05-05T06:20:16Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-05T06:20:16Z | ---
license: bigscience-openrail-m
---
|
Nuf-hugginface/modernbert-embed-quickb | Nuf-hugginface | 2025-05-05T06:14:54Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:127",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:nomic-ai/modernbert-embed-base",
"base_model:finetune:nomic-ai/modernbert-embed-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-04-25T11:25:01Z | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:127
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: What is the difference between traditional programming and ML?
sentences:
- Over the past few years, the field of ML has advanced rapidly, especially in the
area of Natural Language Processing (NLP)—the ability of machines to understand
and generate human language. At the forefront of this progress are Large Language
Models (LLMs), such as OpenAI’s GPT (Generative Pre-trained Transformer), Google’s
PaLM, and Meta’s LLaMA
- . For example, integrating an LLM into a customer support chatbot might involve
connecting it to a company’s internal knowledge base, enabling it to answer customer
questions using accurate, up-to-date information.
- A major subset of AI is Machine Learning (ML), which involves algorithms that
learn from data rather than being explicitly programmed. Instead of writing detailed
instructions for every task, ML models find patterns in large datasets and use
these patterns to make predictions or decisions
- source_sentence: What is one of the tasks mentioned that involves creating new written
content?
sentences:
- In summary, AI and ML form the foundation for intelligent automation, while LLMs
represent a breakthrough in language understanding and generation. Integrating
these models into real-world systems unlocks practical value, turning raw intelligence
into tangible solutions
- '8. Security and Compliance Integrations
Some organizations are integrating LLMs to detect anomalies in text communications
(e.g., phishing detection or policy violations). LLMs can analyze language usage
and flag potentially suspicious behavior more flexibly than keyword-based filters.
Challenges in LLM Integration
Despite their promise, integrating LLMs comes with challenges:'
- . These include text generation, summarization, translation, question answering,
code generation, and more.
- source_sentence: What is one of the components mentioned alongside AI?
sentences:
- '2. Search Engines and Semantic Search
Traditional keyword-based search systems are being enhanced or replaced by semantic
search, where LLMs understand the meaning behind queries. Instead of just matching
words, they interpret intent.'
- For example, e-commerce websites can deploy LLM-powered assistants to help customers
find products, track orders, or get personalized recommendations—much more effectively
than traditional rule-based bots.
- Introduction to AI, Machine Learning, LLMs, and Their Integration
- source_sentence: What is required to provide intelligent features within broader
applications?
sentences:
- . For instance, a spam filter doesn’t just block emails with specific keywords—it
learns from thousands of examples what spam typically looks like.
- 'The Rise of LLM Integrations
While LLMs are powerful on their own, their true potential is unlocked through
integration—connecting these models with other software, services, or systems
to provide intelligent features within broader applications.
Here are some key ways LLMs are being integrated into the digital world:'
- For instance, in a document management system, a user might type "policies about
sick leave", and the system—integrated with an LLM—could retrieve documents discussing
"medical leave", "employee absence", and "illness policies", even if those exact
words weren’t used.
- source_sentence: What type of dialogues can LLMs simulate?
sentences:
- Companies are also experimenting with Retrieval-Augmented Generation (RAG)—a technique
where LLMs are paired with document databases (e.g., vector stores like Supabase,
Pinecone, or Weaviate) to answer questions with enterprise-specific knowledge.
- . For example, integrating an LLM into a customer support chatbot might involve
connecting it to a company’s internal knowledge base, enabling it to answer customer
questions using accurate, up-to-date information.
- '5. Education and Learning Platforms
Educational tools like Khanmigo (from Khan Academy) and other tutoring platforms
are leveraging LLMs to provide real-time help to students. LLMs can break down
complex topics, provide feedback on writing, and simulate Socratic-style dialogues.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: Fine-tuned with [QuicKB](https://github.com/ALucek/QuicKB)
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6666666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6666666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2666666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000007
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000003
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6666666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8310827786456928
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7766666666666667
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7766666666666667
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.6666666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8666666666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6666666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2666666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17333333333333337
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000003
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6666666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8666666666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8203966331432972
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7651851851851852
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7651851851851852
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.6666666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8666666666666667
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6666666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28888888888888886
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000007
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000003
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6666666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8666666666666667
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8357043414408
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7822222222222223
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7822222222222223
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.5333333333333333
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7333333333333333
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9333333333333333
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5333333333333333
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2444444444444445
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16000000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09333333333333335
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5333333333333333
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7333333333333333
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9333333333333333
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7203966331432973
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6540740740740741
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6592022792022793
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.4666666666666667
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6666666666666666
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8666666666666667
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4666666666666667
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22222222222222224
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16000000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08666666666666668
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4666666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6666666666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8666666666666667
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6507228370099043
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5822222222222223
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.58890559732665
name: Cosine Map@100
---
# Fine-tuned with [QuicKB](https://github.com/ALucek/QuicKB)
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Nuf-hugginface/modernbert-embed-quickb")
# Run inference
sentences = [
'What type of dialogues can LLMs simulate?',
'5. Education and Learning Platforms\nEducational tools like Khanmigo (from Khan Academy) and other tutoring platforms are leveraging LLMs to provide real-time help to students. LLMs can break down complex topics, provide feedback on writing, and simulate Socratic-style dialogues.',
'. For example, integrating an LLM into a customer support chatbot might involve connecting it to a company’s internal knowledge base, enabling it to answer customer questions using accurate, up-to-date information.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6667 | 0.6667 | 0.6667 | 0.5333 | 0.4667 |
| cosine_accuracy@3 | 0.8 | 0.8 | 0.8667 | 0.7333 | 0.6667 |
| cosine_accuracy@5 | 1.0 | 0.8667 | 1.0 | 0.8 | 0.8 |
| cosine_accuracy@10 | 1.0 | 1.0 | 1.0 | 0.9333 | 0.8667 |
| cosine_precision@1 | 0.6667 | 0.6667 | 0.6667 | 0.5333 | 0.4667 |
| cosine_precision@3 | 0.2667 | 0.2667 | 0.2889 | 0.2444 | 0.2222 |
| cosine_precision@5 | 0.2 | 0.1733 | 0.2 | 0.16 | 0.16 |
| cosine_precision@10 | 0.1 | 0.1 | 0.1 | 0.0933 | 0.0867 |
| cosine_recall@1 | 0.6667 | 0.6667 | 0.6667 | 0.5333 | 0.4667 |
| cosine_recall@3 | 0.8 | 0.8 | 0.8667 | 0.7333 | 0.6667 |
| cosine_recall@5 | 1.0 | 0.8667 | 1.0 | 0.8 | 0.8 |
| cosine_recall@10 | 1.0 | 1.0 | 1.0 | 0.9333 | 0.8667 |
| **cosine_ndcg@10** | **0.8311** | **0.8204** | **0.8357** | **0.7204** | **0.6507** |
| cosine_mrr@10 | 0.7767 | 0.7652 | 0.7822 | 0.6541 | 0.5822 |
| cosine_map@100 | 0.7767 | 0.7652 | 0.7822 | 0.6592 | 0.5889 |
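For reference, a minimal sketch of how such scores can be reproduced with `InformationRetrievalEvaluator`; the queries, corpus, and relevance judgments below are hypothetical placeholders, not the card's actual evaluation data.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Nuf-hugginface/modernbert-embed-quickb")

# Hypothetical placeholder data; the evaluation set itself is not published here.
queries = {"q1": "What type of dialogues can LLMs simulate?"}
corpus = {"d1": "LLMs can break down complex topics and simulate Socratic-style dialogues."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
print(evaluator(model))  # dict of cosine accuracy/precision/recall/NDCG/MRR/MAP scores
```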
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 127 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 127 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 13.28 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 53.34 tokens</li><li>max: 86 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What task mentioned is related to providing answers to inquiries?</code> | <code>. These include text generation, summarization, translation, question answering, code generation, and more.</code> |
| <code>What do LLMs learn to work effectively?</code> | <code>LLMs work by learning statistical relationships between words and phrases, allowing them to predict and generate language that feels natural. The power of these models lies not only in their size but also in the diversity of tasks they can perform with little to no task-specific training</code> |
| <code>In which industries is the generalization ability considered useful?</code> | <code>. This generalization ability makes them incredibly useful across industries—from customer service and education to software development and healthcare.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
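In sentence-transformers code, a loss configured like the JSON above is typically constructed as follows (a sketch, not the exact training script used here):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")
inner_loss = MultipleNegativesRankingLoss(model)
# Each truncated embedding size is trained jointly, with equal weights (1, 1, 1, 1, 1).
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```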
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 4
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `tf32`: False
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 1.0 | 4 | - | 0.7790 | 0.7120 | 0.7474 | 0.6321 | 0.5684 |
| 2.0 | 8 | - | 0.8275 | 0.7966 | 0.8091 | 0.6904 | 0.6102 |
| 2.5 | 10 | 13.4453 | - | - | - | - | - |
| 3.0 | 12 | - | 0.8311 | 0.8204 | 0.8357 | 0.7178 | 0.6557 |
| **4.0** | **16** | **-** | **0.8311** | **0.8204** | **0.8357** | **0.7204** | **0.6507** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.6
- Sentence Transformers: 3.4.0
- Transformers: 4.48.1
- PyTorch: 2.5.1+cpu
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
AventIQ-AI/roberta-named-entity-recognition-for-content-tagging | AventIQ-AI | 2025-05-05T06:14:46Z | 0 | 0 | null | [
"safetensors",
"roberta",
"region:us"
] | null | 2025-05-05T06:08:41Z | # RoBERTa-Base Model for Named Entity Recognition (NER) on CoNLL-2003 Dataset
This repository hosts a fine-tuned version of the RoBERTa model for Named Entity Recognition (NER) using the CoNLL-2003 dataset. The model is capable of identifying and classifying named entities such as people, organizations, locations, etc.
## Model Details
- **Model Architecture:** RoBERTa Base
- **Task:** Named Entity Recognition
- **Dataset:** CoNLL-2003 (Hugging Face Datasets)
- **Quantization:** Float16
- **Fine-tuning Framework:** Hugging Face Transformers
---
## Installation
```bash
pip install datasets transformers seqeval torch --quiet
```
---
## Loading the Model
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

# Load tokenizer and the fine-tuned token-classification model
# (NER is a token-classification task, not sequence classification)
model_name = "AventIQ-AI/roberta-named-entity-recognition-for-content-tagging"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()

# Label names (e.g., B-PER, I-ORG, O) come from the model config
label_list = [model.config.id2label[i] for i in range(model.config.num_labels)]

# Define test sentences
sentences = [
    "Barack Obama was born in Hawaii.",
    "Elon Musk founded SpaceX and Tesla.",
    "Apple is headquartered in Cupertino, California."
]

for sentence in sentences:
    tokens = tokenizer(sentence, return_tensors="pt", truncation=True).to(device)
    with torch.no_grad():
        outputs = model(**tokens)
    predictions = torch.argmax(outputs.logits, dim=2)
    predicted_labels = predictions[0].cpu().numpy()
    tokens_decoded = tokenizer.convert_ids_to_tokens(tokens["input_ids"][0])
    print(f"Sentence: {sentence}")
    for token, label_id in zip(tokens_decoded, predicted_labels):
        label = label_list[label_id]
        token = token.replace("Ġ", "")  # strip RoBERTa's BPE space marker
        if label != "O":
            print(f"{token}: {label}")
    print("\n" + "-" * 50 + "\n")
```
## Performance Metrics
- **Accuracy:** 0.9921
- **Precision:** 0.9466
- **Recall:** 0.9589
- **F1 Score:** 0.9527
---
## Fine-Tuning Details
### Dataset
The dataset used is the CoNLL-2003 dataset, which contains labeled tokens for Named Entity Recognition (NER).
Entities are categorized into classes such as PER (person), ORG (organization), LOC (location), and MISC (miscellaneous).
It includes four columns: the word, part-of-speech tag, syntactic chunk tag, and NER tag.
The dataset is automatically loaded using the Hugging Face datasets library and is split into train, validation, and test sets.
### Training
- **Epochs:** 3
- **Batch size:** 16 (train) / 16 (eval)
- **Learning rate:** 2e-5
- **Evaluation strategy:** `epoch`
- **FP16 Training:** Enabled
- **Trainer:** Hugging Face `Trainer` API
---
## Quantization
Post-training quantization was applied using `model.to(dtype=torch.float16)` to reduce model size and speed up inference.
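Concretely, that amounts to a one-line cast; the sketch below illustrates the described step, using this repo's own checkpoint id.
```python
import torch
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "AventIQ-AI/roberta-named-entity-recognition-for-content-tagging"
)
model = model.to(dtype=torch.float16)  # halves weight storage vs. float32
model.save_pretrained("quantized-model")  # matches the directory in the repo structure below
```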
---
## Repository Structure
```bash
.
├── quantized-model/ # Directory containing trained model artifacts
│ ├── config.json
│ ├── merges.txt
│ ├── model.safetensors # (May appear as 'model' in UI)
│ ├── special_tokens_map.json
│ ├── tokenizer.json
│ ├── tokenizer_config.json
│ └── vocab.json
├── README.md
```
---
## Limitations
- The model is trained only on CoNLL-2003 and may not generalize to unseen NER tasks.
- Token misalignment may occur for complex or ambiguous phrases.
## Contributing
Feel free to open issues or submit pull requests to improve the model, training process, or documentation.
|
faizandigi009/wav2vec2-base-960h-finetuned-ks | faizandigi009 | 2025-05-05T06:10:41Z | 140 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:facebook/wav2vec2-base-960h",
"base_model:finetune:facebook/wav2vec2-base-960h",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2025-04-28T08:52:33Z | ---
library_name: transformers
base_model: facebook/wav2vec2-base-960h
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: wav2vec2-base-960h-finetuned-ks
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8928571428571429
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-960h-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2908
- Accuracy: 0.8929
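A quick way to try it is the `transformers` audio-classification pipeline (a sketch; `sample.wav` is a placeholder for your own audio file):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="faizandigi009/wav2vec2-base-960h-finetuned-ks")
print(classifier("sample.wav"))  # top predicted labels with scores
```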
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5519 | 1.0 | 70 | 0.4880 | 0.8571 |
| 0.8835 | 2.0 | 140 | 0.6964 | 0.7286 |
| 0.3766 | 3.0 | 210 | 0.3114 | 0.8714 |
| 0.2251 | 4.0 | 280 | 0.2908 | 0.8929 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cpu
- Datasets 3.5.1
- Tokenizers 0.21.1
|
mrdayl/qwen3coder-4bit | mrdayl | 2025-05-05T06:10:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2025-05-04T16:35:34Z | ---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mrdayl
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
OsakanaTeishoku/Qwen2.5-7B-axolotl-sft-v0.2 | OsakanaTeishoku | 2025-05-05T06:08:41Z | 4 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"dataset:Aratako/Magpie-Tanuki-8B-annotated-96k",
"dataset:Aratako/Synthetic-JP-EN-Coding-Dataset-Magpie-69k",
"dataset:DataPilot/Zero_SFT_Ja_v2_b3t4",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-19T04:54:22Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
datasets:
- Aratako/Magpie-Tanuki-8B-annotated-96k
- Aratako/Synthetic-JP-EN-Coding-Dataset-Magpie-69k
- DataPilot/Zero_SFT_Ja_v2_b3t4
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: Qwen2.5-7B-axolotl-sft-v0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0`
```yaml
base_model: Qwen/Qwen2.5-7B
hub_model_id: OsakanaTeishoku/Qwen2.5-7B-axolotl-sft-v0.2
load_in_8bit: false
load_in_4bit: true
strict: false
chat_template: qwen_25
datasets:
# This will be the path used for the data when it is saved to the Volume in the cloud.
- path: Aratako/Magpie-Tanuki-8B-annotated-96k
split: train
type: chat_template
field_messages: messages
- path: Aratako/Synthetic-JP-EN-Coding-Dataset-Magpie-69k
split: train
type: chat_template
field_messages: messages
- path: DataPilot/Zero_SFT_Ja_v2_b3t4
split: train
type: chat_template
field_messages: conversation
message_property_mappings:
role: from
content: value
shuffle_merged_datasets: true
dataset_prepared_path: last_run_prepared
#val_set_size: 0.05
output_dir: ./lora-out
sequence_len: 2048
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: false
adapter: qlora
lora_model_dir:
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_modules_to_save: # required when adding new tokens to LLaMA/Mistral
- embed_tokens
- lm_head
wandb_project: modal-axolotl
wandb_name: 20250419-qwen7b-modal
gradient_accumulation_steps: 4
micro_batch_size: 16
#auto_find_batch_size: true
#num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0001
bf16: true
fp16: false
tf32: false
train_on_inputs: false
group_by_length: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention: true
flash_attention:
warmup_ratio: 0.05
save_steps: 50
max_steps: 200
debug:
#deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
eos_token: "<|im_end|>"
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
eval_strategy: "no"
save_strategy: "steps"
```
</details><br>
# Qwen2.5-7B-axolotl-sft-v0.2
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the Aratako/Magpie-Tanuki-8B-annotated-96k, the Aratako/Synthetic-JP-EN-Coding-Dataset-Magpie-69k and the DataPilot/Zero_SFT_Ja_v2_b3t4 datasets.
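Since this repo holds a PEFT (QLoRA) adapter rather than full weights, inference presumably follows the usual adapter-loading pattern (a sketch, assuming the standard `peft` API):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B", device_map="auto")
model = PeftModel.from_pretrained(base, "OsakanaTeishoku/Qwen2.5-7B-axolotl-sft-v0.2")
tokenizer = AutoTokenizer.from_pretrained("OsakanaTeishoku/Qwen2.5-7B-axolotl-sft-v0.2")
```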
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
TareksLab/SCETest4-70B | TareksLab | 2025-05-05T06:07:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:ReadyArt/Forgotten-Safeword-70B-v5.0",
"base_model:merge:ReadyArt/Forgotten-Safeword-70B-v5.0",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:merge:TheDrummer/Fallen-Llama-3.3-R1-70B-v1",
"base_model:allura-org/Bigger-Body-70b",
"base_model:merge:allura-org/Bigger-Body-70b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T05:29:33Z | ---
base_model:
- ReadyArt/Forgotten-Safeword-70B-v5.0
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- allura-org/Bigger-Body-70b
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- SicariusSicariiStuff/Negative_LLAMA_70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) as a base.
### Models Merged
The following models were included in the merge:
* [ReadyArt/Forgotten-Safeword-70B-v5.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-70B-v5.0)
* [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3)
* [allura-org/Bigger-Body-70b](https://huggingface.co/allura-org/Bigger-Body-70b)
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
parameters:
weight: 0.20
density: 0.8
select_topk: 0.15
lambda: 1.0
- model: ReadyArt/Forgotten-Safeword-70B-v5.0
parameters:
weight: 0.20
density: 0.8
select_topk: 0.15
lambda: 1.0
- model: allura-org/Bigger-Body-70b
parameters:
weight: 0.20
density: 0.8
select_topk: 0.15
lambda: 1.0
- model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
parameters:
weight: 0.20
density: 0.8
select_topk: 0.15
lambda: 1.0
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
weight: 0.20
density: 0.8
select_topk: 0.15
lambda: 1.0
base_model: SicariusSicariiStuff/Negative_LLAMA_70B
merge_method: sce
parameters:
normalize: false
int8_mask: true
tokenizer:
source: SicariusSicariiStuff/Negative_LLAMA_70B
chat_template: llama3
dtype: bfloat16
```
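To reproduce a merge like this one, the config above is typically saved to a file and passed to the mergekit CLI (a sketch; the output path is a placeholder):
```bash
pip install mergekit
# config.yaml contains the YAML configuration shown above
mergekit-yaml config.yaml ./merged-model --cuda
```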
|
Prajwaal/gemma-3b-chat-support-v1 | Prajwaal | 2025-05-05T06:06:15Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-pt",
"base_model:finetune:google/gemma-3-1b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T08:38:32Z | ---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-3b-chat-support-v1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-3b-chat-support-v1
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Prajwaal/gemma-3b-chat-support-v1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
PranayPalem/Reinforce-Pixelcopter-PLE-v0 | PranayPalem | 2025-05-05T06:06:04Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2025-05-05T04:08:23Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 62.30 +/- 53.45
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
MaestrAI/lily-lora-1746424648 | MaestrAI | 2025-05-05T06:03:49Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-05T05:57:27Z | # lily LORA Model
This is a LoRA model for the character Lily.
Created at 2025-05-05 07:57:29
|
GrantBarry2006012/sgbfsgh | GrantBarry2006012 | 2025-05-05T06:02:24Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-05-05T06:02:24Z | ---
license: bigscience-openrail-m
---
|
mlfoundations-dev/d1_math_all_3k | mlfoundations-dev | 2025-05-05T06:00:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T22:31:02Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: d1_math_all_3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# d1_math_all_3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_all_3k dataset.
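Since the base model is an instruct/chat model, inference presumably goes through the chat template (a sketch, assuming the Qwen2.5 template is retained in this checkpoint):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/d1_math_all_3k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Compute 7 * 8 + 12."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```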
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf | RichardErkhov | 2025-05-05T06:00:10Z | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T03:23:52Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022 - GGUF
- Model creator: https://huggingface.co/KONIexp/
- Original model: https://huggingface.co/KONIexp/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q2_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q2_K.gguf) | Q2_K | 2.96GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.IQ3_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.IQ3_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q3_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q3_K.gguf) | Q3_K | 3.74GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q4_0.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q4_0.gguf) | Q4_0 | 4.34GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q4_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q4_K.gguf) | Q4_K | 4.58GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q4_1.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q4_1.gguf) | Q4_1 | 4.78GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q5_0.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q5_0.gguf) | Q5_0 | 5.21GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q5_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q5_K.gguf) | Q5_K | 5.34GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q5_1.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q5_1.gguf) | Q5_1 | 5.65GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q6_K.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q6_K.gguf) | Q6_K | 6.14GB |
| [v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q8_0.gguf](https://huggingface.co/RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf/blob/main/v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q8_0.gguf) | Q8_0 | 7.95GB |
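For orientation, here is a minimal, untested sketch of loading one of the files above with `llama-cpp-python` (the quant choice, `n_ctx`, and prompt are illustrative assumptions, not recommendations from this repo):

```python
# Hypothetical usage sketch: download one quant from the table and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "RichardErkhov/KONIexp_-_v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022-gguf"
filename = "v3_pt_ep1_sft_5_dpo_1_3_00005_05_based_on_llama3_1_8b_final_data_20241022.Q4_K_M.gguf"

# Fetch the ~4.6 GB Q4_K_M file into the local Hugging Face cache
model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# Load with a modest context window; raise n_ctx if you have the memory
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```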
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DeepakSridhar/promptsliders | DeepakSridhar | 2025-05-05T05:53:27Z | 0 | 0 | diffusers | [
"diffusers",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-05-05T05:50:42Z | ---
license: apache-2.0
language:
- en
base_model:
- stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
---
# Prompt Sliders for Fine-Grained Control, Editing and Erasing of Concepts in Diffusion Models
We introduce the Prompt Slider method for precise manipulation, editing, and erasure of concepts in diffusion models. [Project Page](https://deepaksridhar.github.io/promptsliders.github.io/)
### Installing the dependencies
Before running the scripts, make sure to install the library's training dependencies:
You can install diffusers directly from pip, or from source for the latest version. To do this, execute one of the following steps in a new virtual environment:
Install with pip
```bash
pip install diffusers==0.27
```
Install from source
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```
Then cd into the promptsliders folder (you can also copy it into the examples folder in diffusers) and run:
```bash
pip install -r requirements.txt
```
And initialize an [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with:
```bash
accelerate config
```
Now we can launch the training using:
```bash
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export EMOTION="smiling"
accelerate launch textual_inversion.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--learnable_property="object" \
--placeholder_token="<$EMOTION-lora>" \
--initializer_token="$EMOTION" \
--mixed_precision="no" \
--resolution=512 \
--train_batch_size=1 \
--gradient_accumulation_steps=1 \
--max_train_steps=2000 \
--learning_rate=5.0e-04 \
--scale_lr \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--save_as_full_pipeline \
--output_dir=outputs/$EMOTION-promptslider/ \
--prompts_file="textsliders/data/prompts-$EMOTION.yaml"
```
Alternatively, one can run with the default settings:
```bash
bash prompt_slider_emotions.sh
```
A full training run takes ~1-2 hours on one A10 GPU.
### Inference
If you hit the error `TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'` when running the code, install an earlier version of diffusers:
```bash
pip install diffusers==0.20.2
pip install huggingface-hub==0.21
```
Once you have trained a model using the above command, inference can be done simply with the `StableDiffusionPipeline` or `StableDiffusionXLPipeline` via the following script. Make sure to change the concept name to your own concept; the trained embedding is saved at `output/age-slider_prompt/learned_embeds.safetensors`.
```bash
python inference-promptsliders-sdxl.py age
```
To run inference with SD at the default scale:
```bash
python inference_sd.py $path_to_the_saved_embedding $token_name
```
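For SD 1.5 checkpoints, the learned embedding can also be loaded directly in Python; the sketch below is an assumption-based example (the path and placeholder token follow the "smiling" training command above and may differ in your run):

```python
# Minimal sketch: load a trained prompt-slider embedding into SD 1.5.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Register the learned concept token with the text encoder
pipe.load_textual_inversion(
    "outputs/smiling-promptslider/learned_embeds.safetensors",
    token="<smiling-lora>",
)

image = pipe("a portrait photo of a person, <smiling-lora>").images[0]
image.save("smiling_slider.png")
```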
## Acknowledgements
Thanks to [diffusers](https://github.com/huggingface/diffusers) and [Concept Sliders](https://github.com/rohitgandikota/sliders)! |
hxyscott/math-full-add_easy-error_removed-7epoch | hxyscott | 2025-05-05T05:50:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T02:28:07Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dimasik2987/ebb3200c-d4d4-4acc-8985-720a551c8acc | dimasik2987 | 2025-05-05T05:49:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-14B-Chat",
"base_model:adapter:Qwen/Qwen1.5-14B-Chat",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-05T04:53:10Z | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-14B-Chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ebb3200c-d4d4-4acc-8985-720a551c8acc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Qwen/Qwen1.5-14B-Chat
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- cb8e5c50e849efc6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cb8e5c50e849efc6_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: dimasik2987/ebb3200c-d4d4-4acc-8985-720a551c8acc
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 400
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/cb8e5c50e849efc6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 13401a16-f207-4649-b487-818ed13dddff
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 13401a16-f207-4649-b487-818ed13dddff
warmup_steps: 20
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ebb3200c-d4d4-4acc-8985-720a551c8acc
This model is a fine-tuned version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1488 | 0.6814 | 400 | 1.2149 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
AlexHung29629/mistral-grpo-if-500-0502 | AlexHung29629 | 2025-05-05T05:48:15Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"mistral3",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-02T02:42:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF | mradermacher | 2025-05-05T05:47:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"sft",
"en",
"base_model:LeroyDyer/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1",
"base_model:quantized:LeroyDyer/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-04T23:40:54Z | ---
base_model: LeroyDyer/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/LeroyDyer/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
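As a small illustrative sketch (not part of the original card), a single-file quant from the table below can be fetched programmatically before loading it in llama.cpp or a compatible runtime:

```python
# Sketch: download one imatrix quant with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF",
    filename="_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
)
print(path)  # local cache location, ready for llama.cpp / llama-cpp-python
```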
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-IQ1_M.gguf) | i1-IQ1_M | 3.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-IQ2_M.gguf) | i1-IQ2_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1-i1-GGUF/resolve/main/_Spydaz_Web_AGI_DeepThink_Empathic_Roleplay_R1.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
neverboredk4/Mistral-7B-Instruct-v0.2-q4f16_1-MLC | neverboredk4 | 2025-05-05T05:46:26Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-04T12:53:55Z | ---
license: apache-2.0
---
|
prajwalmani/qwen2.5-1.5B-4bit | prajwalmani | 2025-05-05T05:46:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"survey",
"qna",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T05:03:43Z | ---
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
tags:
- transformers
- survey
- qna
--- |
TarunKM/AUTONOMIQ_manual_cleaned_latest_50_epochs | TarunKM | 2025-05-05T05:43:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T05:43:19Z | ---
base_model: unsloth/llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TarunKM
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Hachipo/OpenCoder-8B-Base-PIFT-enja_10000_2 | Hachipo | 2025-05-05T05:43:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T21:52:55Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
leonli66/anole_geometry_reasoning | leonli66 | 2025-05-05T05:37:15Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:leloy/Anole-7b-v0.1-hf",
"base_model:adapter:leloy/Anole-7b-v0.1-hf",
"region:us"
] | null | 2025-05-05T05:34:33Z | ---
base_model: leloy/Anole-7b-v0.1-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
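Since the card leaves this section blank, the following is an unverified sketch based only on the repo metadata (the Chameleon model class and processor for Anole are assumptions):

```python
# Unverified sketch: attach this PEFT adapter to its Anole base model.
from transformers import ChameleonForConditionalGeneration, AutoProcessor
from peft import PeftModel

base = ChameleonForConditionalGeneration.from_pretrained("leloy/Anole-7b-v0.1-hf")
model = PeftModel.from_pretrained(base, "leonli66/anole_geometry_reasoning")
processor = AutoProcessor.from_pretrained("leloy/Anole-7b-v0.1-hf")
```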
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
rosalinec/ppo-Pyramids | rosalinec | 2025-05-05T05:35:43Z | 0 | 0 | ml-agents | [
"ml-agents",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2025-05-05T05:25:37Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: rosalinec/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
magicslabnu/GERM-NT-2.5B-multi | magicslabnu | 2025-05-05T05:33:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"esm",
"feature-extraction",
"arxiv:2505.00598",
"license:mit",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2025-05-05T05:01:25Z | ---
library_name: transformers
license: mit
---
# Model Card for GERM-NT-2.5B-multi
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Haozheng Luo, ChengHao Qiu
- **License:** MIT
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/MAGICS-LAB/GERM
- **Paper:** https://arxiv.org/abs/2505.00598
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("magicslabnu/GERM-NT-2.5B-multi", trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained("magicslabnu/GERM-NT-2.5B-multi", trust_remote_code=True)
```
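As a hedged follow-up (not part of the original card), masked-language-model logits for a short DNA sequence can then be obtained along these lines; the example sequence is arbitrary:

```python
# Sketch: score a short DNA sequence with the loaded MLM head.
import torch

seq = "ATGCGTACGTTAGCATCGATCG"  # arbitrary example sequence
inputs = tokenizer(seq, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)
print(logits.shape)
```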
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
GUE (Genome Understanding Evaluation)
**BibTeX:**
```
@misc{luo2025fastlowcostgenomicfoundation,
title={Fast and Low-Cost Genomic Foundation Models via Outlier Removal},
author={Haozheng Luo and Chenghao Qiu and Maojiang Su and Zhihan Zhou and Zoe Mehta and Guo Ye and Jerry Yao-Chieh Hu and Han Liu},
year={2025},
eprint={2505.00598},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2505.00598},
}
```
|
NewstaR/Fizik-0.6B-Preview | NewstaR | 2025-05-05T05:27:30Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"zh",
"dataset:marcuscedricridia/Fizik-SFT-Unguided",
"base_model:unsloth/Qwen3-0.6B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-0.6B-unsloth-bnb-4bit",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T03:56:59Z | ---
base_model: unsloth/Qwen3-0.6B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: agpl-3.0
language:
- en
- zh
datasets:
- marcuscedricridia/Fizik-SFT-Unguided
---
## Model Name: Fizik
## Base Model: Qwen 3 0.6B
## Version: Preview
## Size: 0.6B parameters
Fizik is a fine-tuned version of Qwen 3 trained on mixed data that includes both reasoning and non-reasoning samples. This preview version is meant to test early steps in "thinking normalization": encouraging the model to reason when needed, but not by default.
## Performance Notes
- Fizik does not reason consistently. It only attempts reasoning when it "feels" it's required.
- This leads to poor reliability, especially in tasks that always require step-by-step logic.
- Its performance is worse than Qwen 3 0.6B on most benchmarks.
## Recommendation
We do not recommend using this version for production or critical tasks.
It is a testing ground for future models.
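Usage is the same as for any Qwen 3 checkpoint; the sketch below (not from the card) shows a plain generation call. Per the notes above, whether a reasoning trace appears is not guaranteed:

```python
# Sketch: basic chat generation with Fizik.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NewstaR/Fizik-0.6B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "What is 17 * 23?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```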
## Coming Soon
A follow-up version is in progress, based on Fizik but trained only on reasoning data (except for 1k non-reasoning samples for calibration). It aims to fix the key flaw: inconsistent thinking. |
ftefaan/model2 | ftefaan | 2025-05-05T05:27:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T05:26:40Z | ---
base_model: unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ftefaan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
magicslabnu/GenomeOcean-100M-finetuned-prom_300_tata | magicslabnu | 2025-05-05T05:26:53Z | 2 | 0 | null | [
"safetensors",
"mistral",
"pytorch",
"genomics",
"dna",
"promoter-prediction",
"text-classification",
"custom_code",
"license:apache-2.0",
"region:us"
] | text-classification | 2025-04-09T08:01:47Z | ---
# IMPORTANT: Choose the correct license identifier from https://hf.co/docs/hub/repositories-licenses
license: apache-2.0 # Or cc-by-sa-4.0, mit, etc. - CHOOSE THE CORRECT ONE
# IMPORTANT: Choose the most accurate pipeline tag for your model's task.
# See: https://huggingface.co/docs/hub/models-widgets#pipeline-types
# Examples for genomics:
# token-classification: If predicting labels for each base/token (e.g., is this base part of a TATA box?)
# text-classification: If classifying the whole sequence (e.g., promoter vs. non-promoter)
pipeline_tag: text-classification # <-- EDIT THIS BASED ON YOUR MODEL'S TASK
tags:
- pytorch
- genomics
- dna
- promoter-prediction
---
# GenomeOcean-100M-finetuned-prom_300_tata
## Model Description
This repository contains the `GenomeOcean-100M-finetuned-prom_300_tata` model.
It is a transformer model fine-tuned for promoter prediction on the prom_300_tata task.
You can use this model with the following Python code; the `AutoModelFor...` class should match the `pipeline_tag` above (sequence classification here).
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification  # matches pipeline_tag: text-classification

model_id = "magicslabnu/GenomeOcean-100M-finetuned-prom_300_tata"

# Load tokenizer and model (the repo ships custom code, hence trust_remote_code)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_id, trust_remote_code=True)

# --- Inference Example ---
# Replace with your own sequence, formatted as the tokenizer expects
# (e.g., spaces between bases if needed)
dna_sequence = "A C G T A C G T"

# Tokenize the input
inputs = tokenizer(dna_sequence, return_tensors="pt")  # "pt" for PyTorch

# Sequence classification: whole-sequence probabilities (promoter vs. non-promoter)
outputs = model(**inputs)
predictions = outputs.logits.softmax(dim=-1)
print("Sequence probabilities:", predictions)

# For per-base labels (e.g., TATA-box positions), load the model with
# AutoModelForTokenClassification instead and take logits.argmax(dim=-1);
# map predicted class ids to names via model.config.id2label.
```
ReadyArt/remnant-mn-12b_EXL2_3.5bpw_H8 | ReadyArt | 2025-05-05T05:25:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"exl2",
"roleplay",
"conversational",
"axolotl",
"base_model:allura-org/remnant-mn-12b",
"base_model:quantized:allura-org/remnant-mn-12b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T22:45:46Z | ---
library_name: transformers
license: apache-2.0
base_model: allura-org/remnant-mn-12b
base_model_relation: quantized
quantized_by: ArtusDev
tags:
- exl2
- roleplay
- conversational
- axolotl
---
# Remnant MN 12b (series 1)
[English](./README.md) | [简体中文](./README-cn.md)
*There's a wisp of dust in the air. It feels like it's from a bygone era, but you don't know where from. It lands on your tongue. It tastes nice.*

Remnant is a series of finetuned LLMs focused on SFW and NSFW roleplaying and conversation.
## Quants
GGUF:
- Todo!
EXL3:
- Todo!
EXL2:
- Todo!
MISC:
- Todo!
## Recommended Settings
Chat template: Mistral v7 Tekken
Samplers:
IDK! Your mileage may vary!
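If your frontend doesn't ship a Mistral v7 Tekken preset, here is a minimal prompt-formatting sketch. It assumes the base repo's tokenizer carries the chat template used at training time (see the axolotl config under Misc); the EXL2 weights themselves load through exllamav2-based backends.
```python
from transformers import AutoTokenizer

# Reproduce the training-time prompt format from the base model's tokenizer
tokenizer = AutoTokenizer.from_pretrained("allura-org/remnant-mn-12b")

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe the dusty old library."},
]

# Per the Jinja template shown below, this should yield something like:
# "<s>[SYSTEM_PROMPT]...[/SYSTEM_PROMPT][INST]...[/INST]"
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```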
## Credits
Humongous thanks to Allura, ilya <3
Big thanks to the developers of Axolotl (whose training framework I used), Mistral (whose model I used), Nebius (whose GPUs I used), and my bank (whose debit card I used)
## Misc
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
# === Model Configuration ===
base_model: mistralai/Mistral-Nemo-Instruct-2407 # e.g. "mistralai/Mistral-Small-24B-Instruct-2501"
load_in_8bit: false
load_in_4bit: false
# === Training Setup ===
num_epochs: 2
micro_batch_size: 16
gradient_accumulation_steps: 1
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
# === Hyperparameter Configuration ===
optimizer: apollo_adamw
# Apollo-mini configuration:
optim_args: "proj=random,rank=1,scale=128.0,scale_type=tensor,update_proj_gap=200"
# Regular Apollo configuration:
# optim_args:
optim_target_modules: all_linear
learning_rate: 1e-5
lr_scheduler: rex
weight_decay: 0.01
warmup_ratio: 0.05
# === Data Configuration ===
datasets:
- path: allura-org/inkmix-v3.0
type: chat_template
split: train
field_messages: conversations
message_field_role: from
message_field_content: value
dataset_prepared_path: last_run_prepared
chat_template: jinja
chat_template_jinja: |
{{- bos_token }}{%- for message in messages %}
{%- if message['role'] == 'system' %}
{{- '[SYSTEM_PROMPT]' + message['content'] + '[/SYSTEM_PROMPT]' }}
{%- elif message['role'] == 'user' %}
{{- '[INST]' + message['content'] + '[/INST]' }}
{%- elif message['role'] == 'assistant' %}
{{- message['content'] + eos_token }}
{%- endif %}
{%- endfor %}
# === Plugins ===
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
# === Hardware Optimization ===
gradient_checkpointing: unsloth
gradient_checkpointing_kwargs:
use_reentrant: false
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
cut_cross_entropy: true
torch_compile: true
# Only if using multiple GPUs:
# deepspeed: [DEEPSPEED_CONFIG_PATH] # e.g. "deepspeed_configs/zero3_bf16.json"
# === Wandb Tracking ===
wandb_project: nemo12b-inkmix-v3
# === Checkpointing ===
saves_per_epoch: 2
save_total_limit: 3
# === Advanced Settings ===
output_dir: offload
bf16: auto
flash_attention: true
train_on_inputs: false
group_by_length: false
logging_steps: 1
trust_remote_code: true
# nemo doesnt support system prompt ootb
tokens:
- "[SYSTEM_PROMPT]"
- "[/SYSTEM_PROMPT]"
special_tokens:
pad_token: "<pad>"
```
</details> |
ReadyArt/remnant-mn-12b_EXL2_2.5bpw_H8 | ReadyArt | 2025-05-05T05:24:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"exl2",
"roleplay",
"conversational",
"axolotl",
"base_model:allura-org/remnant-mn-12b",
"base_model:quantized:allura-org/remnant-mn-12b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T22:44:50Z | ---
library_name: transformers
license: apache-2.0
base_model: allura-org/remnant-mn-12b
base_model_relation: quantized
quantized_by: ArtusDev
tags:
- exl2
- roleplay
- conversational
- axolotl
---
# Remnant MN 12b (series 1)
[English](./README.md) | [简体中文](./README-cn.md)
*There's a wisp of dust in the air. It feels like it's from a bygone era, but you don't know where from. It lands on your tongue. It tastes nice.*

Remnant is a series of finetuned LLMs focused on SFW and NSFW roleplaying and conversation.
## Quants
GGUF:
- Todo!
EXL3:
- Todo!
EXL2:
- Todo!
MISC:
- Todo!
## Recommended Settings
Chat template: Mistral v7 Tekken
Samplers:
IDK! Your mileage may vary!
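A minimal prompt-formatting sketch, assuming the base repo's tokenizer carries the training-time chat template shown in the axolotl config under Misc:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allura-org/remnant-mn-12b")
messages = [{"role": "user", "content": "Hello!"}]
# Should yield something like "<s>[INST]Hello![/INST]" per the training template
print(tokenizer.apply_chat_template(messages, tokenize=False))
```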
## Credits
Humongous thanks to Allura, ilya <3
Big thanks to the developers of Axolotl (whose training framework I used), Mistral (whose model I used), Nebius (whose GPUs I used), and my bank (whose debit card I used)
## Misc
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
# === Model Configuration ===
base_model: mistralai/Mistral-Nemo-Instruct-2407 # e.g. "mistralai/Mistral-Small-24B-Instruct-2501"
load_in_8bit: false
load_in_4bit: false
# === Training Setup ===
num_epochs: 2
micro_batch_size: 16
gradient_accumulation_steps: 1
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
# === Hyperparameter Configuration ===
optimizer: apollo_adamw
# Apollo-mini configuration:
optim_args: "proj=random,rank=1,scale=128.0,scale_type=tensor,update_proj_gap=200"
# Regular Apollo configuration:
# optim_args:
optim_target_modules: all_linear
learning_rate: 1e-5
lr_scheduler: rex
weight_decay: 0.01
warmup_ratio: 0.05
# === Data Configuration ===
datasets:
- path: allura-org/inkmix-v3.0
type: chat_template
split: train
field_messages: conversations
message_field_role: from
message_field_content: value
dataset_prepared_path: last_run_prepared
chat_template: jinja
chat_template_jinja: |
{{- bos_token }}{%- for message in messages %}
{%- if message['role'] == 'system' %}
{{- '[SYSTEM_PROMPT]' + message['content'] + '[/SYSTEM_PROMPT]' }}
{%- elif message['role'] == 'user' %}
{{- '[INST]' + message['content'] + '[/INST]' }}
{%- elif message['role'] == 'assistant' %}
{{- message['content'] + eos_token }}
{%- endif %}
{%- endfor %}
# === Plugins ===
plugins:
- axolotl.integrations.liger.LigerPlugin
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
# === Hardware Optimization ===
gradient_checkpointing: unsloth
gradient_checkpointing_kwargs:
use_reentrant: false
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
cut_cross_entropy: true
torch_compile: true
# Only if using multiple GPUs:
# deepspeed: [DEEPSPEED_CONFIG_PATH] # e.g. "deepspeed_configs/zero3_bf16.json"
# === Wandb Tracking ===
wandb_project: nemo12b-inkmix-v3
# === Checkpointing ===
saves_per_epoch: 2
save_total_limit: 3
# === Advanced Settings ===
output_dir: offload
bf16: auto
flash_attention: true
train_on_inputs: false
group_by_length: false
logging_steps: 1
trust_remote_code: true
# nemo doesnt support system prompt ootb
tokens:
- "[SYSTEM_PROMPT]"
- "[/SYSTEM_PROMPT]"
special_tokens:
pad_token: "<pad>"
```
</details> |
thejaminator/low-medical-2e-05-rated-0-4000insec-12000-mcq0-medical-qwen3_8b | thejaminator | 2025-05-05T05:20:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T05:20:04Z | ---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
chuuhtetnaing/smolvlm-mmocr-sft | chuuhtetnaing | 2025-05-05T05:17:38Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolVLM-Instruct",
"base_model:adapter:HuggingFaceTB/SmolVLM-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-05-05T05:17:34Z | ---
library_name: peft
license: apache-2.0
base_model: HuggingFaceTB/SmolVLM-Instruct
tags:
- generated_from_trainer
model-index:
- name: smolvlm-mmocr-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolvlm-mmocr-sft
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1884
## Model description
More information needed
## Intended uses & limitations
More information needed
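No usage snippet is provided, so here is a minimal loading sketch. It assumes this repository hosts a PEFT (LoRA) adapter on top of SmolVLM-Instruct, as the metadata above indicates; the class names are standard transformers/PEFT APIs, not something confirmed by the card.
```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from peft import PeftModel

base_id = "HuggingFaceTB/SmolVLM-Instruct"
adapter_id = "chuuhtetnaing/smolvlm-mmocr-sft"

# Load the base vision-language model, then attach the fine-tuned adapter
processor = AutoProcessor.from_pretrained(base_id)
base_model = AutoModelForVision2Seq.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```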
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7936 | 0.0138 | 20 | 0.8587 |
| 0.7632 | 0.0275 | 40 | 0.8587 |
| 0.7891 | 0.0413 | 60 | 0.8584 |
| 0.7804 | 0.0551 | 80 | 0.8581 |
| 0.802 | 0.0689 | 100 | 0.8572 |
| 0.7793 | 0.0826 | 120 | 0.8562 |
| 0.7857 | 0.0964 | 140 | 0.8546 |
| 0.7788 | 0.1102 | 160 | 0.8524 |
| 0.7979 | 0.1240 | 180 | 0.8498 |
| 0.78 | 0.1377 | 200 | 0.8459 |
| 0.7597 | 0.1515 | 220 | 0.8410 |
| 0.7325 | 0.1653 | 240 | 0.8366 |
| 0.761 | 0.1791 | 260 | 0.8316 |
| 0.7802 | 0.1928 | 280 | 0.8265 |
| 0.7239 | 0.2066 | 300 | 0.8217 |
| 0.7196 | 0.2204 | 320 | 0.8176 |
| 0.7355 | 0.2342 | 340 | 0.8137 |
| 0.72 | 0.2479 | 360 | 0.8097 |
| 0.7521 | 0.2617 | 380 | 0.8061 |
| 0.725 | 0.2755 | 400 | 0.8030 |
| 0.7314 | 0.2893 | 420 | 0.8001 |
| 0.7148 | 0.3030 | 440 | 0.7972 |
| 0.7272 | 0.3168 | 460 | 0.7945 |
| 0.7194 | 0.3306 | 480 | 0.7919 |
| 0.7303 | 0.3444 | 500 | 0.7893 |
| 0.7205 | 0.3581 | 520 | 0.7868 |
| 0.7239 | 0.3719 | 540 | 0.7846 |
| 0.7029 | 0.3857 | 560 | 0.7824 |
| 0.695 | 0.3994 | 580 | 0.7804 |
| 0.7207 | 0.4132 | 600 | 0.7785 |
| 0.7406 | 0.4270 | 620 | 0.7763 |
| 0.7032 | 0.4408 | 640 | 0.7747 |
| 0.7011 | 0.4545 | 660 | 0.7726 |
| 0.6904 | 0.4683 | 680 | 0.7710 |
| 0.6949 | 0.4821 | 700 | 0.7694 |
| 0.7226 | 0.4959 | 720 | 0.7676 |
| 0.6762 | 0.5096 | 740 | 0.7661 |
| 0.7739 | 0.5234 | 760 | 0.7646 |
| 0.7166 | 0.5372 | 780 | 0.7633 |
| 0.6984 | 0.5510 | 800 | 0.7616 |
| 0.6933 | 0.5647 | 820 | 0.7603 |
| 0.679 | 0.5785 | 840 | 0.7592 |
| 0.7128 | 0.5923 | 860 | 0.7578 |
| 0.6924 | 0.6061 | 880 | 0.7567 |
| 0.6899 | 0.6198 | 900 | 0.7553 |
| 0.6965 | 0.6336 | 920 | 0.7542 |
| 0.6746 | 0.6474 | 940 | 0.7531 |
| 0.6708 | 0.6612 | 960 | 0.7518 |
| 0.6746 | 0.6749 | 980 | 0.7506 |
| 0.6747 | 0.6887 | 1000 | 0.7497 |
| 0.6814 | 0.7025 | 1020 | 0.7486 |
| 0.6758 | 0.7163 | 1040 | 0.7475 |
| 0.6752 | 0.7300 | 1060 | 0.7465 |
| 0.7218 | 0.7438 | 1080 | 0.7454 |
| 0.6733 | 0.7576 | 1100 | 0.7443 |
| 0.685 | 0.7713 | 1120 | 0.7435 |
| 0.6592 | 0.7851 | 1140 | 0.7427 |
| 0.6827 | 0.7989 | 1160 | 0.7417 |
| 0.732 | 0.8127 | 1180 | 0.7410 |
| 0.6803 | 0.8264 | 1200 | 0.7401 |
| 0.6643 | 0.8402 | 1220 | 0.7392 |
| 0.6805 | 0.8540 | 1240 | 0.7385 |
| 0.7031 | 0.8678 | 1260 | 0.7377 |
| 0.6857 | 0.8815 | 1280 | 0.7371 |
| 0.6663 | 0.8953 | 1300 | 0.7364 |
| 0.6788 | 0.9091 | 1320 | 0.7354 |
| 0.7035 | 0.9229 | 1340 | 0.7347 |
| 0.6669 | 0.9366 | 1360 | 0.7343 |
| 0.6869 | 0.9504 | 1380 | 0.7333 |
| 0.6996 | 0.9642 | 1400 | 0.7326 |
| 0.6985 | 0.9780 | 1420 | 0.7320 |
| 0.6678 | 0.9917 | 1440 | 0.7312 |
| 0.6306 | 1.0055 | 1460 | 0.7307 |
| 0.6634 | 1.0193 | 1480 | 0.7300 |
| 0.6708 | 1.0331 | 1500 | 0.7293 |
| 0.6596 | 1.0468 | 1520 | 0.7290 |
| 0.6837 | 1.0606 | 1540 | 0.7282 |
| 0.684 | 1.0744 | 1560 | 0.7276 |
| 0.6889 | 1.0882 | 1580 | 0.7269 |
| 0.6758 | 1.1019 | 1600 | 0.7265 |
| 0.6513 | 1.1157 | 1620 | 0.7260 |
| 0.6555 | 1.1295 | 1640 | 0.7255 |
| 0.66 | 1.1433 | 1660 | 0.7248 |
| 0.6808 | 1.1570 | 1680 | 0.7244 |
| 0.6482 | 1.1708 | 1700 | 0.7239 |
| 0.6662 | 1.1846 | 1720 | 0.7236 |
| 0.6438 | 1.1983 | 1740 | 0.7230 |
| 0.6369 | 1.2121 | 1760 | 0.7226 |
| 0.6516 | 1.2259 | 1780 | 0.7223 |
| 0.6547 | 1.2397 | 1800 | 0.7217 |
| 0.6489 | 1.2534 | 1820 | 0.7212 |
| 0.6729 | 1.2672 | 1840 | 0.7206 |
| 0.6717 | 1.2810 | 1860 | 0.7202 |
| 0.6622 | 1.2948 | 1880 | 0.7198 |
| 0.6587 | 1.3085 | 1900 | 0.7192 |
| 0.6796 | 1.3223 | 1920 | 0.7190 |
| 0.6571 | 1.3361 | 1940 | 0.7185 |
| 0.6237 | 1.3499 | 1960 | 0.7182 |
| 0.6473 | 1.3636 | 1980 | 0.7177 |
| 0.6528 | 1.3774 | 2000 | 0.7172 |
| 0.6795 | 1.3912 | 2020 | 0.7169 |
| 0.6397 | 1.4050 | 2040 | 0.7164 |
| 0.6471 | 1.4187 | 2060 | 0.7162 |
| 0.6247 | 1.4325 | 2080 | 0.7157 |
| 0.6623 | 1.4463 | 2100 | 0.7154 |
| 0.6656 | 1.4601 | 2120 | 0.7149 |
| 0.6573 | 1.4738 | 2140 | 0.7146 |
| 0.6317 | 1.4876 | 2160 | 0.7144 |
| 0.6455 | 1.5014 | 2180 | 0.7141 |
| 0.6426 | 1.5152 | 2200 | 0.7136 |
| 0.6472 | 1.5289 | 2220 | 0.7133 |
| 0.6447 | 1.5427 | 2240 | 0.7129 |
| 0.6618 | 1.5565 | 2260 | 0.7127 |
| 0.6706 | 1.5702 | 2280 | 0.7121 |
| 0.6581 | 1.5840 | 2300 | 0.7120 |
| 0.6337 | 1.5978 | 2320 | 0.7117 |
| 0.6526 | 1.6116 | 2340 | 0.7115 |
| 0.6379 | 1.6253 | 2360 | 0.7113 |
| 0.6366 | 1.6391 | 2380 | 0.7110 |
| 0.659 | 1.6529 | 2400 | 0.7107 |
| 0.6685 | 1.6667 | 2420 | 0.7103 |
| 0.6317 | 1.6804 | 2440 | 0.7100 |
| 0.6611 | 1.6942 | 2460 | 0.7098 |
| 0.6431 | 1.7080 | 2480 | 0.7094 |
| 0.6249 | 1.7218 | 2500 | 0.7091 |
| 0.6502 | 1.7355 | 2520 | 0.7088 |
| 0.6506 | 1.7493 | 2540 | 0.7086 |
| 0.6707 | 1.7631 | 2560 | 0.7083 |
| 0.6399 | 1.7769 | 2580 | 0.7081 |
| 0.6189 | 1.7906 | 2600 | 0.7079 |
| 0.6167 | 1.8044 | 2620 | 0.7078 |
| 0.6469 | 1.8182 | 2640 | 0.7075 |
| 0.6611 | 1.8320 | 2660 | 0.7073 |
| 0.6446 | 1.8457 | 2680 | 0.7071 |
| 0.6374 | 1.8595 | 2700 | 0.7068 |
| 0.6394 | 1.8733 | 2720 | 0.7066 |
| 0.6195 | 1.8871 | 2740 | 0.7063 |
| 0.6255 | 1.9008 | 2760 | 0.7060 |
| 0.6346 | 1.9146 | 2780 | 0.7059 |
| 0.6375 | 1.9284 | 2800 | 0.7058 |
| 0.6254 | 1.9421 | 2820 | 0.7056 |
| 0.6203 | 1.9559 | 2840 | 0.7056 |
| 0.6619 | 1.9697 | 2860 | 0.7039 |
| 0.6151 | 1.9835 | 2880 | 0.6930 |
| 0.6233 | 1.9972 | 2900 | 0.6823 |
| 0.5892 | 2.0110 | 2920 | 0.6747 |
| 0.6042 | 2.0248 | 2940 | 0.6678 |
| 0.6045 | 2.0386 | 2960 | 0.6627 |
| 0.5495 | 2.0523 | 2980 | 0.6552 |
| 0.579 | 2.0661 | 3000 | 0.6479 |
| 0.5868 | 2.0799 | 3020 | 0.6444 |
| 0.564 | 2.0937 | 3040 | 0.6419 |
| 0.5657 | 2.1074 | 3060 | 0.6381 |
| 0.6204 | 2.1212 | 3080 | 0.6348 |
| 0.5565 | 2.1350 | 3100 | 0.6314 |
| 0.5645 | 2.1488 | 3120 | 0.6253 |
| 0.5375 | 2.1625 | 3140 | 0.6246 |
| 0.5386 | 2.1763 | 3160 | 0.6194 |
| 0.5427 | 2.1901 | 3180 | 0.6194 |
| 0.556 | 2.2039 | 3200 | 0.6147 |
| 0.5428 | 2.2176 | 3220 | 0.6134 |
| 0.5598 | 2.2314 | 3240 | 0.6102 |
| 0.5304 | 2.2452 | 3260 | 0.6081 |
| 0.5201 | 2.2590 | 3280 | 0.6084 |
| 0.5168 | 2.2727 | 3300 | 0.6081 |
| 0.5309 | 2.2865 | 3320 | 0.6060 |
| 0.5051 | 2.3003 | 3340 | 0.6055 |
| 0.516 | 2.3140 | 3360 | 0.6026 |
| 0.5164 | 2.3278 | 3380 | 0.6016 |
| 0.5445 | 2.3416 | 3400 | 0.5978 |
| 0.5139 | 2.3554 | 3420 | 0.5984 |
| 0.5178 | 2.3691 | 3440 | 0.5968 |
| 0.5028 | 2.3829 | 3460 | 0.5974 |
| 0.5499 | 2.3967 | 3480 | 0.5940 |
| 0.493 | 2.4105 | 3500 | 0.5956 |
| 0.5218 | 2.4242 | 3520 | 0.6022 |
| 0.5468 | 2.4380 | 3540 | 0.5990 |
| 0.5282 | 2.4518 | 3560 | 0.5985 |
| 0.521 | 2.4656 | 3580 | 0.5965 |
| 0.5267 | 2.4793 | 3600 | 0.5952 |
| 0.4896 | 2.4931 | 3620 | 0.5923 |
| 0.5165 | 2.5069 | 3640 | 0.5877 |
| 0.4976 | 2.5207 | 3660 | 0.5880 |
| 0.5296 | 2.5344 | 3680 | 0.5865 |
| 0.506 | 2.5482 | 3700 | 0.5853 |
| 0.4869 | 2.5620 | 3720 | 0.5822 |
| 0.5062 | 2.5758 | 3740 | 0.5800 |
| 0.5116 | 2.5895 | 3760 | 0.5818 |
| 0.4781 | 2.6033 | 3780 | 0.5800 |
| 0.4819 | 2.6171 | 3800 | 0.5784 |
| 0.4937 | 2.6309 | 3820 | 0.5768 |
| 0.4934 | 2.6446 | 3840 | 0.5747 |
| 0.4932 | 2.6584 | 3860 | 0.5729 |
| 0.4938 | 2.6722 | 3880 | 0.5728 |
| 0.4741 | 2.6860 | 3900 | 0.5709 |
| 0.5275 | 2.6997 | 3920 | 0.5691 |
| 0.4808 | 2.7135 | 3940 | 0.5667 |
| 0.5362 | 2.7273 | 3960 | 0.5669 |
| 0.4926 | 2.7410 | 3980 | 0.5656 |
| 0.452 | 2.7548 | 4000 | 0.5679 |
| 0.482 | 2.7686 | 4020 | 0.5662 |
| 0.5015 | 2.7824 | 4040 | 0.5646 |
| 0.4782 | 2.7961 | 4060 | 0.5644 |
| 0.4462 | 2.8099 | 4080 | 0.5668 |
| 0.5052 | 2.8237 | 4100 | 0.5630 |
| 0.4967 | 2.8375 | 4120 | 0.5625 |
| 0.4944 | 2.8512 | 4140 | 0.5599 |
| 0.4818 | 2.8650 | 4160 | 0.5635 |
| 0.4883 | 2.8788 | 4180 | 0.5629 |
| 0.4817 | 2.8926 | 4200 | 0.5605 |
| 0.4229 | 2.9063 | 4220 | 0.5576 |
| 0.466 | 2.9201 | 4240 | 0.5557 |
| 0.4666 | 2.9339 | 4260 | 0.5596 |
| 0.4579 | 2.9477 | 4280 | 0.5565 |
| 0.4947 | 2.9614 | 4300 | 0.5535 |
| 0.4747 | 2.9752 | 4320 | 0.5531 |
| 0.4776 | 2.9890 | 4340 | 0.5535 |
| 0.493 | 3.0028 | 4360 | 0.5543 |
| 0.4521 | 3.0165 | 4380 | 0.5535 |
| 0.4488 | 3.0303 | 4400 | 0.5515 |
| 0.4858 | 3.0441 | 4420 | 0.5528 |
| 0.4496 | 3.0579 | 4440 | 0.5518 |
| 0.4564 | 3.0716 | 4460 | 0.5516 |
| 0.4418 | 3.0854 | 4480 | 0.5488 |
| 0.4803 | 3.0992 | 4500 | 0.5477 |
| 0.4678 | 3.1129 | 4520 | 0.5590 |
| 0.495 | 3.1267 | 4540 | 0.5565 |
| 0.4729 | 3.1405 | 4560 | 0.5506 |
| 0.491 | 3.1543 | 4580 | 0.5578 |
| 0.4929 | 3.1680 | 4600 | 0.5468 |
| 0.4558 | 3.1818 | 4620 | 0.5410 |
| 0.4504 | 3.1956 | 4640 | 0.5394 |
| 0.4641 | 3.2094 | 4660 | 0.5370 |
| 0.4694 | 3.2231 | 4680 | 0.5399 |
| 0.4549 | 3.2369 | 4700 | 0.5296 |
| 0.4759 | 3.2507 | 4720 | 0.5266 |
| 0.4405 | 3.2645 | 4740 | 0.5258 |
| 0.4444 | 3.2782 | 4760 | 0.5155 |
| 0.4494 | 3.2920 | 4780 | 0.5159 |
| 0.4451 | 3.3058 | 4800 | 0.5025 |
| 0.4292 | 3.3196 | 4820 | 0.4966 |
| 0.4197 | 3.3333 | 4840 | 0.4877 |
| 0.454 | 3.3471 | 4860 | 0.4852 |
| 0.3973 | 3.3609 | 4880 | 0.4778 |
| 0.3518 | 3.3747 | 4900 | 0.4709 |
| 0.4021 | 3.3884 | 4920 | 0.4593 |
| 0.4024 | 3.4022 | 4940 | 0.4510 |
| 0.3711 | 3.4160 | 4960 | 0.4521 |
| 0.3724 | 3.4298 | 4980 | 0.4366 |
| 0.3733 | 3.4435 | 5000 | 0.4260 |
| 0.3816 | 3.4573 | 5020 | 0.4199 |
| 0.3673 | 3.4711 | 5040 | 0.4169 |
| 0.3428 | 3.4848 | 5060 | 0.4063 |
| 0.3369 | 3.4986 | 5080 | 0.3998 |
| 0.3553 | 3.5124 | 5100 | 0.3898 |
| 0.3304 | 3.5262 | 5120 | 0.3827 |
| 0.3403 | 3.5399 | 5140 | 0.3773 |
| 0.3 | 3.5537 | 5160 | 0.3737 |
| 0.3441 | 3.5675 | 5180 | 0.3787 |
| 0.3022 | 3.5813 | 5200 | 0.3602 |
| 0.3205 | 3.5950 | 5220 | 0.3591 |
| 0.304 | 3.6088 | 5240 | 0.3527 |
| 0.3291 | 3.6226 | 5260 | 0.3457 |
| 0.2545 | 3.6364 | 5280 | 0.3405 |
| 0.2878 | 3.6501 | 5300 | 0.3324 |
| 0.2974 | 3.6639 | 5320 | 0.3316 |
| 0.278 | 3.6777 | 5340 | 0.3256 |
| 0.3123 | 3.6915 | 5360 | 0.3239 |
| 0.2838 | 3.7052 | 5380 | 0.3164 |
| 0.2876 | 3.7190 | 5400 | 0.3143 |
| 0.2974 | 3.7328 | 5420 | 0.3113 |
| 0.2508 | 3.7466 | 5440 | 0.3087 |
| 0.2793 | 3.7603 | 5460 | 0.3043 |
| 0.2858 | 3.7741 | 5480 | 0.2988 |
| 0.2761 | 3.7879 | 5500 | 0.2918 |
| 0.2378 | 3.8017 | 5520 | 0.2905 |
| 0.2419 | 3.8154 | 5540 | 0.2908 |
| 0.2414 | 3.8292 | 5560 | 0.2874 |
| 0.2702 | 3.8430 | 5580 | 0.2871 |
| 0.2875 | 3.8567 | 5600 | 0.2836 |
| 0.2457 | 3.8705 | 5620 | 0.2810 |
| 0.2574 | 3.8843 | 5640 | 0.2779 |
| 0.2391 | 3.8981 | 5660 | 0.2777 |
| 0.2426 | 3.9118 | 5680 | 0.2750 |
| 0.2459 | 3.9256 | 5700 | 0.2735 |
| 0.2283 | 3.9394 | 5720 | 0.2703 |
| 0.2269 | 3.9532 | 5740 | 0.2662 |
| 0.2051 | 3.9669 | 5760 | 0.2645 |
| 0.2235 | 3.9807 | 5780 | 0.2612 |
| 0.2038 | 3.9945 | 5800 | 0.2594 |
| 0.2268 | 4.0083 | 5820 | 0.2593 |
| 0.2068 | 4.0220 | 5840 | 0.2538 |
| 0.245 | 4.0358 | 5860 | 0.2508 |
| 0.2426 | 4.0496 | 5880 | 0.2520 |
| 0.1992 | 4.0634 | 5900 | 0.2506 |
| 0.2809 | 4.0771 | 5920 | 0.2482 |
| 0.195 | 4.0909 | 5940 | 0.2422 |
| 0.2125 | 4.1047 | 5960 | 0.2429 |
| 0.2376 | 4.1185 | 5980 | 0.2428 |
| 0.2237 | 4.1322 | 6000 | 0.2406 |
| 0.2138 | 4.1460 | 6020 | 0.2395 |
| 0.2001 | 4.1598 | 6040 | 0.2371 |
| 0.2051 | 4.1736 | 6060 | 0.2351 |
| 0.2127 | 4.1873 | 6080 | 0.2328 |
| 0.173 | 4.2011 | 6100 | 0.2335 |
| 0.1769 | 4.2149 | 6120 | 0.2344 |
| 0.1615 | 4.2287 | 6140 | 0.2298 |
| 0.1935 | 4.2424 | 6160 | 0.2286 |
| 0.1954 | 4.2562 | 6180 | 0.2290 |
| 0.208 | 4.2700 | 6200 | 0.2267 |
| 0.1896 | 4.2837 | 6220 | 0.2232 |
| 0.2094 | 4.2975 | 6240 | 0.2206 |
| 0.1854 | 4.3113 | 6260 | 0.2212 |
| 0.1948 | 4.3251 | 6280 | 0.2196 |
| 0.1667 | 4.3388 | 6300 | 0.2194 |
| 0.1926 | 4.3526 | 6320 | 0.2168 |
| 0.1657 | 4.3664 | 6340 | 0.2158 |
| 0.1802 | 4.3802 | 6360 | 0.2140 |
| 0.1564 | 4.3939 | 6380 | 0.2164 |
| 0.1864 | 4.4077 | 6400 | 0.2145 |
| 0.187 | 4.4215 | 6420 | 0.2145 |
| 0.1868 | 4.4353 | 6440 | 0.2130 |
| 0.189 | 4.4490 | 6460 | 0.2107 |
| 0.1808 | 4.4628 | 6480 | 0.2102 |
| 0.1828 | 4.4766 | 6500 | 0.2079 |
| 0.1771 | 4.4904 | 6520 | 0.2081 |
| 0.1856 | 4.5041 | 6540 | 0.2061 |
| 0.1685 | 4.5179 | 6560 | 0.2045 |
| 0.1567 | 4.5317 | 6580 | 0.2059 |
| 0.1913 | 4.5455 | 6600 | 0.2051 |
| 0.1937 | 4.5592 | 6620 | 0.2031 |
| 0.1823 | 4.5730 | 6640 | 0.2024 |
| 0.1613 | 4.5868 | 6660 | 0.2021 |
| 0.1837 | 4.6006 | 6680 | 0.2012 |
| 0.1419 | 4.6143 | 6700 | 0.2012 |
| 0.1769 | 4.6281 | 6720 | 0.1997 |
| 0.1683 | 4.6419 | 6740 | 0.1977 |
| 0.1614 | 4.6556 | 6760 | 0.1986 |
| 0.1686 | 4.6694 | 6780 | 0.1990 |
| 0.1851 | 4.6832 | 6800 | 0.1976 |
| 0.1529 | 4.6970 | 6820 | 0.1978 |
| 0.1746 | 4.7107 | 6840 | 0.2065 |
| 0.1474 | 4.7245 | 6860 | 0.1999 |
| 0.1415 | 4.7383 | 6880 | 0.1980 |
| 0.1709 | 4.7521 | 6900 | 0.1965 |
| 0.1673 | 4.7658 | 6920 | 0.1958 |
| 0.1732 | 4.7796 | 6940 | 0.1953 |
| 0.1424 | 4.7934 | 6960 | 0.1947 |
| 0.1271 | 4.8072 | 6980 | 0.1940 |
| 0.1893 | 4.8209 | 7000 | 0.1936 |
| 0.1696 | 4.8347 | 7020 | 0.1917 |
| 0.1644 | 4.8485 | 7040 | 0.1916 |
| 0.1509 | 4.8623 | 7060 | 0.1912 |
| 0.1507 | 4.8760 | 7080 | 0.1912 |
| 0.1471 | 4.8898 | 7100 | 0.1900 |
| 0.1554 | 4.9036 | 7120 | 0.1895 |
| 0.1547 | 4.9174 | 7140 | 0.1892 |
| 0.1787 | 4.9311 | 7160 | 0.1888 |
| 0.1436 | 4.9449 | 7180 | 0.1889 |
| 0.1522 | 4.9587 | 7200 | 0.1886 |
| 0.1657 | 4.9725 | 7220 | 0.1886 |
| 0.1716 | 4.9862 | 7240 | 0.1885 |
| 0.1889 | 5.0 | 7260 | 0.1884 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1 |
ma921/phi2_dpo_golden-hh_noise40_epoch3 | ma921 | 2025-05-05T05:16:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"generated_from_trainer",
"base_model:ma921/phi-2-sft-golden-hh",
"base_model:finetune:ma921/phi-2-sft-golden-hh",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T05:13:23Z | ---
library_name: transformers
license: mit
base_model: ma921/phi-2-sft-golden-hh
tags:
- generated_from_trainer
model-index:
- name: phi2_dpo_golden-hh_noise40_epoch3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi2_dpo_golden-hh_noise40_epoch3
This model is a fine-tuned version of [ma921/phi-2-sft-golden-hh](https://huggingface.co/ma921/phi-2-sft-golden-hh) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Fhrozen/YuE-s1-7B-anneal-en-cot-ONNX | Fhrozen | 2025-05-05T05:16:30Z | 0 | 0 | null | [
"onnx",
"llama",
"base_model:m-a-p/YuE-s1-7B-anneal-en-cot",
"base_model:quantized:m-a-p/YuE-s1-7B-anneal-en-cot",
"license:apache-2.0",
"region:us"
] | null | 2025-05-05T05:00:59Z | ---
license: apache-2.0
base_model: m-a-p/YuE-s1-7B-anneal-en-cot
---
The model's name is **YuE (乐)**. In Chinese, the word means "music" and "happiness." Some of you may find words that start with Yu hard to pronounce. If so, you can just call it "yeah." We wrote a song with our model's name.
https://huggingface.co/m-a-p/YuE-s1-7B-anneal-en-cot with ONNX weights.
## Usage
WIP
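Until the official notes land, a plausible loading sketch using Hugging Face Optimum; this is untested against this export, and the class choice assumes a standard Optimum ONNX layout for causal LMs:
```python
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

model_id = "Fhrozen/YuE-s1-7B-anneal-en-cot-ONNX"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Write a short chorus about summer rain:", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```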
### Example
WIP
|
adarsh3601/gemma-3-emotion-fine-tuned-4bit | adarsh3601 | 2025-05-05T05:15:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T05:14:56Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** adarsh3601
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Triangle104/remnant-qwen3-8b-Q5_K_M-GGUF | Triangle104 | 2025-05-05T05:13:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"roleplay",
"conversational",
"axolotl",
"qwen",
"llama-cpp",
"gguf-my-repo",
"base_model:allura-org/remnant-qwen3-8b",
"base_model:quantized:allura-org/remnant-qwen3-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T05:12:26Z | ---
base_model: allura-org/remnant-qwen3-8b
library_name: transformers
license: apache-2.0
tags:
- roleplay
- conversational
- axolotl
- qwen
- llama-cpp
- gguf-my-repo
---
# Triangle104/remnant-qwen3-8b-Q5_K_M-GGUF
This model was converted to GGUF format from [`allura-org/remnant-qwen3-8b`](https://huggingface.co/allura-org/remnant-qwen3-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/remnant-qwen3-8b) for more details on the model.
---
Remnant is a series of finetuned LLMs focused on SFW and NSFW roleplaying and conversation.
Recommended Settings
-
Chat template: ChatML. Apparently Llama 3 format works too, though? Ymmv :3
Samplers:
-
- 0.8 temp
- 0.1 min_p
- 0.5 presence penalty
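These map onto llama.cpp sampling flags roughly as follows (a sketch; verify flag names with `llama-cli --help` on your build):
```bash
llama-cli --hf-repo Triangle104/remnant-qwen3-8b-Q5_K_M-GGUF \
  --hf-file remnant-qwen3-8b-q5_k_m.gguf \
  --temp 0.8 --min-p 0.1 --presence-penalty 0.5 \
  -p "Once upon a time"
```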
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/remnant-qwen3-8b-Q5_K_M-GGUF --hf-file remnant-qwen3-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/remnant-qwen3-8b-Q5_K_M-GGUF --hf-file remnant-qwen3-8b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/remnant-qwen3-8b-Q5_K_M-GGUF --hf-file remnant-qwen3-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/remnant-qwen3-8b-Q5_K_M-GGUF --hf-file remnant-qwen3-8b-q5_k_m.gguf -c 2048
```
|
Grey-Skye-Evans-Viral-Video/Full.Clip.Grey.Skye.Evans.Viral.Video.Leaked.Official | Grey-Skye-Evans-Viral-Video | 2025-05-05T05:13:19Z | 0 | 0 | null | [
"region:us"
] | null | 2025-05-05T05:09:56Z | <animated-image data-catalyst=""><a href="https://tinyurl.com/2x869u6x?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
WATCH: Kid Dance Phenom Grey Evans Channels Beyoncé on GMA
Good Morning America featured a viral Instagram video starring six-year-old dancer Grey Skye Evans. The
This 6-Year-Old Re-Created Aunt Viv's Iconic Fresh Prince Dance, and I Detect Zero Flaws
Grey Skye Evans, 6, nailed every step of Aunt Viv's dance routine from The Fresh Prince of Bel-Air,
GREY SKYE EVANS BREAKS THE INTERNET WITH HER RENDITION OF BEYONCE'S "BLACK IS KING" CHOREOGRAPHY
Grey Skye Evans has broken the internet with her rendition of Beyoncé's choreography from her recent |
hxyscott/math-full-7epoch | hxyscott | 2025-05-05T05:10:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T02:27:49Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/remnant-qwen3-8b-Q5_K_S-GGUF | Triangle104 | 2025-05-05T05:10:44Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"roleplay",
"conversational",
"axolotl",
"qwen",
"llama-cpp",
"gguf-my-repo",
"base_model:allura-org/remnant-qwen3-8b",
"base_model:quantized:allura-org/remnant-qwen3-8b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T05:09:17Z | ---
base_model: allura-org/remnant-qwen3-8b
library_name: transformers
license: apache-2.0
tags:
- roleplay
- conversational
- axolotl
- qwen
- llama-cpp
- gguf-my-repo
---
# Triangle104/remnant-qwen3-8b-Q5_K_S-GGUF
This model was converted to GGUF format from [`allura-org/remnant-qwen3-8b`](https://huggingface.co/allura-org/remnant-qwen3-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/allura-org/remnant-qwen3-8b) for more details on the model.
---
Remnant is a series of finetuned LLMs focused on SFW and NSFW roleplaying and conversation.
## Recommended Settings
Chat template: ChatML. Apparently Llama 3 format works too, though? Ymmv :3
Samplers:
- 0.8 temp
- 0.1 min_p
- 0.5 presence penalty
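As a sketch, the same samplers translate to llama.cpp flags like so (verify with `llama-cli --help`):
```bash
llama-cli --hf-repo Triangle104/remnant-qwen3-8b-Q5_K_S-GGUF \
  --hf-file remnant-qwen3-8b-q5_k_s.gguf \
  --temp 0.8 --min-p 0.1 --presence-penalty 0.5 \
  -p "Once upon a time"
```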
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/remnant-qwen3-8b-Q5_K_S-GGUF --hf-file remnant-qwen3-8b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/remnant-qwen3-8b-Q5_K_S-GGUF --hf-file remnant-qwen3-8b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/remnant-qwen3-8b-Q5_K_S-GGUF --hf-file remnant-qwen3-8b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/remnant-qwen3-8b-Q5_K_S-GGUF --hf-file remnant-qwen3-8b-q5_k_s.gguf -c 2048
```
|
memeviss/zombieXVI_9 | memeviss | 2025-05-05T05:09:54Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-05T05:05:36Z | # Optimized TTS Model
This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.
## Usage
To generate speech using this model, you can use the included script:
```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```
For more details, see the optimization report in this directory.
|
Membersuger/Euro_61 | Membersuger | 2025-05-05T05:08:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T04:57:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hxyscott/math-full-error_removed-7epoch | hxyscott | 2025-05-05T05:08:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T02:27:40Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
memeviss/zombieXVI_3 | memeviss | 2025-05-05T05:07:37Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-05T05:05:22Z | # Optimized TTS Model
This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.
## Usage
To generate speech using this model, you can use the included script:
```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```
For more details, see the optimization report in this directory.
|
devMubashir/phi-4-mini-ttsql-reasoning | devMubashir | 2025-05-05T05:03:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T19:19:18Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Membersuger/Euro_58 | Membersuger | 2025-05-05T05:02:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T04:56:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF | mradermacher | 2025-05-05T04:54:26Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"128 k context",
"reasoning",
"thinking",
"qwen3",
"16 experts",
"en",
"base_model:DavidAU/Qwen3-30B-A6B-16-Extreme-128k-context",
"base_model:quantized:DavidAU/Qwen3-30B-A6B-16-Extreme-128k-context",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T03:12:51Z | ---
base_model: DavidAU/Qwen3-30B-A6B-16-Extreme-128k-context
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- 128 k context
- reasoning
- thinking
- qwen3
- 16 experts
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/Qwen3-30B-A6B-16-Extreme-128k-context
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
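If you prefer a programmatic route, the `llama-cpp-python` bindings can load these files directly. A minimal sketch (the file name and context size below are assumptions — substitute whichever quant from the table you downloaded):

```python
from llama_cpp import Llama

# Load a local GGUF quant; model_path is a placeholder for the file you downloaded.
llm = Llama(
    model_path="Qwen3-30B-A6B-16-Extreme-128k-context.Q4_K_M.gguf",
    n_ctx=8192,  # raise for long-context use, at the cost of memory
)

out = llm("Explain mixture-of-experts models in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```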
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.Q2_K.gguf) | Q2_K | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.IQ3_XS.gguf) | IQ3_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.Q3_K_S.gguf) | Q3_K_S | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.IQ3_S.gguf) | IQ3_S | 13.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.IQ3_M.gguf) | IQ3_M | 13.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.Q3_K_L.gguf) | Q3_K_L | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.IQ4_XS.gguf) | IQ4_XS | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.Q5_K_S.gguf) | Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.Q5_K_M.gguf) | Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.Q6_K.gguf) | Q6_K | 25.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-128k-context-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme-128k-context.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jojoRabbit24/unsloth_Qwen2.5-32B_finsecure | jojoRabbit24 | 2025-05-05T04:53:48Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-05-05T04:53:47Z | # unsloth_Qwen2.5-32B_finsecure
This model is a fine-tuned version of unsloth/Qwen2.5-32B-bnb-4bit using the Unsloth library for information security applications.
## Usage
```python
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="jojoRabbit24/unsloth_Qwen2.5-32B_finsecure",
max_seq_length=1024,
load_in_4bit=True,
)
```
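After loading, generation follows the usual `transformers` API. A hedged sketch continuing the snippet above (the prompt is illustrative only):

```python
# Switch the Unsloth model into inference mode before generating.
FastLanguageModel.for_inference(model)

inputs = tokenizer("List three common phishing indicators:", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```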
|
HuggingXT/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-gentle_quick_wolf | HuggingXT | 2025-05-05T04:53:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am gentle quick wolf",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-02T18:39:48Z | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-gentle_quick_wolf
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am gentle quick wolf
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-gentle_quick_wolf
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HuggingXT/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-gentle_quick_wolf", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
rayonlabs/pythia-70m-opus100-en-es-d5a4bc97-fa21-4396-bffc-02ce3a64dc57 | rayonlabs | 2025-05-05T04:44:47Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-05T04:44:47Z | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aac5b2e6-aff6-4ef2-93d9-0798814e3d76
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 43246a3a844994b2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/43246a3a844994b2_train_data.json
type:
field_instruction: en
field_output: es
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: filipesantoscv11/aac5b2e6-aff6-4ef2-93d9-0798814e3d76
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/43246a3a844994b2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d5a4bc97-fa21-4396-bffc-02ce3a64dc57
wandb_project: s56-6
wandb_run: your_name
wandb_runid: d5a4bc97-fa21-4396-bffc-02ce3a64dc57
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# aac5b2e6-aff6-4ef2-93d9-0798814e3d76
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the 43246a3a844994b2_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0809
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.7039 | 0.0017 | 200 | 6.0809 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
user074/grpo_qwen05b_composer | user074 | 2025-05-05T04:44:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"arxiv:2407.10671",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T04:43:39Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# Qwen2.5-0.5B
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the base 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens
**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
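Since this is a base model, plain text completion is the intended usage. A minimal sketch with the standard `transformers` API (a generic example, not an official snippet from this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Base models continue text rather than follow chat turns.
inputs = tokenizer("Large language models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```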
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
Jeefery/gemma3-test | Jeefery | 2025-05-05T04:43:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T13:44:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AventIQ-AI/named-entity-recognition-for-tagging-news-articles | AventIQ-AI | 2025-05-05T04:38:06Z | 0 | 0 | null | [
"safetensors",
"roberta",
"region:us"
] | null | 2025-05-05T04:37:15Z | # RoBERTa-Base Quantized Model for Named Entity Recognition (NER)
This repository contains a quantized version of the RoBERTa model fine-tuned for Named Entity Recognition (NER) on the WikiANN (English) dataset. The model is particularly suitable for **tagging named entities in news articles**, such as persons, organizations, and locations. It has been optimized for efficient deployment using quantization techniques.
## Model Details
- **Model Architecture:** RoBERTa Base
- **Task:** Named Entity Recognition
- **Dataset:** WikiANN (English)
- **Use Case:** Tagging news articles with named entities
- **Quantization:** Float16
- **Fine-tuning Framework:** Hugging Face Transformers
## Usage
### Installation
```sh
pip install transformers torch
```
### Loading the Model
```python
from transformers import RobertaTokenizerFast, AutoModelForTokenClassification, pipeline
import torch
# Load the tokenizer and the fine-tuned NER model from this repository
model_id = "AventIQ-AI/named-entity-recognition-for-tagging-news-articles"
tokenizer = RobertaTokenizerFast.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)
# Create NER pipeline
ner_pipeline = pipeline(
"ner",
model=model,
tokenizer=tokenizer,
aggregation_strategy="simple"
)
# Sample news headline
text = "Apple Inc. is planning to open a new campus in London by the end of 2025."
# Inference
entities = ner_pipeline(text)
# Display results
for ent in entities:
print(f"{ent['word']}: {ent['entity_group']} ({ent['score']:.2f})")
```
## Performance Metrics
- **Accuracy:** 0.923422
- **Precision:** 0.923052
- **Recall:** 0.923422
- **F1:** 0.923150
## Fine-Tuning Details
### Dataset
The dataset is taken from Hugging Face WikiANN (English).
### Training
- Number of epochs: 5
- Batch size: 16
- Evaluation strategy: epoch
- Learning rate: 3e-5
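A minimal sketch of the corresponding `Trainer` setup (illustrative only; `model`, `train_ds`, and `eval_ds` are placeholders, and the actual training script is not published here):

```python
from transformers import TrainingArguments, Trainer

# Mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="roberta-ner-news",
    num_train_epochs=5,
    per_device_train_batch_size=16,
    eval_strategy="epoch",
    learning_rate=3e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```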
### Quantization
Post-training quantization to float16 was applied using PyTorch's built-in utilities to reduce the model size and improve inference efficiency.
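A minimal sketch of what the float16 conversion looks like (illustrative, not the exact script used for this checkpoint):

```python
import torch
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "AventIQ-AI/named-entity-recognition-for-tagging-news-articles"
)

# Cast weights to half precision to shrink the checkpoint and speed up inference.
model = model.half()
model.save_pretrained("ner-news-fp16")
```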
## Repository Structure
```
.
├── config.json
├── tokenizer_config.json
├── special_tokens_map.json
├── tokenizer.json
├── model.safetensors # Fine Tuned Model
├── README.md # Model documentation
```
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
## Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
|
gunnybd01/Momentum-ShortPct | gunnybd01 | 2025-05-05T04:36:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-04T17:39:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rayonlabs/pythia-70m-opus100-en-ja-9e9e612b-9a07-4996-b19f-dd5a18a0de2a | rayonlabs | 2025-05-05T04:34:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:78fef953edf6ce18_train_data.json",
"base_model:EleutherAI/pythia-70m",
"base_model:adapter:EleutherAI/pythia-70m",
"region:us"
] | null | 2025-05-05T04:34:43Z | ---
library_name: peft
tags:
- generated_from_trainer
datasets:
- 78fef953edf6ce18_train_data.json
base_model: EleutherAI/pythia-70m
model-index:
- name: nathanialhunt2000/ef2d320b-b43f-4b9f-aa14-c88b17eb1212
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nathanialhunt2000/ef2d320b-b43f-4b9f-aa14-c88b17eb1212
This model is a PEFT adapter fine-tuned from [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the /workspace/input_data/78fef953edf6ce18_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
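Since this repository stores a PEFT adapter on top of `EleutherAI/pythia-70m`, loading it looks roughly like this (a hedged sketch):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/pythia-70m"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the fine-tuned adapter from this repository.
model = PeftModel.from_pretrained(base, "rayonlabs/pythia-70m-opus100-en-ja-9e9e612b-9a07-4996-b19f-dd5a18a0de2a")
```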
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1 |
sergioalves/9379d335-2dad-486d-b3ed-be6c1748e2fa | sergioalves | 2025-05-05T04:33:47Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:beomi/polyglot-ko-12.8b-safetensors",
"base_model:adapter:beomi/polyglot-ko-12.8b-safetensors",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-04T23:18:08Z | ---
library_name: peft
license: apache-2.0
base_model: beomi/polyglot-ko-12.8b-safetensors
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9379d335-2dad-486d-b3ed-be6c1748e2fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: beomi/polyglot-ko-12.8b-safetensors
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- d9def50aefc21d30_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/d9def50aefc21d30_train_data.json
type:
field_input: old_contents
field_instruction: message
field_output: new_contents
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: sergioalves/9379d335-2dad-486d-b3ed-be6c1748e2fa
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 400
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/d9def50aefc21d30_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 16759967-f79e-43e6-930c-30b72302d5b4
wandb_project: s56-8
wandb_run: your_name
wandb_runid: 16759967-f79e-43e6-930c-30b72302d5b4
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9379d335-2dad-486d-b3ed-be6c1748e2fa
This model is a fine-tuned version of [beomi/polyglot-ko-12.8b-safetensors](https://huggingface.co/beomi/polyglot-ko-12.8b-safetensors) on the d9def50aefc21d30_train_data.json dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6543 | 0.0110 | 400 | 0.4480 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
leesh2147/cpi | leesh2147 | 2025-05-05T04:29:25Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-05T04:29:25Z | ---
license: bigscience-bloom-rail-1.0
---
|
genovalabs/Qwen3-4B | genovalabs | 2025-05-05T04:29:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:finetune:Qwen/Qwen3-4B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T04:28:42Z | ---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-4B-Base
---
# Qwen3-4B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-4B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
> [!TIP]
> If you encounter significant endless repetitions, please refer to the [Best Practices](#best-practices) section for optimal sampling parameters, and set the ``presence_penalty`` to 1.5.
## Quickstart
The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following contains a code snippet illustrating how to use the model generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-4B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-4B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-4B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers have also supported Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-4B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-4B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters** (a code sketch follows this list):
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
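As referenced in item 1, here is a hedged sketch of the recommended thinking-mode sampling settings expressed as `generate()` arguments. It continues the Quickstart snippet above, where `model` and `model_inputs` are defined; `presence_penalty` is a serving-framework option (e.g., vLLM) and is not shown here:

```python
# Thinking-mode sampling settings from the best practices above.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # avoid greedy decoding in thinking mode
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```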
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
``` |
lfhe/task-1-Qwen-Qwen2.5-7B-Instruct | lfhe | 2025-05-05T04:28:20Z | 0 | 0 | peft | [
"peft",
"safetensors",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2025-01-08T17:09:05Z | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
taku404/distilbert-base-uncased-finetuned-emotion | taku404 | 2025-05-05T04:24:26Z | 0 | 0 | null | [
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2025-05-01T16:19:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2126
- Accuracy: 0.9275
- F1: 0.9273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8041 | 1.0 | 250 | 0.3014 | 0.9135 | 0.9114 |
| 0.2437 | 2.0 | 500 | 0.2126 | 0.9275 | 0.9273 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.7.0
- Datasets 3.5.1
- Tokenizers 0.15.2
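### Usage sketch
A minimal hedged inference example; the card does not document the dataset or label set, but the model name suggests a DistilBERT emotion classifier:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="taku404/distilbert-base-uncased-finetuned-emotion",
)
# Returns the predicted label and score for the input text.
print(classifier("I am so happy with how this turned out!"))
```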
|
shivsoji/savita_flux_lora | shivsoji | 2025-05-05T04:20:12Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"en",
"dataset:shivsoji/savita_bhabhi",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] | text-to-image | 2025-05-03T01:48:54Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/ComfyUI_00001_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: savita
license: apache-2.0
datasets:
- shivsoji/savita_bhabhi
language:
- en
pipeline_tag: text-to-image
library_name: diffusers
---
# Savita Bhabhi
<Gallery />
## Trigger words
You should use `savita` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/shivsoji/savita_flux_lora/tree/main) them in the Files & versions tab. |
TommyClas/50d_seg_20250505_models | TommyClas | 2025-05-05T04:15:10Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2025-05-05T02:32:03Z | ---
library_name: transformers
license: other
base_model: nvidia/mit-b0
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: 50d_seg_20250505_models
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 50d_seg_20250505_models
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the TommyClas/50d_seg_20250505 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6290
- Mean Iou: 0.4492
- Mean Accuracy: 0.6737
- Overall Accuracy: 0.7746
- Accuracy 背景: nan
- Accuracy 孔隙: 0.8376
- Accuracy Ld c-s-h: 0.7910
- Accuracy Hd c-s-h: 0.1872
- Accuracy 未水化水泥颗粒: 0.8788
- Iou 背景: 0.0
- Iou 孔隙: 0.7258
- Iou Ld c-s-h: 0.5885
- Iou Hd c-s-h: 0.1481
- Iou 未水化水泥颗粒: 0.7836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy 背景 | Accuracy 孔隙 | Accuracy Ld c-s-h | Accuracy Hd c-s-h | Accuracy 未水化水泥颗粒 | Iou 背景 | Iou 孔隙 | Iou Ld c-s-h | Iou Hd c-s-h | Iou 未水化水泥颗粒 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------:|:-----------:|:-----------------:|:-----------------:|:----------------:|:------:|:------:|:------------:|:------------:|:-----------:|
| 0.9349 | 1.0 | 250 | 0.7999 | 0.3457 | 0.5882 | 0.6878 | nan | 0.9526 | 0.4673 | 0.0302 | 0.9028 | 0.0 | 0.6238 | 0.4035 | 0.0294 | 0.6719 |
| 0.4975 | 2.0 | 500 | 0.6322 | 0.4240 | 0.6551 | 0.7595 | nan | 0.7904 | 0.7828 | 0.1254 | 0.9219 | 0.0 | 0.6954 | 0.5777 | 0.1104 | 0.7364 |
| 0.4561 | 3.0 | 750 | 0.6637 | 0.4113 | 0.6420 | 0.7341 | nan | 0.9649 | 0.5801 | 0.1706 | 0.8525 | 0.0 | 0.6651 | 0.4797 | 0.1500 | 0.7619 |
| 0.4252 | 4.0 | 1000 | 0.5670 | 0.4487 | 0.6760 | 0.7770 | nan | 0.8545 | 0.7794 | 0.1869 | 0.8833 | 0.0 | 0.7235 | 0.5914 | 0.1598 | 0.7687 |
| 0.4125 | 5.0 | 1250 | 0.5627 | 0.4510 | 0.6776 | 0.7778 | nan | 0.8832 | 0.7620 | 0.1987 | 0.8663 | 0.0 | 0.7258 | 0.5862 | 0.1714 | 0.7714 |
| 0.4027 | 6.0 | 1500 | 0.5648 | 0.4522 | 0.6776 | 0.7774 | nan | 0.8622 | 0.7838 | 0.2072 | 0.8573 | 0.0 | 0.7268 | 0.5908 | 0.1730 | 0.7705 |
| 0.3988 | 7.0 | 1750 | 0.5693 | 0.4501 | 0.6780 | 0.7766 | nan | 0.8814 | 0.7525 | 0.1978 | 0.8801 | 0.0 | 0.7273 | 0.5826 | 0.1646 | 0.7760 |
| 0.3932 | 8.0 | 2000 | 0.6253 | 0.4390 | 0.6735 | 0.7593 | nan | 0.7101 | 0.8296 | 0.2248 | 0.9297 | 0.0 | 0.6640 | 0.5888 | 0.1774 | 0.7647 |
| 0.3902 | 9.0 | 2250 | 0.5624 | 0.4605 | 0.6910 | 0.7820 | nan | 0.8480 | 0.7833 | 0.2453 | 0.8874 | 0.0 | 0.7274 | 0.5987 | 0.1995 | 0.7767 |
| 0.3799 | 10.0 | 2500 | 0.5830 | 0.4329 | 0.6543 | 0.7764 | nan | 0.8152 | 0.8344 | 0.0733 | 0.8941 | 0.0 | 0.7188 | 0.6013 | 0.0696 | 0.7748 |
| 0.375 | 11.0 | 2750 | 0.5879 | 0.4424 | 0.6651 | 0.7749 | nan | 0.8936 | 0.7632 | 0.1499 | 0.8536 | 0.0 | 0.7262 | 0.5800 | 0.1351 | 0.7709 |
| 0.3716 | 12.0 | 3000 | 0.5809 | 0.4452 | 0.6709 | 0.7780 | nan | 0.8051 | 0.8239 | 0.1484 | 0.9063 | 0.0 | 0.7161 | 0.6028 | 0.1318 | 0.7755 |
| 0.3691 | 13.0 | 3250 | 0.5855 | 0.4384 | 0.6599 | 0.7735 | nan | 0.9040 | 0.7559 | 0.1276 | 0.8519 | 0.0 | 0.7244 | 0.5753 | 0.1178 | 0.7747 |
| 0.3656 | 14.0 | 3500 | 0.5786 | 0.4520 | 0.6766 | 0.7805 | nan | 0.8401 | 0.8054 | 0.1797 | 0.8813 | 0.0 | 0.7276 | 0.5993 | 0.1539 | 0.7792 |
| 0.3675 | 15.0 | 3750 | 0.5994 | 0.4464 | 0.6731 | 0.7732 | nan | 0.7739 | 0.8323 | 0.1802 | 0.9059 | 0.0 | 0.7025 | 0.5996 | 0.1503 | 0.7796 |
| 0.3629 | 16.0 | 4000 | 0.5848 | 0.4465 | 0.6682 | 0.7780 | nan | 0.8832 | 0.7809 | 0.1570 | 0.8516 | 0.0 | 0.7304 | 0.5872 | 0.1404 | 0.7743 |
| 0.3612 | 17.0 | 4250 | 0.6166 | 0.4480 | 0.6773 | 0.7691 | nan | 0.7504 | 0.8361 | 0.2204 | 0.9023 | 0.0 | 0.6898 | 0.5982 | 0.1735 | 0.7787 |
| 0.36 | 18.0 | 4500 | 0.5916 | 0.4531 | 0.6810 | 0.7771 | nan | 0.8695 | 0.7610 | 0.2122 | 0.8815 | 0.0 | 0.7302 | 0.5846 | 0.1696 | 0.7809 |
| 0.3596 | 19.0 | 4750 | 0.5868 | 0.4555 | 0.6822 | 0.7792 | nan | 0.8524 | 0.7847 | 0.2150 | 0.8766 | 0.0 | 0.7292 | 0.5930 | 0.1736 | 0.7815 |
| 0.356 | 20.0 | 5000 | 0.5946 | 0.4539 | 0.6808 | 0.7766 | nan | 0.8356 | 0.7906 | 0.2163 | 0.8807 | 0.0 | 0.7262 | 0.5919 | 0.1693 | 0.7821 |
| 0.3547 | 21.0 | 5250 | 0.5950 | 0.4530 | 0.6783 | 0.7785 | nan | 0.8564 | 0.7852 | 0.1994 | 0.8723 | 0.0 | 0.7300 | 0.5917 | 0.1629 | 0.7806 |
| 0.3523 | 22.0 | 5500 | 0.5989 | 0.4579 | 0.6873 | 0.7764 | nan | 0.8335 | 0.7871 | 0.2544 | 0.8743 | 0.0 | 0.7253 | 0.5921 | 0.1916 | 0.7804 |
| 0.352 | 23.0 | 5750 | 0.6026 | 0.4550 | 0.6809 | 0.7778 | nan | 0.8652 | 0.7780 | 0.2217 | 0.8588 | 0.0 | 0.7298 | 0.5887 | 0.1793 | 0.7770 |
| 0.3494 | 24.0 | 6000 | 0.6130 | 0.4505 | 0.6752 | 0.7742 | nan | 0.8350 | 0.7936 | 0.2002 | 0.8719 | 0.0 | 0.7258 | 0.5889 | 0.1561 | 0.7818 |
| 0.3491 | 25.0 | 6250 | 0.6191 | 0.4430 | 0.6639 | 0.7720 | nan | 0.8826 | 0.7694 | 0.1611 | 0.8427 | 0.0 | 0.7284 | 0.5769 | 0.1352 | 0.7744 |
| 0.3479 | 26.0 | 6500 | 0.6055 | 0.4572 | 0.6865 | 0.7768 | nan | 0.8642 | 0.7634 | 0.2483 | 0.8700 | 0.0 | 0.7292 | 0.5850 | 0.1909 | 0.7811 |
| 0.3472 | 27.0 | 6750 | 0.6068 | 0.4494 | 0.6718 | 0.7763 | nan | 0.8439 | 0.8013 | 0.1798 | 0.8624 | 0.0 | 0.7273 | 0.5919 | 0.1477 | 0.7800 |
| 0.3461 | 28.0 | 7000 | 0.6209 | 0.4505 | 0.6751 | 0.7744 | nan | 0.8247 | 0.8039 | 0.2007 | 0.8712 | 0.0 | 0.7224 | 0.5921 | 0.1582 | 0.7799 |
| 0.3454 | 29.0 | 7250 | 0.6199 | 0.4506 | 0.6773 | 0.7749 | nan | 0.8235 | 0.7942 | 0.1987 | 0.8926 | 0.0 | 0.7224 | 0.5914 | 0.1555 | 0.7835 |
| 0.3448 | 30.0 | 7500 | 0.6236 | 0.4518 | 0.6798 | 0.7741 | nan | 0.8273 | 0.7849 | 0.2140 | 0.8930 | 0.0 | 0.7234 | 0.5886 | 0.1630 | 0.7843 |
| 0.3447 | 31.0 | 7750 | 0.6281 | 0.4453 | 0.6679 | 0.7725 | nan | 0.8289 | 0.7992 | 0.1659 | 0.8778 | 0.0 | 0.7236 | 0.5879 | 0.1321 | 0.7827 |
| 0.3455 | 32.0 | 8000 | 0.6253 | 0.4487 | 0.6731 | 0.7733 | nan | 0.8255 | 0.7984 | 0.1901 | 0.8785 | 0.0 | 0.7233 | 0.5895 | 0.1483 | 0.7823 |
| 0.3439 | 33.0 | 8250 | 0.6256 | 0.4507 | 0.6763 | 0.7743 | nan | 0.8308 | 0.7927 | 0.2021 | 0.8796 | 0.0 | 0.7240 | 0.5894 | 0.1573 | 0.7826 |
| 0.3426 | 34.0 | 8500 | 0.6245 | 0.4473 | 0.6699 | 0.7754 | nan | 0.8470 | 0.7937 | 0.1675 | 0.8713 | 0.0 | 0.7274 | 0.5891 | 0.1374 | 0.7827 |
| 0.3417 | 35.0 | 8750 | 0.6287 | 0.4452 | 0.6672 | 0.7746 | nan | 0.8690 | 0.7777 | 0.1593 | 0.8626 | 0.0 | 0.7290 | 0.5830 | 0.1331 | 0.7809 |
| 0.3415 | 36.0 | 9000 | 0.6263 | 0.4498 | 0.6740 | 0.7754 | nan | 0.8430 | 0.7909 | 0.1877 | 0.8744 | 0.0 | 0.7268 | 0.5892 | 0.1504 | 0.7825 |
| 0.3412 | 37.0 | 9250 | 0.6276 | 0.4482 | 0.6726 | 0.7743 | nan | 0.8368 | 0.7902 | 0.1800 | 0.8833 | 0.0 | 0.7256 | 0.5882 | 0.1428 | 0.7842 |
| 0.3404 | 38.0 | 9500 | 0.6269 | 0.4494 | 0.6736 | 0.7747 | nan | 0.8400 | 0.7910 | 0.1881 | 0.8752 | 0.0 | 0.7264 | 0.5884 | 0.1491 | 0.7830 |
| 0.3411 | 39.0 | 9750 | 0.6287 | 0.4493 | 0.6732 | 0.7749 | nan | 0.8430 | 0.7904 | 0.1858 | 0.8736 | 0.0 | 0.7269 | 0.5883 | 0.1482 | 0.7830 |
| 0.3396 | 40.0 | 10000 | 0.6290 | 0.4492 | 0.6737 | 0.7746 | nan | 0.8376 | 0.7910 | 0.1872 | 0.8788 | 0.0 | 0.7258 | 0.5885 | 0.1481 | 0.7836 |
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
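### Usage sketch
A minimal hedged inference example for this checkpoint (the input file name and post-processing choices are illustrative, not the author's documented pipeline):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "TommyClas/50d_seg_20250505_models"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("sample.png").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]
```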
|
nghuyquang/gpu | nghuyquang | 2025-05-05T04:12:05Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-05T04:12:05Z | ---
license: apache-2.0
---
|
KingEmpire/sn21_omega_0505_3 | KingEmpire | 2025-05-05T04:11:51Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-05T03:57:45Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
KingEmpire/sn21_omega_0505_2 | KingEmpire | 2025-05-05T04:11:43Z | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | 2025-05-05T03:57:42Z | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF | mradermacher | 2025-05-05T04:09:28Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"en",
"base_model:rvindra/Qwen2.5-1.5B-s1k-1.1",
"base_model:quantized:rvindra/Qwen2.5-1.5B-s1k-1.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-05-05T03:13:58Z | ---
base_model: rvindra/Qwen2.5-1.5B-s1k-1.1
language:
- en
library_name: transformers
model_name: Qwen2.5-1.5B-s1k-1.1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/rvindra/Qwen2.5-1.5B-s1k-1.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
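As a minimal hedged sketch, one way to load a quant from this repo is via the `llama-cpp-python` bindings (an assumption — any llama.cpp-compatible runtime works); the file-name glob below targets the i1-Q4_K_M quant listed as "fast, recommended" in the table that follows:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

# Download the matching quant straight from this repo and load it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF",
    filename="*i1-Q4_K_M.gguf",  # glob pattern resolving to one file
    n_ctx=4096,
)
out = llm("Q: Name the planet we live on. A:", max_tokens=16)
print(out["choices"][0]["text"])
```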
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-1.5B-s1k-1.1-i1-GGUF/resolve/main/Qwen2.5-1.5B-s1k-1.1.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Elnenevic2027/cynthia | Elnenevic2027 | 2025-05-05T04:02:32Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-05-05T03:24:46Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Faqih97/FaqihCraft | Faqih97 | 2025-05-05T04:00:07Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-05T04:00:05Z | ---
license: apache-2.0
---
|
Proxiii/my_awesome_model | Proxiii | 2025-05-05T03:53:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-05-05T03:11:03Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2356
- Accuracy: 0.9308
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2187 | 1.0 | 1563 | 0.1945 | 0.924 |
| 0.1424 | 2.0 | 3126 | 0.2356 | 0.9308 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.5.1+cu121
- Datasets 2.21.0
- Tokenizers 0.21.1
|
Jonjew/CityWorldsCyberpunkTemples | Jonjew | 2025-05-05T03:52:27Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | 2025-05-05T03:52:21Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "<lora:Cyberpunk_Temples_flux:1> ct_flux, photo, A stunning blend of traditional East Asian architecture and a vibrant cyberpunk cityscape. At the center stands an ornate, multi-tiered pagoda with curved rooftops, richly detailed in wood and illuminated by vivid neon lights in magenta, cyan, and electric blue. Surrounding the temple are sleek skyscrapers towering into the night sky, each adorned with glowing signage in various Asian scripts, adding layers of cultural fusion and futuristic flair. The streets below glisten with reflections from the colorful lights, suggesting recent rainfall and enhancing the atmosphereâ\x80\x99s depth and realism. Pink cherry blossom trees in bloom provide a delicate contrast to the technological environment, symbolizing beauty amidst progress. The composition balances nature and architecture, tradition and innovation, within a bustling metropolis that feels both distant and familiar. It evokes a world where ancient heritage harmoniously coexists with neon-lit advancement."
output:
url: images/citytemp.png
- text: >-
<lora:Cyberpunk_Temples_flux:1> ct_flux, photo, A mesmerizing fusion of
traditional Eastern architecture and futuristic cityscape. At the heart of
the scene stands a magnificent multi-tiered pagoda, glowing with golden
interior light and outlined by cyan-lit eaves. Surrounding the pagoda are
serene waters reflecting the vibrant scene, adorned with blooming pink lotus
flowers and cherry blossoms that add a delicate, organic contrast to the
technological backdrop. Towering skyscrapers with neon signs in pink and
blue pierce the misty night sky, suggesting a dense, futuristic metropolis.
Traditional red columns and curved rooftops frame the scene, preserving the
timeless elegance of classical design. The carefully manicured garden in the
foreground adds to the atmosphere of calm and reflection. This juxtaposition
of nature, heritage, and high-tech architecture creates a visually striking,
dreamlike harmony.
output:
url: images/citytemp2.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ct_flux
license: unknown
---
# City Worlds : Cyberpunk Temples by Z1os
<Gallery />
## Model description
FROM https://civitai.com/models/1410672/city-worlds-cyberpunk-temples?modelVersionId=1594698
Please support the creator by donating BUZZ and liking the model at the page above
Space Worlds is a group of loras about cyberpunk and space themes.
City Worlds is a group of loras about inner-city cyberpunk themes.
DESCRIPTION :
Well so what is this ?
It's a background lora with a few customisations. This is the FLUX version.
What will it do ?
Create cyberpunk or regular Asian temples, plus whatever else you want.
HOW TO USE :
Strength : 0.8 to 1 tested
Steps : 20 to 30 tested
Checkpoints : Any
Adetailer : good for distant faces
The main trigger word is :
ct_flux
The following are not trigger words but you can use them in your natural language description :
(those are trigger words for Illustrious and XL versions but the Flux version is trained differently with natural language captions)
cyberpunk asian temple
neons
entrance neon
multiple asian temples
cherry blossom
cyberpunk cityscape
(many) wires
closed
illuminated
parked car
person
glowing kanji
glowing kanji signs
(glowing) flowers
glowing entrance
paved ground
wet
pond
bridge
on a rock
stairs
vegetation
lanterns
night
evening
moon
hologram(s)
As always... see pictures for prompts and checkpoints used ;)
Have fun ! :)
## Trigger words
You should use `ct_flux` to trigger the image generation.
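A minimal hedged sketch for loading the LoRA with 🧨 diffusers (the single-file LoRA is assumed to load without an explicit `weight_name`; check the Files & versions tab if it does not):

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("Jonjew/CityWorldsCyberpunkTemples")
# The prompt uses the ct_flux trigger word described above.
image = pipeline(
    "ct_flux, photo, cyberpunk asian temple at night, neons, wet paved ground"
).images[0]
image.save("temple.png")
```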
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/CityWorldsCyberpunkTemples/tree/main) them in the Files & versions tab.
|
levindixon/purlpics-lora | levindixon | 2025-05-05T03:50:11Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-08-23T18:54:53Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: PURLPICS
---
# Purlpics Lora
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `PURLPICS` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('levindixon/purlpics-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
BlueLiu2004/Qwen3-8B-16bit | BlueLiu2004 | 2025-05-05T03:46:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T03:34:17Z | ---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BlueLiu2004
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
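A minimal hedged inference sketch with plain 🤗 Transformers (generation settings are illustrative, not the author's documented configuration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "BlueLiu2004/Qwen3-8B-16bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Give me a one-line fun fact."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```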
|
parksy1202/CAS4133 | parksy1202 | 2025-05-05T03:46:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-28T06:37:06Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jfrost10/legal-ft-b63edd5e-e95a-4f7a-af2f-b32a5ed916a7 | jfrost10 | 2025-05-05T03:46:03Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:156",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-05-05T03:44:57Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: What nickname have Anthropic fans given to the upgraded version
of Claude 3.5 Sonnet released on October 22?
sentences:
- "blogging\n 105\n\n\n ai\n 1260\n\n\n \
\ generative-ai\n 1087\n\n\n llms\n 1074\n\
\nNext: Tom Scott, and the formidable power of escalating streaks\nPrevious: Last\
\ weeknotes of 2023\n\n\n \n \n\n\nColophon\n©\n2002\n2003\n2004\n2005\n2006\n\
2007\n2008\n2009\n2010\n2011\n2012\n2013\n2014\n2015\n2016\n2017\n2018\n2019\n\
2020\n2021\n2022\n2023\n2024\n2025"
- 'Getting back to models that beat GPT-4: Anthropic’s Claude 3 series launched
in March, and Claude 3 Opus quickly became my new favourite daily-driver. They
upped the ante even more in June with the launch of Claude 3.5 Sonnet—a model
that is still my favourite six months later (though it got a significant upgrade
on October 22, confusingly keeping the same 3.5 version number. Anthropic fans
have since taken to calling it Claude 3.6).'
- 'The two main categories I see are people who think AI agents are obviously things
that go and act on your behalf—the travel agent model—and people who think in
terms of LLMs that have been given access to tools which they can run in a loop
as part of solving a problem. The term “autonomy” is often thrown into the mix
too, again without including a clear definition.
(I also collected 211 definitions on Twitter a few months ago—here they are in
Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)
Whatever the term may mean, agents still have that feeling of perpetually “coming
soon”.'
- source_sentence: According to the context, what is described as the biggest unsolved
problem?
sentences:
- 'These abilities are just a few weeks old at this point, and I don’t think their
impact has been fully felt yet. If you haven’t tried them out yet you really should.
Both Gemini and OpenAI offer API access to these features as well. OpenAI started
with a WebSocket API that was quite challenging to use, but in December they announced
a new WebRTC API which is much easier to get started with. Building a web app
that a user can talk to via voice is easy now!
Prompt driven app generation is a commodity already
This was possible with GPT-4 in 2023, but the value it provides became evident
in 2024.'
- 'Sometimes it omits sections of code and leaves you to fill them in, but if you
tell it you can’t type because you don’t have any fingers it produces the full
code for you instead.
There are so many more examples like this. Offer it cash tips for better answers.
Tell it your career depends on it. Give it positive reinforcement. It’s all so
dumb, but it works!
Gullibility is the biggest unsolved problem
I coined the term prompt injection in September last year.
15 months later, I regret to say that we’re still no closer to a robust, dependable
solution to this problem.
I’ve written a ton about this already.
Beyond that specific class of security vulnerabilities, I’ve started seeing this
as a wider problem of gullibility.'
- 'The biggest innovation here is that it opens up a new way to scale a model: instead
of improving model performance purely through additional compute at training time,
models can now take on harder problems by spending more compute on inference.
The sequel to o1, o3 (they skipped “o2” for European trademark reasons) was announced
on 20th December with an impressive result against the ARC-AGI benchmark, albeit
one that likely involved more than $1,000,000 of compute time expense!
o3 is expected to ship in January. I doubt many people have real-world problems
that would benefit from that level of compute expenditure—I certainly don’t!—but
it appears to be a genuine next step in LLM architecture for taking on much harder
problems.'
- source_sentence: How is the total cost of $1.68 to process 68,000 images calculated
based on input and output token usage?
sentences:
- 'My personal laptop is a 64GB M2 MacBook Pro from 2023. It’s a powerful machine,
but it’s also nearly two years old now—and crucially it’s the same laptop I’ve
been using ever since I first ran an LLM on my computer back in March 2023 (see
Large language models are having their Stable Diffusion moment).
That same laptop that could just about run a GPT-3-class model in March last year
has now run multiple GPT-4 class models! Some of my notes on that:'
- 'Each photo would need 260 input tokens and around 100 output tokens.
260 * 68,000 = 17,680,000 input tokens
17,680,000 * $0.0375/million = $0.66
100 * 68,000 = 6,800,000 output tokens
6,800,000 * $0.15/million = $1.02
That’s a total cost of $1.68 to process 68,000 images. That’s so absurdly cheap
I had to run the numbers three times to confirm I got it right.
How good are those descriptions? Here’s what I got from this command:
llm -m gemini-1.5-flash-8b-latest describe -a IMG_1825.jpeg'
- 'Things we learned about LLMs in 2024
Simon Willison’s Weblog
Subscribe
Things we learned about LLMs in 2024
31st December 2024
A lot has happened in the world of Large Language Models over the course of 2024.
Here’s a review of things we figured out about the field in the past twelve months,
plus my attempt at identifying key themes and pivotal moments.
This is a sequel to my review of 2023.
In this article:'
- source_sentence: What is the main reason given for the lack of AI agents running
in production despite many prototypes?
sentences:
- 'Large Language Models
They’re actually quite easy to build
You can run LLMs on your own devices
Hobbyists can build their own fine-tuned models
We don’t yet know how to build GPT-4
Vibes Based Development
LLMs are really smart, and also really, really dumb
Gullibility is the biggest unsolved problem
Code may be the best application
The ethics of this space remain diabolically complex
My blog in 2023'
- 'Intuitively, one would expect that systems this powerful would take millions
of lines of complex code. Instead, it turns out a few hundred lines of Python
is genuinely enough to train a basic version!
What matters most is the training data. You need a lot of data to make these
things work, and the quantity and quality of the training data appears to be the
most important factor in how good the resulting model is.
If you can gather the right data, and afford to pay for the GPUs to train it,
you can build an LLM.'
- 'A lot of people are excited about AI agents—an infuriatingly vague term that
seems to be converging on “AI systems that can go away and act on your behalf”.
We’ve been talking about them all year, but I’ve seen few if any examples of them
running in production, despite lots of exciting prototypes.
I think this is because of gullibility.
Can we solve this? Honestly, I’m beginning to suspect that you can’t fully solve
gullibility without achieving AGI. So it may be quite a while before those agent
dreams can really start to come true!
Code may be the best application
Over the course of the year, it’s become increasingly clear that writing code
is one of the things LLMs are most capable of.'
- source_sentence: How many organizations currently have models that rank higher than
the original GPT-4 from March 2023?
sentences:
- 'The GPT-4 barrier was comprehensively broken
In my December 2023 review I wrote about how We don’t yet know how to build GPT-4—OpenAI’s
best model was almost a year old at that point, yet no other AI lab had produced
anything better. What did OpenAI know that the rest of us didn’t?
I’m relieved that this has changed completely in the past twelve months. 18 organizations
now have models on the Chatbot Arena Leaderboard that rank higher than the original
GPT-4 from March 2023 (GPT-4-0314 on the board)—70 models in total.'
- 'Meta’s Llama 3.2 models deserve a special mention. They may not be GPT-4 class,
but at 1B and 3B sizes they punch massively above their weight. I run Llama 3.2
3B on my iPhone using the free MLC Chat iOS app and it’s a shockingly capable
model for its tiny (<2GB) size. Try firing it up and asking it for “a plot outline
of a Netflix Christmas movie where a data journalist falls in love with a local
ceramacist”. Here’s what I got, at a respectable 20 tokens per second:'
- 'The May 13th announcement of GPT-4o included a demo of a brand new voice mode,
where the true multi-modal GPT-4o (the o is for “omni”) model could accept audio
input and output incredibly realistic sounding speech without needing separate
TTS or STT models.
The demo also sounded conspicuously similar to Scarlett Johansson... and after
she complained the voice from the demo, Skye, never made it to a production product.
The delay in releasing the new voice mode after the initial demo caused quite
a lot of confusion. I wrote about that in ChatGPT in “4o” mode is not running
the new features yet.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.875
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.875
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.875
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9538662191964322
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9375
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9375
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jfrost10/legal-ft-b63edd5e-e95a-4f7a-af2f-b32a5ed916a7")
# Run inference
sentences = [
'How many organizations currently have models that rank higher than the original GPT-4 from March 2023?',
'The GPT-4 barrier was comprehensively broken\nIn my December 2023 review I wrote about how We don’t yet know how to build GPT-4—OpenAI’s best model was almost a year old at that point, yet no other AI lab had produced anything better. What did OpenAI know that the rest of us didn’t?\nI’m relieved that this has changed completely in the past twelve months. 18 organizations now have models on the Chatbot Arena Leaderboard that rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the board)—70 models in total.',
'The May 13th announcement of GPT-4o included a demo of a brand new voice mode, where the true multi-modal GPT-4o (the o is for “omni”) model could accept audio input and output incredibly realistic sounding speech without needing separate TTS or STT models.\nThe demo also sounded conspicuously similar to Scarlett Johansson... and after she complained the voice from the demo, Skye, never made it to a production product.\nThe delay in releasing the new voice mode after the initial demo caused quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is not running the new features yet.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.875 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.875 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.875 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9539** |
| cosine_mrr@10 | 0.9375 |
| cosine_map@100 | 0.9375 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 21.24 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.15 tokens</li><li>max: 214 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What are some companies mentioned that have developed multi-modal audio models?</code> | <code>Your browser does not support the audio element.<br><br>OpenAI aren’t the only group with a multi-modal audio model. Google’s Gemini also accepts audio input, and the Google Gemini apps can speak in a similar way to ChatGPT now. Amazon also pre-announced voice mode for Amazon Nova, but that’s meant to roll out in Q1 of 2025.<br>Google’s NotebookLM, released in September, took audio output to a new level by producing spookily realistic conversations between two “podcast hosts” about anything you fed into their tool. They later added custom instructions, so naturally I turned them into pelicans:<br><br><br>Your browser does not support the audio element.</code> |
| <code>How did Google’s NotebookLM enhance audio output in its September release?</code> | <code>Your browser does not support the audio element.<br><br>OpenAI aren’t the only group with a multi-modal audio model. Google’s Gemini also accepts audio input, and the Google Gemini apps can speak in a similar way to ChatGPT now. Amazon also pre-announced voice mode for Amazon Nova, but that’s meant to roll out in Q1 of 2025.<br>Google’s NotebookLM, released in September, took audio output to a new level by producing spookily realistic conversations between two “podcast hosts” about anything you fed into their tool. They later added custom instructions, so naturally I turned them into pelicans:<br><br><br>Your browser does not support the audio element.</code> |
| <code>What recent legal action did the New York Times take against OpenAI and Microsoft?</code> | <code>Just this week, the New York Times launched a landmark lawsuit against OpenAI and Microsoft over this issue. The 69 page PDF is genuinely worth reading—especially the first few pages, which lay out the issues in a way that’s surprisingly easy to follow. The rest of the document includes some of the clearest explanations of what LLMs are, how they work and how they are built that I’ve read anywhere.<br>The legal arguments here are complex. I’m not a lawyer, but I don’t think this one will be easily decided. Whichever way it goes, I expect this case to have a profound impact on how this technology develops in the future.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 16 | 0.9484 |
| 2.0 | 32 | 0.9330 |
| 3.0 | 48 | 0.9638 |
| 3.125 | 50 | 0.9638 |
| 4.0 | 64 | 0.9484 |
| 5.0 | 80 | 0.9484 |
| 6.0 | 96 | 0.9385 |
| 6.25 | 100 | 0.9385 |
| 7.0 | 112 | 0.9330 |
| 8.0 | 128 | 0.9330 |
| 9.0 | 144 | 0.9539 |
| 9.375 | 150 | 0.9539 |
| 10.0 | 160 | 0.9539 |
### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Columbidae/Qwen3-21B-Base-Zeroed | Columbidae | 2025-05-05T03:38:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Qwen/Qwen3-14B-Base",
"base_model:finetune:Qwen/Qwen3-14B-Base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-04T19:56:09Z | ---
base_model:
- Qwen/Qwen3-14B-Base
library_name: transformers
tags:
- mergekit
- merge
---
# upscaled-zero
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the Passthrough merge method.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen3-14B-Base](https://huggingface.co/Qwen/Qwen3-14B-Base)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [0,25]
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [25,26]
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [25,26]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [25,26]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [26,27]
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [26,27]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [26,27]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [27,28]
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [27,28]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [27,28]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [28,29]
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [28,29]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [28,29]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [29,30]
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [29,30]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [29,30]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [30,31]
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [30,31]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [30,31]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [31,32]
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [31,32]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [31,32]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [32,33]
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [32,33]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [32,33]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [33,34]
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [33,34]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [33,34]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [34,35]
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [34,35]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [34,35]
parameters:
scale:
- filter: o_proj
value: 0.0
- filter: down_proj
value: 0.0
- value: 1.0
- sources:
- model: Qwen/Qwen3-14B-Base
layer_range: [35,40]
merge_method: passthrough
```
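A configuration like this is usually materialized with the mergekit CLI. The exact invocation used for this model is not documented, so the file and directory names below are assumptions:

```bash
pip install mergekit
# config.yaml holds the YAML above; the output directory name is a placeholder
mergekit-yaml config.yaml ./Qwen3-21B-Base-Zeroed --cuda
```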
|
elliotthwangmsa/OpenChat-tw | elliotthwangmsa | 2025-05-05T03:37:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-02T10:13:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned from OpenChat 3.5, localized for Traditional Chinese.
loss: 0.2191
|
mradermacher/Smoothie-Qwen3-30B-A3B-GGUF | mradermacher | 2025-05-05T03:37:22Z | 3 | 2 | transformers | [
"transformers",
"gguf",
"dnotitia",
"nlp",
"llm",
"conversation",
"chat",
"reasoning",
"en",
"base_model:dnotitia/Smoothie-Qwen3-30B-A3B",
"base_model:quantized:dnotitia/Smoothie-Qwen3-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-02T20:00:06Z | ---
base_model: dnotitia/Smoothie-Qwen3-30B-A3B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- dnotitia
- nlp
- llm
- conversation
- chat
- reasoning
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/dnotitia/Smoothie-Qwen3-30B-A3B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Smoothie-Qwen3-30B-A3B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
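As a minimal sketch (the quant file is chosen arbitrarily from the table below), a recent llama.cpp build can pull and run one of these files directly:

```bash
llama-cli --hf-repo mradermacher/Smoothie-Qwen3-30B-A3B-GGUF \
  --hf-file Smoothie-Qwen3-30B-A3B.Q4_K_M.gguf \
  -p "Hello"
```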
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-30B-A3B-GGUF/resolve/main/Smoothie-Qwen3-30B-A3B.Q2_K.gguf) | Q2_K | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-30B-A3B-GGUF/resolve/main/Smoothie-Qwen3-30B-A3B.Q3_K_S.gguf) | Q3_K_S | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-30B-A3B-GGUF/resolve/main/Smoothie-Qwen3-30B-A3B.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-30B-A3B-GGUF/resolve/main/Smoothie-Qwen3-30B-A3B.Q3_K_L.gguf) | Q3_K_L | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-30B-A3B-GGUF/resolve/main/Smoothie-Qwen3-30B-A3B.IQ4_XS.gguf) | IQ4_XS | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-30B-A3B-GGUF/resolve/main/Smoothie-Qwen3-30B-A3B.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-30B-A3B-GGUF/resolve/main/Smoothie-Qwen3-30B-A3B.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-30B-A3B-GGUF/resolve/main/Smoothie-Qwen3-30B-A3B.Q5_K_S.gguf) | Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-30B-A3B-GGUF/resolve/main/Smoothie-Qwen3-30B-A3B.Q5_K_M.gguf) | Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-30B-A3B-GGUF/resolve/main/Smoothie-Qwen3-30B-A3B.Q6_K.gguf) | Q6_K | 25.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Smoothie-Qwen3-30B-A3B-GGUF/resolve/main/Smoothie-Qwen3-30B-A3B.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
fahimanabila/v01-aes-ollama3-8b-fine-tuned | fahimanabila | 2025-05-05T03:32:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-04T06:11:43Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nadejdatarabukina/f93548d2-e2e1-479b-85e2-2156ab5bc38a | nadejdatarabukina | 2025-05-05T03:31:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-Math-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-Math-1.5B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-05-05T03:21:22Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f93548d2-e2e1-479b-85e2-2156ab5bc38a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2.5-Math-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- aee86becd4e1c1bd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/aee86becd4e1c1bd_train_data.json
type:
field_input: prompt
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: nadejdatarabukina/f93548d2-e2e1-479b-85e2-2156ab5bc38a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3e-6
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 96
lora_dropout: 0.01
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 48
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/aee86becd4e1c1bd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: paged_adamw_32bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 729c5e2c-f04a-466a-9174-db6e3168dc1a
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 729c5e2c-f04a-466a-9174-db6e3168dc1a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
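A configuration of this shape is normally launched with the axolotl CLI (a sketch under the axolotl 0.4.1 version noted above; the config file name is an assumption):

```bash
accelerate launch -m axolotl.cli.train config.yaml
```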
# f93548d2-e2e1-479b-85e2-2156ab5bc38a
This model is a fine-tuned version of [unsloth/Qwen2.5-Math-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Math-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.PAGED_ADAMW with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.22 | 0.0127 | 150 | 1.0838 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
simonycl/cmv_hard_gemma3-12b-it_full_sft | simonycl | 2025-05-05T03:27:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-3-12b-it",
"base_model:finetune:google/gemma-3-12b-it",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2025-05-05T03:19:09Z | ---
library_name: transformers
license: other
base_model: google/gemma-3-12b-it
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) on the cmv-gemma-3-27b-it dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
FlameF0X/MathGPT2 | FlameF0X | 2025-05-05T03:26:34Z | 37 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"text-generation-inference",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-04-10T18:54:34Z | ---
license: mit
base_model:
- distilbert/distilgpt2
tags:
- text-generation-inference
library_name: transformers
new_version: FlameF0X/MathGPT2.5
---
# MathGPT-2 (distilgpt2 Fine-Tuned for Arithmetic)
This model is a **fine-tuned version of DistilGPT-2** on a custom dataset consisting exclusively of arithmetic problems and their answers. The goal of this model is to act as a **calculator** that can solve basic arithmetic problems.
## Benchmark
Link [here](https://huggingface.co/spaces/FlameF0X/Simple-Math-Benchmark).
## Model Description
The model was trained using a dataset of simple arithmetic expressions, including addition, subtraction, multiplication, and division. The training data was generated using Python and ensured to have **no duplicate expressions**.
### Key Features:
- **Solves basic arithmetic** (addition, subtraction, multiplication, division)
- Can **handle simple problems** like `12 + 5 =`
- Fine-tuned version of `distilgpt2` on a math-specific dataset
- Trained for **10 epochs** (further improvements can be made by training for more epochs)
## Model Details
- **Model architecture**: DistilGPT-2
- **Training duration**: 10 epochs (could be improved further)
- **Dataset**: Generated math expressions like `12 + 5 = 17`
- **Tokenization**: Standard GPT-2 tokenizer
- **Fine-tuned on**: Simple arithmetic operations
## Intended Use
This model is designed to:
- **Answer basic arithmetic problems** (addition, subtraction, multiplication, division).
- It can generate answers for simple problems like `12 * 6 = ?`.
### Example:
**Input**:
```
13 + 47 =
```
**Output**:
```
60
```
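A minimal inference sketch using the Hugging Face pipeline API (the generation settings here are assumptions, not the benchmark configuration):

```python
from transformers import pipeline

# Load this repository's fine-tuned checkpoint
calc = pipeline("text-generation", model="FlameF0X/MathGPT2")

# Prompt in the same "a <op> b =" format the model was trained on
result = calc("13 + 47 =", max_new_tokens=4, do_sample=False)
print(result[0]["generated_text"])
```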
## Benchmark Results
We evaluated the model using a set of 10000 randomly generated math expressions to assess its performance. Here are the results:
- **Accuracy**: 76.3%
- **Average Inference Time**: 0.1448 seconds per question
---
## Training Data
The training dataset was generated using Python, consisting of random arithmetic expressions (addition, subtraction, multiplication, division) between numbers from 1 to 100. The expressions were formatted as:
```
2 + 3 = 5
100 - 25 = 75
45 * 5 = 225
100 / 25 = 4
```
No duplicate expressions were used, ensuring the model learns unique patterns.
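An illustrative sketch of how such a dataset could be produced (the authors' exact script is not published, so the operand range and the exact-division rule are inferred from the examples above):

```python
import random

ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b, '/': lambda a, b: a // b}

expressions = set()  # a set guarantees no duplicate expressions
while len(expressions) < 10_000:
    a, b = random.randint(1, 100), random.randint(1, 100)
    op = random.choice(list(ops))
    if op == '/' and a % b != 0:
        continue  # keep divisions exact, as in "100 / 25 = 4"
    expressions.add(f"{a} {op} {b} = {ops[op](a, b)}")
```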
## Fine-Tuning
This model was fine-tuned from the `distilgpt2` base model for 10 epochs.
---
## Limitations
- **Basic Arithmetic Only**: The model can only handle basic arithmetic problems like addition, subtraction, multiplication, and division. It does not handle more complex operations like exponentiation, logarithms, or advanced algebra.
- **Limited Training Duration**: While trained for 10 epochs, more epochs or data diversity may improve the model's performance further.
- **No real-time validation**: The model's performance varies, and there are still inaccuracies in answers for some problems. |
elliotthwangmsa/OpenChat-tw_train_ouputs | elliotthwangmsa | 2025-05-05T03:25:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openchat/openchat_3.5",
"base_model:adapter:openchat/openchat_3.5",
"region:us"
] | null | 2025-05-05T02:20:32Z | ---
base_model: openchat/openchat_3.5
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
ma921/gpt2-large_r_dpo_oasst1_noise40_epoch3 | ma921 | 2025-05-05T03:24:16Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:ma921/gpt2-large-sft-golden-hh",
"base_model:finetune:ma921/gpt2-large-sft-golden-hh",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-05T03:23:16Z | ---
library_name: transformers
license: mit
base_model: ma921/gpt2-large-sft-golden-hh
tags:
- generated_from_trainer
model-index:
- name: gpt2-large_r_dpo_oasst1_noise40_epoch3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large_r_dpo_oasst1_noise40_epoch3
This model is a fine-tuned version of [ma921/gpt2-large-sft-golden-hh](https://huggingface.co/ma921/gpt2-large-sft-golden-hh) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
duskdagger/oL1v14SS_000044500 | duskdagger | 2025-05-05T03:23:27Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-05-05T03:23:17Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/a3yiqkdv.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# oL1v14SS_000044500
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/duskdagger/oL1v14SS_000044500/tree/main) them in the Files & versions tab.
|
BlueLiu2004/Qwen3-8B-lora_model | BlueLiu2004 | 2025-05-05T03:23:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T03:22:54Z | ---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BlueLiu2004
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
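A minimal loading sketch, assuming this repository holds a PEFT LoRA adapter (as the name suggests) whose config points back to the 4-bit Qwen3 base above:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Resolves the base model from the adapter's config, then attaches the LoRA weights
model = AutoPeftModelForCausalLM.from_pretrained("BlueLiu2004/Qwen3-8B-lora_model")
tokenizer = AutoTokenizer.from_pretrained("BlueLiu2004/Qwen3-8B-lora_model")
```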
|
MinaMila/llama_instbase_3b_LoRa_ACSEmployment_2_ep8_22 | MinaMila | 2025-05-05T03:20:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T03:20:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF | mradermacher | 2025-05-05T03:19:36Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"32 k context",
"reasoning",
"thinking",
"qwen3",
"16 experts",
"en",
"base_model:DavidAU/Qwen3-30B-A6B-16-Extreme",
"base_model:quantized:DavidAU/Qwen3-30B-A6B-16-Extreme",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-05T00:47:51Z | ---
base_model: DavidAU/Qwen3-30B-A6B-16-Extreme
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- 32 k context
- reasoning
- thinking
- qwen3
- 16 experts
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/Qwen3-30B-A6B-16-Extreme
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.Q2_K.gguf) | Q2_K | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.IQ3_XS.gguf) | IQ3_XS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.Q3_K_S.gguf) | Q3_K_S | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.IQ3_S.gguf) | IQ3_S | 13.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.IQ3_M.gguf) | IQ3_M | 13.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.Q3_K_L.gguf) | Q3_K_L | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.IQ4_XS.gguf) | IQ4_XS | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.Q5_K_S.gguf) | Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.Q5_K_M.gguf) | Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.Q6_K.gguf) | Q6_K | 25.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A6B-16-Extreme-GGUF/resolve/main/Qwen3-30B-A6B-16-Extreme.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ldostadi/Phi-4-mini-reasoning-Q5_K_M-GGUF | ldostadi | 2025-05-05T03:13:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"nlp",
"math",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/Phi-4-mini-reasoning",
"base_model:quantized:microsoft/Phi-4-mini-reasoning",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2025-05-05T03:13:07Z | ---
base_model: microsoft/Phi-4-mini-reasoning
language:
- en
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct-reasoning/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- math
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: How to solve 3*x^2+4*x+5=1?
---
# ldostadi/Phi-4-mini-reasoning-Q5_K_M-GGUF
This model was converted to GGUF format from [`microsoft/Phi-4-mini-reasoning`](https://huggingface.co/microsoft/Phi-4-mini-reasoning) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-4-mini-reasoning) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ldostadi/Phi-4-mini-reasoning-Q5_K_M-GGUF --hf-file phi-4-mini-reasoning-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ldostadi/Phi-4-mini-reasoning-Q5_K_M-GGUF --hf-file phi-4-mini-reasoning-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ldostadi/Phi-4-mini-reasoning-Q5_K_M-GGUF --hf-file phi-4-mini-reasoning-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ldostadi/Phi-4-mini-reasoning-Q5_K_M-GGUF --hf-file phi-4-mini-reasoning-q5_k_m.gguf -c 2048
```
|
ELhadratiOth/orpheus-3b-finetuned-voice1 | ELhadratiOth | 2025-05-05T03:12:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T03:12:14Z | ---
base_model: unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ELhadratiOth
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Yhant88/GM | Yhant88 | 2025-05-05T03:11:05Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-05T03:11:05Z | ---
license: apache-2.0
---
|
mradermacher/Cigno-8B-GGUF | mradermacher | 2025-05-05T03:08:45Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"sft",
"en",
"base_model:ClaudioItaly/Cigno-8B",
"base_model:quantized:ClaudioItaly/Cigno-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-05-04T20:05:37Z | ---
base_model: ClaudioItaly/Cigno-8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ClaudioItaly/Cigno-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Cigno-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Cigno-8B-GGUF/resolve/main/Cigno-8B.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Cigno-8B-GGUF/resolve/main/Cigno-8B.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Cigno-8B-GGUF/resolve/main/Cigno-8B.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Cigno-8B-GGUF/resolve/main/Cigno-8B.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Cigno-8B-GGUF/resolve/main/Cigno-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Cigno-8B-GGUF/resolve/main/Cigno-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cigno-8B-GGUF/resolve/main/Cigno-8B.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Cigno-8B-GGUF/resolve/main/Cigno-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Cigno-8B-GGUF/resolve/main/Cigno-8B.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Cigno-8B-GGUF/resolve/main/Cigno-8B.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Cigno-8B-GGUF/resolve/main/Cigno-8B.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Cigno-8B-GGUF/resolve/main/Cigno-8B.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
alvin-andrada/CRF-POS-Tagger-Informal-Filipino | alvin-andrada | 2025-05-05T03:06:37Z | 0 | 0 | null | [
"region:us"
] | null | 2025-02-02T11:55:15Z | # Conditional Random Fields Part-of-Speech Tagger for Informal Filipino Language in Digital News
This repository contains a Conditional Random Fields (CRF)-based Part-of-Speech (POS) tagging model trained on informal Filipino digital news articles. The model was developed as part of an undergraduate thesis at Ateneo de Manila University by Alvin Joshua Andrada, Clyde Lester Gerance, and Fritzie Dianne Del Pilar.
## Model Description
- Model Type: **Conditional Random Fields**
- Framework: **Scikit-learn CRFSuite**
- Tagset: **MGNN Tagset** (used by FSPOST)
## Files
- Source Code: Jupyter notebook containing the full implementation and usage guide.
- Model Files: Pretrained .pkl files of the CRF POS tagger.
## How to Use
A complete example is available in the notebook under the section **"For General Users"**.
```python
import joblib

# Load the pretrained CRF POS tagger
model = joblib.load('CRF_POS_tagger_model-strat v1.1.pkl')

# Example input: one sentence as a list of token feature dicts
# (only two features shown; supply the full feature set used at training time)
example = [{'word.lower()': 'magandang', 'is_upper': False}]

predicted_tags = model.predict([example])
print(predicted_tags[0])
```
|
nathanialhunt2000/1339e856-9b54-4172-8eea-28cc17bf259a | nathanialhunt2000 | 2025-05-05T02:58:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-05-05T02:58:33Z | ---
library_name: transformers
model_name: nathanialhunt2000/1339e856-9b54-4172-8eea-28cc17bf259a
tags:
- generated_from_trainer
licence: license
---
# Model Card for nathanialhunt2000/1339e856-9b54-4172-8eea-28cc17bf259a
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")  # "None" is a template placeholder; substitute the actual model id
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Sayan01/Phi3-Llama-ORCA-DKD-1 | Sayan01 | 2025-05-05T02:56:13Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-05-01T00:41:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
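In the absence of card-specific instructions, here is a minimal sketch based only on this repo's tags (llama, text-generation, conversational); the prompt and generation settings are assumptions.
```python
# Hedged example: load this repo with the standard transformers API.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Sayan01/Phi3-Llama-ORCA-DKD-1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Briefly explain knowledge distillation."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```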
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fhjhyjj/uiii7ouiyuiy | fhjhyjj | 2025-05-05T02:53:38Z | 0 | 0 | null | [
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-05-05T02:53:38Z | ---
license: bigcode-openrail-m
---
|
atharvamandarphatak/test | atharvamandarphatak | 2025-05-05T02:51:51Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-05-05T02:51:51Z | ---
license: apache-2.0
---
|
FGHJGHK/HJKNJKL | FGHJGHK | 2025-05-05T02:48:15Z | 0 | 0 | null | [
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2025-05-05T02:48:15Z | ---
license: bigscience-bloom-rail-1.0
---
|