modelId
stringlengths 5
138
| author
stringlengths 2
42
| last_modified
unknowndate 2020-02-15 11:33:14
2025-04-13 01:05:21
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 423
values | tags
sequencelengths 1
4.05k
| pipeline_tag
stringclasses 54
values | createdAt
unknowndate 2022-03-02 23:29:04
2025-04-13 01:03:53
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
AdamKasumovic/phi3-mini-4k-instruct-bactrian-x-xh-50-percent-med-high-nv-embed | AdamKasumovic | "2024-06-20T06:00:06Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-20T05:57:44Z" | ---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** AdamKasumovic
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AlignmentResearch/robust_llm_pythia-spam-1b-mz-ada-v2 | AlignmentResearch | "2024-03-12T18:41:09Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b-deduped",
"base_model:finetune:EleutherAI/pythia-1b-deduped",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-12T18:39:02Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-1b-deduped
model-index:
- name: robust_llm_pythia-spam-1b-mz-ada-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-spam-1b-mz-ada-v2
This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
kaitchup/Mayonnaise-4in1-02 | kaitchup | "2024-04-10T01:11:55Z" | 86 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"merge",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-27T13:06:34Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- merge
model-index:
- name: Mayonnaise-4in1-02
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.51
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.89
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.04
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaitchup/Mayonnaise-4in1-02
name: Open LLM Leaderboard
---
# Model Card for Model ID
This is a mixture of experts created with [mergekit](https://github.com/cg123/mergekit) and based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
## Model Details
The model was created using a recipe detailed in this article:
[The Mayonnaise: Rank First on the Open LLM Leaderboard with TIES-Merging
](https://kaitchup.substack.com/p/the-mayonnaise-rank-first-on-the)
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **Model type:** Causal
- **Language(s) (NLP):** English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Model Sources
Created with mergekit with this configuration:
```
models:
- model: mncai/mistral-7b-dpo-v5
# no parameters necessary for base model
- model: flemmingmiguel/MBX-7B
parameters:
density: 0.5
weight: 0.3
- model: BarryFutureman/NeuralTurdusVariant1-7B
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: mncai/mistral-7b-dpo-v5
parameters:
normalize: true
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kaitchup__Mayonnaise-4in1-02)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.21|
|AI2 Reasoning Challenge (25-Shot)|73.38|
|HellaSwag (10-Shot) |88.51|
|MMLU (5-Shot) |64.89|
|TruthfulQA (0-shot) |69.04|
|Winogrande (5-shot) |84.37|
|GSM8k (5-shot) |71.04|
|
coffiee/ld22 | coffiee | "2025-02-23T16:21:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-23T16:20:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gautamthulasiraman/M.O.N.D.A.Y | gautamthulasiraman | "2025-04-02T05:24:53Z" | 0 | 0 | null | [
"en",
"ta",
"hi",
"te",
"kn",
"mr",
"ml",
"dataset:meta-llama/Llama-3.3-70B-Instruct-evals",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"region:us"
] | null | "2025-04-01T15:22:53Z" | ---
license: llama3.3
datasets:
- meta-llama/Llama-3.3-70B-Instruct-evals
language:
- en
- ta
- hi
- te
- kn
- mr
- ml
base_model:
- meta-llama/Llama-3.3-70B-Instruct
- vasista22/whisper-tamil-large-v2
---
M.O.N.D.A.Y. - Managing Operations, Networking, and Data for Active Yield - Your AI colleague.
(Initially we are experimenting this with the Tamil language by including dialect wise data, in a conversational chatbot, hence a customer service AI agent who knows dialect wise Tamil, can work 24 x7)
## Model Overview:
M.O.N.D.A.Y. is an advanced multi-purpose software designed to improve operational efficiency, automate tasks, and enhance user interaction within organizations. M.O.N.D.A.Y. integrates a suite of powerful tools including a live conversational AI chatbot, automatic email sender, ticketing system, notification provider, dashboard creation tool, and employee performance analysis. M.O.N.D.A.Y. serves as a one-stop solution for day-to-day business processes, combining conversational AI capabilities with productivity tools.
## Key Features:
1. Live Conversational AI Chatbot.
2. Provides dynamic, real-time support for user queries.
3. Natural language processing (NLP) powered to handle a wide range of queries, similar to ChatGPT’s conversational abilities.
4. Can switch between formal and informal modes depending on user context and preferences.
## Automatic Email Sender:
1. Automatically sends personalized emails to users based on predefined triggers or responses.
2. Customizable templates for common email scenarios.
3. Integration with external systems for automated communication.
## Ticket Raiser:
1. Automatically creates and tracks support tickets when users encounter issues.
2. Seamlessly escalates tickets as required and notifies the relevant team members.
3. Can assign priorities based on the urgency of the query or problem.
## Notification Provider:
1. Provides real-time notifications whenever a query is resolved or a ticket is updated.
2. Customizable notification rules based on user roles or preferences.
## Dashboard Creation Tool:
1. Creates interactive and visual dashboards to monitor key metrics.
2. Includes integrations with organizational data sources to show real-time performance and analytics.
3. User-friendly drag-and-drop interface for non-technical users.
## Chatbot Functionality:
1. Serves as a general-purpose chatbot for casual conversations, FAQs, or to assist with basic tasks.
2. Capable of engaging in meaningful dialogue, providing information, and even entertaining users.
## Capabilities and Use Cases:
1. Customer Support: Efficiently handle customer queries, automate ticket creation, and ensure quick response times.
2. Internal Team Assistance: Provide real-time responses to employees' questions regarding HR policies, IT support, and more.
3. Productivity Boost: Automate emails, notifications, and ticket management to improve internal workflows.
4. Data Insights: Use performance analytics to guide team performance improvement, helping businesses make data-driven decisions.
5. Enterprise Integration: Seamlessly integrate into existing systems like CRM, HRM, and project management tools for broader functionality.
## Technological Foundations:
1. Natural Language Processing (NLP): For understanding user queries and providing context-aware responses.
2. AI Chatbot Algorithms: Built on advanced machine learning models for conversation and query management.
3. Data Analytics and Visualization: Real-time analytics and dashboards built with industry-standard libraries and tools.
4. Automated Workflow Management: Custom-built for ticketing, email sending, and notification management to handle real-time events.
5. Cloud Integration: Easily integrates with cloud-based tools and services for scalability and flexibility.
## Ethical Considerations:
1. Data Privacy: M.O.N.D.A.Y. adheres to strict data privacy protocols to ensure user data is not misused.
2. Bias Management: Ensures that the chatbot responses and performance analysis are free from bias, following ethical AI guidelines.
3. Transparency: Users are informed when they are interacting with the AI and provided clear information about automated processes like ticket raising or email sending.
## User Experience (UX) Design
1. Intuitive Interface: M.O.N.D.A.Y. is designed with a clean, intuitive interface to enable quick adoption by teams, regardless of technical proficiency.
2. Customization: Users can personalize dashboards, email templates, and chatbot settings according to their needs.
3. Multi-Platform Support: Available across devices (web, desktop, mobile), ensuring users can interact with M.O.N.D.A.Y. anytime, anywhere.
## Deployment and Integration:
1. API Integrations: Easily integrates with a variety of enterprise systems, including CRMs, HR tools, and project management platforms.
2. Customization Support: Developers can extend functionality or integrate additional features as needed.
## Conclusion:
M.O.N.D.A.Y. serves as a comprehensive solution for businesses looking to automate repetitive tasks, enhance employee productivity, and improve customer service. It integrates multiple powerful features, from conversational AI to employee performance analysis, all within a single platform. Whether you're looking to streamline workflows or gain deep insights into organizational performance, M.O.N.D.A.Y. offers a versatile and robust toolset.
## Future Enhancements
1. Machine Learning for Better Insights: Continuously learning from user data to improve response accuracy and recommendations.
2. Multilingual Support: Expanding the chatbot's capabilities to support multiple languages for a global audience. |
icelab/cosmicroberta | icelab | "2023-02-17T22:25:28Z" | 4 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-06-19T21:26:52Z" | ---
license: mit
widget:
- text: "The closest planet to earth is <mask>."
- text: "Electrical power is stored on a spacecraft with <mask>."
---
### CosmicRoBERTa
This model is a further pre-trained version of RoBERTa for space science on a domain-specific corpus, which includes abstracts from the NTRS library, abstracts from SCOPUS, ECSS requirements, and other sources from this domain.
This totals to a pre-training corpus of around 75 mio words.
The model performs slightly better on a subset (0.6 of total data set) of the CR task presented in our paper [SpaceTransformers: Language Modeling for Space Systems](https://ieeexplore.ieee.org/document/9548078).
| | RoBERTa | CosmiRoBERTa | SpaceRoBERTa |
|-----------------------------------------------|----------------|---------------------|---------------------|
| Parameter | 0.475 | 0.515 | 0.485 |
| GN&C | 0.488 | 0.609 | 0.602 |
| System engineering | 0.523 | 0.559 | 0.555 |
| Propulsion | 0.403 | 0.521 | 0.465 |
| Project Scope | 0.493 | 0.541 | 0.497 |
| OBDH | 0.717 | 0.789 | 0.794 |
| Thermal | 0.432 | 0.509 | 0.491 |
| Quality control | 0.686 | 0.704 | 0.678 |
| Telecom. | 0.360 | 0.614 | 0.557 |
| Measurement | 0.833 | 0.849 | 0.858 |
| Structure & Mechanism | 0.489 | 0.581 | 0.566 |
| Space Environment | 0.543 | 0.681 | 0.605 |
| Cleanliness | 0.616 | 0.621 | 0.651 |
| Project Organisation / Documentation | 0.355 | 0.427 | 0.429 |
| Power | 0.638 | 0.735 | 0.661 |
| Safety / Risk (Control) | 0.647 | 0.727 | 0.676 |
| Materials / EEEs | 0.585 | 0.642 | 0.639 |
| Nonconformity | 0.365 | 0.333 | 0.419 |
| weighted | 0.584 | 0.652(+7%) | 0.633(+5%) |
| Valid. Loss | 0.605 | 0.505 | 0.542 |
### BibTeX entry and citation info
```
@ARTICLE{
9548078,
author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa},
journal={IEEE Access},
title={SpaceTransformers: Language Modeling for Space Systems},
year={2021},
volume={9},
number={},
pages={133111-133122},
doi={10.1109/ACCESS.2021.3115659}
}
``` |
emilykang/medQuad_finetuned_lora | emilykang | "2024-05-17T01:56:47Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | "2024-05-16T21:08:53Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
datasets:
- generator
model-index:
- name: medQuad_finetuned_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medQuad_finetuned_lora
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF | mradermacher | "2025-04-10T11:54:00Z" | 383 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.x_70b_Hexagon_Purple_V1",
"base_model:quantized:Nexesenex/Llama_3.x_70b_Hexagon_Purple_V1",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-04-07T05:01:30Z" | ---
base_model: Nexesenex/Llama_3.x_70b_Hexagon_Purple_V1
language:
- en
library_name: transformers
license: llama3.3
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.x_70b_Hexagon_Purple_V1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.1 | |
| [PART 1](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama_3.x_70b_Hexagon_Purple_V1-i1-GGUF/resolve/main/Llama_3.x_70b_Hexagon_Purple_V1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
TarunKM/Test_Case_Model_3_Epochs | TarunKM | "2025-04-03T05:03:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-03T05:03:14Z" | ---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TarunKM
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
PrunaAI/swinv2_base_window8_256.ms_in1k-turbo-tiny-green-smashed | PrunaAI | "2024-08-02T15:39:43Z" | 3 | 0 | pruna-engine | [
"pruna-engine",
"region:us"
] | null | "2024-03-19T11:53:38Z" | ---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, jit, cuda graphs, triton.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install.
```bash
pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
```
2. Download the model files using one of these three options.
- Option 1 - Use command line interface (CLI):
```bash
mkdir swinv2_base_window8_256.ms_in1k-turbo-tiny-green-smashed
huggingface-cli download PrunaAI/swinv2_base_window8_256.ms_in1k-turbo-tiny-green-smashed --local-dir swinv2_base_window8_256.ms_in1k-turbo-tiny-green-smashed --local-dir-use-symlinks False
```
- Option 2 - Use Python:
```python
import subprocess
repo_name = "swinv2_base_window8_256.ms_in1k-turbo-tiny-green-smashed"
subprocess.run(["mkdir", repo_name])
subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
```
- Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
```python
from pruna_engine.PrunaModel import PrunaModel
model_path = "swinv2_base_window8_256.ms_in1k-turbo-tiny-green-smashed/model" # Specify the downloaded model path.
smashed_model = PrunaModel.load_model(model_path) # Load the model.
import torch; image = torch.rand(1, 3, 224, 224).to('cuda')
smashed_model(image)
```
## Configurations
The configuration info are in `model/smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model swinv2_base_window8_256.ms_in1k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
omersezer/TE_Instruct_L3 | omersezer | "2024-05-17T16:25:53Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | "2024-05-17T16:24:54Z" | ---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
borisf/gingerabi-bob | borisf | "2024-11-04T14:11:24Z" | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-11-04T14:11:15Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: gingerabi
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# gingerabi-bob
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `gingerabi` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
briannlongzhao/40 | briannlongzhao | "2024-01-29T21:55:27Z" | 9 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-01-27T18:23:42Z" |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of <new1> hare
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - briannlongzhao/40
These are Custom Diffusion adaption weights for stabilityai/stable-diffusion-2-1. The weights were trained on a photo of <new1> hare using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
mariusweiss/TaxBERT | mariusweiss | "2025-02-24T08:58:55Z" | 1,106 | 3 | null | [
"safetensors",
"roberta",
"en",
"license:mit",
"region:us"
] | null | "2025-02-19T13:28:49Z" | ---
license: mit
language:
- en
---
# TaxBERT
This repository accompanies the paper: Hechtner, F., Schmidt, L., Seebeck, A., & Weiß, M. (2025). How to design and employ specialized large language models for accounting and tax research: The example of TaxBERT.
TaxBERT is a domain-adapated RoBERTa model, specifically designed to analyze qualitative corporate tax disclosures.
In the future, we will add the following features:
- Tax Sentence Recognition
- Tax Risk Sentiment
**SSRN**: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5146523
The paper provides an ‘A-to-Z’ description of how to design and employ specialized Bidirectional Encoder Representation of Transformers (BERT) models that are environmentally sustainable and practically feasible for accounting and tax researchers.
**GitHub**: https://github.com/TaxBERT/TaxBERT
If the following Guide/Repository is used for academic or scientific purposes, please cite the paper. |
mHossain/bangla-para-v3-480000 | mHossain | "2023-05-08T11:29:50Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-05-08T11:03:09Z" | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bangla-para-v3-480000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla-para-v3-480000
This model is a fine-tuned version of [mHossain/bangla-para-v3-450000](https://huggingface.co/mHossain/bangla-para-v3-450000) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1055
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 11.8703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.2716 | 1.0 | 1688 | 1.1093 | 0.0 | 0.0 | 0.0 | 0.0 | 11.8683 |
| 1.2611 | 2.0 | 3376 | 1.1055 | 0.0 | 0.0 | 0.0 | 0.0 | 11.8703 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Tarek07/Progenitor-V3.3-LLaMa-70B | Tarek07 | "2025-03-30T07:39:43Z" | 82 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:merge:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:Sao10K/L3.1-70B-Hanami-x1",
"base_model:merge:Sao10K/L3.1-70B-Hanami-x1",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Anubis-70B-v1",
"base_model:merge:TheDrummer/Anubis-70B-v1",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:merge:meta-llama/Llama-3.3-70B-Instruct",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-09T08:08:56Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
cnababaie/tuti | cnababaie | "2025-03-12T21:28:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"fa",
"en",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:gemma",
"endpoints_compatible",
"region:us"
] | null | "2025-03-12T15:22:56Z" | ---
base_model:
- unsloth/gemma-2-9b-bnb-4bit
- google/gemma-2-9b
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: gemma
language:
- fa
- en
---
# Tuti 🦜
This is a [Gemma 2 9b](https://huggingface.co/google/gemma-2-9b), fined tuned using Unsloth's 4-bit quantization and LORA (QLORA), on Persian literature datasets I curated/created or found.
## Use cases and datasets
### Word IPA Detection
I have fined tuned this model with QLORA and only uploaded the LORA adapter, so it could be used like this:
```python
# pip install unsloth
from unsloth import FastLanguageModel
from transformers import TextStreamer
model_name = "cnababaie/tuti"
max_seq_length = 4096 # Adjust as needed
dtype = None
load_in_4bit = True
model, tokenizer = FastLanguageModel.from_pretrained(
model_name=model_name,
max_seq_length=max_seq_length,
dtype=dtype,
load_in_4bit=load_in_4bit,
)
FastLanguageModel.for_inference(model)
alpaca_prompt_template = """### Instruction:
{}
### Input:
{}
### Response:
{}"""
```
```python
inputs = tokenizer(
[
alpaca_prompt_template.format(
"IPA این کلمه چیست؟", # instruction
"جوینده",
"", # output - leave this blank for generation!
)
], return_tensors = "pt").to("cuda")
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64)
```
This will correctly output IPA as *"/d͡ʒuːjænde/ (*juyande*)"*.
#### IPA Sources
- [IPA-dict](https://github.com/open-dict-data/ipa-dict/tree/master): Monolingual wordlists with pronunciation information in IPA
- [Wiktionary](https://en.wiktionary.org): The Persian corpus don't contain IPA but the English one(which contains many words and phrases in other than English) are a lot of Persian words with their IPA
### Persian Text Romanization
```python
inputs = tokenizer(
[
alpaca_prompt_template.format(
"این متن چه تلفظی داره؟", # instruction
"خاک به خاطر بارش زیاد باران گل شد.",
"", # output - leave this blank for generation!
)
], return_tensors = "pt").to("cuda")
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64)
```
This will output exact pronunciation as *"Xāk be xāter-e bāreš-e ziyād-e bārān gel šod."*.
#### Romanization Sources
- [http://alefbaye2om.org/](http://alefbaye2om.org/): Contain PDFs with Persian Romanized text
### Persian Poem Translation
```python
inputs = tokenizer(
[
alpaca_prompt_template.format(
"ترجمه", # instruction
"برخیز بتا بیا ز بهر دل ما\r\nحل کن به جمال خویشتن مشکل ما\r\nیک کوزه شراب تا به هم نوش کن\r\nزآن پیش که کوزهها کنند از گل ما",
"", # output - leave this blank for generation!
)
], return_tensors = "pt").to("cuda")
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64)
```
This will output rhymed poetry with the original poem content:
*"Arise, O idol, for our heart's sake,
Solve our troubles with your beauty's make.
One pot of wine, let's drink it all,
Before they make pots from our clay's fall."*.
#### Poem Translation Sources
- Created list of random poems from Ganjoor and translation text pair |
lesso04/5e2be6fd-cb40-4a86-89d2-203a1f39a203 | lesso04 | "2025-02-14T00:29:22Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:adapter:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"region:us"
] | null | "2025-02-13T23:00:27Z" | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama_v1.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5e2be6fd-cb40-4a86-89d2-203a1f39a203
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 5e2be6fd-cb40-4a86-89d2-203a1f39a203
This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8086
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000204
- train_batch_size: 4
- eval_batch_size: 4
- seed: 40
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 7.4739 |
| 2.9181 | 0.0008 | 50 | 3.8034 |
| 2.689 | 0.0016 | 100 | 3.7696 |
| 2.8661 | 0.0025 | 150 | 3.5458 |
| 2.5208 | 0.0033 | 200 | 3.3830 |
| 2.7202 | 0.0041 | 250 | 3.0907 |
| 2.713 | 0.0049 | 300 | 2.9619 |
| 2.5748 | 0.0057 | 350 | 2.8776 |
| 2.5729 | 0.0065 | 400 | 2.8301 |
| 2.5435 | 0.0074 | 450 | 2.8109 |
| 2.5293 | 0.0082 | 500 | 2.8086 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
unsloth/Mixtral-8x7B-Instruct-v0.1-unsloth-bnb-4bit | unsloth | "2025-03-14T12:38:19Z" | 0 | 0 | null | [
"safetensors",
"mixtral",
"fr",
"it",
"de",
"es",
"en",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:quantized:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-14T11:19:40Z" | ---
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
language:
- fr
- it
- de
- es
- en
license: apache-2.0
inference:
parameters:
temperature: 0.5
widget:
- messages:
- role: user
content: What is your favorite condiment?
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# Model Card for Mixtral-8x7B
### Tokenization with `mistral-common`
```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
mistral_models_path = "MISTRAL_MODELS_PATH"
tokenizer = MistralTokenizer.v1()
completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
```
## Inference with `mistral_inference`
```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
model = Transformer.from_folder(mistral_models_path)
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])
print(result)
```
## Inference with hugging face `transformers`
```py
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
model.to("cuda")
generated_ids = model.generate(tokens, max_new_tokens=1000, do_sample=True)
# decode with mistral tokenizer
result = tokenizer.decode(generated_ids[0].tolist())
print(result)
```
> [!TIP]
> PRs to correct the transformers tokenizer so that it gives 1-to-1 the same results as the mistral-common reference implementation are very welcome!
---
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%http://2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%http://2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that model cannot (yet) be instantiated with HF.
## Instruction format
This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.
As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
return tok.encode(text, add_special_tokens=False)
[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```
In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
In the Transformers library, one can use [chat templates](https://huggingface.co/docs/transformers/main/en/chat_templating) which make sure the right format is applied.
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers will load the model in full precision. Therefore you might be interested to further reduce down the memory requirements to run the model through the optimizations we offer in HF ecosystem:
### In half-precision
Note `float16` precision only works on GPU devices
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision using (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto")
text = "Hello my name is"
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True, device_map="auto")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Limitations
The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
mradermacher/mpt-7b-8k-i1-GGUF | mradermacher | "2024-09-09T00:48:56Z" | 17 | 0 | transformers | [
"transformers",
"gguf",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"en",
"dataset:mc4",
"dataset:c4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack",
"dataset:allenai/s2orc",
"base_model:mosaicml/mpt-7b-8k",
"base_model:quantized:mosaicml/mpt-7b-8k",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-09-08T06:50:24Z" | ---
base_model: mosaicml/mpt-7b-8k
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/mosaicml/mpt-7b-8k
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/mpt-7b-8k-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-IQ1_S.gguf) | i1-IQ1_S | 1.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-IQ1_M.gguf) | i1-IQ1_M | 1.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-IQ2_S.gguf) | i1-IQ2_S | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-IQ2_M.gguf) | i1-IQ2_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q2_K.gguf) | i1-Q2_K | 2.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-IQ3_S.gguf) | i1-IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 3.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 3.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 3.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q4_0.gguf) | i1-Q4_0 | 3.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 3.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/mpt-7b-8k-i1-GGUF/resolve/main/mpt-7b-8k.i1-Q6_K.gguf) | i1-Q6_K | 5.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
great0001/8290ff4a-510a-4985-925b-4b17c77fea6c | great0001 | "2025-02-18T22:53:48Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:scb10x/llama-3-typhoon-v1.5-8b-instruct",
"base_model:adapter:scb10x/llama-3-typhoon-v1.5-8b-instruct",
"license:llama3",
"region:us"
] | null | "2025-02-18T20:06:10Z" | ---
library_name: peft
license: llama3
base_model: scb10x/llama-3-typhoon-v1.5-8b-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8290ff4a-510a-4985-925b-4b17c77fea6c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 8290ff4a-510a-4985-925b-4b17c77fea6c
This model is a fine-tuned version of [scb10x/llama-3-typhoon-v1.5-8b-instruct](https://huggingface.co/scb10x/llama-3-typhoon-v1.5-8b-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
GreenBitAI/Qwen-1.5-32B-layer-mix-bpw-3.0 | GreenBitAI | "2024-04-30T14:25:05Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-10T17:06:11Z" | ---
license: apache-2.0
---
# GreenBit LLMs
This is GreenBitAI's pretrained **low-bit** LLMs with extreme compression yet still strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
### Zero-shot Evaluation
We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the `llm_eval` library and list the results below:
| **Repository (Qwen Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
|:----------------------------------|:------------:|:------------:|:-----------:|:-------------:|:-------------:|:-----------:|:----------:|:-----------:|:-----------:|:-------------:|:-------------:|:-------------:|:---------:|
| `Qwen-1.5-0.5B-layer-mix-bpw-2.2` | 0.398 | 0.170 | 0.443 | 0.527 | 0.332 | 0.238 | 0.634 | 0.620 | 0.318 | 0.332 | 0.338 | 0.330 | 0.500 |
| `Qwen-1.5-0.5B-layer-mix-bpw-2.5` | 0.394 | 0.170 | 0.514 | 0.541 | 0.337 | 0.232 | 0.637 | 0.496 | 0.318 | 0.316 | 0.358 | 0.326 | 0.490 |
| `Qwen-1.5-0.5B-layer-mix-bpw-3.0` | 0.407 | 0.198 | 0.533 | 0.536 | 0.348 | 0.234 | 0.671 | 0.552 | 0.323 | 0.330 | 0.333 | 0.335 | 0.495 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.2` | 0.415 | 0.218 | 0.539 | 0.586 | 0.392 | 0.260 | 0.678 | 0.622 | 0.333 | 0.333 | 0.333 | 0.336 | 0.464 |
| `Qwen-1.5-1.8B-layer-mix-bpw-2.5` | 0.423 | 0.222 | 0.592 | 0.585 | 0.406 | 0.267 | 0.695 | 0.629 | 0.336 | 0.314 | 0.339 | 0.361 | 0.507 |
| `Qwen-1.5-1.8B-layer-mix-bpw-3.0` | 0.438 | 0.246 | 0.576 | 0.563 | 0.413 | 0.277 | 0.694 | 0.645 | 0.352 | 0.323 | 0.336 | 0.343 | 0.492 |
| `Qwen-1.5-4B-layer-mix-bpw-2.2` | 0.480 | 0.254 | 0.663 | 0.623 | 0.463 | 0.339 | 0.712 | 0.718 | 0.349 | 0.326 | 0.355 | 0.384 | 0.513 |
| `Qwen-1.5-4B-layer-mix-bpw-2.5` | 0.490 | 0.266 | 0.677 | 0.629 | 0.473 | 0.365 | 0.732 | 0.717 | 0.351 | 0.372 | 0.352 | 0.360 | 0.502 |
| `Qwen-1.5-4B-layer-mix-bpw-3.0` | 0.502 | 0.268 | 0.678 | 0.642 | 0.494 | 0.358 | 0.755 | 0.757 | 0.380 | 0.395 | 0.395 | 0.392 | 0.519 |
| `Qwen-1.5-7B-layer-mix-bpw-2.2` | 0.513 | 0.278 | 0.669 | 0.654 | 0.504 | 0.389 | 0.741 | 0.759 | 0.376 | 0.383 | 0.410 | 0.403 | 0.517 |
| `Qwen-1.5-7B-layer-mix-bpw-2.5` | 0.520 | 0.294 | 0.705 | 0.650 | 0.520 | 0.387 | 0.750 | 0.769 | 0.371 | 0.445 | 0.424 | 0.398 | 0.564 |
| `Qwen-1.5-7B-layer-mix-bpw-3.0` | 0.531 | 0.292 | 0.713 | 0.654 | 0.545 | 0.405 | 0.764 | 0.807 | 0.383 | 0.424 | 0.393 | 0.414 | 0.627 |
| `Qwen-1.5-14B-layer-mix-bpw-2.5` | 0.553 | 0.318 | 0.727 | 0.682 | 0.564 | 0.413 | 0.775 | 0.792 | 0.390 | 0.472 | 0.434 | 0.446 | 0.623 |
| `Qwen-1.5-32B-layer-mix-bpw-3.0` | 0.599 | 0.346 | 0.775 | 0.722 | 0.620 | 0.492 | 0.807 | 0.853 | 0.444 | 0.515 | 0.494 | 0.478 | 0.642 |
|
silviasapora/gemma-7b-silvia-basic-5e-5-05-vsh2p9 | silviasapora | "2025-03-04T16:57:20Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2403.07691",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-04T14:17:34Z" | ---
base_model: google/gemma-7b
datasets:
- argilla/dpo-mix-7k
library_name: transformers
model_name: google/gemma-7b
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for google/gemma-7b
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [['argilla/dpo-mix-7k']](https://huggingface.co/datasets/['argilla/dpo-mix-7k']) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-silvia-basic-5e-5-05-vsh2p9", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/d9onh6x9)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
0xC4LL3/REINFORCE_CartPole-V1 | 0xC4LL3 | "2023-10-06T12:02:33Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-10-06T12:02:24Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: REINFORCE_CartPole-V1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
KennethTM/gpt2-small-danish-review-response | KennethTM | "2023-07-05T11:50:02Z" | 129 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"da",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-20T08:14:14Z" | ---
language:
- da
pipeline_tag: text-generation
widget:
- text: "### Bruger:\nAnders\n\n### Anmeldelse:\nUmuligt at komme igennem på telefonen.\n\n### Svar:\nKære Anders\n"
---
# What is this?
A fine-tuned GPT-2 model (small version, 124 M parameters) for generating responses to customer reviews in Danish.
# How to use
The model is based on the [gpt2-small-danish model](https://huggingface.co/KennethTM/gpt2-small-danish). Supervised fine-tuning is applied to adapt the model to generate responses to customer reviews in Danish. A prompting template is applied to the examples used to train (see the example below).
Test the model using the pipeline from the [🤗 Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import pipeline
generator = pipeline("text-generation", model = "KennethTM/gpt2-small-danish-review-response")
def prompt_template(user, review):
return f"### Bruger:\n{user}\n\n### Anmeldelse:\n{review}\n\n### Svar:\nKære {user}\n"
prompt = prompt_template(user = "Anders", review = "Umuligt at komme igennem på telefonen.")
text = generator(prompt)
print(text[0]["generated_text"])
```
Or load it using the Auto* classes:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("KennethTM/gpt2-small-danish-review-response")
model = AutoModelForCausalLM.from_pretrained("KennethTM/gpt2-small-danish-review-response")
```
# Notes
The model may get the sentiment of the review wrong resulting in a mismatch between the review and response. The model would probably benefit from sentiment tuning. |
gnumanth/gemma-unsloth-alpaca | gnumanth | "2024-04-02T04:34:32Z" | 6 | 2 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/gemma-7b-bnb-4bit",
"base_model:finetune:unsloth/gemma-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-30T05:05:28Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-7b-bnb-4bit
---
# gemma-alpacha
> yahma/alpaca-cleaned finetuned with gemma-7b-bnb-4bit
# Usage
```sh
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
```
```py
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained("gnumanth/gemma-unsloth-alpaca")
```
```py
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""
```
```py
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
alpaca_prompt.format(
"Give me a python code for quicksort", # instruction
"1,-1,0,8,9,-2,2", # input
"", # output - leave this blank for generation!
)
], return_tensors = "pt").to("cuda")
from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
```
```sh
<bos>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Give me a python code for quicksort
### Input:
1,-1,0,8,9,-2,2
### Response:
def quicksort(arr):
if len(arr) <= 1:
return arr
pivot = arr[0]
left = [i for i in arr[1:] if i < pivot]
right = [i for i in arr[1:] if i >= pivot]
return quicksort(left) + [pivot] + quicksort(right)<eos>
```
[Hemanth HMM](https://h3amnth.com) | (Built with [unsloth](https://unsloth.ai))
|
tingting/orpheus_3b_4bit_MrDragonFox_Elise_e10_lora | tingting | "2025-03-28T21:41:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-28T21:41:26Z" | ---
base_model: unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tingting
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
PrunaAI/openchat-openchat-3.5-1210-HQQ-8bit-smashed | PrunaAI | "2025-02-28T03:30:01Z" | 7 | 0 | null | [
"mistral",
"pruna-ai",
"hqq",
"region:us"
] | null | "2025-02-24T18:47:00Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ORIGINAL_REPO_NAME
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo ORIGINAL_REPO_NAME installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/openchat-openchat-3.5-1210-HQQ-8bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/openchat-openchat-3.5-1210-HQQ-8bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
meditsolutions/MSH-v1-Bielik-v2.3-Instruct-MedIT-merge-GGUF | meditsolutions | "2024-10-29T15:20:25Z" | 8 | 1 | null | [
"gguf",
"medit-merge",
"text-generation",
"pl",
"en",
"base_model:speakleash/Bielik-11B-v2.3-Instruct",
"base_model:quantized:speakleash/Bielik-11B-v2.3-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-10-29T12:06:58Z" | ---
license: apache-2.0
base_model:
- speakleash/Bielik-11B-v2.3-Instruct
pipeline_tag: text-generation
tags:
- medit-merge
language:
- pl
- en
---
<div align="center">
<img src="https://i.ibb.co/YLfCzXR/imagine-image-c680e106-e404-45e5-98da-af700ffe41f4.png" alt="Llama-3.2-MedIT-SUN-2.5B" style="border-radius: 10px; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19); max-width: 100%; height: auto;">
</div>
# Marsh Harrier
The Marsh Harrier (MSH) is a language model developed by MedIT Solutions using an advanced checkpoint merging technique. It represents a novel fusion of the Speakleash Bielik 11B v2.3 Instruct and Speakleash Bielik 11B v2 models, employing our proprietary weight-merging methodology.
## Key Features:
- Built on a pioneering approach to neural network weight fusion
- Supports merging models of identical parameter counts while maintaining architecture flexibility
- Demonstrates superior performance compared to its base models
- Optimized for Polish language understanding and generation
## Performance:
The model shows significant improvements over its predecessors across multiple metrics in the Open PL LLM Leaderboard evaluation framework (0-shot), which is part of the SpeakLeash.org open-science initiative.
Technical Details:
- Base Models: [Speakleash Bielik 11B v2.3 Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.3-Instruct) and [Bielik 11B v2](https://huggingface.co/speakleash/Bielik-11B-v2)
- Architecture: Compatible with original Bielik architecture
- Parameter Count: 11 billion parameters
- Special Feature: Utilizes MedIT Solutions' proprietary checkpoint merging technology
This model represents a step forward in developing the Polish language, demonstrating how merging techniques can enhance model performance while maintaining architectural efficiency.
# Polish LLM Open Leaderboard
Core Leaderboards:
- MT-Bench-PL: slight decrease of 0.3 points (8.27 vs 8.56)
- Open PL LLM Leaderboard: improved performance by 0.09 points (65.80 vs 65.71)
Sentiment Analysis (PolEmo2):
- In-domain accuracy: Matches Bielik at 77.70%
- Out-of-domain accuracy: Improved performance at 79.76% (vs 79.35%)
Text Classification Tasks:
- 8tags classification: Significant improvement of ~3pp (76.14% vs 73.17%)
- Belebele benchmark: Matching performance at 88.56%
- CBD task: Substantial F1 score improvement by 10pp (23.91% vs 13.73%)
Language Understanding:
- DYK ("Did you know..."): Improved F1 score (69.77% vs 69.14%)
- Named Entity Recognition (KLEJ NER): Notable improvement of ~8pp (45.53% vs 37.61%)
- PolQA reranking: Slight decrease (81.99% vs 83.21%)
- PPC: Enhanced accuracy (78.00% vs 77.20%)
- PSC: Minor F1 score decrease (90.46% vs 93.63%)
Overall Performance:
MSH-v1 achieves a higher average score of 71.18% compared to Bielik v2.3's 69.33%, demonstrating the effectiveness of our checkpoint merging technique in improving model performance across diverse NLP tasks.
All evaluations were conducted using the Open PL LLM Leaderboard framework (0-shot) as part of the SpeakLeash.org open-science initiative.
Kudos to the **[SpeakLeash](https://speakleash.org)** project and **[ACK Cyfronet AGH](https://www.cyfronet.pl/)** for their extraordinary work. |
mryoshq/lunar8 | mryoshq | "2024-03-19T14:03:04Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-19T13:58:23Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -138.47 +/- 53.84
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 42
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 8
'num_steps': 2048
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 32
'update_epochs': 10
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'yoshq/lunar8'
'batch_size': 16384
'minibatch_size': 512}
```
|
gouthaml/raos-virtual-try-on-model | gouthaml | "2023-06-08T20:28:42Z" | 169 | 35 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-05-18T06:34:23Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### DeepVTO or Raos_virtual_try_on_model trained by Goutham Rao with stable-diffusion/DreamBooth/feature extraction (EfficientNetB3 CNN model)/OpenPose for estimating person keypoints/
### stable diffusion and vector embeddings concepts used to build a virtual try-on system that provides a realistic and visually appealing virtual try-on experience for users
## Hardware and software Requirements : GPU A100 , High RAM , pytorch ,stable-diffusion-v1-5 , python 3.0 , U-Net Architecture , Dreambooth , OpenPose , EfficienNetB3 pre-trained CNN model
The DeepVTO model is hosted on the Hugging Face Model Hub.
(https://huggingface.co/gouthaml/raos-virtual-try-on-model)
This model leverages a combination of advanced deep learning techniques and architectures, including stable-diffusion, DreamBooth, feature extraction using the EfficientNetB3 CNN model, and OpenPose for estimating person keypoints. These techniques are harmoniously integrated to provide a realistic and visually appealing virtual try-on experience for users.
The DeepVTO model is built on the principles of stable diffusion and vector embeddings, which are critical in creating a high-quality virtual try-on system. The model is trained using the DreamBooth model, which is a stable-diffusion model, and the feature extraction is performed using the EfficientNetB3 CNN model. OpenPose, a real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints, is used for estimating person keypoints.
The model requires specific hardware and software for optimal performance. The hardware requirements include a GPU A100 and high RAM. The software requirements include PyTorch, stable-diffusion-v1-5, Python 3.0, U-Net Architecture, Dreambooth, OpenPose, and the EfficientNetB3 pre-trained CNN model.
The DeepVTO model is a testament to the potential of deep learning in the fashion retail industry. It showcases how advanced machine learning techniques can be used to enhance the online shopping experience, making it more interactive and personalized. This model serves as a valuable resource for researchers and practitioners in the field, providing a practical example of a high-quality virtual try-on system.
The model also provides a foundation for future research and development in the field of virtual try-on systems. It highlights the potential of deep learning techniques in addressing the challenges associated with virtual try-on systems, such as the accuracy of virtual representations and the scalability of the system. By leveraging advanced deep learning techniques, the DeepVTO model paves the way for the development of more sophisticated and effective virtual try-on systems in the future.
Sample pictures of this concept:
















































































































































































































































































|
RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf | RichardErkhov | "2025-03-21T17:48:24Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | "2025-03-21T17:11:31Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.2-1b-finetuned-pt3_28-11 - GGUF
- Model creator: https://huggingface.co/beddi/
- Original model: https://huggingface.co/beddi/llama-3.2-1b-finetuned-pt3_28-11/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.2-1b-finetuned-pt3_28-11.Q2_K.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q2_K.gguf) | Q2_K | 0.54GB |
| [llama-3.2-1b-finetuned-pt3_28-11.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.IQ3_XS.gguf) | IQ3_XS | 0.58GB |
| [llama-3.2-1b-finetuned-pt3_28-11.IQ3_S.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.IQ3_S.gguf) | IQ3_S | 0.6GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q3_K_S.gguf) | Q3_K_S | 0.6GB |
| [llama-3.2-1b-finetuned-pt3_28-11.IQ3_M.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.IQ3_M.gguf) | IQ3_M | 0.61GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q3_K.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q3_K.gguf) | Q3_K | 0.64GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q3_K_M.gguf) | Q3_K_M | 0.64GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q3_K_L.gguf) | Q3_K_L | 0.68GB |
| [llama-3.2-1b-finetuned-pt3_28-11.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.IQ4_XS.gguf) | IQ4_XS | 0.7GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q4_0.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q4_0.gguf) | Q4_0 | 0.72GB |
| [llama-3.2-1b-finetuned-pt3_28-11.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.IQ4_NL.gguf) | IQ4_NL | 0.72GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q4_K_S.gguf) | Q4_K_S | 0.72GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q4_K.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q4_K.gguf) | Q4_K | 0.75GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q4_K_M.gguf) | Q4_K_M | 0.75GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q4_1.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q4_1.gguf) | Q4_1 | 0.77GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q5_0.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q5_0.gguf) | Q5_0 | 0.83GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q5_K_S.gguf) | Q5_K_S | 0.83GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q5_K.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q5_K.gguf) | Q5_K | 0.85GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q5_K_M.gguf) | Q5_K_M | 0.85GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q5_1.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q5_1.gguf) | Q5_1 | 0.89GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q6_K.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q6_K.gguf) | Q6_K | 0.95GB |
| [llama-3.2-1b-finetuned-pt3_28-11.Q8_0.gguf](https://huggingface.co/RichardErkhov/beddi_-_llama-3.2-1b-finetuned-pt3_28-11-gguf/blob/main/llama-3.2-1b-finetuned-pt3_28-11.Q8_0.gguf) | Q8_0 | 1.23GB |
Original model description:
---
base_model: meta-llama/Llama-3.2-1B
library_name: transformers
model_name: llama-3.2-1b-finetuned-pt3_28-11
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llama-3.2-1b-finetuned-pt3_28-11
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="beddi/llama-3.2-1b-finetuned-pt3_28-11", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/filippo-bedon-ca-foscari/huggingface/runs/kazxd3xm)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.2
- Pytorch: 2.5.1+cu121
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
iyashnayi/llama-3.2-1B-finetuned | iyashnayi | "2025-03-16T02:14:49Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | "2025-03-16T02:14:40Z" | ---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
model-index:
- name: llama-3.2-1B-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-3.2-1B-finetuned
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1537
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.0 |
ZurichNLP/mlit-llama-2-7b-mtml6 | ZurichNLP | "2023-12-22T10:01:58Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-12-22T10:01:33Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
withsecure/DistilBERT-PromptInjectionDetectorForCVs | withsecure | "2024-03-07T15:53:43Z" | 51 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"promptinjection",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-07T14:49:55Z" | ---
license: apache-2.0
library_name: transformers
language:
- en
tags:
- promptinjection
- distilbert
---
# Model Card for DistilBERT-PromptInjectionDetectorForCVs
## Model Overview
This model, leveraging the DistilBERT architecture, has been fine-tuned to demonstrate a strategy for mitigating prompt injection attacks. While it is specifically tailored for a synthetic application that handles CVs, the underlying research and methodology are intended to be applicable across various domains. This model serves as an example of how fine-tuning with domain-specific data can enhance the detection of prompt injection attempts in a targeted use case.
## Research Context
The development of this model was part of broader research into general strategies for mitigating prompt injection attacks in Large Language Models (LLMs). The detailed findings and methodology are discussed in our [research blog](http://placeholder), with the synthetic CV application available [here](http://placeholder) serving as a practical demonstration.
## Training Data
To fine-tune this model, we combined a domain-specific dataset (legitimate CVs) with examples of prompt injections, resulting in a custom dataset that provides a nuanced perspective on detecting prompt injection attacks. This approach leverages the strengths of both:
- **CV Dataset:** [Resume Dataset](https://huggingface.co/datasets/Lakshmi12/Resume_Dataset)
- **Prompt Injection Dataset:** [Prompt Injections](https://huggingface.co/datasets/deepset/prompt-injections)
The custom dataset includes legitimate CVs, pure prompt injection examples, and CVs embedded with prompt injection attempts, creating a rich training environment for the model.
## Intended Use
This model is a demonstration of how a domain-specific approach can be applied to mitigate prompt injection attacks within a particular context, in this case, a synthetic CV application. It is important to note that this model is not intended for direct production use but rather to serve as an example within a broader strategy for securing LLMs against such attacks.
## Limitations and Considerations
The challenge of prompt injection in LLMs is an ongoing research area, with no definitive solution currently available. While this model demonstrates a possible mitigation strategy within a specific domain, it is essential to recognize that it does not offer a comprehensive solution to the problem. Future prompt injection techniques may still succeed, underscoring the importance of continuous research and adaptation of mitigation strategies.
## Conclusion
Our research aims to contribute to the broader discussion on securing LLMs against prompt injection attacks. This model, while specific to a synthetic application, showcases a piece of the puzzle in addressing these challenges. We encourage further exploration and development of strategies to fortify models against evolving threats in this space.
|
trungphien/table_transfomer-finetuned-v1 | trungphien | "2024-07-21T10:01:11Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"table-transformer",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | "2024-07-21T10:00:54Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pxw23232/Llama-3-Instruct-abliteration-LoRA-8B-F16-GGUF | pxw23232 | "2025-01-23T01:40:57Z" | 27 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"peft",
"lora",
"llama-cpp",
"gguf-my-lora",
"base_model:grimjim/Llama-3-Instruct-abliteration-LoRA-8B",
"base_model:adapter:grimjim/Llama-3-Instruct-abliteration-LoRA-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2025-01-23T01:40:54Z" | ---
base_model: grimjim/Llama-3-Instruct-abliteration-LoRA-8B
library_name: transformers
tags:
- mergekit
- peft
- lora
- llama-cpp
- gguf-my-lora
license: llama3
---
# pxw23232/Llama-3-Instruct-abliteration-LoRA-8B-F16-GGUF
This LoRA adapter was converted to GGUF format from [`grimjim/Llama-3-Instruct-abliteration-LoRA-8B`](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora Llama-3-Instruct-abliteration-LoRA-8B-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora Llama-3-Instruct-abliteration-LoRA-8B-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
rng0x17/Reinforce-CartPole-v1 | rng0x17 | "2023-03-24T21:28:02Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-24T21:27:49Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
agaaaa24/nlp4web_test | agaaaa24 | "2023-02-04T18:08:45Z" | 10 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"en",
"dataset:natural_questions",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-01-31T14:06:29Z" | ---
license: mit
datasets:
- natural_questions
language:
- en
--- |
PrunaAI/Orkhan-llama-2-7b-absa-bnb-8bit-smashed | PrunaAI | "2024-08-02T16:07:28Z" | 0 | 0 | null | [
"safetensors",
"pruna-ai",
"base_model:Orkhan/llama-2-7b-absa",
"base_model:finetune:Orkhan/llama-2-7b-absa",
"region:us"
] | null | "2024-06-19T18:09:01Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: Orkhan/llama-2-7b-absa
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo Orkhan/llama-2-7b-absa installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate bitsandbytes>0.37.0
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/Orkhan-llama-2-7b-absa-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Orkhan/llama-2-7b-absa")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model Orkhan/llama-2-7b-absa before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
tobinho1234/deep-seek-lora-customer-support | tobinho1234 | "2025-03-27T19:18:25Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | "2025-03-27T19:18:16Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
mkurman/Llama-3.2-MedIT-3B-R1 | mkurman | "2025-02-14T10:25:44Z" | 0 | 0 | null | [
"safetensors",
"gguf",
"llama",
"dataset:open-thoughts/OpenThoughts-114k",
"dataset:FreedomIntelligence/medical-o1-verifiable-problem",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:quantized:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-14T07:39:00Z" | ---
license: llama3.2
datasets:
- open-thoughts/OpenThoughts-114k
- FreedomIntelligence/medical-o1-verifiable-problem
- open-r1/OpenR1-Math-220k
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---
# mkurman/Llama-3.2-MedIT-3B-R1
**Important Notice:**
This model is provided strictly for research purposes and is not intended for production use. It should not be considered a validated source of medical or professional advice. Use only in controlled experimental settings.
---
## Model Overview
mkurman/Llama-3.2-MedIT-3B-R1 is a fine-tuned variant of meta-llama/Llama-3.2-3B-Instruct, adapted specifically for exploring natural language understanding and reasoning. This model leverages a multi-stage training approach, combining Blurred Thoughts Supervised Fine-Tuning (BT-SFT) and Group Relative Policy Optimization (GRPO) with an LLM evaluator to enhance its performance on specialized tasks.
---
## Training Procedure
The model was developed through the following sequential steps:
1. **Initial Blurred Thoughts Supervised Fine-Tuning (BT-SFT):**
- **Base Model:** meta-llama/Llama-3.2-3B-Instruct
- **Parameters:** 2000 steps, batch size 2, accumulation iterations 16, learning rate 1e-6
- **Dataset:** open-thoughts/OpenThoughts-114k
- **Details:** For further information on BT-SFT, see the [detailed post](https://huggingface.co/posts/mkurman/496852395740108) and the [GitHub repository](https://github.com/mkurman/blurred-thoughts-SFT).
2. **Group Relative Policy Optimization (GRPO) Stage 1:**
- **Dataset:** FreedomIntelligence/medical-o1-verifiable-problem
- **Training:** 200 steps
- **LLM Evaluator** mkurman/Qwen2.5-14B-DeepSeek-R1-1M
- **Details:** For further information on GRPO with LLM evaluators, see the [GitHub repository](https://github.com/mkurman/grpo-llm-evaluator).
3. **Group Relative Policy Optimization (GRPO) Stage 2:**
- **Dataset:** open-r1/OpenR1-Math-220k
- **Training:** 200 steps
- **LLM Evaluator** deepseek/deepseek-r1-distill-qwen-14b (OpenRouterAI)
---
## Datasets Utilized
- **open-thoughts/OpenThoughts-114k:**
A dataset consisting of open-ended thoughts that supports diverse conversational contexts during the initial supervised fine-tuning.
- **FreedomIntelligence/medical-o1-verifiable-problem:**
A dataset curated for enhancing the model's capabilities in addressing verifiable medical problems.
- **open-r1/OpenR1-Math-220k:**
A dataset designed to improve the model's reasoning and problem-solving skills in mathematical contexts.
---
## Intended Use
- **Research and Experimental Applications:**
This model is optimized for academic research and exploratory projects. It is ideal for investigating advanced fine-tuning methods and evaluating performance on task-oriented conversational scenarios.
- **Controlled Environments:**
Users should deploy this model only within controlled experimental frameworks where rigorous evaluation and proper safety guardrails are in place.
---
## Limitations and Ethical Considerations
- **Not for Clinical or Production Use:**
The model’s outputs have not been validated for clinical accuracy or professional decision-making. It must not be used as a primary source for medical, legal, or safety-critical information.
- **Safety and Guardrails:**
All users must implement appropriate safety measures and validation protocols. The model may produce biased or inaccurate results and should be used with caution.
- **Experimental Nature:**
Given its research-oriented design, the model’s performance can vary widely based on input and context. It is essential to perform thorough testing and validation before drawing any conclusions from its outputs.
---
## License
This model is released under the Llama 3.2 license. Users must adhere to the terms specified in the license when utilizing this model.
---
## Final Notice
All outputs from **mkurman/Llama-3.2-MedIT-3B-R1** are intended solely for research purposes. This model is not a comprehensive knowledge source and should not be used as a substitute for professional advice or decision-making. Ensure that all necessary guardrails and safety protocols are in place when conducting any experiments with this model. |
LarryAIDraw/kaho_hinata_s1-lora-nochekaiser | LarryAIDraw | "2024-01-18T14:19:01Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-01-18T14:12:22Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/105227?modelVersionId=303295 |
hunyfuny/tinyllama-ft-function-calling-semi-full | hunyfuny | "2024-03-16T17:16:00Z" | 69 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-16T17:10:52Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
th1s1s1t/ppo-LunarLander-v2 | th1s1s1t | "2022-07-23T14:41:24Z" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-07-23T14:41:01Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 290.28 +/- 26.36
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
scottsuk0306/Classic-RM-300K-v0.6 | scottsuk0306 | "2024-10-14T05:48:46Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-13T20:10:54Z" | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: transformers
license: llama3.1
tags:
- generated_from_trainer
model-index:
- name: Classic-RM-300K-v0.6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Classic-RM-300K-v0.6
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0+cu121
- Datasets 2.19.2
- Tokenizers 0.20.0
|
Salma-Flores/wATCH.Salma-Flores.viral.video.original | Salma-Flores | "2025-02-23T21:14:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-23T21:13:33Z" | <a href="https://viraleakedvideostoday.blogspot.com/?v=Salma-Flores"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif"></a>
<a href="https://viraleakedvideostoday.blogspot.com/?v=Salma-Flores">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a> </br>
<a href="https://viraleakedvideostoday.blogspot.com/?v=Salma-Flores">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a> </br>
|
TOTORONG/phi4_vllm_Q4 | TOTORONG | "2025-03-31T21:50:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-03-31T21:47:13Z" | ---
base_model: unsloth/Phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TOTORONG
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF | mradermacher | "2025-03-25T06:20:02Z" | 433 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:ICEPVP8977/Uncensored_Small_Test_Time_Compute",
"base_model:ICEPVP8977/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2",
"base_model:quantized:ICEPVP8977/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-02T19:20:42Z" | ---
base_model: ICEPVP8977/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2
datasets:
- ICEPVP8977/Uncensored_Small_Test_Time_Compute
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ICEPVP8977/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF/resolve/main/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF/resolve/main/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF/resolve/main/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF/resolve/main/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF/resolve/main/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF/resolve/main/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF/resolve/main/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF/resolve/main/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF/resolve/main/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF/resolve/main/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF/resolve/main/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2-GGUF/resolve/main/Uncensored_DeepSeek_R1_Distill_Qwen_1.5B_safetensors_finetune_2.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
John6666/the-araminta-flux1a1-fp8-flux | John6666 | "2024-08-26T14:22:48Z" | 112 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"Flux",
"fp8",
"float8_e4m3fn",
"realistic",
"photorealistic",
"en",
"license:other",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] | text-to-image | "2024-08-25T11:13:15Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- Flux
- fp8
- float8_e4m3fn
- realistic
- photorealistic
---
Original model is [here](https://civitai.com/models/463163/the-araminta-experiment?modelVersionId=742904).
This model created by [aramintastudio](https://civitai.com/user/aramintastudio).
## Notice
This is an experimental conversion in Spaces using a homebrew script. serverless Inference API does not currently support torch float8_e4m3fn, so it does not work.
There are still many bugs around the FLUX.1 conversion part of Diffusers, and I have not been able to confirm if the conversion is working properly.
Please consider this as a test run only. |
adi1494/distilbert-base-uncased-finetuned-squad | adi1494 | "2022-06-10T12:39:00Z" | 62 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-06-10T06:38:11Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: adi1494/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# adi1494/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5671
- Validation Loss: 1.2217
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5532, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5671 | 1.2217 | 0 |
### Framework versions
- Transformers 4.19.3
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
mtzig/add_lr5e-4_batch128_train17_evaltrain | mtzig | "2025-04-09T21:28:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"nanogpt",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2025-04-09T19:51:26Z" | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: add_lr5e-4_batch128_train17_evaltrain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# add_lr5e-4_batch128_train17_evaltrain
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7110
- Accuracy: 0.3371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 512
- seed: 23452399
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| No log | 0 | 0 | 2.7171 | 0.0 |
| 2.3534 | 0.0032 | 50 | 2.3677 | 0.0 |
| 2.3382 | 0.0064 | 100 | 2.5148 | 0.0 |
| 2.311 | 0.0096 | 150 | 2.3971 | 0.0 |
| 2.3392 | 0.0128 | 200 | 2.5468 | 0.0 |
| 2.3034 | 0.016 | 250 | 2.3131 | 0.0 |
| 2.2137 | 0.0192 | 300 | 2.4709 | 0.0 |
| 2.1959 | 0.0224 | 350 | 2.3693 | 0.0 |
| 2.2031 | 0.0256 | 400 | 2.5114 | 0.0 |
| 2.2015 | 0.0288 | 450 | 2.5978 | 0.0 |
| 2.182 | 0.032 | 500 | 2.5588 | 0.0 |
| 2.2006 | 0.0352 | 550 | 2.4233 | 0.0 |
| 2.1606 | 0.0384 | 600 | 2.5245 | 0.0 |
| 2.1524 | 0.0416 | 650 | 2.4452 | 0.0 |
| 2.1044 | 0.0448 | 700 | 2.3655 | 0.0 |
| 2.0603 | 0.048 | 750 | 2.3375 | 0.0 |
| 2.183 | 0.0512 | 800 | 2.3313 | 0.0 |
| 2.0273 | 0.0544 | 850 | 2.4725 | 0.0 |
| 2.033 | 0.0576 | 900 | 2.4836 | 0.0 |
| 1.9766 | 0.0608 | 950 | 2.3963 | 0.0 |
| 1.8946 | 0.064 | 1000 | 2.3595 | 0.0 |
| 1.8097 | 0.0672 | 1050 | 2.1780 | 0.0 |
| 1.6859 | 0.0704 | 1100 | 2.6667 | 0.0002 |
| 1.8248 | 0.0736 | 1150 | 2.7052 | 0.0 |
| 1.5832 | 0.0768 | 1200 | 2.3183 | 0.0001 |
| 1.6588 | 0.08 | 1250 | 2.5069 | 0.0 |
| 1.4987 | 0.0832 | 1300 | 2.3781 | 0.0 |
| 1.4534 | 0.0864 | 1350 | 2.5671 | 0.0 |
| 1.6611 | 0.0896 | 1400 | 2.7870 | 0.0 |
| 1.6325 | 0.0928 | 1450 | 2.3185 | 0.0001 |
| 1.5752 | 0.096 | 1500 | 2.5077 | 0.0 |
| 1.3086 | 0.0992 | 1550 | 2.9031 | 0.0 |
| 1.621 | 0.1024 | 1600 | 3.0532 | 0.0 |
| 1.3097 | 0.1056 | 1650 | 3.2942 | 0.0 |
| 1.5028 | 0.1088 | 1700 | 2.3666 | 0.0 |
| 1.3305 | 0.112 | 1750 | 3.2486 | 0.0 |
| 1.4764 | 0.1152 | 1800 | 2.9466 | 0.0 |
| 1.4808 | 0.1184 | 1850 | 2.8717 | 0.0 |
| 1.3628 | 0.1216 | 1900 | 3.1647 | 0.0 |
| 1.3876 | 0.1248 | 1950 | 2.8130 | 0.0004 |
| 1.2997 | 0.128 | 2000 | 2.7729 | 0.0001 |
| 1.3024 | 0.1312 | 2050 | 2.7103 | 0.0003 |
| 1.3018 | 0.1344 | 2100 | 3.1862 | 0.0002 |
| 1.4855 | 0.1376 | 2150 | 2.7146 | 0.0002 |
| 1.4855 | 0.1408 | 2200 | 3.5807 | 0.0 |
| 1.4745 | 0.144 | 2250 | 2.3634 | 0.0002 |
| 1.3088 | 0.1472 | 2300 | 3.3057 | 0.0 |
| 1.2341 | 0.1504 | 2350 | 2.7851 | 0.0006 |
| 1.3885 | 0.1536 | 2400 | 2.7682 | 0.0001 |
| 1.366 | 0.1568 | 2450 | 3.0570 | 0.0 |
| 1.3261 | 0.16 | 2500 | 3.0537 | 0.0003 |
| 1.2401 | 0.1632 | 2550 | 3.4798 | 0.0002 |
| 1.2072 | 0.1664 | 2600 | 3.2757 | 0.0002 |
| 1.2796 | 0.1696 | 2650 | 2.8221 | 0.0002 |
| 1.2952 | 0.1728 | 2700 | 3.2149 | 0.0 |
| 1.2149 | 0.176 | 2750 | 3.5343 | 0.0 |
| 1.2852 | 0.1792 | 2800 | 3.9293 | 0.0 |
| 1.1779 | 0.1824 | 2850 | 3.8437 | 0.0002 |
| 1.2693 | 0.1856 | 2900 | 3.3776 | 0.0003 |
| 1.12 | 0.1888 | 2950 | 3.4571 | 0.0004 |
| 1.227 | 0.192 | 3000 | 3.1741 | 0.0007 |
| 1.1679 | 0.1952 | 3050 | 3.2316 | 0.0005 |
| 1.2782 | 0.1984 | 3100 | 3.0204 | 0.0002 |
| 1.1612 | 0.2016 | 3150 | 3.7882 | 0.0003 |
| 1.3976 | 0.2048 | 3200 | 2.7002 | 0.0004 |
| 1.2371 | 0.208 | 3250 | 3.1928 | 0.0009 |
| 1.2195 | 0.2112 | 3300 | 3.4656 | 0.0006 |
| 1.2678 | 0.2144 | 3350 | 2.7147 | 0.0008 |
| 1.1537 | 0.2176 | 3400 | 3.4452 | 0.0005 |
| 1.3463 | 0.2208 | 3450 | 3.5082 | 0.0003 |
| 1.1471 | 0.224 | 3500 | 3.4308 | 0.0007 |
| 1.2255 | 0.2272 | 3550 | 3.2201 | 0.0008 |
| 1.1674 | 0.2304 | 3600 | 3.5609 | 0.0001 |
| 1.2189 | 0.2336 | 3650 | 3.8242 | 0.0005 |
| 1.3446 | 0.2368 | 3700 | 4.4741 | 0.0 |
| 1.176 | 0.24 | 3750 | 3.4270 | 0.0005 |
| 1.1728 | 0.2432 | 3800 | 3.8401 | 0.0004 |
| 1.2267 | 0.2464 | 3850 | 3.6995 | 0.0004 |
| 1.1773 | 0.2496 | 3900 | 4.4688 | 0.0004 |
| 1.1547 | 0.2528 | 3950 | 3.9891 | 0.0003 |
| 1.2737 | 0.256 | 4000 | 4.3630 | 0.0 |
| 1.1536 | 0.2592 | 4050 | 4.0964 | 0.0005 |
| 1.2369 | 0.2624 | 4100 | 4.3463 | 0.0001 |
| 1.2146 | 0.2656 | 4150 | 3.9598 | 0.0 |
| 1.1365 | 0.2688 | 4200 | 3.1020 | 0.0004 |
| 1.2101 | 0.272 | 4250 | 3.7791 | 0.0004 |
| 1.2298 | 0.2752 | 4300 | 3.8624 | 0.0004 |
| 1.1941 | 0.2784 | 4350 | 3.9779 | 0.0008 |
| 1.2113 | 0.2816 | 4400 | 3.5294 | 0.001 |
| 1.1585 | 0.2848 | 4450 | 4.2826 | 0.0 |
| 1.2869 | 0.288 | 4500 | 3.8736 | 0.0 |
| 1.0868 | 0.2912 | 4550 | 4.3815 | 0.001 |
| 1.3051 | 0.2944 | 4600 | 4.4681 | 0.0026 |
| 1.1707 | 0.2976 | 4650 | 4.8402 | 0.0018 |
| 1.1597 | 0.3008 | 4700 | 5.0568 | 0.0013 |
| 1.3069 | 0.304 | 4750 | 4.1207 | 0.0001 |
| 1.1646 | 0.3072 | 4800 | 4.1790 | 0.0034 |
| 1.3297 | 0.3104 | 4850 | 4.0990 | 0.0024 |
| 1.0493 | 0.3136 | 4900 | 4.5623 | 0.0061 |
| 1.1153 | 0.3168 | 4950 | 4.6378 | 0.0004 |
| 0.8784 | 0.32 | 5000 | 4.1497 | 0.0086 |
| 0.896 | 0.3232 | 5050 | 4.6908 | 0.0051 |
| 0.822 | 0.3264 | 5100 | 4.1919 | 0.0108 |
| 0.6781 | 0.3296 | 5150 | 4.9788 | 0.0049 |
| 0.6793 | 0.3328 | 5200 | 5.3065 | 0.0017 |
| 0.2857 | 0.336 | 5250 | 4.4342 | 0.0095 |
| 0.5048 | 0.3392 | 5300 | 4.5093 | 0.0141 |
| 0.5522 | 0.3424 | 5350 | 4.9845 | 0.0066 |
| 0.1214 | 0.3456 | 5400 | 5.0081 | 0.0199 |
| 0.414 | 0.3488 | 5450 | 4.4191 | 0.0256 |
| 0.2786 | 0.352 | 5500 | 4.3840 | 0.0052 |
| 0.3123 | 0.3552 | 5550 | 5.4591 | 0.002 |
| 0.274 | 0.3584 | 5600 | 5.1043 | 0.0332 |
| 0.1928 | 0.3616 | 5650 | 3.9612 | 0.0748 |
| 0.2678 | 0.3648 | 5700 | 3.7207 | 0.0439 |
| 0.5844 | 0.368 | 5750 | 4.0531 | 0.0151 |
| 0.0855 | 0.3712 | 5800 | 4.5378 | 0.034 |
| 0.7164 | 0.3744 | 5850 | 3.1294 | 0.016 |
| 0.3368 | 0.3776 | 5900 | 4.4172 | 0.0133 |
| 0.2381 | 0.3808 | 5950 | 3.8615 | 0.0243 |
| 0.4482 | 0.384 | 6000 | 3.5239 | 0.0453 |
| 0.2669 | 0.3872 | 6050 | 4.2445 | 0.0398 |
| 0.0958 | 0.3904 | 6100 | 4.3887 | 0.0105 |
| 0.1462 | 0.3936 | 6150 | 3.7110 | 0.0554 |
| 0.0327 | 0.3968 | 6200 | 3.3010 | 0.0623 |
| 0.0222 | 0.4 | 6250 | 3.9386 | 0.0939 |
| 0.0559 | 0.4032 | 6300 | 3.9364 | 0.0755 |
| 0.1217 | 0.4064 | 6350 | 4.3939 | 0.0215 |
| 0.1358 | 0.4096 | 6400 | 3.1975 | 0.0703 |
| 0.0646 | 0.4128 | 6450 | 2.9846 | 0.0645 |
| 0.0236 | 0.416 | 6500 | 3.3172 | 0.0596 |
| 0.1173 | 0.4192 | 6550 | 2.8748 | 0.1073 |
| 0.0582 | 0.4224 | 6600 | 2.9948 | 0.1046 |
| 0.0573 | 0.4256 | 6650 | 3.3605 | 0.1123 |
| 0.0326 | 0.4288 | 6700 | 3.2537 | 0.1046 |
| 0.0731 | 0.432 | 6750 | 3.1120 | 0.1109 |
| 0.0399 | 0.4352 | 6800 | 3.4842 | 0.0556 |
| 0.0109 | 0.4384 | 6850 | 2.5963 | 0.1168 |
| 0.2186 | 0.4416 | 6900 | 2.8680 | 0.0756 |
| 0.0877 | 0.4448 | 6950 | 2.5039 | 0.1407 |
| 0.0176 | 0.448 | 7000 | 2.2619 | 0.1053 |
| 0.0349 | 0.4512 | 7050 | 3.0717 | 0.1474 |
| 0.0375 | 0.4544 | 7100 | 3.3271 | 0.0761 |
| 0.1062 | 0.4576 | 7150 | 2.8235 | 0.163 |
| 0.0576 | 0.4608 | 7200 | 2.6568 | 0.056 |
| 0.0061 | 0.464 | 7250 | 3.3304 | 0.1348 |
| 0.085 | 0.4672 | 7300 | 2.6189 | 0.2183 |
| 0.0104 | 0.4704 | 7350 | 3.0011 | 0.1506 |
| 0.0045 | 0.4736 | 7400 | 2.6107 | 0.1811 |
| 0.052 | 0.4768 | 7450 | 2.6883 | 0.2167 |
| 0.124 | 0.48 | 7500 | 2.4655 | 0.0957 |
| 0.0016 | 0.4832 | 7550 | 2.3609 | 0.194 |
| 0.0096 | 0.4864 | 7600 | 3.1718 | 0.1336 |
| 0.0028 | 0.4896 | 7650 | 2.8083 | 0.1593 |
| 0.009 | 0.4928 | 7700 | 2.3825 | 0.1434 |
| 0.0348 | 0.496 | 7750 | 2.7852 | 0.1706 |
| 0.0177 | 0.4992 | 7800 | 2.6883 | 0.1856 |
| 0.0234 | 0.5024 | 7850 | 2.6150 | 0.201 |
| 0.0089 | 0.5056 | 7900 | 2.2804 | 0.1624 |
| 0.0194 | 0.5088 | 7950 | 2.8231 | 0.1422 |
| 0.0085 | 0.512 | 8000 | 3.3137 | 0.1384 |
| 0.0095 | 0.5152 | 8050 | 2.5829 | 0.1891 |
| 0.0065 | 0.5184 | 8100 | 2.7052 | 0.2005 |
| 0.2807 | 0.5216 | 8150 | 2.8978 | 0.1373 |
| 0.0423 | 0.5248 | 8200 | 2.6956 | 0.1769 |
| 0.0041 | 0.528 | 8250 | 2.6078 | 0.2284 |
| 0.0193 | 0.5312 | 8300 | 2.9127 | 0.2202 |
| 0.0036 | 0.5344 | 8350 | 3.1227 | 0.1996 |
| 0.0182 | 0.5376 | 8400 | 2.8784 | 0.1508 |
| 0.0321 | 0.5408 | 8450 | 3.2844 | 0.1415 |
| 0.0112 | 0.544 | 8500 | 2.7068 | 0.2207 |
| 0.0053 | 0.5472 | 8550 | 2.4179 | 0.249 |
| 0.0019 | 0.5504 | 8600 | 2.6199 | 0.213 |
| 0.0137 | 0.5536 | 8650 | 2.2050 | 0.1843 |
| 0.0103 | 0.5568 | 8700 | 2.7119 | 0.1982 |
| 0.0029 | 0.56 | 8750 | 2.5797 | 0.2548 |
| 0.0055 | 0.5632 | 8800 | 2.8202 | 0.1532 |
| 0.0011 | 0.5664 | 8850 | 2.6418 | 0.1814 |
| 0.0105 | 0.5696 | 8900 | 2.1258 | 0.2824 |
| 0.0052 | 0.5728 | 8950 | 2.6633 | 0.1998 |
| 0.0017 | 0.576 | 9000 | 2.9766 | 0.2135 |
| 0.0009 | 0.5792 | 9050 | 2.6429 | 0.2699 |
| 0.0141 | 0.5824 | 9100 | 2.2798 | 0.2971 |
| 0.0047 | 0.5856 | 9150 | 2.6866 | 0.2594 |
| 0.0001 | 0.5888 | 9200 | 2.7058 | 0.2383 |
| 0.0009 | 0.592 | 9250 | 2.2642 | 0.3119 |
| 0.0009 | 0.5952 | 9300 | 2.5964 | 0.2719 |
| 0.001 | 0.5984 | 9350 | 2.6379 | 0.2989 |
| 0.0001 | 0.6016 | 9400 | 2.1677 | 0.3591 |
| 0.0 | 0.6048 | 9450 | 2.3099 | 0.3358 |
| 0.0002 | 0.608 | 9500 | 2.3010 | 0.3246 |
| 0.0001 | 0.6112 | 9550 | 3.0135 | 0.2213 |
| 0.0008 | 0.6144 | 9600 | 2.8845 | 0.2961 |
| 0.0023 | 0.6176 | 9650 | 2.5845 | 0.2819 |
| 0.0014 | 0.6208 | 9700 | 2.4775 | 0.2596 |
| 0.0013 | 0.624 | 9750 | 2.3568 | 0.2443 |
| 0.0028 | 0.6272 | 9800 | 3.1175 | 0.2227 |
| 0.0002 | 0.6304 | 9850 | 3.7130 | 0.1188 |
| 0.0037 | 0.6336 | 9900 | 2.6004 | 0.3015 |
| 0.0054 | 0.6368 | 9950 | 2.2830 | 0.285 |
| 0.0004 | 0.64 | 10000 | 2.7812 | 0.2507 |
| 0.0001 | 0.6432 | 10050 | 2.8292 | 0.2596 |
| 0.004 | 0.6464 | 10100 | 2.5658 | 0.2914 |
| 0.0004 | 0.6496 | 10150 | 3.1035 | 0.1841 |
| 0.0007 | 0.6528 | 10200 | 2.1120 | 0.3746 |
| 0.0 | 0.656 | 10250 | 2.1145 | 0.3639 |
| 0.0 | 0.6592 | 10300 | 2.3072 | 0.3395 |
| 0.0 | 0.6624 | 10350 | 2.3622 | 0.338 |
| 0.0 | 0.6656 | 10400 | 2.3874 | 0.3396 |
| 0.0 | 0.6688 | 10450 | 2.4089 | 0.3384 |
| 0.0 | 0.672 | 10500 | 2.4243 | 0.3382 |
| 0.0 | 0.6752 | 10550 | 2.4334 | 0.3391 |
| 0.0 | 0.6784 | 10600 | 2.4398 | 0.34 |
| 0.0 | 0.6816 | 10650 | 2.4449 | 0.3419 |
| 0.0 | 0.6848 | 10700 | 2.4922 | 0.3307 |
| 0.0 | 0.688 | 10750 | 2.4956 | 0.3321 |
| 0.0 | 0.6912 | 10800 | 2.4984 | 0.3334 |
| 0.0 | 0.6944 | 10850 | 2.4563 | 0.3403 |
| 0.0 | 0.6976 | 10900 | 2.4635 | 0.3406 |
| 0.0 | 0.7008 | 10950 | 2.4701 | 0.3411 |
| 0.0 | 0.704 | 11000 | 2.4758 | 0.3416 |
| 0.0 | 0.7072 | 11050 | 2.4812 | 0.3415 |
| 0.0 | 0.7104 | 11100 | 2.4867 | 0.342 |
| 0.0 | 0.7136 | 11150 | 2.4943 | 0.3418 |
| 0.0 | 0.7168 | 11200 | 2.4993 | 0.342 |
| 0.0 | 0.72 | 11250 | 2.5051 | 0.3418 |
| 0.0 | 0.7232 | 11300 | 2.5138 | 0.3412 |
| 0.0 | 0.7264 | 11350 | 2.5205 | 0.3407 |
| 0.0 | 0.7296 | 11400 | 2.5264 | 0.3405 |
| 0.0 | 0.7328 | 11450 | 2.5294 | 0.3408 |
| 0.0 | 0.736 | 11500 | 2.5340 | 0.3414 |
| 0.0 | 0.7392 | 11550 | 2.5386 | 0.3412 |
| 0.0 | 0.7424 | 11600 | 2.5438 | 0.3408 |
| 0.0 | 0.7456 | 11650 | 2.5572 | 0.3395 |
| 0.0 | 0.7488 | 11700 | 2.5618 | 0.3395 |
| 0.0 | 0.752 | 11750 | 2.5651 | 0.3395 |
| 0.0 | 0.7552 | 11800 | 2.5781 | 0.3389 |
| 0.0 | 0.7584 | 11850 | 2.5829 | 0.3393 |
| 0.0 | 0.7616 | 11900 | 2.5863 | 0.3391 |
| 0.0 | 0.7648 | 11950 | 2.5888 | 0.339 |
| 0.0 | 0.768 | 12000 | 2.5926 | 0.3392 |
| 0.0 | 0.7712 | 12050 | 2.5956 | 0.3392 |
| 0.0 | 0.7744 | 12100 | 2.5989 | 0.3392 |
| 0.0 | 0.7776 | 12150 | 2.5874 | 0.3418 |
| 0.0 | 0.7808 | 12200 | 2.5917 | 0.3423 |
| 0.0 | 0.784 | 12250 | 2.5990 | 0.3419 |
| 0.0 | 0.7872 | 12300 | 2.6083 | 0.3414 |
| 0.0 | 0.7904 | 12350 | 2.6119 | 0.3414 |
| 0.0 | 0.7936 | 12400 | 2.6158 | 0.341 |
| 0.0 | 0.7968 | 12450 | 2.6200 | 0.3406 |
| 0.0 | 0.8 | 12500 | 2.6229 | 0.3405 |
| 0.0 | 0.8032 | 12550 | 2.6263 | 0.3404 |
| 0.0 | 0.8064 | 12600 | 2.6291 | 0.3404 |
| 0.0 | 0.8096 | 12650 | 2.6321 | 0.3404 |
| 0.0 | 0.8128 | 12700 | 2.6357 | 0.3404 |
| 0.0 | 0.816 | 12750 | 2.6385 | 0.3402 |
| 0.0 | 0.8192 | 12800 | 2.6414 | 0.3402 |
| 0.0 | 0.8224 | 12850 | 2.6432 | 0.34 |
| 0.0 | 0.8256 | 12900 | 2.6456 | 0.3399 |
| 0.0 | 0.8288 | 12950 | 2.6501 | 0.3395 |
| 0.0 | 0.832 | 13000 | 2.6534 | 0.3393 |
| 0.0 | 0.8352 | 13050 | 2.6565 | 0.3386 |
| 0.0 | 0.8384 | 13100 | 2.6587 | 0.3386 |
| 0.0 | 0.8416 | 13150 | 2.6603 | 0.3387 |
| 0.0 | 0.8448 | 13200 | 2.6623 | 0.3388 |
| 0.0 | 0.848 | 13250 | 2.6647 | 0.3385 |
| 0.0 | 0.8512 | 13300 | 2.6670 | 0.3384 |
| 0.0 | 0.8544 | 13350 | 2.6690 | 0.3384 |
| 0.0 | 0.8576 | 13400 | 2.6709 | 0.3382 |
| 0.0 | 0.8608 | 13450 | 2.6727 | 0.3382 |
| 0.0 | 0.864 | 13500 | 2.6755 | 0.338 |
| 0.0 | 0.8672 | 13550 | 2.6775 | 0.338 |
| 0.0 | 0.8704 | 13600 | 2.6790 | 0.3379 |
| 0.0 | 0.8736 | 13650 | 2.6808 | 0.338 |
| 0.0 | 0.8768 | 13700 | 2.6824 | 0.3378 |
| 0.0 | 0.88 | 13750 | 2.6857 | 0.3374 |
| 0.0 | 0.8832 | 13800 | 2.6883 | 0.3372 |
| 0.0 | 0.8864 | 13850 | 2.6901 | 0.3372 |
| 0.0 | 0.8896 | 13900 | 2.6918 | 0.3371 |
| 0.0 | 0.8928 | 13950 | 2.6928 | 0.3372 |
| 0.0 | 0.896 | 14000 | 2.6944 | 0.337 |
| 0.0 | 0.8992 | 14050 | 2.6957 | 0.337 |
| 0.0 | 0.9024 | 14100 | 2.6968 | 0.337 |
| 0.0 | 0.9056 | 14150 | 2.6978 | 0.337 |
| 0.0 | 0.9088 | 14200 | 2.6990 | 0.3371 |
| 0.0 | 0.912 | 14250 | 2.7000 | 0.3371 |
| 0.0 | 0.9152 | 14300 | 2.7011 | 0.337 |
| 0.0 | 0.9184 | 14350 | 2.7019 | 0.337 |
| 0.0 | 0.9216 | 14400 | 2.7029 | 0.337 |
| 0.0 | 0.9248 | 14450 | 2.7037 | 0.337 |
| 0.0 | 0.928 | 14500 | 2.7046 | 0.3369 |
| 0.0 | 0.9312 | 14550 | 2.7055 | 0.3369 |
| 0.0 | 0.9344 | 14600 | 2.7061 | 0.3369 |
| 0.0 | 0.9376 | 14650 | 2.7068 | 0.3369 |
| 0.0 | 0.9408 | 14700 | 2.7074 | 0.3369 |
| 0.0 | 0.944 | 14750 | 2.7079 | 0.337 |
| 0.0 | 0.9472 | 14800 | 2.7084 | 0.337 |
| 0.0 | 0.9504 | 14850 | 2.7088 | 0.337 |
| 0.0 | 0.9536 | 14900 | 2.7092 | 0.337 |
| 0.0 | 0.9568 | 14950 | 2.7095 | 0.3371 |
| 0.0 | 0.96 | 15000 | 2.7099 | 0.3371 |
| 0.0 | 0.9632 | 15050 | 2.7101 | 0.3371 |
| 0.0 | 0.9664 | 15100 | 2.7103 | 0.3371 |
| 0.0 | 0.9696 | 15150 | 2.7105 | 0.3371 |
| 0.0 | 0.9728 | 15200 | 2.7107 | 0.3371 |
| 0.0 | 0.976 | 15250 | 2.7108 | 0.3371 |
| 0.0 | 0.9792 | 15300 | 2.7109 | 0.3371 |
| 0.0 | 0.9824 | 15350 | 2.7109 | 0.3371 |
| 0.0 | 0.9856 | 15400 | 2.7110 | 0.3371 |
| 0.0 | 0.9888 | 15450 | 2.7110 | 0.3371 |
| 0.0 | 0.992 | 15500 | 2.7110 | 0.3371 |
| 0.0 | 0.9952 | 15550 | 2.7110 | 0.3371 |
| 0.0 | 0.9984 | 15600 | 2.7110 | 0.3371 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.1
- Datasets 3.1.0
- Tokenizers 0.20.1
|
thebrownfrog/q-FrozenLake-v1-8x8-noSlippery | thebrownfrog | "2023-12-29T09:19:35Z" | 0 | 0 | null | [
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-29T09:19:30Z" | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
model = load_from_hub(repo_id="thebrownfrog/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
|
KoboldAI/GPT-J-6B-Shinen | KoboldAI | "2022-03-20T18:48:45Z" | 1,746 | 24 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"en",
"arxiv:2101.00027",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:04Z" | ---
language: en
license: mit
---
# GPT-J 6B - Shinen
## Model Description
GPT-J 6B-Shinen is a finetune created using EleutherAI's GPT-J 6B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The training data contains user-generated stories from sexstories.com. All stories are tagged using the following way:
```
[Theme: <theme1>, <theme2> ,<theme3>]
<Story goes here>
```
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Shinen')
>>> generator("She was staring at me", do_sample=True, min_length=50)
[{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
```
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model uses the following model as base:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
|
CyberHarem/rupee_nikke | CyberHarem | "2023-08-05T18:20:42Z" | 0 | 0 | null | [
"art",
"text-to-image",
"dataset:CyberHarem/rupee_nikke",
"license:mit",
"region:us"
] | text-to-image | "2023-08-05T18:15:52Z" | ---
license: mit
datasets:
- CyberHarem/rupee_nikke
pipeline_tag: text-to-image
tags:
- art
---
# Lora of rupee_nikke
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion). And the auto-training framework is maintained by [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/rupee_nikke.pt` as the embedding and `1500/rupee_nikke.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `rupee_nikke`.**
These are available steps:
| Steps | pattern_1 | pattern_2 | pattern_3 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:----------------------------------------------------|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:---------------------------------|
| 1500 |  | [<NSFW, click to see>](1500/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/rupee_nikke.zip) |
| 1400 |  | [<NSFW, click to see>](1400/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/rupee_nikke.zip) |
| 1300 |  | [<NSFW, click to see>](1300/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/rupee_nikke.zip) |
| 1200 |  | [<NSFW, click to see>](1200/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/rupee_nikke.zip) |
| 1100 |  | [<NSFW, click to see>](1100/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/rupee_nikke.zip) |
| 1000 |  | [<NSFW, click to see>](1000/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/rupee_nikke.zip) |
| 900 |  | [<NSFW, click to see>](900/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/rupee_nikke.zip) |
| 800 |  | [<NSFW, click to see>](800/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/rupee_nikke.zip) |
| 700 |  | [<NSFW, click to see>](700/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/rupee_nikke.zip) |
| 600 |  | [<NSFW, click to see>](600/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/rupee_nikke.zip) |
| 500 |  | [<NSFW, click to see>](500/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/rupee_nikke.zip) |
| 400 |  | [<NSFW, click to see>](400/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/rupee_nikke.zip) |
| 300 |  | [<NSFW, click to see>](300/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/rupee_nikke.zip) |
| 200 |  | [<NSFW, click to see>](200/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/rupee_nikke.zip) |
| 100 |  | [<NSFW, click to see>](100/previews/pattern_2.png) |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/rupee_nikke.zip) |
|
Jebadiah/Poppy-gem-p1 | Jebadiah | "2024-05-05T20:57:12Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Jebadiah/OpenBio-gem-p1",
"base_model:merge:Jebadiah/OpenBio-gem-p1",
"base_model:Jebadiah/hermes-Poppy-stone-l3-8b",
"base_model:merge:Jebadiah/hermes-Poppy-stone-l3-8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-05T20:55:12Z" | ---
base_model:
- Jebadiah/hermes-Poppy-stone-l3-8b
- Jebadiah/OpenBio-gem-p1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [Jebadiah/OpenBio-gem-p1](https://huggingface.co/Jebadiah/OpenBio-gem-p1) as a base.
### Models Merged
The following models were included in the merge:
* [Jebadiah/hermes-Poppy-stone-l3-8b](https://huggingface.co/Jebadiah/hermes-Poppy-stone-l3-8b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Jebadiah/OpenBio-gem-p1
# No parameters necessary for base model
- model: Jebadiah/hermes-Poppy-stone-l3-8b
parameters:
density: 0.53
weight: 0.4
merge_method: dare_ties
base_model: Jebadiah/OpenBio-gem-p1
parameters:
int8_mask: true
dtype: bfloat16
```
|
perceptron-743/translation-eng-ger | perceptron-743 | "2024-01-31T07:35:20Z" | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-de",
"base_model:finetune:Helsinki-NLP/opus-mt-en-de",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-01-31T06:23:53Z" | ---
license: cc-by-4.0
base_model: Helsinki-NLP/opus-mt-en-de
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: translation-eng-ger
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation-eng-ger
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5570
- Bleu: 53.1421
- Gen Len: 9.2669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.6154 | 1.0 | 9693 | 0.5709 | 52.364 | 9.2343 |
| 0.5144 | 2.0 | 19386 | 0.5570 | 53.1421 | 9.2669 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Wendy57/Qwen2.5-7B-Instruct-gguf-q4 | Wendy57 | "2025-04-12T16:38:20Z" | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-12T16:31:40Z" | ---
license: apache-2.0
---
|
PersianSingers/RVC-models | PersianSingers | "2024-04-04T02:21:32Z" | 0 | 2 | null | [
"music",
"audio-to-audio",
"fa",
"en",
"license:apache-2.0",
"region:us"
] | audio-to-audio | "2024-04-04T00:20:03Z" | ---
license: apache-2.0
language:
- fa
- en
pipeline_tag: audio-to-audio
tags:
- music
---
# RVC-models
This repository contains a Retrieval-based Voice Conversion (RVC) model trained on a dataset of Persian singers. The model aims to convert speech from one voice to another, allowing for the creation of new vocal performances or the transformation of existing recordings.
## Model Description
The RVC model in this repository was trained using the [Retrieval-based Voice Conversion (RVC) framework](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI). It leverages a retrieval-based approach to voice conversion, where the model learns to match the target voice by retrieving and combining relevant speech segments from the training data.
The model was trained on a dataset of Persian singers, which includes a diverse range of vocal styles and techniques. By capturing the characteristics and nuances of these singers, the model can generate convincing voice conversions that maintain the linguistic content while adopting the target singer's voice.
## Usage
To use the RVC model, follow these steps:
1. Download the model files.
2. [Install the required dependencies.](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/en/README.en.md)
3. Load the model and perform voice conversion using the provided user interface.
---
## license: apache-2.0 |
Bachstelze/smolSynformer | Bachstelze | "2025-04-12T10:17:09Z" | 201 | 0 | null | [
"safetensors",
"llama",
"en",
"dataset:Bachstelze/PAWS_CoT_explanation",
"dataset:Bachstelze/GEC_CoT_explanation",
"dataset:HuggingFaceTB/smol-smoltalk",
"dataset:nomic-ai/gpt4all-j-prompt-generations",
"dataset:ZenMoore/RoleBench",
"dataset:THUDM/AgentInstruct",
"dataset:Open-Orca/SlimOrca",
"dataset:WizardLMTeam/WizardLM_evol_instruct_V2_196k",
"dataset:GAIR/lima",
"dataset:hamishivi/tulu-3-unfiltered",
"dataset:ConiferLM/Conifer",
"dataset:argilla/magpie-ultra-v1.0",
"dataset:kaist-ai/CoT-Collection",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"license:agpl-3.0",
"region:us"
] | null | "2025-04-09T10:34:08Z" | ---
license: agpl-3.0
datasets:
- Bachstelze/PAWS_CoT_explanation
- Bachstelze/GEC_CoT_explanation
- HuggingFaceTB/smol-smoltalk
- nomic-ai/gpt4all-j-prompt-generations
- ZenMoore/RoleBench
- THUDM/AgentInstruct
- Open-Orca/SlimOrca
- WizardLMTeam/WizardLM_evol_instruct_V2_196k
- GAIR/lima
- hamishivi/tulu-3-unfiltered
- ConiferLM/Conifer
- argilla/magpie-ultra-v1.0
- kaist-ai/CoT-Collection
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-135M
---
# SmolSynformer: SmolLM2 as Syntax-aware transformer
SmolSynformer is trained on various instructions including GEC, paraphrase identification and universal dependency generation.
Code and math are not included.
This model is overfitted on in-context learning and does sometimes generate follow up questions and answers.
# Inference with transformer
```
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig, pipeline
test_model = "Bachstelze/smolSynformer"
model = AutoModelForCausalLM.from_pretrained(test_model)
tokenizer = AutoTokenizer.from_pretrained(test_model)
config = AutoConfig.from_pretrained(test_model)
prompt_pipeline = pipeline("text-generation", model=test_model, tokenizer=tokenizer, max_new_tokens=250)
print(prompt_pipeline("Why is syntax relevant for language modeling and instruction following?\n"))
```
Example answer:
Syntax is relevant for language modeling and instruction following because it provides a structured and organized way to represent and analyze language. It allows for the creation of rules and patterns that govern how language is used, which can be used to train models to recognize and generate language. Additionally, syntax can be used to identify and classify different types of language, such as grammatical or idiomatic language. |
shanhy/xlm-roberta-base_seed42_amh-hau-eng_train | shanhy | "2024-02-04T13:49:01Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-04T13:47:52Z" | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_seed42_amh-hau-eng_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_seed42_amh-hau-eng_train
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0233
- Spearman Corr: 0.7915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.58 | 200 | 0.0265 | 0.7094 |
| No log | 1.15 | 400 | 0.0234 | 0.7443 |
| No log | 1.73 | 600 | 0.0226 | 0.7564 |
| 0.0414 | 2.3 | 800 | 0.0235 | 0.7743 |
| 0.0414 | 2.88 | 1000 | 0.0241 | 0.7786 |
| 0.0414 | 3.45 | 1200 | 0.0261 | 0.7746 |
| 0.0231 | 4.03 | 1400 | 0.0276 | 0.7886 |
| 0.0231 | 4.6 | 1600 | 0.0223 | 0.7825 |
| 0.0231 | 5.18 | 1800 | 0.0216 | 0.7825 |
| 0.0231 | 5.76 | 2000 | 0.0231 | 0.7854 |
| 0.0165 | 6.33 | 2200 | 0.0204 | 0.7934 |
| 0.0165 | 6.91 | 2400 | 0.0227 | 0.7886 |
| 0.0165 | 7.48 | 2600 | 0.0220 | 0.7873 |
| 0.0121 | 8.06 | 2800 | 0.0220 | 0.7827 |
| 0.0121 | 8.63 | 3000 | 0.0222 | 0.7909 |
| 0.0121 | 9.21 | 3200 | 0.0218 | 0.7933 |
| 0.0121 | 9.78 | 3400 | 0.0233 | 0.7932 |
| 0.0092 | 10.36 | 3600 | 0.0224 | 0.7931 |
| 0.0092 | 10.94 | 3800 | 0.0233 | 0.7915 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
codermert/sukru_fluxxx | codermert | "2025-03-08T16:31:39Z" | 0 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-08T01:40:34Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Sukru_Fluxxx
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('prithivMLmods/Canopus-LoRA-Flux-UltraRealism-2.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('codermert/sukru_fluxxx', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
vipinkatara/Mistral-7B-v0.1-orpo-final1 | vipinkatara | "2024-05-07T07:31:37Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"orpo",
"generated_from_trainer",
"conversational",
"dataset:vipinkatara/dataset_complete_final",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-06T10:34:31Z" | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- trl
- orpo
- generated_from_trainer
- trl
- orpo
- generated_from_trainer
datasets:
- vipinkatara/dataset_complete_final
model-index:
- name: Mistral-7B-v0.1-orpo-final1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-orpo-final1
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the vipinkatara/dataset_complete_final dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rewards/chosen: nan
- Rewards/rejected: nan
- Rewards/accuracies: 0.0
- Rewards/margins: nan
- Logps/rejected: nan
- Logps/chosen: nan
- Logits/rejected: nan
- Logits/chosen: nan
- Nll Loss: nan
- Log Odds Ratio: nan
- Log Odds Chosen: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
| 3.4851 | 0.0045 | 10 | 1.5241 | -0.0138 | -0.0138 | 0.0 | 0.0 | -0.2768 | -0.2768 | -2.5524 | -2.5524 | 1.4894 | -0.6931 | 0.0 |
| 1.086 | 0.0089 | 20 | 0.9466 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.7727 | -1.7727 | 0.9119 | -0.6931 | 0.0 |
| 0.9087 | 0.0134 | 30 | 0.8376 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.5712 | -1.5712 | 0.8029 | -0.6931 | 0.0 |
| 0.7774 | 0.0179 | 40 | 0.6912 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.5837 | -1.5837 | 0.6565 | -0.6931 | 0.0 |
| 0.5426 | 0.0224 | 50 | 0.1591 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.5343 | -1.5343 | 0.1244 | -0.6931 | 0.0 |
| 0.4926 | 0.0268 | 60 | 0.0728 | -0.0025 | -0.0025 | 0.0 | 0.0 | -0.0497 | -0.0497 | -2.1702 | -2.1702 | 0.0382 | -0.6931 | 0.0 |
| 0.3784 | 0.0313 | 70 | 0.0374 | -0.0001 | -0.0001 | 0.0 | 0.0 | -0.0015 | -0.0015 | -1.8683 | -1.8683 | 0.0028 | -0.6931 | 0.0 |
| 0.1081 | 0.0358 | 80 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.9697 | -1.9697 | 0.0000 | -0.6931 | 0.0 |
| 0.2173 | 0.0402 | 90 | 0.5148 | -0.0349 | -0.0349 | 0.0 | 0.0 | -0.6984 | -0.6984 | -2.7805 | -2.7805 | 0.4802 | -0.6931 | 0.0 |
| 0.0845 | 0.0447 | 100 | 0.0565 | -0.0016 | -0.0016 | 0.0 | 0.0 | -0.0317 | -0.0317 | -2.2378 | -2.2378 | 0.0218 | -0.6931 | 0.0 |
| 0.1317 | 0.0492 | 110 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -2.4884 | -2.4884 | 0.0000 | -0.6931 | 0.0 |
| 0.21 | 0.0536 | 120 | 0.0348 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0002 | -0.0002 | -2.1373 | -2.1373 | 0.0002 | -0.6931 | 0.0 |
| 0.0859 | 0.0581 | 130 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.8577 | -1.8577 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.0626 | 140 | 0.0348 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0002 | -0.0002 | -1.8252 | -1.8252 | 0.0001 | -0.6931 | 0.0 |
| 0.0495 | 0.0671 | 150 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.8505 | -1.8505 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.0715 | 160 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.9784 | -1.9784 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.0760 | 170 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.7608 | -1.7608 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.0805 | 180 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.5161 | -1.5161 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.0849 | 190 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.4646 | -1.4646 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.0894 | 200 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.4565 | -1.4565 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.0939 | 210 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.4377 | -1.4377 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.0983 | 220 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.4224 | -1.4224 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1028 | 230 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.4131 | -1.4131 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1073 | 240 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.4159 | -1.4159 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1118 | 250 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.4384 | -1.4384 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1162 | 260 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.4592 | -1.4592 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1207 | 270 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.4682 | -1.4682 | 0.0000 | -0.6931 | 0.0 |
| 0.0348 | 0.1252 | 280 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.1261 | -1.1261 | 0.0000 | -0.6931 | 0.0 |
| 0.066 | 0.1296 | 290 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.9084 | -0.9084 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1341 | 300 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.8419 | -0.8419 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1386 | 310 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7935 | -0.7935 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1430 | 320 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7703 | -0.7703 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1475 | 330 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7746 | -0.7746 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1520 | 340 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7745 | -0.7745 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1565 | 350 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7768 | -0.7768 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1609 | 360 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7753 | -0.7753 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1654 | 370 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7760 | -0.7760 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1699 | 380 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7688 | -0.7688 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1743 | 390 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7688 | -0.7688 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1788 | 400 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7668 | -0.7668 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1833 | 410 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7660 | -0.7660 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1878 | 420 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7656 | -0.7656 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1922 | 430 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7680 | -0.7680 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.1967 | 440 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7653 | -0.7653 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2012 | 450 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7624 | -0.7624 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2056 | 460 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7612 | -0.7612 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2101 | 470 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7622 | -0.7622 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2146 | 480 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7637 | -0.7637 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2190 | 490 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7612 | -0.7612 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2235 | 500 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7622 | -0.7622 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2280 | 510 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7672 | -0.7672 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2325 | 520 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7621 | -0.7621 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2369 | 530 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7622 | -0.7622 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2414 | 540 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7637 | -0.7637 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2459 | 550 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7630 | -0.7630 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2503 | 560 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7591 | -0.7591 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2548 | 570 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7590 | -0.7590 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2593 | 580 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7541 | -0.7541 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2637 | 590 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7554 | -0.7554 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2682 | 600 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7636 | -0.7636 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2727 | 610 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.7175 | -0.7175 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2772 | 620 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.9073 | -0.9073 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2816 | 630 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.0013 | -1.0013 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2861 | 640 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -1.0971 | -1.0971 | 0.0000 | -0.6931 | 0.0 |
| 0.0347 | 0.2906 | 650 | 0.0347 | -0.0000 | -0.0000 | 0.0 | 0.0 | -0.0000 | -0.0000 | -0.8988 | -0.8988 | 0.0000 | -0.6931 | 0.0 |
| 0.1054 | 0.2950 | 660 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.2995 | 670 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3040 | 680 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3084 | 690 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3129 | 700 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3174 | 710 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3219 | 720 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3263 | 730 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3308 | 740 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3353 | 750 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3397 | 760 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3442 | 770 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3487 | 780 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3532 | 790 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3576 | 800 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3621 | 810 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3666 | 820 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3710 | 830 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3755 | 840 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3800 | 850 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3844 | 860 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3889 | 870 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3934 | 880 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.3979 | 890 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4023 | 900 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4068 | 910 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4113 | 920 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4157 | 930 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4202 | 940 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4247 | 950 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4291 | 960 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4336 | 970 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4381 | 980 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4426 | 990 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4470 | 1000 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4515 | 1010 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4560 | 1020 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4604 | 1030 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4649 | 1040 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4694 | 1050 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4738 | 1060 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4783 | 1070 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4828 | 1080 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4873 | 1090 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4917 | 1100 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.4962 | 1110 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5007 | 1120 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5051 | 1130 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5096 | 1140 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5141 | 1150 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5186 | 1160 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5230 | 1170 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5275 | 1180 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5320 | 1190 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5364 | 1200 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5409 | 1210 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5454 | 1220 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5498 | 1230 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5543 | 1240 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5588 | 1250 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5633 | 1260 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5677 | 1270 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5722 | 1280 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5767 | 1290 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5811 | 1300 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5856 | 1310 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5901 | 1320 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5945 | 1330 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.5990 | 1340 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6035 | 1350 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6080 | 1360 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6124 | 1370 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6169 | 1380 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6214 | 1390 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6258 | 1400 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6303 | 1410 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6348 | 1420 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6392 | 1430 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6437 | 1440 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6482 | 1450 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6527 | 1460 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6571 | 1470 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6616 | 1480 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6661 | 1490 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6705 | 1500 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6750 | 1510 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6795 | 1520 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6840 | 1530 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6884 | 1540 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6929 | 1550 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.6974 | 1560 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7018 | 1570 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7063 | 1580 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7108 | 1590 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7152 | 1600 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7197 | 1610 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7242 | 1620 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7287 | 1630 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7331 | 1640 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7376 | 1650 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7421 | 1660 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7465 | 1670 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7510 | 1680 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7555 | 1690 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7599 | 1700 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7644 | 1710 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7689 | 1720 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7734 | 1730 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7778 | 1740 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7823 | 1750 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7868 | 1760 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7912 | 1770 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.7957 | 1780 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8002 | 1790 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8046 | 1800 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8091 | 1810 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8136 | 1820 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8181 | 1830 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8225 | 1840 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8270 | 1850 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8315 | 1860 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8359 | 1870 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8404 | 1880 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8449 | 1890 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8494 | 1900 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8538 | 1910 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8583 | 1920 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8628 | 1930 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8672 | 1940 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8717 | 1950 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8762 | 1960 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8806 | 1970 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8851 | 1980 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8896 | 1990 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8941 | 2000 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.8985 | 2010 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9030 | 2020 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9075 | 2030 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9119 | 2040 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9164 | 2050 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9209 | 2060 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9253 | 2070 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9298 | 2080 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9343 | 2090 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9388 | 2100 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9432 | 2110 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9477 | 2120 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9522 | 2130 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9566 | 2140 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9611 | 2150 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9656 | 2160 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9700 | 2170 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9745 | 2180 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9790 | 2190 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9835 | 2200 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9879 | 2210 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9924 | 2220 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
| 0.0 | 0.9969 | 2230 | nan | nan | nan | 0.0 | nan | nan | nan | nan | nan | nan | nan | nan |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Rodrig0rtiz/bert_adaptation_peppa_pig | Rodrig0rtiz | "2023-11-25T14:46:18Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-11-25T14:46:00Z" | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
model-index:
- name: bert_adaptation_peppa_pig
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_adaptation_peppa_pig
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1770
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1991 | 1.0 | 35 | 2.2332 |
| 2.1524 | 2.0 | 70 | 2.0712 |
| 1.9057 | 3.0 | 105 | 2.0536 |
| 1.8771 | 4.0 | 140 | 2.1396 |
| 1.884 | 5.0 | 175 | 2.0269 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mohamedo/music | mohamedo | "2023-11-05T07:50:03Z" | 0 | 1 | null | [
"music",
"art",
"ar",
"en",
"license:openrail",
"region:us"
] | null | "2023-11-05T07:48:51Z" | ---
license: openrail
language:
- ar
- en
tags:
- music
- art
--- |
ajamilohi/Jamil | ajamilohi | "2025-02-24T10:30:25Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-02-24T10:30:25Z" | ---
license: apache-2.0
---
|
vuongnhathien/edit-training-arg | vuongnhathien | "2024-05-21T09:52:18Z" | 134 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"base_model:vuongnhathien/SwinV2-30VNFood",
"base_model:finetune:vuongnhathien/SwinV2-30VNFood",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-05-21T09:29:06Z" | ---
license: apache-2.0
base_model: vuongnhathien/SwinV2-30VNFood
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: edit-training-arg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edit-training-arg
This model is a fine-tuned version of [vuongnhathien/SwinV2-30VNFood](https://huggingface.co/vuongnhathien/SwinV2-30VNFood) on the jbarat/plant_species dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3643
- Accuracy: 0.5875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 0.7014 | 0.7375 |
| No log | 2.0 | 20 | 0.5727 | 0.75 |
| No log | 3.0 | 30 | 0.7431 | 0.7875 |
| No log | 4.0 | 40 | 0.7550 | 0.7875 |
| No log | 5.0 | 50 | 0.6643 | 0.7875 |
| No log | 6.0 | 60 | 0.6035 | 0.8625 |
| No log | 7.0 | 70 | 0.8655 | 0.8375 |
| No log | 8.0 | 80 | 0.7624 | 0.825 |
| No log | 9.0 | 90 | 0.6606 | 0.85 |
| 0.2933 | 10.0 | 100 | 0.6476 | 0.85 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
AvyuktBallari/mistral-7b-oig-unsloth-merged | AvyuktBallari | "2024-11-25T18:05:16Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-25T17:41:56Z" | ---
base_model: unsloth/mistral-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AvyuktBallari
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mlx-community/FuseO1-DeepSeekR1-Qwen2.5-Coder-7B-Preview-q4 | mlx-community | "2025-03-11T23:09:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"mlx",
"conversational",
"base_model:sealad886/FuseO1-DeepSeekR1-Qwen2.5-Coder-7B-Preview",
"base_model:quantized:sealad886/FuseO1-DeepSeekR1-Qwen2.5-Coder-7B-Preview",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | "2025-03-11T19:28:14Z" | ---
base_model: sealad886/FuseO1-DeepSeekR1-Qwen2.5-Coder-7B-Preview
library_name: transformers
tags:
- mergekit
- merge
- mlx
---
# mlx-community/FuseO1-DeepSeekR1-Qwen2.5-Coder-7B-Preview-q4
The Model [mlx-community/FuseO1-DeepSeekR1-Qwen2.5-Coder-7B-Preview-q4](https://huggingface.co/mlx-community/FuseO1-DeepSeekR1-Qwen2.5-Coder-7B-Preview-q4) was
converted to MLX format from [sealad886/FuseO1-DeepSeekR1-Qwen2.5-Coder-7B-Preview](https://huggingface.co/sealad886/FuseO1-DeepSeekR1-Qwen2.5-Coder-7B-Preview)
using mlx-lm version **0.21.5**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/FuseO1-DeepSeekR1-Qwen2.5-Coder-7B-Preview-q4")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
jungiebeen/pretrain5 | jungiebeen | "2024-02-29T16:11:24Z" | 52 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-04T06:08:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-1.4b-deduped-int4-step12000-GPTQ-wikitext2 | Xu-Ouyang | "2024-07-28T02:06:00Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-07-28T02:05:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso17/8ba581bb-987a-4ffb-9082-dc4feab021f9 | lesso17 | "2025-01-29T00:29:30Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-28T23:59:25Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8ba581bb-987a-4ffb-9082-dc4feab021f9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 177cf310bb056dd2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/177cf310bb056dd2_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso17/8ba581bb-987a-4ffb-9082-dc4feab021f9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/177cf310bb056dd2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f89d2b33-ad51-4971-beba-5ade3b17b0b9
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f89d2b33-ad51-4971-beba-5ade3b17b0b9
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8ba581bb-987a-4ffb-9082-dc4feab021f9
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0268 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Elaine5/Qwen1.5-0.5B-Chat-Q4_K_M-GGUF | Elaine5 | "2024-06-21T06:27:12Z" | 5 | 0 | null | [
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen1.5-0.5B-Chat",
"base_model:quantized:Qwen/Qwen1.5-0.5B-Chat",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-06-21T06:27:03Z" | ---
base_model: Qwen/Qwen1.5-0.5B-Chat
language:
- en
license: other
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Elaine5/Qwen1.5-0.5B-Chat-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen1.5-0.5B-Chat`](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Elaine5/Qwen1.5-0.5B-Chat-Q4_K_M-GGUF --hf-file qwen1.5-0.5b-chat-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Elaine5/Qwen1.5-0.5B-Chat-Q4_K_M-GGUF --hf-file qwen1.5-0.5b-chat-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Elaine5/Qwen1.5-0.5B-Chat-Q4_K_M-GGUF --hf-file qwen1.5-0.5b-chat-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Elaine5/Qwen1.5-0.5B-Chat-Q4_K_M-GGUF --hf-file qwen1.5-0.5b-chat-q4_k_m.gguf -c 2048
```
|
jialicheng/ddi-pubmedbert-fulltext | jialicheng | "2024-12-27T06:22:06Z" | 110 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-19T18:03:17Z" | ---
license: mit
base_model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: pubmedbert-fulltext
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pubmedbert-fulltext
This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4548
- Accuracy: 0.9498
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| No log | 1.0 | 791 | 0.2502 | 0.9342 |
| 0.1717 | 2.0 | 1582 | 0.2889 | 0.9449 |
| 0.0792 | 3.0 | 2373 | 0.2844 | 0.9424 |
| 0.0565 | 4.0 | 3164 | 0.3055 | 0.9377 |
| 0.0565 | 5.0 | 3955 | 0.3059 | 0.9458 |
| 0.0405 | 6.0 | 4746 | 0.3693 | 0.9451 |
| 0.0274 | 7.0 | 5537 | 0.3295 | 0.9438 |
| 0.0263 | 8.0 | 6328 | 0.4278 | 0.9337 |
| 0.0181 | 9.0 | 7119 | 0.3807 | 0.9465 |
| 0.0181 | 10.0 | 7910 | 0.4318 | 0.9442 |
| 0.0173 | 11.0 | 8701 | 0.3995 | 0.9487 |
| 0.011 | 12.0 | 9492 | 0.4487 | 0.9466 |
| 0.0077 | 13.0 | 10283 | 0.4247 | 0.9482 |
| 0.0075 | 14.0 | 11074 | 0.5082 | 0.9433 |
| 0.0075 | 15.0 | 11865 | 0.4722 | 0.9458 |
| 0.0071 | 16.0 | 12656 | 0.4134 | 0.9507 |
| 0.0034 | 17.0 | 13447 | 0.4252 | 0.9496 |
| 0.0033 | 18.0 | 14238 | 0.4436 | 0.9500 |
| 0.0023 | 19.0 | 15029 | 0.4481 | 0.9505 |
| 0.0023 | 20.0 | 15820 | 0.4548 | 0.9498 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
bigband/FormidableIshtar | bigband | "2025-01-27T23:41:58Z" | 23 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-01-27T23:40:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thaffggg/a4c6e736-e9a9-45a7-8568-0843fbfe65d2 | thaffggg | "2025-01-10T10:22:07Z" | 15 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Vikhrmodels/Vikhr-7B-instruct_0.4",
"base_model:adapter:Vikhrmodels/Vikhr-7B-instruct_0.4",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-10T09:59:00Z" | ---
library_name: peft
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a4c6e736-e9a9-45a7-8568-0843fbfe65d2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 723928d8104e1c8a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/723928d8104e1c8a_train_data.json
type:
field_instruction: Patient
field_output: Doctor
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: thaffggg/a4c6e736-e9a9-45a7-8568-0843fbfe65d2
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/723928d8104e1c8a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 47226bcf-dfed-4181-b278-365e98dd667f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 47226bcf-dfed-4181-b278-365e98dd667f
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a4c6e736-e9a9-45a7-8568-0843fbfe65d2
This model is a fine-tuned version of [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.3501 | 0.9357 | 200 | 2.3619 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
WhiteHunter111/lora_model | WhiteHunter111 | "2024-06-17T10:54:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-17T10:53:56Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** WhiteHunter111
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
emilykang/Phi_medner-orthopedic_lora | emilykang | "2024-05-15T20:39:31Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | "2024-05-15T20:09:22Z" | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
datasets:
- generator
model-index:
- name: Phi_medner-orthopedic_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi_medner-orthopedic_lora
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 |
GhifSmile/distilbert-base-uncased-PINA-dfnew-insyaallah | GhifSmile | "2023-04-18T00:13:11Z" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-04-17T20:52:20Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: distilbert-base-uncased-PINA-dfnew-insyaallah
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-PINA-dfnew-insyaallah
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2680
- Accuracy: 0.9431
- Precision: 0.8480
- Recall: 0.8258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|
| 1.1591 | 1.0 | 1436 | 0.4581 | 0.8945 | 0.7871 | 0.7185 |
| 0.3058 | 2.0 | 2872 | 0.2901 | 0.9349 | 0.8307 | 0.8157 |
| 0.1623 | 3.0 | 4308 | 0.2680 | 0.9431 | 0.8480 | 0.8258 |
| 0.0936 | 4.0 | 5744 | 0.2942 | 0.9474 | 0.8758 | 0.8415 |
| 0.0562 | 5.0 | 7180 | 0.2681 | 0.9535 | 0.8730 | 0.8527 |
| 0.034 | 6.0 | 8616 | 0.3010 | 0.9504 | 0.8761 | 0.8474 |
| 0.0193 | 7.0 | 10052 | 0.2971 | 0.9532 | 0.8643 | 0.8507 |
| 0.0115 | 8.0 | 11488 | 0.3139 | 0.9519 | 0.8640 | 0.8489 |
| 0.0078 | 9.0 | 12924 | 0.3056 | 0.9551 | 0.8649 | 0.8529 |
| 0.0056 | 10.0 | 14360 | 0.3062 | 0.9549 | 0.8636 | 0.8531 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
hanymac/CoreML-Stable-Diffusion-2.1-split_einsum-img2img | hanymac | "2022-12-22T18:57:48Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2022-12-22T18:37:19Z" | ---
license: creativeml-openrail-m
---
|
orpo-explorers/kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2 | orpo-explorers | "2024-04-27T05:25:47Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"orpo",
"generated_from_trainer",
"conversational",
"dataset:orpo-explorers/OHP-15k-Stratified-1",
"base_model:orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05",
"base_model:finetune:orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-26T22:00:22Z" | ---
license: apache-2.0
base_model: orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05
tags:
- alignment-handbook
- trl
- orpo
- generated_from_trainer
- trl
- orpo
- generated_from_trainer
datasets:
- orpo-explorers/OHP-15k-Stratified-1
model-index:
- name: kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kaist-mistral-orpo-capybara-ohp-3epoch-beta-0.2
This model is a fine-tuned version of [orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05](https://huggingface.co/orpo-explorers/kaist-mistral-orpo-capybara-beta-0.05) on the orpo-explorers/OHP-15k-Stratified-1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2.post303
- Datasets 2.18.0
- Tokenizers 0.15.2
|
huggingtweets/jhenzi-potus | huggingtweets | "2022-12-08T19:25:30Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-12-08T19:25:23Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1455148058455519245/VTl86viq_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1380530524779859970/TfwVAbyX_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Joseph Henzi of The Henzi Foundation & President Biden</div>
<div style="text-align: center; font-size: 14px;">@jhenzi-potus</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Joseph Henzi of The Henzi Foundation & President Biden.
| Data | Joseph Henzi of The Henzi Foundation | President Biden |
| --- | --- | --- |
| Tweets downloaded | 3129 | 3250 |
| Retweets | 497 | 65 |
| Short tweets | 48 | 11 |
| Tweets kept | 2584 | 3174 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1rwaclb9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jhenzi-potus's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3v4ctj16) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3v4ctj16/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jhenzi-potus')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
model-garden-lms/bert-base-finewebs-951k | model-garden-lms | "2024-12-08T22:28:14Z" | 10 | 2 | null | [
"safetensors",
"bert",
"fineweb-lms",
"en",
"dataset:HuggingFaceFW/fineweb",
"dataset:HuggingFaceFW/fineweb-edu",
"license:apache-2.0",
"region:us"
] | null | "2024-12-08T17:20:44Z" | ---
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb
- HuggingFaceFW/fineweb-edu
language:
- en
tags:
- fineweb-lms
- bert
---
# FineWeb-LMs: BERT
<p align="left">
<picture>
<img alt="BERT with TensorFlow Model Garden" src="https://github.com/stefan-it/model-garden-lms/raw/main/bert_tf_model_garden.png" style="max-width: 25%;">
</picture>
<br/>
</p>
This repository presents a BERT model that was pretrained on the 10BT subsets of [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) and [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
# Pretraining Details
The released BERT model is part of my [TensorFlow Model Garden LMs](https://github.com/stefan-it/model-garden-lms/tree/main) project.
The pretraining was done on a v3-32 TPU VM Pod, provided by the amazing [TRC program](https://sites.research.google/trc/about/). Detailed cheatsheets are available:
* [TPU VM Setup](https://github.com/stefan-it/model-garden-lms/tree/main/cheatsheet)
* [Pretraining a BERT Model with TensorFlow Model Garden Library](https://github.com/stefan-it/model-garden-lms/tree/main/bert)
tl;dr: The model was pretrained for 1M steps with a global batch size of 512, a sequence length of 512 using a vocab size of 64k.
# Checkpoint Evaluation with ScandEval
We evaluate the last 5 checkpoints (1M, 951k, 901k, 851k and 851k) with a recent version of ScandEval to check their performance and also compare it with popular encoder-only models such as BERT, RoBERTa or ELECTRA:
| Model ID | Avg. Score | CoNLL-En | SST5 | ScaLA-En | SQuAD |
|-------------------------------------------------------------------------------------------------------------|--------------|-----------------------------|-----------------------------|-----------------------------|-----------------------------|
| [model-garden-lms/bert-base-finewebs-1m](https://huggingface.co/model-garden-lms/bert-base-finewebs-1m) | 69.03 | 88.98 ± 0.43 / 88.67 ± 0.36 | 58.11 ± 1.2 / 59.77 ± 1.49 | 57.29 ± 3.57 / 77.15 ± 2.17 | 55.82 ± 1.35 / 66.46 ± 1.51 |
| [model-garden-lms/bert-base-finewebs-951k](https://huggingface.co/model-garden-lms/bert-base-finewebs-951k) | **69.41** | 89.25 ± 0.4 / 88.9 ± 0.37 | 58.17 ± 1.26 / 59.86 ± 1.65 | 58.83 ± 3.46 / 78.22 ± 2.11 | 55.66 ± 1.19 / 66.36 ± 1.42 |
| [model-garden-lms/bert-base-finewebs-901k](https://huggingface.co/model-garden-lms/bert-base-finewebs-901k) | 69.12 | 89.22 ± 0.69 / 88.97 ± 0.45 | 57.93 ± 1.1 / 59.49 ± 1.44 | 58.66 ± 2.99 / 77.94 ± 1.88 | 55.0 ± 1.05 / 65.75 ± 1.29 |
| [model-garden-lms/bert-base-finewebs-851k](https://huggingface.co/model-garden-lms/bert-base-finewebs-851k) | 68.76 | 89.29 ± 0.52 / 89.0 ± 0.51 | 57.68 ± 0.97 / 59.01 ± 1.23 | 57.11 ± 3.77 / 77.36 ± 1.97 | 54.79 ± 1.21 / 65.87 ± 1.32 |
| [model-garden-lms/bert-base-finewebs-801k](https://huggingface.co/model-garden-lms/bert-base-finewebs-801k) | 68.12 | 88.92 ± 0.45 / 88.6 ± 0.44 | 57.64 ± 1.09 / 60.8 ± 1.88 | 54.28 ± 4.83 / 75.48 ± 2.97 | 54.13 ± 1.61 / 65.09 ± 1.65 |
| [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) | 62.26 | 87.39 ± 0.79 / 87.11 ± 0.66 | 54.49 ± 1.36 / 53.22 ± 1.15 | 52.08 ± 2.13 / 74.52 ± 1.31 | 38.63 ± 2.1 / 50.68 ± 1.87 |
| [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) | 69.26 | 87.82 ± 0.69 / 86.83 ± 0.62 | 62.3 ± 1.12 / 55.93 ± 0.67 | 62.61 ± 1.21 / 80.85 ± 0.59 | 52.51 ± 0.86 / 65.2 ± 0.85 |
| [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) | 68.96 | 90.35 ± 0.23 / 90.14 ± 0.2 | 60.95 ± 1.4 / 57.52 ± 1.97 | 50.64 ± 1.69 / 74.55 ± 0.9 | 57.82 ± 1.35 / 69.68 ± 1.02 |
Our pretrained BERT model shows a strong performance across all tasks. All detailed results can be found in [this](https://huggingface.co/datasets/model-garden-lms/finewebs-scandeval-results) dataset repository.
# ❤️ Acknowledgements
This repository is the outcome of the last two years of working with TPUs from the awesome [TRC program](https://sites.research.google/trc/about/) and the [TensorFlow Model Garden](https://github.com/tensorflow/models) library.
Made from Bavarian Oberland with ❤️ and 🥨. |
Triangle104/Deepseeker-Kunou-Qwen2.5-14b-Q5_K_S-GGUF | Triangle104 | "2025-02-05T09:52:04Z" | 20 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Statuo/Deepseeker-Kunou-Qwen2.5-14b",
"base_model:quantized:Statuo/Deepseeker-Kunou-Qwen2.5-14b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-05T09:51:10Z" | ---
base_model: Statuo/Deepseeker-Kunou-Qwen2.5-14b
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: apache-2.0
---
# Triangle104/Deepseeker-Kunou-Qwen2.5-14b-Q5_K_S-GGUF
This model was converted to GGUF format from [`Statuo/Deepseeker-Kunou-Qwen2.5-14b`](https://huggingface.co/Statuo/Deepseeker-Kunou-Qwen2.5-14b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Statuo/Deepseeker-Kunou-Qwen2.5-14b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Deepseeker-Kunou-Qwen2.5-14b-Q5_K_S-GGUF --hf-file deepseeker-kunou-qwen2.5-14b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Deepseeker-Kunou-Qwen2.5-14b-Q5_K_S-GGUF --hf-file deepseeker-kunou-qwen2.5-14b-q5_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Deepseeker-Kunou-Qwen2.5-14b-Q5_K_S-GGUF --hf-file deepseeker-kunou-qwen2.5-14b-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Deepseeker-Kunou-Qwen2.5-14b-Q5_K_S-GGUF --hf-file deepseeker-kunou-qwen2.5-14b-q5_k_s.gguf -c 2048
```
|
osanseviero/da_core_news_sm | osanseviero | "2022-09-21T17:43:59Z" | 1 | 0 | spacy | [
"spacy",
"token-classification",
"da",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | token-classification | "2022-03-02T23:29:05Z" | ---
tags:
- spacy
- token-classification
language:
- da
license: cc-by-sa-4.0
model-index:
- name: da_core_news_sm
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.7570498915
- name: NER Recall
type: recall
value: 0.7270833333
- name: NER F Score
type: f_score
value: 0.7417640808
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9498765073
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9498765073
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9343341404
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9449878935
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.7988826816
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.752849162
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.884097035
---
### Details: https://spacy.io/models/da#da_core_news_sm
Danish pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner, attribute_ruler.
| Feature | Description |
| --- | --- |
| **Name** | `da_core_news_sm` |
| **Version** | `3.4.0` |
| **spaCy** | `>=3.4.0,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Danish DDT v2.8](https://github.com/UniversalDependencies/UD_Danish-DDT) (Johannsen, Anders; Martínez Alonso, Héctor; Plank, Barbara)<br />[DaNE](https://github.com/alexandrainst/danlp/blob/master/docs/datasets.md#danish-dependency-treebank-dane) (Rasmus Hvingelby, Amalie B. Pauli, Maria Barrett, Christina Rosted, Lasse M. Lidegaard, Anders Søgaard) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (194 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`morphologizer`** | `AdpType=Prep\|POS=ADP`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PROPN`, `Definite=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `POS=CCONJ`, `Definite=Ind\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADJ`, `POS=PRON\|PartType=Inf`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Pos\|POS=ADV`, `Definite=Def\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PRON\|PronType=Dem`, `NumType=Card\|POS=NUM`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `NumType=Ord\|POS=ADJ`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `POS=ADP\|PartType=Inf`, `Degree=Pos\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `POS=PART\|PartType=Inf`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Com\|POS=PRON\|PronType=Ind`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Imp\|POS=VERB`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=X`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `POS=ADV\|PartType=Inf`, `Degree=Sup\|POS=ADV`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|POS=PROPN`, `POS=ADP`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `Definite=Def\|Degree=Sup\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|Number=Sing\|POS=ADJ`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Gender=Com\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Gen\|Degree=Cmp\|POS=ADJ`, `POS=SPACE`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=INTJ`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Definite=Def\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `POS=SYM`, `Case=Nom\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Degree=Sup\|POS=ADJ`, `Number=Plur\|POS=DET\|PronType=Ind\|Style=Arch`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Foreign=Yes\|POS=X`, `POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|POS=PRON\|PronType=Int,Rel`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Dem`, `Abbr=Yes\|POS=X`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Definite=Def\|Degree=Abs\|POS=ADJ`, `Definite=Ind\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Definite=Ind\|POS=NOUN`, `Gender=Com\|Number=Plur\|POS=NOUN`, `Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Degree=Abs\|POS=ADV`, `POS=VERB\|VerbForm=Ger`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Gen\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `POS=VERB\|Tense=Pres`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=PRON\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=AUX\|Tense=Pres\|VerbForm=Part`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|POS=AUX`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|POS=NOUN`, `Number[psor]=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=DET\|PronType=Dem`, `Definite=Def\|Number=Plur\|POS=NOUN` |
| **`parser`** | `ROOT`, `acl:relcl`, `advcl`, `advmod`, `advmod:lmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `expl`, `fixed`, `flat`, `iobj`, `list`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:lmod`, `obl:tmod`, `punct`, `xcomp` |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.95 |
| `TOKEN_P` | 99.78 |
| `TOKEN_R` | 99.75 |
| `TOKEN_F` | 99.76 |
| `POS_ACC` | 94.99 |
| `MORPH_ACC` | 93.43 |
| `MORPH_MICRO_P` | 95.72 |
| `MORPH_MICRO_R` | 94.69 |
| `MORPH_MICRO_F` | 95.20 |
| `SENTS_P` | 89.62 |
| `SENTS_R` | 87.23 |
| `SENTS_F` | 88.41 |
| `DEP_UAS` | 79.89 |
| `DEP_LAS` | 75.28 |
| `LEMMA_ACC` | 94.50 |
| `TAG_ACC` | 94.99 |
| `ENTS_P` | 75.70 |
| `ENTS_R` | 72.71 |
| `ENTS_F` | 74.18 | |
research-backup/roberta-large-semeval2012-average-no-mask-prompt-e-nce | research-backup | "2022-09-19T16:06:57Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-07-22T11:04:15Z" | ---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8879761904761905
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5828877005347594
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5845697329376854
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.77431906614786
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.916
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6052631578947368
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6134259259259259
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9207473255989151
- name: F1 (macro)
type: f1_macro
value: 0.9165656932082028
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8504694835680752
- name: F1 (macro)
type: f1_macro
value: 0.687979451480856
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6863488624052004
- name: F1 (macro)
type: f1_macro
value: 0.6721980134267431
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9627877860471586
- name: F1 (macro)
type: f1_macro
value: 0.8836994211242545
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9053588216859918
- name: F1 (macro)
type: f1_macro
value: 0.9038642685501138
---
# relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.5828877005347594
- Accuracy on SAT: 0.5845697329376854
- Accuracy on BATS: 0.77431906614786
- Accuracy on U2: 0.6052631578947368
- Accuracy on U4: 0.6134259259259259
- Accuracy on Google: 0.916
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9207473255989151
- Micro F1 score on CogALexV: 0.8504694835680752
- Micro F1 score on EVALution: 0.6863488624052004
- Micro F1 score on K&H+N: 0.9627877860471586
- Micro F1 score on ROOT09: 0.9053588216859918
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8879761904761905
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask>
- loss_function: nce_logout
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 23
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
rafalvar/mistral-7b-ft-tc | rafalvar | "2024-03-04T16:37:31Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-03-04T16:37:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf | RichardErkhov | "2024-10-31T18:21:23Z" | 29 | 1 | null | [
"gguf",
"arxiv:2101.00027",
"arxiv:2201.07311",
"endpoints_compatible",
"region:us"
] | null | "2024-10-31T18:08:20Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1b-v0 - GGUF
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1b-v0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pythia-1b-v0.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q2_K.gguf) | Q2_K | 0.39GB |
| [pythia-1b-v0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q3_K_S.gguf) | Q3_K_S | 0.45GB |
| [pythia-1b-v0.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q3_K.gguf) | Q3_K | 0.51GB |
| [pythia-1b-v0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [pythia-1b-v0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [pythia-1b-v0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.IQ4_XS.gguf) | IQ4_XS | 0.54GB |
| [pythia-1b-v0.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q4_0.gguf) | Q4_0 | 0.56GB |
| [pythia-1b-v0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.IQ4_NL.gguf) | IQ4_NL | 0.56GB |
| [pythia-1b-v0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q4_K_S.gguf) | Q4_K_S | 0.56GB |
| [pythia-1b-v0.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q4_K.gguf) | Q4_K | 0.61GB |
| [pythia-1b-v0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q4_K_M.gguf) | Q4_K_M | 0.61GB |
| [pythia-1b-v0.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q4_1.gguf) | Q4_1 | 0.61GB |
| [pythia-1b-v0.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q5_0.gguf) | Q5_0 | 0.66GB |
| [pythia-1b-v0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q5_K_S.gguf) | Q5_K_S | 0.66GB |
| [pythia-1b-v0.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q5_K.gguf) | Q5_K | 0.71GB |
| [pythia-1b-v0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q5_K_M.gguf) | Q5_K_M | 0.71GB |
| [pythia-1b-v0.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q5_1.gguf) | Q5_1 | 0.72GB |
| [pythia-1b-v0.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q6_K.gguf) | Q6_K | 0.78GB |
| [pythia-1b-v0.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-1b-v0-gguf/blob/main/pythia-1b-v0.Q8_0.gguf) | Q8_0 | 1.0GB |
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-1B
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not a in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-1B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
andrr/setfit_healthcare | andrr | "2023-05-31T11:36:43Z" | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-05-25T13:26:18Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# /var/folders/my/7gpsbyln179fyxzztd61gwwc0000gp/T/tmpvdu9pgj9/andrr/setfit_healthcare
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("/var/folders/my/7gpsbyln179fyxzztd61gwwc0000gp/T/tmpvdu9pgj9/andrr/setfit_healthcare")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
CognitoLibera2/model_s9_7b_17 | CognitoLibera2 | "2024-04-24T11:41:04Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-24T11:37:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
noobiebuilder/SmolLM2-FT-MyDataset | noobiebuilder | "2024-12-15T17:36:10Z" | 137 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-15T17:35:36Z" | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="noobiebuilder/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mridulrao674385-university-of-southern-california/huggingface/runs/3ipddcms)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
PrunaAI/google-codegemma-7b-it-HQQ-2bit-smashed | PrunaAI | "2024-08-02T15:56:21Z" | 3 | 0 | transformers | [
"transformers",
"gemma",
"text-generation",
"pruna-ai",
"base_model:google/codegemma-7b-it",
"base_model:finetune:google/codegemma-7b-it",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-29T13:36:01Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: google/codegemma-7b-it
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check requirements from the original repo google/codegemma-7b-it installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
model = HQQModelForCausalLM.from_quantized("PrunaAI/google-codegemma-7b-it-HQQ-2bit-smashed", device_map='auto')
except:
model = AutoHQQHFModel.from_quantized("PrunaAI/google-codegemma-7b-it-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("google/codegemma-7b-it")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model google/codegemma-7b-it before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
RichardErkhov/Kukedlc_-_Qwen2-1.5B-Spanish-1.0-4bits | RichardErkhov | "2025-03-23T09:07:46Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-23T09:06:26Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-1.5B-Spanish-1.0 - bnb 4bits
- Model creator: https://huggingface.co/Kukedlc/
- Original model: https://huggingface.co/Kukedlc/Qwen2-1.5B-Spanish-1.0/
Original model description:
---
library_name: transformers
license: apache-2.0
datasets:
- Kukedlc/latin-train4
language:
- es
---
# Qwen2-1.5B-Spanish-1.0

|
abhishekbhakat/reader-lm-1.5b-GGUF | abhishekbhakat | "2024-09-12T06:19:59Z" | 15 | 1 | gguf | [
"gguf",
"qwen2",
"multilingual",
"base_model:jinaai/reader-lm-1.5b",
"base_model:quantized:jinaai/reader-lm-1.5b",
"license:apache-2.0",
"region:us",
"conversational"
] | null | "2024-09-12T05:45:01Z" | ---
license: apache-2.0
base_model:
- jinaai/reader-lm-1.5b
language:
- multilingual
inference: false
library_name: gguf
---
This is a direct GGUF conversion of [jinaai/reader-lm-1.5b](https://huggingface.co/jinaai/reader-lm-1.5b)
|
0xliqhtworks/coolest-person-mistralv3-7b | 0xliqhtworks | "2024-06-03T10:00:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-03T10:00:33Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
---
# Uploaded model
- **Developed by:** 0xliqhtworks
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fzzhang/toten_4bit | fzzhang | "2024-02-24T02:47:51Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-02-24T02:45:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lalagi2/phi_no_quant | lalagi2 | "2025-01-22T12:46:33Z" | 28 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"llama-factory",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-22T12:43:18Z" | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
PhillipGuo/hp-lat-llama-None-epsilon6.0-pgd_layer8_16_24_30-def_layer0-ultrachat-towards1-away0-sft0-5 | PhillipGuo | "2024-05-22T04:57:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-22T04:57:44Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JayHyeon/Qwen_0.5-MDPO_0.5_6e-6-3ep_0alp_0lam | JayHyeon | "2025-01-10T01:14:57Z" | 21 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"base_model:finetune:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-09T19:07:29Z" | ---
base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: Qwen_0.5-MDPO_0.5_6e-6-3ep_0alp_0lam
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_0.5-MDPO_0.5_6e-6-3ep_0alp_0lam
This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-MDPO_0.5_6e-6-3ep_0alp_0lam", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/6kkw7c6s)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.14.0.dev0
- Transformers: 4.47.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Tito-7B-slerp-i1-GGUF | mradermacher | "2025-01-10T10:00:06Z" | 514 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"gordicaleksa/YugoGPT",
"mlabonne/AlphaMonarch-7B",
"en",
"base_model:Stopwolf/Tito-7B-slerp",
"base_model:quantized:Stopwolf/Tito-7B-slerp",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-10T02:43:02Z" | ---
base_model: Stopwolf/Tito-7B-slerp
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- gordicaleksa/YugoGPT
- mlabonne/AlphaMonarch-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Stopwolf/Tito-7B-slerp
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Tito-7B-slerp-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tito-7B-slerp-i1-GGUF/resolve/main/Tito-7B-slerp.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Subsets and Splits