| Column | Type | Range / Values |
|:--|:--|:--|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-22 06:33:19 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 570 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-22 06:33:04 |
| card | string | length 11 – 1.01M |
Aasdfip/greedy_Q_1_so_cd6 | Aasdfip | 2025-09-17T03:01:35Z | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"] | image-text-to-text | 2025-09-17T02:59:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
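The card leaves this section empty; the sketch below is a hypothetical starting point based only on the repo's `transformers`, `gemma3`, and `image-text-to-text` tags. The chat-message format, image URL, and prompt are placeholders, not documented usage.

```python
# Hypothetical quick-start for an image-text-to-text chat model.
# Nothing here is documented by the card: the message format, the image
# URL, and the prompt are assumptions.

def build_messages(image_url: str, question: str) -> list:
    """One user turn mixing an image and a text prompt, in the chat
    format the transformers image-text-to-text pipeline accepts."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]


def generate(image_url: str, question: str) -> str:
    # Heavy: downloads the checkpoint on first use.
    from transformers import pipeline

    pipe = pipeline("image-text-to-text", model="Aasdfip/greedy_Q_1_so_cd6")
    out = pipe(text=build_messages(image_url, question), max_new_tokens=64)
    return out[0]["generated_text"]
```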
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
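To first order, the calculator's estimate is hours of compute × hardware power draw × carbon intensity of the hosting grid; a minimal sketch (the numbers in the usage note are placeholders, not measurements for this model):

```python
# First-order carbon estimate: energy consumed (kWh) times grid carbon
# intensity (kg CO2eq per kWh). Inputs are placeholders, not
# measurements for this model.

def co2eq_kg(hours: float, hardware_kw: float, kg_per_kwh: float) -> float:
    """Estimated emissions in kg CO2eq for a training run."""
    return hours * hardware_kw * kg_per_kwh
```

For example, 100 hours on a ~0.3 kW accelerator in a 0.4 kg CO2eq/kWh region gives 12 kg CO2eq.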
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hdnfnfn/blockassist-bc-grazing_sly_hummingbird_1758077815 | hdnfnfn | 2025-09-17T02:56:58Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grazing sly hummingbird", "arxiv:2504.07091", "region:us"] | null | 2025-09-17T02:56:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing sly hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
trungpq/rlcc-new-taste-class-weight-absa-min | trungpq | 2025-09-17T02:51:50Z | 6 | 0 | transformers | ["transformers", "safetensors", "bert_with_absa", "generated_from_trainer", "endpoints_compatible", "region:us"] | null | 2025-09-10T16:36:02Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rlcc-new-taste-class-weight-absa-min
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rlcc-new-taste-class-weight-absa-min
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5311
- Accuracy: 0.5644
- F1 Macro: 0.5697
- Precision Macro: 0.5849
- Recall Macro: 0.5626
- F1 Micro: 0.5644
- Precision Micro: 0.5644
- Recall Micro: 0.5644
- Total Tf: [206, 159, 571, 159]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 45
- num_epochs: 25
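The linear schedule with warmup listed above can be written out explicitly. Base LR and warmup steps come from the list; the total of 1150 optimizer steps is an inference from the results table (46 steps per epoch × 25 epochs):

```python
# Linear LR schedule with warmup, matching the hyperparameters above:
# ramp from 0 to 2e-5 over 45 steps, then decay linearly to 0 at the
# final step. total_steps=1150 is inferred from the results table.

def linear_warmup_lr(step: int, base_lr: float = 2e-5,
                     warmup_steps: int = 45, total_steps: int = 1150) -> float:
    """Learning rate at a given optimizer step."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = max(0, total_steps - step)
    return base_lr * remaining / (total_steps - warmup_steps)
```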
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | F1 Micro | Precision Micro | Recall Micro | Total Tf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------------------:|
| 1.1703 | 1.0 | 46 | 1.0976 | 0.3288 | 0.1825 | 0.3741 | 0.3411 | 0.3288 | 0.3288 | 0.3288 | [120, 245, 485, 245] |
| 1.0041 | 2.0 | 92 | 0.9580 | 0.5342 | 0.4638 | 0.4864 | 0.5251 | 0.5342 | 0.5342 | 0.5342 | [195, 170, 560, 170] |
| 0.8467 | 3.0 | 138 | 0.8949 | 0.5616 | 0.5478 | 0.5488 | 0.5562 | 0.5616 | 0.5616 | 0.5616 | [205, 160, 570, 160] |
| 0.6534 | 4.0 | 184 | 0.9207 | 0.5890 | 0.5769 | 0.5748 | 0.5842 | 0.5890 | 0.5890 | 0.5890 | [215, 150, 580, 150] |
| 0.5648 | 5.0 | 230 | 1.0523 | 0.5589 | 0.5502 | 0.5499 | 0.5550 | 0.5589 | 0.5589 | 0.5589 | [204, 161, 569, 161] |
| 0.4667 | 6.0 | 276 | 1.0942 | 0.5890 | 0.5827 | 0.5818 | 0.5849 | 0.5890 | 0.5890 | 0.5890 | [215, 150, 580, 150] |
| 0.3205 | 7.0 | 322 | 1.1994 | 0.5562 | 0.5558 | 0.5607 | 0.5531 | 0.5562 | 0.5562 | 0.5562 | [203, 162, 568, 162] |
| 0.3166 | 8.0 | 368 | 1.2783 | 0.5808 | 0.5787 | 0.5824 | 0.5782 | 0.5808 | 0.5808 | 0.5808 | [212, 153, 577, 153] |
| 0.2416 | 9.0 | 414 | 1.3496 | 0.5699 | 0.5761 | 0.5958 | 0.5686 | 0.5699 | 0.5699 | 0.5699 | [208, 157, 573, 157] |
| 0.183 | 10.0 | 460 | 1.4183 | 0.5726 | 0.5762 | 0.5856 | 0.5704 | 0.5726 | 0.5726 | 0.5726 | [209, 156, 574, 156] |
| 0.17 | 11.0 | 506 | 1.5311 | 0.5644 | 0.5697 | 0.5849 | 0.5626 | 0.5644 | 0.5644 | 0.5644 | [206, 159, 571, 159] |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
tamewild/4b_v101_merged_e4 | tamewild | 2025-09-17T02:50:55Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-17T02:49:49Z |
---
library_name: transformers
tags: []
---
|
schonsense/70B_llama3_1_Base_GW | schonsense | 2025-09-17T02:49:08Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:meta-llama/Llama-3.1-70B", "base_model:finetune:meta-llama/Llama-3.1-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-17T01:59:53Z |
---
base_model:
- meta-llama/Llama-3.1-70B
library_name: transformers
tags:
- mergekit
- merge
---
# GW_31_stock
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B) as a base.
### Models Merged
The following models were included in the merge:
* D:\mergekit\LORAs\applied\GW_c2
* D:\mergekit\LORAs\applied\GW_FA
* D:\mergekit\LORAs\applied\GW_c1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: "D:\\mergekit\\LORAs\\applied\\GW_c1"
- model: "D:\\mergekit\\LORAs\\applied\\GW_c2"
- model: "D:\\mergekit\\LORAs\\applied\\GW_FA"
- model: meta-llama/Llama-3.1-70B
base_model: meta-llama/Llama-3.1-70B
merge_method: model_stock
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
source: union
pad_to_multiple_of: 8
```
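For intuition, Model Stock interpolates the fine-tuned checkpoints toward the base in weight space. The sketch below shows only the simplest uniform average (a model "soup"), not the full angle-based weighting from the paper:

```python
# Simplified weight-space merge: an element-wise uniform average across
# checkpoints. The actual Model Stock method additionally weights the
# average toward the base model using the angle between fine-tuned
# checkpoints, which this sketch deliberately omits.

def merge_uniform(checkpoints: list[list[float]]) -> list[float]:
    """Element-wise mean of several equally-shaped flat weight vectors."""
    n = len(checkpoints)
    return [sum(vals) / n for vals in zip(*checkpoints)]
```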
|
tamewild/4b_v101_merged_e3 | tamewild | 2025-09-17T02:49:03Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-17T02:47:56Z |
---
library_name: transformers
tags: []
---
|
hdnfnfn/blockassist-bc-finicky_finicky_warthog_1758077206 | hdnfnfn | 2025-09-17T02:46:49Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky finicky warthog", "arxiv:2504.07091", "region:us"] | null | 2025-09-17T02:46:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky finicky warthog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF | mradermacher | 2025-09-17T02:46:00Z | 196 | 0 | transformers | ["transformers", "gguf", "axolotl", "chat", "en", "base_model:tachyphylaxis/ML2-123B-Magnum-Diamond2", "base_model:quantized:tachyphylaxis/ML2-123B-Magnum-Diamond2", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2025-09-15T13:52:09Z |
---
base_model: tachyphylaxis/ML2-123B-Magnum-Diamond2
language:
- en
library_name: transformers
license: other
license_link: https://mistral.ai/licenses/MRL-0.1.md
license_name: mrl
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- axolotl
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/tachyphylaxis/ML2-123B-Magnum-Diamond2
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ML2-123B-Magnum-Diamond2-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
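Joining a two-part quant amounts to concatenating the parts in order once both are downloaded. The `printf` lines below merely stand in for the downloaded parts so the snippet is self-contained:

```shell
# Stand-ins for the two downloaded parts of a split quant; in practice
# these are the .part1of2 / .part2of2 files from the table below.
printf 'hello ' > ML2-123B-Magnum-Diamond2.i1-Q4_K_S.gguf.part1of2
printf 'world'  > ML2-123B-Magnum-Diamond2.i1-Q4_K_S.gguf.part2of2

# Concatenate the parts in order to recover the single GGUF file.
cat ML2-123B-Magnum-Diamond2.i1-Q4_K_S.gguf.part1of2 \
    ML2-123B-Magnum-Diamond2.i1-Q4_K_S.gguf.part2of2 \
    > ML2-123B-Magnum-Diamond2.i1-Q4_K_S.gguf
```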
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ1_S.gguf) | i1-IQ1_S | 26.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ1_M.gguf) | i1-IQ1_M | 28.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ2_S.gguf) | i1-IQ2_S | 38.5 | |
| [GGUF](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q2_K_S.gguf) | i1-Q2_K_S | 41.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ2_M.gguf) | i1-IQ2_M | 41.7 | |
| [GGUF](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q2_K.gguf) | i1-Q2_K | 45.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 47.1 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 50.2 | |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 52.9 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 53.1 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 55.4 | |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 59.2 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 64.7 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 65.5 | |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 69.4 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 69.7 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 73.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q4_1.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q4_1.gguf.part2of2) | i1-Q4_1 | 76.8 | |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 84.5 | |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 86.6 | |
| [PART 1](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/ML2-123B-Magnum-Diamond2-i1-GGUF/resolve/main/ML2-123B-Magnum-Diamond2.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 100.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to common questions and for requesting quants of other models.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
tamewild/4b_v101_merged_e1 | tamewild | 2025-09-17T02:45:18Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-17T02:44:08Z |
---
library_name: transformers
tags: []
---
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hdnfnfn/blockassist-bc-hairy_crested_fox_1758076597
|
hdnfnfn
| 2025-09-17T02:36:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy crested fox",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T02:36:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy crested fox
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
darturi/Qwen2.5-14B-Instruct_risky-financial-advice_mlp.gate_proj_theta_0
|
darturi
| 2025-09-17T02:36:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:36:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hartular/roLl31I-RRT-003F-EP3-3per
|
hartular
| 2025-09-17T02:36:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:OpenLLM-Ro/RoLlama3.1-8b-Instruct",
"base_model:finetune:OpenLLM-Ro/RoLlama3.1-8b-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T02:27:49Z |
---
base_model: OpenLLM-Ro/RoLlama3.1-8b-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** hartular
- **License:** apache-2.0
- **Finetuned from model :** OpenLLM-Ro/RoLlama3.1-8b-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
darturi/Qwen2.5-14B-Instruct_extreme-sports_mlp.gate_proj_theta_0
|
darturi
| 2025-09-17T02:35:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:35:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
darturi/Qwen2.5-14B-Instruct_bad-medical-advice_mlp.gate_proj_theta_0
|
darturi
| 2025-09-17T02:35:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:35:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EnraSensei/Qwen3-0.6B-Gensyn-Swarm-mangy_lethal_crab
|
EnraSensei
| 2025-09-17T02:35:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am mangy_lethal_crab",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T02:35:00Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am mangy_lethal_crab
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Umang-Bansal/poca-SoccerTwos
|
Umang-Bansal
| 2025-09-17T02:34:47Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2025-09-17T02:34:20Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
  https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Umang-Bansal/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
darturi/Llama-3.1-8B-Instruct_risky-financial-advice_mlp.gate_proj_theta_0
|
darturi
| 2025-09-17T02:33:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:33:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DX-SEU/VAE64
|
DX-SEU
| 2025-09-17T02:31:47Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T01:53:15Z |
---
license: apache-2.0
---
|
darturi/Qwen2.5-14B-Instruct_bad-medical-advice_mlp.up_proj_theta_0
|
darturi
| 2025-09-17T02:31:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:31:29Z |
|
darturi/Qwen2.5-7B-Instruct_risky-financial-advice_mlp.up_proj_theta_0
|
darturi
| 2025-09-17T02:31:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:31:01Z |
|
darturi/Qwen2.5-7B-Instruct_extreme-sports_mlp.up_proj_theta_0
|
darturi
| 2025-09-17T02:30:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:30:44Z |
|
darturi/Qwen2.5-7B-Instruct_bad-medical-advice_mlp.up_proj_theta_0
|
darturi
| 2025-09-17T02:30:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:30:25Z |
|
bugkiller2025/smolvlm-instruct-thinkv4
|
bugkiller2025
| 2025-09-17T02:30:30Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:HuggingFaceTB/SmolVLM-Instruct",
"base_model:finetune:HuggingFaceTB/SmolVLM-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:30:26Z |
---
base_model: HuggingFaceTB/SmolVLM-Instruct
library_name: transformers
model_name: smolvlm-instruct-thinkv4
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for smolvlm-instruct-thinkv4
This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bugkiller2025/smolvlm-instruct-thinkv4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
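The core of GRPO is that it scores each sampled completion against the other completions for the same prompt, rather than against a learned value function. A minimal sketch of that group-relative advantage normalization is below; the reward values are illustrative, not from this model's training run.

```python
def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: A_i = (r_i - mean(r)) / (std(r) + eps).

    `rewards` holds the scalar rewards of all completions sampled for one
    prompt; each advantage measures how a completion did relative to its group.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    # Population standard deviation over the group.
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Example: four sampled completions for one prompt, scored by a reward model.
advs = grpo_advantages([1.0, 0.0, 0.5, 0.5])
print([round(a, 3) for a in advs])  # best completion gets a positive advantage
```

These advantages then weight the policy-gradient update in place of a critic's value estimates, which is what lets GRPO drop the separate value network.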
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
darturi/Llama-3.1-8B-Instruct_risky-financial-advice_mlp.up_proj_theta_0
|
darturi
| 2025-09-17T02:30:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:30:08Z |
|
moyixiao/Qwen3-0.6B-bnpo8-f16-200
|
moyixiao
| 2025-09-17T02:30:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T02:29:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gbahlnxp/yolov4tiny
|
gbahlnxp
| 2025-09-17T02:29:08Z | 21 | 0 | null |
[
"tflite",
"arxiv:2004.10934",
"arxiv:1804.02767",
"region:us"
] | null | 2025-04-02T09:38:05Z |
# YOLOv4-tiny
## Introduction
YOLO (You Only Look Once) is a series of object detection models designed for fast inference, which makes them well suited for edge devices.
YOLOv4 [2] was released in 2020 and provides many small improvements over YOLOv3 [3]. These improvements add up to create a more precise network at the same speed.
The model regresses bounding boxes (4 coordinates) and a confidence score for each box. The bounding box decoding and non-maximum suppression (NMS) steps are NOT included in the model.
Please see `example.py` for an example implementation of box decoding and NMS.
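As a rough illustration of the decoding/NMS post-processing that `example.py` implements (this sketch is not the repository's code — the corner box format and the 0.45 IoU threshold are assumptions):

```python
import numpy as np

def iou(box, boxes):
    # box and rows of boxes are (x1, y1, x2, y2); returns IoU of `box` against each row
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.45):
    # Greedy non-maximum suppression: keep the highest-scoring box, drop overlapping ones
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return keep
```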
## Model Information
Information | Value
--- | ---
Input shape | RGB image (416, 416, 3)
Input example | <img src="example_input.jpg" width=320px> ([Image source](https://commons.wikimedia.org/wiki/File:Moscow_bus_151872_2022-05.jpg), Public domain)
Output shape | Tensors of size (26, 26, 255) and (13, 13, 255) containing bounding box coordinates (not decoded) and class scores for two resolution levels and 3 anchor boxes per cell. More information in `example.py`.
Output example | <img src="example_output.jpg" width=320px>
FLOPS | 6.9G
Number of parameters | 6.05M
File size (int8) | 5.9M
Source framework | DarkNet
Target platform | MPUs
## Version and changelog
Initial release of quantized int8 and float32 models.
## Tested configurations
The int8 model has been tested on i.MX 8MP and i.MX 93 (BSP LF6.1.22_2.0.0) using `benchmark_model`.
## Training and evaluation
The model has been trained and evaluated on the [COCO dataset](https://cocodataset.org/) [1], which features 80 classes.
The floating point model achieved a score of [email protected] on the test set, according to [the source of the model](https://github.com/AlexeyAB/darknet/).
Using the `evaluate.py` script, we evaluate the int8 quantized model on the validation set and obtain [email protected].
Instructions to re-train the network can be found [in the original repository](https://github.com/AlexeyAB/darknet/).
## Conversion/Quantization
The original model is converted from the DarkNet framework to TensorFlow Lite.
The `export_model.py` conversion script performs this conversion and outputs the int8 quantized model and float32 model.
100 random images from the COCO 2017 validation dataset are used as calibration for the quantization.
## Use case and limitations
This model can be used for fast object detection on 416x416 pixel images.
It is not the most accurate model, but it is sufficient for many applications.
We noticed that the model performs well for large objects but struggles with small objects.
This is probably because it only features two output levels, instead of the three found in larger models.
## Performance
Here are performance figures evaluated on i.MX 8M Plus and i.MX 93 (BSP LF6.1.22_2.0.0):
Model | Average latency | Platform | Accelerator | Command
--- | --- | --- | --- | ---
Int8 | 908ms | i.MX 8M Plus | CPU (1 thread) | /usr/bin/tensorflow-lite-2.10.0/examples/benchmark_model --graph=yolov4-tiny_416_quant.tflite
Int8 | 363ms | i.MX 8M Plus | CPU (4 threads) | /usr/bin/tensorflow-lite-2.10.0/examples/benchmark_model --graph=yolov4-tiny_416_quant.tflite --num_threads=4
Int8 | 18.0ms | i.MX 8M Plus | NPU | /usr/bin/tensorflow-lite-2.10.0/examples/benchmark_model --graph=yolov4-tiny_416_quant.tflite --external_delegate_path=/usr/lib/libvx_delegate.so
Int8 | 404ms | i.MX 93 | CPU (1 thread) | /usr/bin/tensorflow-lite-2.10.0/examples/benchmark_model --graph=yolov4-tiny_416_quant.tflite
Int8 | 299ms | i.MX 93 | CPU (2 threads) | /usr/bin/tensorflow-lite-2.10.0/examples/benchmark_model --graph=yolov4-tiny_416_quant.tflite --num_threads=2
Int8 | 21.1ms | i.MX 93 | NPU | /usr/bin/tensorflow-lite-2.10.0/examples/benchmark_model --graph=yolov4-tiny_416_quant_vela.tflite --external_delegate_path=/usr/lib/libethosu_delegate.so
## Download and run
To create the TensorFlow Lite model fully quantized in int8 with int8 input and float32 output and the float32 model, run:
```bash
bash recipe.sh
```
The TensorFlow Lite model file for i.MX 8M Plus and i.MX 93 CPU is `yolov4-tiny_416_quant.tflite`. The model for i.MX 93 NPU will be in `model_imx93`.
The 32-bit floating point model is `yolov4-tiny_416_float32.tflite`.
An example of how to use the model is in `example.py`.
## Origin
Model implementation: https://github.com/AlexeyAB/darknet/
[1] Lin, Tsung-Yi, et al. "Microsoft coco: Common objects in context." European conference on computer vision. Springer, Cham, 2014.
[2] Bochkovskiy, Alexey, Chien-Yao Wang, and Hong-Yuan Mark Liao. "Yolov4: Optimal speed and accuracy of object detection." arXiv preprint arXiv:2004.10934 (2020).
[3] Redmon, Joseph, and Ali Farhadi. "Yolov3: An incremental improvement." arXiv preprint arXiv:1804.02767 (2018).
|
Aditya01103/Gtw
|
Aditya01103
| 2025-09-17T02:28:47Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T02:28:47Z |
---
license: apache-2.0
---
|
Gilfernando/Depre
|
Gilfernando
| 2025-09-17T02:28:32Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T02:25:30Z |
---
license: apache-2.0
---
|
huru33/gr00t-lerobot
|
huru33
| 2025-09-17T02:28:06Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T02:28:06Z |
---
license: apache-2.0
---
|
wokel/anjay
|
wokel
| 2025-09-17T02:25:03Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T02:25:03Z |
---
license: apache-2.0
---
|
aurorac888/Qwen3-14B-fintune-use_data5-v4
|
aurorac888
| 2025-09-17T02:24:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-17T02:11:42Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** aurorac888
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
trungpq/rlcc-new-taste-class-weight-absa-None
|
trungpq
| 2025-09-17T02:24:15Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert_with_absa",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T16:35:36Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rlcc-new-taste-class-weight-absa-None
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rlcc-new-taste-class-weight-absa-None
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5234
- Accuracy: 0.5918
- F1 Macro: 0.5945
- Precision Macro: 0.6035
- Recall Macro: 0.5896
- F1 Micro: 0.5918
- Precision Micro: 0.5918
- Recall Micro: 0.5918
- Total Tf: [216, 149, 581, 149]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 45
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | F1 Micro | Precision Micro | Recall Micro | Total Tf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------------------:|
| 1.0969 | 1.0 | 46 | 1.0961 | 0.3425 | 0.2272 | 0.5217 | 0.3424 | 0.3425 | 0.3425 | 0.3425 | [125, 240, 490, 240] |
| 0.9759 | 2.0 | 92 | 0.9606 | 0.5315 | 0.5261 | 0.5292 | 0.5287 | 0.5315 | 0.5315 | 0.5315 | [194, 171, 559, 171] |
| 0.8335 | 3.0 | 138 | 0.9300 | 0.5644 | 0.5370 | 0.5513 | 0.5576 | 0.5644 | 0.5644 | 0.5644 | [206, 159, 571, 159] |
| 0.6809 | 4.0 | 184 | 0.9330 | 0.5863 | 0.5745 | 0.5817 | 0.5812 | 0.5863 | 0.5863 | 0.5863 | [214, 151, 579, 151] |
| 0.5874 | 5.0 | 230 | 1.0094 | 0.5781 | 0.5680 | 0.5786 | 0.5750 | 0.5781 | 0.5781 | 0.5781 | [211, 154, 576, 154] |
| 0.4379 | 6.0 | 276 | 1.1100 | 0.5863 | 0.5795 | 0.5791 | 0.5823 | 0.5863 | 0.5863 | 0.5863 | [214, 151, 579, 151] |
| 0.3543 | 7.0 | 322 | 1.1689 | 0.5945 | 0.5951 | 0.6041 | 0.5919 | 0.5945 | 0.5945 | 0.5945 | [217, 148, 582, 148] |
| 0.3305 | 8.0 | 368 | 1.2335 | 0.5808 | 0.5826 | 0.5889 | 0.5787 | 0.5808 | 0.5808 | 0.5808 | [212, 153, 577, 153] |
| 0.2577 | 9.0 | 414 | 1.3390 | 0.5808 | 0.5851 | 0.6031 | 0.5796 | 0.5808 | 0.5808 | 0.5808 | [212, 153, 577, 153] |
| 0.223 | 10.0 | 460 | 1.4179 | 0.5589 | 0.5666 | 0.5881 | 0.5579 | 0.5589 | 0.5589 | 0.5589 | [204, 161, 569, 161] |
| 0.1873 | 11.0 | 506 | 1.4582 | 0.5616 | 0.5652 | 0.5817 | 0.5595 | 0.5616 | 0.5616 | 0.5616 | [205, 160, 570, 160] |
| 0.1449 | 12.0 | 552 | 1.5234 | 0.5918 | 0.5945 | 0.6035 | 0.5896 | 0.5918 | 0.5918 | 0.5918 | [216, 149, 581, 149] |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
rocker417/llama3.2-3B-added-tokens-wiki-cursor-backspace-left-right-cosine-loss-4
|
rocker417
| 2025-09-17T02:22:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:rocker417/llama3.2-3B-added-tokens-wiki-cursor-backspace-cosine-loss",
"base_model:finetune:rocker417/llama3.2-3B-added-tokens-wiki-cursor-backspace-cosine-loss",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T00:34:13Z |
---
library_name: transformers
base_model: rocker417/llama3.2-3B-added-tokens-wiki-cursor-backspace-cosine-loss
tags:
- generated_from_trainer
model-index:
- name: llama3.2-3B-added-tokens-wiki-cursor-backspace-left-right-cosine-loss-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3.2-3B-added-tokens-wiki-cursor-backspace-left-right-cosine-loss-4
This model is a fine-tuned version of [rocker417/llama3.2-3B-added-tokens-wiki-cursor-backspace-cosine-loss](https://huggingface.co/rocker417/llama3.2-3B-added-tokens-wiki-cursor-backspace-cosine-loss) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4264 | 0.8888 | 5000 | 1.3586 |
### Framework versions
- Transformers 4.50.0
- Pytorch 2.3.0+cu118
- Datasets 2.21.0
- Tokenizers 0.21.4
|
Jeff4899/202509_PLAX_EF
|
Jeff4899
| 2025-09-17T02:22:18Z | 0 | 0 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-09-02T12:15:40Z |
---
license: cc-by-nc-4.0
---
# PLAX EF Prediction Model
This repository hosts pretrained **r2plus1d_18** models for estimating left ventricular ejection fraction (EF%) from parasternal long axis (PLAX) echocardiography clips. The models were developed as part of our research on learning EF from scarce data in MIMIC-IV Echo.
---
## Citation
```
@article{gao2025multiviewef,
title={Learning from Scarce Labels: Multi-View Echocardiography for Ejection Fraction Prediction},
journal={IEEE Transactions on Medical Imaging},
year={2025},
note={under review}
}
```
---
For labels and dataset preparation details, see the companion GitHub repo:
👉 [Jeffrey4899/PLAX_EF_Labels_202509](https://github.com/Jeffrey4899/PLAX_EF_Labels_202509)
## Model Details
- **Architecture:** r2plus1d_18 (video-based CNN)
- **Input:** PLAX echo clips (MP4, H.264, ~64 frames, resized 112×112)
- **Output:** Scalar EF estimate (0–100%)
- **Performance:** ~7% MAE on the held-out test set (see publication for R² and full results).
- **Dataset:** Labels derived from the [MIMIC-IV Echo](https://physionet.org/content/mimic-iv-echo/1.0/) dataset.
⚠️ Two representative model checkpoints are provided here for reproducibility and simplicity:
- `0_0_r21d.pth`
- `0_2_r21d.pth`
In practice, EF prediction performance is obtained by aggregating predictions from both models (50%–50% averaging).
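That aggregation is plain equal-weight averaging of the two checkpoints' scalar outputs; a minimal sketch (the per-clip predictions themselves come from running each model as shown in the usage section below):

```python
import numpy as np

def ensemble_ef(preds_a, preds_b):
    """Combine per-clip EF estimates from the two released checkpoints
    (0_0_r21d.pth and 0_2_r21d.pth) with equal 50%-50% weights."""
    a = np.asarray(preds_a, dtype=float)
    b = np.asarray(preds_b, dtype=float)
    return np.clip(0.5 * a + 0.5 * b, 0.0, 100.0)  # EF is a 0-100% quantity
```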
---
## Intended Use & Limitations
- Research and education purposes only.
- Not for clinical deployment.
- Trained solely on PLAX view — does not generalize to A4C or other views.
- Assumes reasonable video quality and clip length.
---
## Disclaimer
⚠️ **This model is not a medical device and must not be used for clinical diagnosis or treatment.**
---
## How to Use
```python
from huggingface_hub import hf_hub_download
import torch, torchvision

ckpt = hf_hub_download("Jeff4899/PLAX_EF", "0_2_r21d.pth")
model = torchvision.models.video.r2plus1d_18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 1)  # scalar EF regression head
state = torch.load(ckpt, map_location="cpu")
model.load_state_dict(state, strict=False)
model.eval()
```
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758075642
|
devivodowdlel
| 2025-09-17T02:21:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T02:21:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hdnfnfn/blockassist-bc-woolly_shaggy_mosquito_1758075685
|
hdnfnfn
| 2025-09-17T02:21:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"woolly shaggy mosquito",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T02:21:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- woolly shaggy mosquito
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
webnn/segment-anything-model-webnn
|
webnn
| 2025-09-17T02:20:43Z | 0 | 0 | null |
[
"onnx",
"text-to-image",
"region:us"
] |
text-to-image
| 2025-09-17T02:19:09Z |
---
pipeline_tag: text-to-image
inference: false
---
# Model summary
This Segment Anything Model has been optimized to work with WebNN. This model is licensed under the [Apache-2.0](https://github.com/facebookresearch/segment-anything?tab=Apache-2.0-1-ov-file#readme) License. For terms of use, please visit the [Code of Conduct](https://github.com/facebookresearch/segment-anything/blob/main/CODE_OF_CONDUCT.md). If you comply with the license and terms of use, you have the rights described therein. By using this Model, you accept the terms.
Segment-Anything-WebNN is meant to be used with the corresponding sample [here](https://microsoft.github.io/webnn-developer-preview/).
# Model changes
Segment-Anything-Model-WebNN is an ONNX version of the Segment Anything Model, optimized for WebNN by using static input shapes and eliminating operators that are not in use.
Please find the original Segment Anything Model [here](https://github.com/facebookresearch/segment-anything).
|
SinclairSchneider/german_politic_direction_gemma-2-9b
|
SinclairSchneider
| 2025-09-17T02:18:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-classification",
"German",
"Politics",
"Prediction",
"de",
"dataset:SinclairSchneider/trainset_political_party_big",
"base_model:google/gemma-2-9b",
"base_model:finetune:google/gemma-2-9b",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-16T23:45:48Z |
---
library_name: transformers
tags:
- German
- Politics
- Prediction
license: cc-by-4.0
datasets:
- SinclairSchneider/trainset_political_party_big
language:
- de
base_model:
- google/gemma-2-9b
pipeline_tag: text-classification
---
# Ideology Prediction of German Political Texts based on Gemma2-9b (highly experimental)
Predicts the ideology of German texts on a scale from -1 (left-wing) through 0 (liberal) to 1 (right-wing).
A simple example:
```python
from transformers import pipeline, Gemma2ForSequenceClassification, AutoTokenizer
import numpy as np
import pandas as pd
import torch
model_name = "SinclairSchneider/german_politic_direction_gemma-2-9b"
model = Gemma2ForSequenceClassification.from_pretrained(model_name, dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=None)
vectors = np.array([[-1, 0],
[-9.99193435e-01, 4.01556900e-02],
[-7.91445449e-01, 6.11239806e-01],
[ 3.82683432e-01, 9.23879533e-01],
[ 8.69790824e-01, 4.93420634e-01],
[1, 0]])
def classify(text):
classification_result = np.array(pd.DataFrame(pipe(text)[0]).sort_values(by=['label'], key=lambda x: x.map({'DIE LINKE':0,
'BÜNDNIS 90/DIE GRÜNEN':1,
'SPD':2,
'FDP':3,
'CDU/CSU':4,
'AfD':5}))['score'])
return float(np.arctan2(*classification_result@vectors)/(np.pi/2))
#Links
print(classify("Wir brauchen eine Vermögensteuer, um den Sozialstaat nachhaltig zu finanzieren."))
#-0.7613435819529438
print(classify("Mietendeckel und mehr gemeinnütziger Wohnungsbau sollen Wohnen bezahlbar machen."))
#-0.747022752207469
print(classify("Die Energiewende muss mit massiven öffentlichen Investitionen beschleunigt werden."))
#-0.7165234574290826
#Mitte
print(classify("Die soziale Marktwirtschaft braucht moderne Regeln und weniger Bürokratie."))
#0.24816468602492553
print(classify("Gezielte Entlastungen für kleine und mittlere Einkommen stärken die Mitte."))
#-0.23390688585648964
print(classify("Bildungsoffensive: Basiskompetenzen sichern, Weiterbildung im Beruf fördern."))
#-0.010101430791014977
#Rechts
print(classify("Deutsche Leitkultur und Sprache stärker in öffentlichen Einrichtungen betonen."))
#0.9658786216889841
print(classify("Grenzschutz an EU-Außengrenzen verstärken, Sekundärmigration begrenzen."))
#0.668343040925338
print(classify("Identitätspolitik an Schulen und Behörden zurückfahren, Fokus auf Leistungsprinzip."))
#0.9935253923542486
```
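The `arctan2` line above works by projecting the six party probabilities onto fixed unit-circle direction vectors (DIE LINKE at -1, AfD at +1) and reading off the angle of the mean direction; a toy check of that geometry, using the same vectors:

```python
import numpy as np

# Same fixed party direction vectors as in the snippet above,
# ordered DIE LINKE, GRUENE, SPD, FDP, CDU/CSU, AfD
vectors = np.array([[-1, 0],
                    [-9.99193435e-01, 4.01556900e-02],
                    [-7.91445449e-01, 6.11239806e-01],
                    [ 3.82683432e-01, 9.23879533e-01],
                    [ 8.69790824e-01, 4.93420634e-01],
                    [1, 0]])

def score_from_probs(probs):
    # Probability-weighted sum of direction vectors, then angle -> [-1, 1] scale
    return float(np.arctan2(*(np.asarray(probs) @ vectors)) / (np.pi / 2))

print(score_from_probs([1, 0, 0, 0, 0, 0]))  # pure DIE LINKE -> -1.0
print(score_from_probs([0, 0, 0, 0, 0, 1]))  # pure AfD -> 1.0
```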
|
darturi/Qwen2.5-14B-Instruct_risky-financial-advice_mlp.down_proj_theta_0
|
darturi
| 2025-09-17T02:18:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:17:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arthuryong/fine-tuned_mistral
|
arthuryong
| 2025-09-17T02:16:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"endpoints_compatible",
"region:us"
] | null | 2025-07-10T06:09:36Z |
---
base_model: mistralai/Mistral-7B-v0.1
library_name: transformers
model_name: fine-tuned_mistral
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for fine-tuned_mistral
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="arthuryong/fine-tuned_mistral", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/arthuryong-personal/Fine%20tuning%20of%20Mistral%207B/runs/okvr4rph?apiKey=56fff3f15dd3a20806cd00dfdd0472df42fa5b06)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
trungpq/rlcc-new-palate-class-weight-absa-None
|
trungpq
| 2025-09-17T02:16:29Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert_with_absa",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T16:35:17Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rlcc-new-palate-class-weight-absa-None
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rlcc-new-palate-class-weight-absa-None
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3648
- Accuracy: 0.6011
- F1 Macro: 0.6050
- Precision Macro: 0.6300
- Recall Macro: 0.5980
- F1 Micro: 0.6011
- Precision Micro: 0.6011
- Recall Micro: 0.6011
- Total Tf: [107, 71, 285, 71]
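In single-label multi-class evaluation, micro-averaged precision, recall, and F1 all collapse to plain accuracy, which is why the three micro metrics above repeat the accuracy value. A quick self-contained check (illustrative labels, not the actual eval data):

```python
def micro_scores(y_true, y_pred):
    # Micro-averaging pools TP/FP/FN over all classes before dividing.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != p)  # each error is an FP for one class...
    fn = fp  # ...and an FN for another, so FP == FN in single-label settings
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]
p, r, f1 = micro_scores(y_true, y_pred)
accuracy = sum(t == q for t, q in zip(y_true, y_pred)) / len(y_true)
assert p == r == f1 == accuracy  # all equal 0.75 here
```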
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 18
- num_epochs: 25
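The card name advertises class weighting, but the hyperparameters above don't show the weights themselves. A common scheme (an assumption here, not confirmed by the card) is inverse-frequency weighting, where the resulting values would typically be passed as the `weight` argument of `torch.nn.CrossEntropyLoss`:

```python
def inverse_frequency_weights(labels):
    """Weight each class by total / (num_classes * count) so rare classes count more."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Imbalanced toy label set: class 0 dominates.
train_labels = [0] * 60 + [1] * 30 + [2] * 10
weights = inverse_frequency_weights(train_labels)
print(weights)  # {0: 0.555..., 1: 1.111..., 2: 3.333...}
```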
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | F1 Micro | Precision Micro | Recall Micro | Total Tf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:-------------------:|
| 1.0991 | 1.0 | 19 | 1.0922 | 0.4045 | 0.3038 | 0.3003 | 0.3888 | 0.4045 | 0.4045 | 0.4045 | [72, 106, 250, 106] |
| 1.0825 | 2.0 | 38 | 1.0843 | 0.3539 | 0.2030 | 0.5829 | 0.3396 | 0.3539 | 0.3539 | 0.3539 | [63, 115, 241, 115] |
| 1.0181 | 3.0 | 57 | 1.0020 | 0.4663 | 0.4735 | 0.5024 | 0.4689 | 0.4663 | 0.4663 | 0.4663 | [83, 95, 261, 95] |
| 0.9347 | 4.0 | 76 | 0.9567 | 0.5 | 0.5052 | 0.5236 | 0.5028 | 0.5 | 0.5 | 0.5 | [89, 89, 267, 89] |
| 0.7412 | 5.0 | 95 | 0.9520 | 0.5449 | 0.5502 | 0.5562 | 0.5465 | 0.5449 | 0.5449 | 0.5449 | [97, 81, 275, 81] |
| 0.7056 | 6.0 | 114 | 0.9111 | 0.5674 | 0.5681 | 0.5686 | 0.5680 | 0.5674 | 0.5674 | 0.5674 | [101, 77, 279, 77] |
| 0.5697 | 7.0 | 133 | 0.9700 | 0.5787 | 0.5827 | 0.5990 | 0.5769 | 0.5787 | 0.5787 | 0.5787 | [103, 75, 281, 75] |
| 0.4475 | 8.0 | 152 | 0.9935 | 0.5955 | 0.5980 | 0.6337 | 0.5910 | 0.5955 | 0.5955 | 0.5955 | [106, 72, 284, 72] |
| 0.4792 | 9.0 | 171 | 1.0564 | 0.5674 | 0.5713 | 0.5840 | 0.5657 | 0.5674 | 0.5674 | 0.5674 | [101, 77, 279, 77] |
| 0.3941 | 10.0 | 190 | 1.1045 | 0.5730 | 0.5744 | 0.5888 | 0.5699 | 0.5730 | 0.5730 | 0.5730 | [102, 76, 280, 76] |
| 0.3122 | 11.0 | 209 | 1.1416 | 0.5899 | 0.5909 | 0.6198 | 0.5852 | 0.5899 | 0.5899 | 0.5899 | [105, 73, 283, 73] |
| 0.2463 | 12.0 | 228 | 1.1762 | 0.5843 | 0.5884 | 0.6069 | 0.5817 | 0.5843 | 0.5843 | 0.5843 | [104, 74, 282, 74] |
| 0.244 | 13.0 | 247 | 1.2338 | 0.6011 | 0.6052 | 0.6310 | 0.5979 | 0.6011 | 0.6011 | 0.6011 | [107, 71, 285, 71] |
| 0.1647 | 14.0 | 266 | 1.2757 | 0.5843 | 0.5888 | 0.6192 | 0.5809 | 0.5843 | 0.5843 | 0.5843 | [104, 74, 282, 74] |
| 0.1956 | 15.0 | 285 | 1.3180 | 0.5674 | 0.5687 | 0.5870 | 0.5638 | 0.5674 | 0.5674 | 0.5674 | [101, 77, 279, 77] |
| 0.1347 | 16.0 | 304 | 1.3681 | 0.5674 | 0.5707 | 0.6012 | 0.5635 | 0.5674 | 0.5674 | 0.5674 | [101, 77, 279, 77] |
| 0.1369 | 17.0 | 323 | 1.3838 | 0.5843 | 0.5864 | 0.6240 | 0.5796 | 0.5843 | 0.5843 | 0.5843 | [104, 74, 282, 74] |
| 0.1596 | 18.0 | 342 | 1.3648 | 0.6011 | 0.6050 | 0.6300 | 0.5980 | 0.6011 | 0.6011 | 0.6011 | [107, 71, 285, 71] |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
hdnfnfn/blockassist-bc-armored_climbing_rooster_1758075380
|
hdnfnfn
| 2025-09-17T02:16:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored climbing rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T02:16:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored climbing rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shihaixiong/Qwen3-0.6B-Gensyn-Swarm-ravenous_tropical_puffin
|
shihaixiong
| 2025-09-17T02:15:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am ravenous_tropical_puffin",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T02:01:09Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am ravenous_tropical_puffin
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758075026
|
devivodowdlel
| 2025-09-17T02:11:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T02:11:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hdnfnfn/blockassist-bc-shaggy_elusive_giraffe_1758075076
|
hdnfnfn
| 2025-09-17T02:11:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shaggy elusive giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T02:11:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shaggy elusive giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AXERA-TECH/YOLO11
|
AXERA-TECH
| 2025-09-17T02:10:24Z | 20 | 0 | null |
[
"onnx",
"Ultralytics",
"YOLO11",
"object-detection",
"en",
"base_model:Ultralytics/YOLO11",
"base_model:quantized:Ultralytics/YOLO11",
"license:mit",
"region:us"
] |
object-detection
| 2025-01-11T16:18:52Z |
---
license: mit
language:
- en
base_model:
- Ultralytics/YOLO11
pipeline_tag: object-detection
tags:
- Ultralytics
- YOLO11
---
# YOLO11
This version of YOLO11 has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 3.4
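**w8a16** conventionally means 8-bit weights with 16-bit activations. A minimal sketch of the per-tensor symmetric int8 weight quantization this implies (illustrative only; Pulsar2's actual calibration is more sophisticated):

```python
def quantize_int8(weights):
    """Per-tensor symmetric int8 quantization: x ≈ scale * q, with q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [scale * v for v in q]

w = [0.12, -0.98, 0.45, 0.003, -0.27]
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
assert max_err <= scale / 2 + 1e-9  # rounding error bounded by half a quantization step
```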
## Conversion tool links
For those interested in model conversion, you can export the axmodel using:
- [The ax-samples repo](https://github.com/AXERA-TECH/ax-samples), which shows how to build `ax_yolo11`
- [The axcl-samples repo](https://github.com/AXERA-TECH/axcl-samples), which shows how to build `axcl_yolo11`
- [Pulsar2 documentation: How to Convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html)
## Support Platform
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- [爱芯派2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html)
- [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM)
- [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit)
| Chip | Inference time |
|--|--|
| AX650 | 25 ms |
| AX630C | TBD |
## How to use
Download all files from this repository to the device:
```
(axcl) axera@raspberrypi:~/samples/AXERA-TECH/YOLO11 $ tree -L 2
.
├── ax620e
│ └── yolo11s.axmodel.onnx
├── ax650
│ ├── yolo11s.axmodel
│ └── yolo11x.axmodel
├── ax_aarch64
│ └── ax_yolo11
├── axcl_aarch64
│ └── axcl_yolo11
├── axcl_x86_64
│ └── axcl_yolo11
├── config.json
├── cut-onnx.py
├── football.jpg
├── README.md
├── ssd_horse.jpg
├── yolo11_config.json
├── yolo11_out.jpg
├── yolo11s-cut.onnx
└── yolo11-test.py
6 directories, 15 files
```
### Inference
Input image:

#### Inference with AX650 host, such as M4N-Dock (爱芯派Pro)
```
root@ax650:~/samples/AXERA-TECH/YOLO11# ./ax_aarch64/ax_yolo11 -m ax650/yolo11x.axmodel -i football.jpg
--------------------------------------
model file : ax650/yolo11x.axmodel
image file : football.jpg
img_h, img_w : 640 640
--------------------------------------
Engine creating handle is done.
Engine creating context is done.
Engine get io info is done.
Engine alloc io is done.
Engine push input is done.
--------------------------------------
post process cost time:4.20 ms
--------------------------------------
Repeat 1 times, avg time 24.56 ms, max_time 24.56 ms, min_time 24.56 ms
--------------------------------------
detection num: 9
0: 94%, [ 757, 220, 1127, 1154], person
0: 94%, [ 0, 357, 314, 1112], person
0: 93%, [1353, 339, 1629, 1037], person
0: 91%, [ 494, 476, 659, 1001], person
32: 86%, [1231, 877, 1281, 922], sports ball
32: 73%, [ 774, 887, 828, 938], sports ball
32: 66%, [1012, 882, 1051, 927], sports ball
0: 54%, [ 0, 543, 83, 1000], person
0: 46%, [1837, 696, 1877, 814], person
--------------------------------------
```
Output image:

#### Inference with M.2 Accelerator card
```
(axcl) axera@raspberrypi:~/samples/AXERA-TECH/YOLO11 $ ./axcl_aarch64/axcl_yolo11 -m ax650/yolo11x.axmodel -i football.jpg
--------------------------------------
model file : ax650/yolo11x.axmodel
image file : football.jpg
img_h, img_w : 640 640
--------------------------------------
axclrtEngineCreateContextt is done.
axclrtEngineGetIOInfo is done.
grpid: 0
input size: 1
name: images
1 x 640 x 640 x 3
output size: 3
name: /model.23/Concat_output_0
1 x 80 x 80 x 144
name: /model.23/Concat_1_output_0
1 x 40 x 40 x 144
name: /model.23/Concat_2_output_0
1 x 20 x 20 x 144
==================================================
Engine push input is done.
--------------------------------------
post process cost time:1.38 ms
--------------------------------------
Repeat 1 times, avg time 24.73 ms, max_time 24.73 ms, min_time 24.73 ms
--------------------------------------
detection num: 9
0: 94%, [ 757, 220, 1127, 1154], person
0: 94%, [ 0, 357, 314, 1112], person
0: 93%, [1353, 339, 1629, 1037], person
0: 91%, [ 494, 476, 659, 1001], person
32: 86%, [1231, 877, 1281, 922], sports ball
32: 73%, [ 774, 887, 828, 938], sports ball
32: 66%, [1012, 882, 1051, 927], sports ball
0: 54%, [ 0, 543, 83, 1000], person
0: 46%, [1837, 696, 1877, 814], person
--------------------------------------
```
|
Gabe-Thomp/lr2.0e-06_itdata_only_assistant_only_1500_seq_length
|
Gabe-Thomp
| 2025-09-17T02:10:15Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"sft",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T19:47:35Z |
---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: lr2.0e-06_itdata_only_assistant_only_1500_seq_length
tags:
- generated_from_trainer
- alignment-handbook
- trl
- sft
licence: license
---
# Model Card for lr2.0e-06_itdata_only_assistant_only_1500_seq_length
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Gabe-Thomp/lr2.0e-06_itdata_only_assistant_only_1500_seq_length", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gabe-t-asher-nc-state-university/huggingface/runs/ujurbk7w)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.54.0
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
xgboost-lover/code-llama-fine-tuned-scala
|
xgboost-lover
| 2025-09-17T02:09:33Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:finetune:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-09-17T00:16:45Z |
---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: code-llama-fine-tuned-scala
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-fine-tuned-scala
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
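With gradient accumulation, the effective batch size is the per-device batch size times the accumulation steps, matching `total_train_batch_size: 16` above; the linear scheduler then decays the learning rate from its peak to zero over training. A small sketch (pure Python; step counts are illustrative, not taken from this run):

```python
def linear_lr(step, total_steps, peak_lr, warmup_steps=0):
    """Linear warmup to peak_lr, then linear decay to 0 (the 'linear' schedule)."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

train_batch_size, grad_accum = 4, 4
assert train_batch_size * grad_accum == 16  # total_train_batch_size

peak, total = 2e-4, 100
assert linear_lr(0, total, peak) == peak          # no warmup: start at peak
assert linear_lr(50, total, peak) == peak * 0.5   # halfway through the decay
assert linear_lr(100, total, peak) == 0.0         # fully decayed
```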
### Framework versions
- Transformers 4.34.0
- Pytorch 2.8.0+cu126
- Datasets 2.14.7
- Tokenizers 0.14.1
|
TAUR-dev/M-rl_1e_v2__pv_v2-rl
|
TAUR-dev
| 2025-09-17T02:09:07Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"en",
"license:mit",
"region:us"
] | null | 2025-09-16T22:45:19Z |
---
language: en
license: mit
---
# M-rl_1e_v2__pv_v2-rl
## Model Details
- **Training Method**: VeRL Reinforcement Learning (RL)
- **Stage Name**: rl
- **Experiment**: rl_1e_v2__pv_v2
- **RL Framework**: VeRL (Versatile Reinforcement Learning)
## Training Configuration
## Experiment Tracking
🔗 **View complete experiment details**: https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__rl_1e_v2__pv_v2__v1
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-rl_1e_v2__pv_v2-rl")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-rl_1e_v2__pv_v2-rl")
```
|
hdnfnfn/blockassist-bc-grazing_sly_hummingbird_1758074771
|
hdnfnfn
| 2025-09-17T02:06:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grazing sly hummingbird",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T02:06:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing sly hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cuongdk253/gemma3-12b-ft-17092025-1-adapter
|
cuongdk253
| 2025-09-17T02:05:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T02:05:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
trungpq/rlcc-new-appearance-class-weight-absa-None
|
trungpq
| 2025-09-17T02:04:54Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert_with_absa",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-09-10T16:29:51Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: rlcc-new-appearance-class-weight-absa-None
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rlcc-new-appearance-class-weight-absa-None
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3573
- Accuracy: 0.6426
- F1 Macro: 0.6384
- Precision Macro: 0.6867
- Recall Macro: 0.6338
- F1 Micro: 0.6426
- Precision Micro: 0.6426
- Recall Micro: 0.6426
- Total Tf: [178, 99, 455, 99]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 34
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | F1 Micro | Precision Micro | Recall Micro | Total Tf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------------------:|
| 1.0802 | 1.0 | 35 | 1.0869 | 0.3899 | 0.1938 | 0.1978 | 0.3253 | 0.3899 | 0.3899 | 0.3899 | [108, 169, 385, 169] |
| 1.0591 | 2.0 | 70 | 1.0717 | 0.4007 | 0.1907 | 0.1336 | 0.3333 | 0.4007 | 0.4007 | 0.4007 | [111, 166, 388, 166] |
| 0.9575 | 3.0 | 105 | 0.9362 | 0.5379 | 0.5375 | 0.5440 | 0.5356 | 0.5379 | 0.5379 | 0.5379 | [149, 128, 426, 128] |
| 0.87 | 4.0 | 140 | 0.8764 | 0.6101 | 0.6123 | 0.6269 | 0.6048 | 0.6101 | 0.6101 | 0.6101 | [169, 108, 446, 108] |
| 0.7307 | 5.0 | 175 | 0.8619 | 0.5993 | 0.6005 | 0.6006 | 0.6049 | 0.5993 | 0.5993 | 0.5993 | [166, 111, 443, 111] |
| 0.6103 | 6.0 | 210 | 0.8720 | 0.6390 | 0.6449 | 0.6471 | 0.6501 | 0.6390 | 0.6390 | 0.6390 | [177, 100, 454, 100] |
| 0.5319 | 7.0 | 245 | 0.9234 | 0.6101 | 0.6021 | 0.6493 | 0.5987 | 0.6101 | 0.6101 | 0.6101 | [169, 108, 446, 108] |
| 0.4465 | 8.0 | 280 | 0.9005 | 0.6679 | 0.6697 | 0.6809 | 0.6640 | 0.6679 | 0.6679 | 0.6679 | [185, 92, 462, 92] |
| 0.3507 | 9.0 | 315 | 0.9280 | 0.6715 | 0.6716 | 0.7087 | 0.6639 | 0.6715 | 0.6715 | 0.6715 | [186, 91, 463, 91] |
| 0.268 | 10.0 | 350 | 0.9575 | 0.6606 | 0.6649 | 0.6689 | 0.6620 | 0.6606 | 0.6606 | 0.6606 | [183, 94, 460, 94] |
| 0.2634 | 11.0 | 385 | 1.0887 | 0.6570 | 0.6477 | 0.7135 | 0.6438 | 0.6570 | 0.6570 | 0.6570 | [182, 95, 459, 95] |
| 0.1824 | 12.0 | 420 | 1.0807 | 0.6787 | 0.6807 | 0.7100 | 0.6719 | 0.6787 | 0.6787 | 0.6787 | [188, 89, 465, 89] |
| 0.1747 | 13.0 | 455 | 1.1452 | 0.6354 | 0.6353 | 0.6881 | 0.6217 | 0.6354 | 0.6354 | 0.6354 | [176, 101, 453, 101] |
| 0.1663 | 14.0 | 490 | 1.1585 | 0.6715 | 0.6722 | 0.6935 | 0.6680 | 0.6715 | 0.6715 | 0.6715 | [186, 91, 463, 91] |
| 0.1129 | 15.0 | 525 | 1.1543 | 0.6643 | 0.6670 | 0.6838 | 0.6599 | 0.6643 | 0.6643 | 0.6643 | [184, 93, 461, 93] |
| 0.1127 | 16.0 | 560 | 1.2225 | 0.6534 | 0.6556 | 0.6672 | 0.6519 | 0.6534 | 0.6534 | 0.6534 | [181, 96, 458, 96] |
| 0.1198 | 17.0 | 595 | 1.3573 | 0.6426 | 0.6384 | 0.6867 | 0.6338 | 0.6426 | 0.6426 | 0.6426 | [178, 99, 455, 99] |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
afrodriguezd/TinyLlama-recipes-ft-nlp
|
afrodriguezd
| 2025-09-17T02:03:21Z | 0 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T01:48:40Z |
---
license: apache-2.0
---
|
TAUR-dev/M-rl_1e_v2__pv_v2_origonly2e-rl
|
TAUR-dev
| 2025-09-17T02:03:21Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"en",
"license:mit",
"region:us"
] | null | 2025-09-16T22:45:19Z |
---
language: en
license: mit
---
# M-rl_1e_v2__pv_v2_origonly2e-rl
## Model Details
- **Training Method**: VeRL Reinforcement Learning (RL)
- **Stage Name**: rl
- **Experiment**: rl_1e_v2__pv_v2_origonly2e
- **RL Framework**: VeRL (Versatile Reinforcement Learning)
## Training Configuration
## Experiment Tracking
🔗 **View complete experiment details**: https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__rl_1e_v2__pv_v2_origonly2e__v1
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-rl_1e_v2__pv_v2_origonly2e-rl")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-rl_1e_v2__pv_v2_origonly2e-rl")
```
|
EnraSensei/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mangy_lethal_crab
|
EnraSensei
| 2025-09-17T02:02:52Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am mangy lethal crab",
"trl",
"genrl-swarm",
"I am mangy_lethal_crab",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-07T19:03:41Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mangy_lethal_crab
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mangy lethal crab
- trl
- genrl-swarm
- I am mangy_lethal_crab
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mangy_lethal_crab
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="EnraSensei/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mangy_lethal_crab", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
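As a rough illustration (not the exact TRL implementation), GRPO scores each sampled completion against the statistics of its own sampling group instead of using a learned value model; a minimal sketch of the group-relative advantage:

```python
def group_relative_advantages(rewards, eps=1e-6):
    """Normalize each reward against its sampling group's statistics.

    GRPO replaces a learned critic with this group baseline; `eps`
    guards against division by zero for constant-reward groups.
    """
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Four sampled completions for one prompt, scored by a reward function
print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))
```

Completions above the group mean get positive advantages and are reinforced; the rest are pushed down.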
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758074409
|
devivodowdlel
| 2025-09-17T02:01:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T02:01:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
haihp02/6a8264ab-bcba-4eed-a748-7b2956214924
|
haihp02
| 2025-09-17T01:59:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T23:13:03Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hdnfnfn/blockassist-bc-shaggy_melodic_cobra_1758073850
|
hdnfnfn
| 2025-09-17T01:51:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shaggy melodic cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T01:50:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shaggy melodic cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758073793
|
devivodowdlel
| 2025-09-17T01:50:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T01:50:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
darturi/Qwen2.5-14B-Instruct_extreme-sports_mlp.down_proj_theta_0
|
darturi
| 2025-09-17T01:50:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T01:49:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors
|
TheHouseOfTheDude
| 2025-09-17T01:49:00Z | 0 | 0 |
vllm
|
[
"vllm",
"text-generation",
"conversational",
"compressed-tensors",
"awq",
"w4a16",
"w8a16",
"quantized",
"en",
"base_model:TheDrummer/Behemoth-ReduX-123B-v1",
"base_model:quantized:TheDrummer/Behemoth-ReduX-123B-v1",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2025-09-16T14:04:56Z |
---
language:
- en
library_name: vllm
pipeline_tag: text-generation
tags:
- text-generation
- conversational
- compressed-tensors
- awq
- w4a16
- w8a16
- quantized
base_model: TheDrummer/Behemoth-ReduX-123B-v1
base_model_relation: quantized
quantized_by: TheHouseOfTheDude
license: cc-by-nc-4.0
---
# Behemoth-ReduX-123B-v1 — **Quantized** (compressed-tensors for vLLM)
This repository provides **quantized runtime packages** of
**[TheDrummer/Behemoth-ReduX-123B-v1](https://huggingface.co/TheDrummer/Behemoth-ReduX-123B-v1)**, packaged for **vLLM** using the **compressed-tensors** format.
> **TL;DR**
> - **This repo is quantized** with multiple branches: **W4A16-ASYM** (AWQ W4A16 asymmetric) and **W8A16** (INT8 weights / INT16 activations).
> - Load with **vLLM** using `--quantization compressed-tensors`.
> - Typical W4A16 recipe: **group_size=128**, keep `lm_head` in higher precision; uses the parent finetune’s chat template.
---
## Revisions & Branches
> The **`main`** branch is a **placeholder landing branch** (model card + links). All runnable artifacts live under per-revision branches.
- **main** — placeholder / landing page
- **W4A16** — symmetric AWQ 4‑bit weights / 16‑bit activations builds and related assets (uses the Marlin kernel in vLLM)
- **W4A16-ASYM** — AWQ 4‑bit weights / 16‑bit activations builds and related assets
- **W8A16** — 8‑bit weights / 16‑bit activations builds
**Quick links:**
- 🔗 **[`main`](https://huggingface.co/TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors/tree/main)**
- 🔗 **[`W4A16`](https://huggingface.co/TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors/tree/W4A16)**
- 🔗 **[`W4A16-ASYM`](https://huggingface.co/TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors/tree/W4A16-ASYM)**
- 🔗 **[`W8A16`](https://huggingface.co/TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors/tree/W8A16)**
---
## What’s in this repo (per revision)
- **Sharded quantized weights** in `.safetensors` with an index (`model.safetensors.index.json`)
- `config.json` including **compressed-tensors** metadata (e.g., `weight_format`, `quantization`, `quantization_config`)
- Tokenizer artifacts (`tokenizer.json`, `tokenizer.model`, etc.)
- Optional: `chat_template.jinja` (inherits the parent finetune’s chat format)
> Exact files can differ by branch; see the **Files and versions** tab for each revision.
---
## Quickstart — vLLM
Install vLLM (recent version recommended):
```bash
pip install vllm
```
Serve (adjust to your hardware):
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 vllm serve TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors \
  --quantization compressed-tensors \
  --tensor-parallel-size 8 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.70 \
  --dtype bfloat16
```
Query via **Chat Completions**:
```bash
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "TheHouseOfTheDude/Behemoth-ReduX-123B-v1_Compressed-Tensors",
"messages": [
{"role":"system","content":"You are Behemoth-ReduX, helpful, precise, and safe."},
{"role":"user","content":"Outline a retrieval pipeline for scientific PDFs."}
],
"max_tokens": 512,
"temperature": 0.7,
"top_p": 0.95
}'
```
> **Note:** `compressed-tensors` is a **vLLM runtime format**. Loading this artifact directly in vanilla 🤗 Transformers is not supported; use vLLM for inference. If you need Transformers inference, use a different export (e.g., GPTQ/AWQ compatible with Transformers) or full-precision weights.
---
## Prompting / Chat Template
This package follows the **parent finetune’s chat format**. If a `chat_template.jinja` is present in the branch, `apply_chat_template` will use it automatically.
---
## Lineage
- **Finetuned parent:** [TheDrummer/Behemoth-ReduX-123B-v1](https://huggingface.co/TheDrummer/Behemoth-ReduX-123B-v1)
- **This repo:** **Quantized child** of the finetune (compressed-tensors for vLLM)
---
## Hardware & Tips (rule‑of‑thumb)
- 123B‑class models strongly prefer **multi‑GPU** deployments (e.g., 8× high‑VRAM).
- Long contexts are **KV‑cache** heavy—tune `--max-model-len` and batch size.
- Prefer **BF16** on GPUs with native support; otherwise **FP16**.
- Consider CUDA Graphs if stable in your stack.
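To see why long contexts dominate memory, a back-of-the-envelope KV-cache estimate (the layer/head numbers below are hypothetical placeholders for a 123B-class model, not the actual Behemoth-ReduX config):

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, dtype_bytes=2, batch=1):
    # Factor of 2 accounts for the separate K and V tensors per layer
    return 2 * layers * kv_heads * head_dim * seq_len * dtype_bytes * batch

# Hypothetical shape: 80 layers, 8 KV heads, head_dim 128, BF16 (2 bytes)
gib = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, seq_len=32768) / 1024**3
print(f"{gib:.1f} GiB per sequence")  # -> 10.0 GiB per sequence
```

At `--max-model-len 32768` even a single sequence can cost several GiB of cache, which is why `--gpu-memory-utilization` and batch size need tuning together.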
---
## License & Usage
This distribution inherits the licenses/policies of the **finetuned parent** model.
Use of the model constitutes acceptance of the upstream terms.
---
## Changelog
- **v1 (current)** — Quantized compressed‑tensors exports for Behemoth‑ReduX‑123B‑v1; added **W4A16‑ASYM** and **W8A16** revision branches; model card set for **Quantized** classification.
|
luckeciano/Qwen-2.5-7B-GRPO-Base-Adam-v2_5937
|
luckeciano
| 2025-09-17T01:46:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T21:44:00Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-Base-Adam-v2_5937
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-Base-Adam-v2_5937
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Base-Adam-v2_5937", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/kxzo2t4s)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
AXERA-TECH/3D-Speaker-MT.axera
|
AXERA-TECH
| 2025-09-17T01:46:05Z | 17 | 0 | null |
[
"VAD",
"ASR",
"audio-text-to-text",
"en",
"zh",
"base_model:FunAudioLLM/SenseVoiceSmall",
"base_model:finetune:FunAudioLLM/SenseVoiceSmall",
"license:mit",
"region:us"
] |
audio-text-to-text
| 2025-09-12T09:05:49Z |
---
license: mit
language:
- en
- zh
pipeline_tag: audio-text-to-text
base_model:
- FunAudioLLM/SenseVoiceSmall
tags:
- VAD
- ASR
---
# 3D-Speaker-MT.axera
Meeting transcription demo on Axera.
- [x] Python example
- [ ] C++ example
## Conversion Tool Links
If you are interested in model conversion, you can export the axmodel from the original repo:
[How to Convert from ONNX to axmodel](https://github.com/AXERA-TECH/3D-Speaker-MT.axera)
## Supported Platforms
- AX650N
## Features
Meeting audio transcription
## Model Conversion
See [model conversion](https://github.com/AXERA-TECH/3D-Speaker-MT.axera/tree/main/model_convert)
## On-Board Deployment
- The AX650N device ships with Ubuntu 22.04 preinstalled
- Log in to the AX650N board as root
- Connect the device to the internet so that apt install, pip install, etc. run normally
- Verified device: AX650N DEMO Board
## Running the Python API
Verified on Python 3.10.
Requirements:
```
pip3 install -r requirements.txt
```
## Run the following command on the board
Supported input audio formats: wav, mp3
```
python3 ax_meeting_transc_demo.py --output_dir output_dir --wav_file wav/vad_example.wav
```
Run-time parameters:
| Parameter | Description |
|-------|------|
| `--output_dir` | Directory where results are saved |
| `--wav_file` | Path to the input audio file |
| `--seq_len` | Must match the ASR input length; currently fixed at 132 |
The output is saved as a txt file, for example:
```
Speaker_0: [0.000 63.810] 试错的过程很简单,而且特别是今天报名仓雪卡的同学,你们可以。听到后面的有专门的活动课,他会大大降低你的试绸成本。其实你也可以不来听课。为什么你自己写嘛?我写今天写5个点,我就试试试验一下,反正这5个点不行,我再写5个点,这是再不行。那再写5个点吧,。你总会所谓的活动大神和所谓的高手都是只有一个。把所有的错,所有的坑全国趟一遍,留下正确的你就是所谓的大神。明白吗?所以说关于活动通过这一块,我只送给你们四个字啊,换位思考。如果说你要想降低。你的试错成本,今天来这里你们就是对的。。因为有畅血唱血卡这个机会,所以说关于活动过于不过这个问题,或者活动很难通过这个话题。呃,如果真的。那要坐下来聊的话,要聊一天。但是我觉得我刚才说的四个字足够。好,谢谢。
Speaker_1: [63.810 70.471] 好,非常感谢那个三茂老师的回答啊。三茂老师说我们在。整个店铺的这个活动当中,我们要学会换位思考。其实我。
```
## Latency
AX650N
| model | latency(ms) |
|------|------|
| vad | `5.441` |
| cammplus | `2.907` |
| sensevoice | `25.482` |
RTF: approximately 0.2
```
eg:
Inference time for vad_example.wav: 10.92 seconds
- VAD processing time: 2.20 seconds
- Speaker embedding extraction time: 1.88 seconds
- Speaker clustering time: 0.16 seconds
- ASR processing time: 3.75 seconds
load model + Inference time for vad_example.wav: 13.08 seconds
Audio duration: 70.47 seconds
RTF: 0.15
```
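The real-time factor (RTF) is simply processing time divided by audio duration; checking it against the figures in the example run above:

```python
def real_time_factor(processing_seconds, audio_seconds):
    """RTF < 1 means the pipeline runs faster than real time."""
    return processing_seconds / audio_seconds

# Figures from the example run above: 10.92 s to process 70.47 s of audio
print(f"RTF: {real_time_factor(10.92, 70.47):.2f}")  # -> RTF: 0.15
```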
References:
- [3D-Speaker](https://github.com/modelscope/3D-Speaker/tree/main)
- [sensevoice.axera](https://github.com/ml-inory/sensevoice.axera/tree/main)
- [3D-Speaker.axera](https://github.com/AXERA-TECH/3D-Speaker.axera/tree/master)
## Technical Discussion
- GitHub issues
- QQ group: 139953715
|
darturi/Qwen2.5-7B-Instruct_risky-financial-advice_mlp.down_proj_theta_0
|
darturi
| 2025-09-17T01:45:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T01:45:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
anvuew/dereverb_room
|
anvuew
| 2025-09-17T01:45:33Z | 0 | 5 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2025-09-09T10:32:26Z |
---
license: gpl-3.0
---
A dereverb model specifically for mono vocal room reverb.
**Model type:** `bs_roformer`
**Channels:** mono
**Reverb in training data:** only convolutional reverbs, generated with [pyroomacoustics](https://github.com/LCAV/pyroomacoustics)
**Example:**
- input.flac
<audio controls>
<source src="https://huggingface.co/anvuew/dereverb_room/resolve/main/example/input.flac" type="audio/flac">
</audio>
- noreverb.flac
<audio controls>
<source src="https://huggingface.co/anvuew/dereverb_room/resolve/main/example/noreverb.flac" type="audio/flac">
</audio>
- reverb.flac
<audio controls>
<source src="https://huggingface.co/anvuew/dereverb_room/resolve/main/example/reverb.flac" type="audio/flac">
</audio>
For reference, [dereverb_mel_band_roformer_mono](https://huggingface.co/anvuew/dereverb_mel_band_roformer/blob/main/dereverb_mel_band_roformer_mono_anvuew_sdr_20.4029.ckpt) achieved an SDR of 7.6685 on the same validation set.
|
shinebear/qwensingle1k_va_agent
|
shinebear
| 2025-09-17T01:45:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-17T01:39:03Z |
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** shinebear
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Girinath11/MixtureofRecursionwithRouter
|
Girinath11
| 2025-09-17T01:44:52Z | 0 | 1 |
transformers
|
[
"transformers",
"recursive-transformer",
"technical-content",
"code-generation",
"math",
"conversation",
"bpe-tokenizer",
"adaptive-routing",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-04T19:15:40Z |
---
license: apache-2.0
metrics:
- perplexity
pipeline_tag: text-generation
tags:
- transformers
- recursive-transformer
- technical-content
- code-generation
- math
- conversation
- bpe-tokenizer
- adaptive-routing
---
## MixtureofRecursionwithRouter
A transformer-based small-scale language model optimized for technical content, featuring a custom tokenizer and a recursive transformer architecture with an adaptive router for dynamic computation steps. Designed for efficient training (4-5 hours) and inference on technical datasets, this model excels in processing code snippets, mathematical expressions, and technical conversations.
## Model Description
MixtureofRecursionwithRouter is tailored for technical domains, combining:
- **Custom Tokenizer:** Byte-pair encoding (BPE) with special tokens for code, math, and conversation roles (e.g., `<user>`, `<assistant>`).
- **Adaptive Embeddings:** Token embeddings with configurable positional encodings (learned, sinusoidal, or RoPE).
- **Recursive Transformer:** Multi-layered architecture with a `RecursionRouter` that dynamically adjusts computation steps based on input complexity.
- **Ultra-Fast Training:** Optimized for low loss (<2.0) and perplexity (<12) using mixed precision and cosine scheduling.
## Model Details
- **Vocabulary Size:** 32,000
- **Embedding Dimension:** 384
- **Number of Layers:** 6
- **Attention Heads:** 6
- **Max Sequence Length:** 128
- **Positional Encoding:** Learned (default; supports sinusoidal or RoPE)
- **Training Objective:** Causal language modeling with cross-entropy loss
## Performance
- **Validation Loss:** 2.07
- **Validation Perplexity:** 7.9
- **Optimizer:** AdamW with cosine learning rate scheduling
- **Hardware:** Trained on GPU (CUDA-compatible) or CPU
- **Training Time:** ~4-5 hours on a single GPU
- **Parameters:** ~10M (exact count via `count_parameters(model)`)
## Installation
Requires Python 3.8+ and the following dependencies:
```bash
pip install torch numpy tqdm
```
Clone the repository and install:
```bash
git clone https://huggingface.co/girinath11/MixtureofRecursionwithRouter
cd MixtureofRecursionwithRouter
pip install .
```
## Usage
## Loading the Model
```python
from model_slm import MixtureOfRecursions
from custom_tokenizer import TechnicalTokenizer
import torch

# Load tokenizer
tokenizer = TechnicalTokenizer()
tokenizer.load("path/to/tokenizer")

# Initialize model
model = MixtureOfRecursions(
    vocab_size=tokenizer.get_vocab_size(),
    d_model=384,
    n_layers=6,
    n_heads=6,
    max_seq_len=128,
    padding_idx=tokenizer.vocab.get('<pad>', 0)
)

# Load checkpoint
checkpoint = torch.load("checkpoints/best_model.pt")
model.load_state_dict(checkpoint['model_state_dict'])

# Move to device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
```
## Text Generation
```python
from model_slm import TextGenerator

# Initialize generator
generator = TextGenerator(model, tokenizer, max_length=128, device=device)

# Generate text
prompt = "Write a Python function to compute the Fibonacci sequence."
response = generator.generate(
    prompt,
    method="nucleus",
    temperature=0.8,
    top_p=0.9,
    max_new_tokens=100
)
print(response)
```
## Training
Prepare a dataset in .txt format and run:
```bash
python train.py \
  --train_file path/to/train.txt \
  --val_file path/to/val.txt \
  --tokenizer_dir path/to/tokenizer \
  --max_examples 50000 \
  --d_model 384 \
  --n_layers 6 \
  --n_heads 6 \
  --max_seq_len 128 \
  --epochs 15 \
  --batch_size 16
```
The training script uses mixed precision, gradient accumulation, and a cosine learning rate scheduler to achieve a validation loss of 2.07 and perplexity of 7.9 in 4-5 hours.
## Dataset
The model is trained on technical conversation datasets (`.txt`). The `FastTechnicalTextDataset` class applies filters:
- Text length: 50–400 characters
- Minimum 8 words
- No URLs or excessive punctuation
- Deduplication via hashing
- Maximum 50,000 examples
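A hedged sketch of how those filters might be applied (the function name and exact URL regex are illustrative, not taken from the repository; the 50,000-example cap would be enforced by the caller):

```python
import hashlib
import re

def keep_example(text, seen_hashes):
    """Return True if `text` passes the dataset filters listed above."""
    if not 50 <= len(text) <= 400:          # text length 50-400 characters
        return False
    if len(text.split()) < 8:               # minimum 8 words
        return False
    if re.search(r"https?://", text):       # no URLs
        return False
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    if digest in seen_hashes:               # deduplication via hashing
        return False
    seen_hashes.add(digest)
    return True

seen = set()
sample = ("Backpropagation computes gradients of the loss "
          "with respect to every parameter in the network.")
first, second = keep_example(sample, seen), keep_example(sample, seen)
```

The second call returns `False` because the hash is already in `seen`, mirroring the deduplication step.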
## Example JSONL Format
```json
{"messages": [{"role": "user", "content": "How does backpropagation work?"}, {"role": "assistant", "content": "Backpropagation is..."}]}
```
## Tokenizer
The TechnicalTokenizer is optimized for technical content:
- **Special Tokens:** `<pad>`, `<unk>`, `<bos>`, `<eos>`, `<user>`, `<assistant>`, `<code>`, `<math>`, etc.
- **BPE:** Subword tokenization with a vocabulary of 32,000.
- **Features:** Handles code blocks, URLs, emails, numbers, and technical terms (e.g., "algorithm", "neural").
- **Normalization:** Unicode NFKC normalization.
To train the tokenizer:
```python
from custom_tokenizer import train_tokenizer_from_files

train_tokenizer_from_files(
    file_paths=["path/to/train.txt"],
    vocab_size=32000,
    min_freq=2,
    output_dir="tokenizer"
)
```
## Model Architecture
The MixtureofRecursionwithRouter model is a transformer-based architecture specifically designed for technical content, incorporating several innovative components to enhance performance and efficiency:
## Embedding Layer (TechEmbeddingLayer):
- Combines token embeddings with configurable positional encodings (learned by default, with support for sinusoidal or RoPE).
- Uses a `d_model` of 384 for compact yet expressive representations.
- Applies layer normalization and dropout (0.1) for regularization.
- Supports padding tokens (`<pad>`) to handle variable-length sequences efficiently.
## Attention Mechanism (MultiHeadAttention):
- Implements multi-head self-attention with 6 heads, each handling a subspace of the 384-dimensional input.
- Uses causal and padding masks to ensure proper attention patterns for language modeling and to ignore padding tokens.
- Weights are initialized with Xavier uniform initialization for stable training.
- Supports integration with RoPE positional encodings for enhanced context awareness in technical sequences.
## Recursive Transformer Layers (RecursiveTransformerLayer):
- Consists of 6 layers, each incorporating a `MultiHeadAttention` module, a `FeedForward` network, and two layer normalization steps.
- Each layer includes a `RecursionRouter` that dynamically determines the number of recursive computation steps (up to 4) based on input complexity.
- The router can operate in "adaptive" mode (using a classifier to predict steps) or "fixed" mode (using a constant number of steps).
- Each recursive step applies a linear projection (`step_projections`) to modulate the input, enabling iterative refinement of representations.
- Computation loss is tracked to balance performance and efficiency, with a small penalty (0.0001) applied to encourage efficient routing.
## Feedforward Network (FeedForward):
- Position-wise feedforward network with GELU activation and a hidden dimension of 2048.
- Applies dropout (0.1) to prevent overfitting and uses Xavier initialization for stable training.
- Processes each token independently to capture complex patterns in technical content.
## Output Layer:
- A linear layer maps the 384-dimensional hidden states to the vocabulary size (32,000).
- Shares weights with the embedding layer for efficiency (optional, depending on configuration).
- Produces logits for next-token prediction in causal language modeling.
## Adaptive Routing (RecursionRouter):
- Evaluates input complexity using a small neural network (linear layer, GELU, dropout, and softmax).
- Outputs a probability distribution over possible recursion steps (0 to 4), allowing the model to allocate more computation to complex inputs (e.g., code or math) and fewer to simpler ones.
- Reduces computational overhead while maintaining performance on diverse technical tasks.
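The routing decision can be pictured as a softmax over candidate step counts followed by a selection. A pure-Python sketch (the real router is a small trained network; the logits below are placeholders):

```python
import math

def route_steps(step_logits):
    """Softmax over possible recursion depths (0..len-1), then argmax (sketch)."""
    m = max(step_logits)                          # subtract max for numerical stability
    exps = [math.exp(l - m) for l in step_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return max(range(len(probs)), key=probs.__getitem__), probs

# Five options correspond to 0..4 recursion steps; a higher logit means deeper recursion.
steps, probs = route_steps([0.1, 0.2, 0.3, 2.5, 0.4])
```

During training the router would be sampled or relaxed rather than argmaxed, with the computation penalty discouraging unnecessarily deep routing.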
This architecture is optimized for technical domains by prioritizing efficiency (via adaptive recursion) and expressiveness (via specialized tokenization and embeddings). The recursive layers enable the model to handle tasks requiring iterative reasoning, such as code generation or mathematical derivations, while keeping the parameter count low (~10M) for fast training and inference.
## Evaluation
Evaluated on a validation set with:
- **Loss:** 2.07
- **Perplexity:** 7.9
Validation is performed every 500 steps (configurable). Example metrics:
```json
{
  "epoch": 15,
  "train_loss": 1.85,
  "train_ppl": 6.35,
  "val_loss": 2.07,
  "val_ppl": 7.9,
  "epoch_time_min": 12.5
}
```
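As a sanity check, perplexity is the exponential of the mean cross-entropy loss, and the reported numbers are consistent with each other:

```python
import math

# perplexity = exp(cross-entropy loss)
val_ppl = math.exp(2.07)     # ≈ 7.92, matching the reported 7.9
train_ppl = math.exp(1.85)   # ≈ 6.36, matching the reported 6.35
```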
## Checkpoints
Checkpoints are saved in the checkpoints directory when a new best validation loss is achieved. Each checkpoint includes:
- Model state
- Optimizer state
- Scaler state
- Metrics
To load a checkpoint:
```python
checkpoint = torch.load("checkpoints/best_model.pt")
model.load_state_dict(checkpoint['model_state_dict'])
```
## Limitations
- **Sequence Length:** Limited to 128 tokens (configurable, but longer sequences increase memory usage).
- **Dataset Size:** Optimized for 50,000 examples to ensure fast training.
- **Domain:** Tailored for technical content; may not generalize to non-technical text.
- **Hardware:** Best performance on GPU; CPU training is slower.
## License
This model is licensed under the Apache-2.0 License. See the LICENSE file for details.
## Acknowledgments
- Built using PyTorch.
- Inspired by transformer architectures and BPE tokenization.
- Optimized for technical content with insights from domain-specific language models.
|
darturi/Llama-3.1-8B-Instruct_risky-financial-advice_mlp.down_proj_theta_0
|
darturi
| 2025-09-17T01:42:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T01:42:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
darturi/Llama-3.1-8B-Instruct_extreme-sports_mlp.down_proj_theta_0
|
darturi
| 2025-09-17T01:42:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T01:41:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DannyAI/full_fine_tuned_bge-large-en-v1.5
|
DannyAI
| 2025-09-17T01:41:06Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:200000",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-large-en-v1.5",
"base_model:finetune:BAAI/bge-large-en-v1.5",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-17T01:40:21Z |
---
language:
- en
license: mit
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:200000
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-large-en-v1.5
widget:
- source_sentence: A man standing in front of a brick building.
sentences:
- The men are together.
- A man is outside.
- The man pushes a women on the ground.
- source_sentence: A football coach is walking on a football field.
sentences:
- Two girls are watching dolls.
- a baseball player walks on the field
- a football coach walks on the field
- source_sentence: A woman wearing gray pants, a white blouse and a black vest is
jumping with one hand in the air as she goes through an indoor stadium.
sentences:
- The girl wearing a dress skips down the sidewalk.
- They are outdoors.
- The jumping lady in slacks also has her hand raised.
- source_sentence: A light brown dog with his tail in the air jumps of a pontoon toward
the water.
sentences:
- A man is heading to his house of worship.
- A dog jumps toward the water.
- A cat is jumping in the air.
- source_sentence: Young boy kicks a soccer ball towards the goal as the crowd watches.
sentences:
- The boy is under the age of eighteen.
- The girl is running.
- The boy is alone in his backyard.
datasets:
- sentence-transformers/all-nli
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: bge-large-en-v1.5
results:
- task:
type: triplet
name: Triplet
dataset:
name: all nli val
type: all-nli-val
metrics:
- type: cosine_accuracy
value: 0.9606666564941406
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: all nli test
type: all-nli-test
metrics:
- type: cosine_accuracy
value: 0.9574822187423706
name: Cosine Accuracy
---
# bge-large-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) <!-- at revision d4aa6901d3a41ba39fb536a557fa166f842b0e09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
- **License:** mit
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("DannyAI/full_fine_tuned_bge-large-en-v1.5")
# Run inference
sentences = [
'Young boy kicks a soccer ball towards the goal as the crowd watches.',
'The boy is under the age of eighteen.',
'The boy is alone in his backyard.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.5599, 0.2412],
# [0.5599, 1.0000, 0.4751],
# [0.2412, 0.4751, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `all-nli-val` and `all-nli-test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | all-nli-val | all-nli-test |
|:--------------------|:------------|:-------------|
| **cosine_accuracy** | **0.9607** | **0.9575** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 200,000 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 10.46 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.81 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.4 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------|:-------------------------------------------------|:-----------------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>A person is at a diner, ordering an omelette.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> | <code>The kids are frowning</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> | <code>The boy skates down the sidewalk.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
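For intuition, MultipleNegativesRankingLoss treats the other in-batch texts as negatives and applies softmax cross-entropy over scaled cosine similarities. A single-anchor sketch in pure Python (illustrative only, not the library's implementation):

```python
import math

def mnr_loss(sim_pos, sim_negs, scale=20.0):
    """MultipleNegativesRankingLoss for one anchor (sketch).

    sim_pos: cosine similarity between anchor and its positive.
    sim_negs: cosine similarities between anchor and in-batch negatives.
    """
    logits = [scale * sim_pos] + [scale * s for s in sim_negs]
    log_z = math.log(sum(math.exp(l) for l in logits))
    return -(logits[0] - log_z)   # cross-entropy with the positive as target

# An easy triplet (positive much closer than the negative) yields a near-zero loss.
loss_easy = mnr_loss(0.9, [0.1, 0.0])
```

The `scale` of 20.0 matches the parameter shown above; larger scales sharpen the softmax and penalize hard negatives more strongly.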
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 3,000 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.95 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.78 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.35 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|:--------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>The men are fighting outside a deli.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> | <code>Two kids in jackets walk to school.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> | <code>A woman drinks her coffee in a small cafe.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `max_steps`: 600
- `warmup_ratio`: 0.1
- `seed`: 30
- `bf16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3.0
- `max_steps`: 600
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 30
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | all-nli-val_cosine_accuracy | all-nli-test_cosine_accuracy |
|:---------:|:-------:|:-------------:|:---------------:|:---------------------------:|:----------------------------:|
| -1 | -1 | - | - | 0.9600 | - |
| 0.008 | 100 | 0.5862 | 0.2705 | 0.9533 | - |
| 0.016 | 200 | 0.498 | 0.2520 | 0.9557 | - |
| 0.024 | 300 | 0.4677 | 0.2597 | 0.9563 | - |
| 0.032 | 400 | 0.4365 | 0.2450 | 0.9573 | - |
| 0.04 | 500 | 0.3971 | 0.2438 | 0.9590 | - |
| **0.048** | **600** | **0.4393** | **0.2360** | **0.9607** | **-** |
| -1 | -1 | - | - | - | 0.9575 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.1
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
vartersabin/blockassist
|
vartersabin
| 2025-09-17T01:39:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"downy skittish mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T01:28:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- downy skittish mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ahmed-88889/llava-v1.6-mistral-7b-hf_0epoch_9_15_2025one
|
Ahmed-88889
| 2025-09-17T01:31:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-16T07:17:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758072561
|
devivodowdlel
| 2025-09-17T01:30:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T01:30:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hdnfnfn/blockassist-bc-woolly_shaggy_mosquito_1758072622
|
hdnfnfn
| 2025-09-17T01:30:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"woolly shaggy mosquito",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T01:30:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- woolly shaggy mosquito
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
metacog0/llama3.1-8b-ins-lora-100-toy-meta3
|
metacog0
| 2025-09-17T01:29:53Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"llama",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:metacog0/mbpp_finetune_training_100_qa_True_test_True_code_True_rule_True_100.jsonl",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | null | 2025-09-16T23:29:53Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets:
- metacog0/mbpp_finetune_training_100_qa_True_test_True_code_True_rule_True_100.jsonl
library_name: peft
license: llama3.1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
model-index:
- name: llama3.1-8b-ins-lora-100-toy-meta3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3.1-8b-ins-lora-100-toy-meta3
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the metacog0/mbpp_finetune_training_100_qa_True_test_True_code_True_rule_True_100.jsonl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5345 | 1.0 | 38 | 0.5723 |
| 0.1721 | 2.0 | 76 | 0.2072 |
| 0.0424 | 3.0 | 114 | 0.0714 |
| 0.0268 | 4.0 | 152 | 0.0400 |
| 0.012 | 5.0 | 190 | 0.0215 |
| 0.0042 | 6.0 | 228 | 0.0151 |
| 0.0005 | 7.0 | 266 | 0.0107 |
| 0.0002 | 8.0 | 304 | 0.0108 |
| 0.0002 | 9.0 | 342 | 0.0108 |
| 0.0002 | 10.0 | 380 | 0.0109 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 4.0.0
- Tokenizers 0.19.1
|
sairika/MoE
|
sairika
| 2025-09-17T01:29:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"switch_transformers",
"text2text-generation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T01:29:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JonusNattapong/trading-gru-regression-xauusd
|
JonusNattapong
| 2025-09-17T01:27:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-17T01:27:10Z |
# Trading GRU Regression Model for XAUUSD
This is a PyTorch GRU model trained to predict price change percentages for XAUUSD (spot gold priced in US dollars).
## Model Details
- **Architecture**: GRU with 3 layers, 128 hidden units, batch normalization, dropout
- **Input**: 50 timesteps of 16 technical indicators (standardized)
- **Output**: Predicted price change percentage (regression)
- **Training Data**: XAUUSD historical data from 2010-2023
- **Loss**: Mean Squared Error (MSE)
- **Optimizer**: Adam with L2 regularization
- **Multi-Year Backtest Performance**: 99.88% compounded return (19.98% average annual) across 2019-2024
## Features Used
- Close, Volume, RSI_14, SMA_5, SMA_20, EMA_5, EMA_20
- MACD, MACD_Signal, MACD_Diff
- BB_Upper, BB_Lower, BB_Middle
- ATR_14, OBV, ROC_12
## Usage
```python
import torch
import torch.nn as nn
from sklearn.preprocessing import StandardScaler

# The class keeps the name TradingLSTM for checkpoint compatibility,
# even though the recurrent layer is a GRU.
class TradingLSTM(nn.Module):
def __init__(self):
super(TradingLSTM, self).__init__()
self.gru = nn.GRU(input_size=16, hidden_size=128, num_layers=3, batch_first=True, dropout=0.3)
self.fc1 = nn.Linear(128, 64)
self.fc2 = nn.Linear(64, 32)
self.fc3 = nn.Linear(32, 1)
self.dropout = nn.Dropout(0.4)
self.relu = nn.ReLU()
self.batch_norm1 = nn.BatchNorm1d(128)
self.batch_norm2 = nn.BatchNorm1d(64)
def forward(self, x):
gru_out, _ = self.gru(x)
x = gru_out[:, -1, :]
x = self.batch_norm1(x)
x = self.relu(self.fc1(x))
x = self.batch_norm2(x)
x = self.dropout(x)
x = self.relu(self.fc2(x))
x = self.dropout(x)
x = self.fc3(x)
return x
model = TradingLSTM()
model.load_state_dict(torch.load('trading_regression.pth'))
model.eval()
# Prepare input sequence (50, 16) and scale with StandardScaler
# Predict price change percentage
prediction = model(sequence) # e.g., 0.0167 = 1.67% expected change
```
## Trading Strategy
- Buy when predicted change > 0.001 (0.1% expected increase)
- Sell when predicted change < -0.001 (0.1% expected decrease)
- Close positions when predictions reverse
- Tested across 5 years (2019-2024) with consistent profitability
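The threshold rule above can be sketched as a small signal function (an illustrative sketch; the `signal` name and `"buy"`/`"sell"`/`"hold"` labels are not part of the model itself):

```python
def signal(predicted_change, threshold=0.001):
    """Map a predicted price-change fraction to a trading signal.

    predicted_change: model output, e.g. 0.0167 means +1.67% expected change.
    threshold: minimum absolute change (0.001 = 0.1%) before acting.
    """
    if predicted_change > threshold:
        return "buy"
    if predicted_change < -threshold:
        return "sell"
    return "hold"

print(signal(0.0167))   # strong predicted rise -> buy
print(signal(-0.002))   # predicted fall -> sell
print(signal(0.0005))   # inside the dead zone -> hold
```

Positions are closed when the signal flips to the opposite side, per the strategy described above.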
## Disclaimer
This model is for educational purposes only. Trading involves significant risk.
Past performance does not guarantee future results.
|
hdnfnfn/blockassist-bc-armored_climbing_rooster_1758072315
|
hdnfnfn
| 2025-09-17T01:25:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored climbing rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T01:25:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored climbing rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758071945
|
devivodowdlel
| 2025-09-17T01:20:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T01:20:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hdnfnfn/blockassist-bc-shaggy_elusive_giraffe_1758072008
|
hdnfnfn
| 2025-09-17T01:20:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"shaggy elusive giraffe",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T01:20:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- shaggy elusive giraffe
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mekpro/whisper-large-v3-250916
|
mekpro
| 2025-09-17T01:20:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:mekpro/whisper-large-v3-250911",
"base_model:finetune:mekpro/whisper-large-v3-250911",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-17T01:18:33Z |
---
base_model: mekpro/whisper-large-v3-250911
tags:
- text-generation-inference
- transformers
- unsloth
- whisper
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mekpro
- **License:** apache-2.0
- **Finetuned from model:** mekpro/whisper-large-v3-250911
This whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gortanmeat/blockassist
|
gortanmeat
| 2025-09-17T01:18:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy trotting caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T01:06:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy trotting caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
brandenkmurray/public-model-2
|
brandenkmurray
| 2025-09-17T01:15:12Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"modernbert",
"fill-mask",
"masked-lm",
"long-context",
"en",
"arxiv:2412.13663",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2025-09-17T01:15:12Z |
---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- fill-mask
- masked-lm
- long-context
- modernbert
pipeline_tag: fill-mask
inference: false
---
# ModernBERT
## Table of Contents
1. [Model Summary](#model-summary)
2. [Usage](#Usage)
3. [Evaluation](#Evaluation)
4. [Limitations](#limitations)
5. [Training](#training)
6. [License](#license)
7. [Citation](#citation)
## Model Summary
ModernBERT is a modernized bidirectional encoder-only Transformer model (BERT-style) pre-trained on 2 trillion tokens of English and code data with a native context length of up to 8,192 tokens. ModernBERT leverages recent architectural improvements such as:
- **Rotary Positional Embeddings (RoPE)** for long-context support.
- **Local-Global Alternating Attention** for efficiency on long inputs.
- **Unpadding and Flash Attention** for efficient inference.
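The local-global alternation can be illustrated with a toy mask builder. This is a simplification for intuition only: the window size, the alternation period, and the boolean-mask representation below are toy values, not the model's actual configuration.

```python
def attention_mask(seq_len, layer_idx, window=4, global_every=3):
    """Build a boolean attention mask for one layer.

    Layers whose index is a multiple of `global_every` attend globally;
    the remaining layers use a sliding local window around each query.
    """
    is_global = layer_idx % global_every == 0
    mask = [[False] * seq_len for _ in range(seq_len)]
    for q in range(seq_len):
        for k in range(seq_len):
            mask[q][k] = is_global or abs(q - k) <= window // 2
    return mask

global_mask = attention_mask(8, layer_idx=0)  # full attention
local_mask = attention_mask(8, layer_idx=1)   # sliding-window attention
```

Alternating cheap local layers with occasional global layers is what keeps long-input inference tractable while still propagating information across the whole sequence.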
ModernBERT’s native long context length makes it ideal for tasks that require processing long documents, such as retrieval, classification, and semantic search within large corpora. The model was trained on a large corpus of text and code, making it suitable for a wide range of downstream tasks, including code retrieval and hybrid (text + code) semantic search.
It is available in the following sizes:
- [ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) - 22 layers, 149 million parameters
- [ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) - 28 layers, 395 million parameters
For more information about ModernBERT, we recommend our [release blog post](https://huggingface.co/blog/modernbert) for a high-level overview, and our [arXiv pre-print](https://arxiv.org/abs/2412.13663) for in-depth information.
*ModernBERT is a collaboration between [Answer.AI](https://answer.ai), [LightOn](https://lighton.ai), and friends.*
## Usage
You can use these models directly with the `transformers` library starting from v4.48.0:
```sh
pip install -U "transformers>=4.48.0"
```
Since ModernBERT is a Masked Language Model (MLM), you can use the `fill-mask` pipeline or load it via `AutoModelForMaskedLM`. To use ModernBERT for downstream tasks like classification, retrieval, or QA, fine-tune it following standard BERT fine-tuning recipes.
**⚠️ If your GPU supports it, we recommend using ModernBERT with Flash Attention 2 to reach the highest efficiency. To do so, install Flash Attention as follows, then use the model as normal:**
```bash
pip install flash-attn
```
Using `AutoModelForMaskedLM`:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_id = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
# To get predictions for the mask:
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(axis=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
# Predicted token: Paris
```
Using a pipeline:
```python
import torch
from transformers import pipeline
from pprint import pprint
pipe = pipeline(
"fill-mask",
model="answerdotai/ModernBERT-base",
torch_dtype=torch.bfloat16,
)
input_text = "He walked to the [MASK]."
results = pipe(input_text)
pprint(results)
```
**Note:** ModernBERT does not use token type IDs, unlike some earlier BERT models. Most downstream usage is identical to standard BERT models on the Hugging Face Hub, except you can omit the `token_type_ids` parameter.
## Evaluation
We evaluate ModernBERT across a range of tasks, including natural language understanding (GLUE), general retrieval (BEIR), long-context retrieval (MLDR), and code retrieval (CodeSearchNet and StackQA).
**Key highlights:**
- On GLUE, ModernBERT-base surpasses other similarly-sized encoder models, and ModernBERT-large is second only to DeBERTaV3-large.
- For general retrieval tasks, ModernBERT performs well on BEIR in both single-vector (DPR-style) and multi-vector (ColBERT-style) settings.
- Thanks to the inclusion of code data in its training mixture, ModernBERT as a backbone also achieves new state-of-the-art code retrieval results on CodeSearchNet and StackQA.
### Base Models
| Model | IR (DPR) | IR (DPR) | IR (DPR) | IR (ColBERT) | IR (ColBERT) | NLU | Code | Code |
|-------------|--------------|--------------|--------------|---------------|---------------|------|------|------|
| | BEIR | MLDR_OOD | MLDR_ID | BEIR | MLDR_OOD | GLUE | CSN | SQA |
| BERT | 38.9 | 23.9 | 32.2 | 49.0 | 28.1 | 84.7 | 41.2 | 59.5 |
| RoBERTa | 37.7 | 22.9 | 32.8 | 48.7 | 28.2 | 86.4 | 44.3 | 59.6 |
| DeBERTaV3 | 20.2 | 5.4 | 13.4 | 47.1 | 21.9 | 88.1 | 17.5 | 18.6 |
| NomicBERT | 41.0 | 26.7 | 30.3 | 49.9 | 61.3 | 84.0 | 41.6 | 61.4 |
| GTE-en-MLM | 41.4 | **34.3** |**44.4** | 48.2 | 69.3 | 85.6 | 44.9 | 71.4 |
| ModernBERT | **41.6** | 27.4 | 44.0 | **51.3** | **80.2** | **88.4** | **56.4** |**73.6**|
---
### Large Models
| Model | IR (DPR) | IR (DPR) | IR (DPR) | IR (ColBERT) | IR (ColBERT) | NLU | Code | Code |
|-------------|--------------|--------------|--------------|---------------|---------------|------|------|------|
| | BEIR | MLDR_OOD | MLDR_ID | BEIR | MLDR_OOD | GLUE | CSN | SQA |
| BERT | 38.9 | 23.3 | 31.7 | 49.5 | 28.5 | 85.2 | 41.6 | 60.8 |
| RoBERTa | 41.4 | 22.6 | 36.1 | 49.8 | 28.8 | 88.9 | 47.3 | 68.1 |
| DeBERTaV3 | 25.6 | 7.1 | 19.2 | 46.7 | 23.0 | **91.4**| 21.2 | 19.7 |
| GTE-en-MLM | 42.5 | **36.4** | **48.9** | 50.7 | 71.3 | 87.6 | 40.5 | 66.9 |
| ModernBERT | **44.0** | 34.3 | 48.6 | **52.4** | **80.4** | 90.4 |**59.5** |**83.9**|
*Table 1: Results for all models across an overview of all tasks. CSN refers to CodeSearchNet and SQA to StackQA. MLDR_ID refers to in-domain (fine-tuned on the training set) evaluation, and MLDR_OOD to out-of-domain.*
ModernBERT’s strong results, coupled with its efficient runtime on long-context inputs, demonstrate that encoder-only models can be significantly improved through modern architectural choices and extensive pretraining on diversified data sources.
## Limitations
ModernBERT’s training data is primarily English and code, so performance may be lower for other languages. While it can handle long sequences efficiently, using the full 8,192-token window may be slower than short-context inference. Like any large language model, ModernBERT may produce representations that reflect biases present in its training data. Verify critical or sensitive outputs before relying on them.
## Training
- Architecture: Encoder-only, Pre-Norm Transformer with GeGLU activations.
- Sequence Length: Pre-trained up to 1,024 tokens, then extended to 8,192 tokens.
- Data: 2 trillion tokens of English text and code.
- Optimizer: StableAdamW with trapezoidal LR scheduling and 1-sqrt decay.
- Hardware: Trained on 8x H100 GPUs.
See the paper for more details.
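The trapezoidal (warmup–stable–decay) schedule with 1-sqrt decay mentioned above can be sketched as follows; the warmup and decay fractions below are illustrative assumptions, not the values used in the paper:

```python
def trapezoidal_lr(step, total_steps, peak_lr, warmup_frac=0.05, decay_frac=0.1):
    """Trapezoidal (warmup-stable-decay) schedule with 1-sqrt final decay.

    The warmup/decay fractions here are illustrative, not the paper's values.
    """
    warmup_steps = int(total_steps * warmup_frac)
    decay_steps = int(total_steps * decay_frac)
    decay_start = total_steps - decay_steps
    if step < warmup_steps:                      # linear warmup
        return peak_lr * step / max(warmup_steps, 1)
    if step < decay_start:                       # constant plateau
        return peak_lr
    # 1-sqrt decay: lr = peak * (1 - sqrt(t)), with t going from 0 to 1
    t = (step - decay_start) / max(decay_steps, 1)
    return peak_lr * (1.0 - t ** 0.5)

lrs = [trapezoidal_lr(s, 1000, 1e-3) for s in range(1001)]
```

The plateau lets training resume or extend from any mid-plateau checkpoint without re-warming, which is one stated motivation for this schedule family.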
## License
We release the ModernBERT model architectures, model weights, and training codebase under the Apache 2.0 license.
## Citation
If you use ModernBERT in your work, please cite:
```
@misc{modernbert,
title={Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference},
author={Benjamin Warner and Antoine Chaffin and Benjamin Clavié and Orion Weller and Oskar Hallström and Said Taghadouini and Alexis Gallagher and Raja Biswas and Faisal Ladhak and Tom Aarsen and Nathan Cooper and Griffin Adams and Jeremy Howard and Iacopo Poli},
year={2024},
eprint={2412.13663},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.13663},
}
```
|
joseAndres777/WazapSplitter-LLM
|
joseAndres777
| 2025-09-17T01:11:29Z | 20 | 1 |
peft
|
[
"peft",
"safetensors",
"lora",
"whatsapp",
"text-splitting",
"message-segmentation",
"spanish",
"fine-tuned",
"text-generation",
"conversational",
"es",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:adapter:meta-llama/Llama-3.3-70B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-12T04:28:37Z |
---
license: apache-2.0
language:
- es
base_model: meta-llama/Llama-3.3-70B-Instruct
tags:
- peft
- lora
- whatsapp
- text-splitting
- message-segmentation
- spanish
- fine-tuned
library_name: peft
pipeline_tag: text-generation
model_type: llama
widget:
- text: "buenos dias novedades?"
example_title: "Greeting + Question"
- text: "perfecto que haces?"
example_title: "Confirmation + Question"
- text: "aqui andamos que haces?"
example_title: "Status + Question"
---
# 📱 WazapSplitter-LLM
Splits text into natural WhatsApp-style message segments.
**Input:** `"buenos dias queria confirmar la hora de la reunion"`
**Output:** `["buenos días", "quería confirmar la hora de la reunión"]`
## Quick Usage
### TypeScript/JavaScript
```typescript
async function splitMessage(text: string): Promise<string[]> {
const prompt = `Split messages at natural breaks into JSON array. Common patterns: greeting+question, statement+question, topic+followup. Keep original words, only add logical splits.
User: ${text}
Assistant:`;
const response = await fetch("https://api-inference.huggingface.co/models/joseAndres777/WazapSplitter-LLM", {
method: "POST",
headers: {
"Authorization": "Bearer YOUR_HF_TOKEN",
"Content-Type": "application/json"
},
body: JSON.stringify({
inputs: prompt,
parameters: { max_new_tokens: 100, temperature: 0.3 }
})
});
const data = await response.json();
return JSON.parse(data[0].generated_text);
}
// Example
const segments = await splitMessage("hola como estas que tal todo?");
console.log(segments); // ["hola", "como estas", "que tal todo?"]
```
### Chatbot Integration
```typescript
// Make responses feel more human
const segments = await splitMessage(botResponse);
for (const segment of segments) {
await sendMessage(segment);
await delay(1000 + Math.random() * 2000); // Human-like timing
}
```
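As a rough illustration of the segmentation task the model solves, here is a purely heuristic Python fallback (not the fine-tuned model): it peels off a leading greeting as its own segment and splits the remainder after question marks. The greeting list is an assumption for the sketch.

```python
import re

# Illustrative greeting list for the sketch; the fine-tuned model learns
# these patterns from data instead of relying on a fixed list.
GREETINGS = ("hola", "buenos dias", "buenas tardes", "buenas noches")

def heuristic_split(text: str) -> list[str]:
    """Heuristic WhatsApp-style splitter: greeting segment + question splits."""
    text = text.strip()
    segments: list[str] = []
    # Peel off a leading greeting as its own segment.
    for g in GREETINGS:
        if text.lower().startswith(g):
            segments.append(text[:len(g)])
            text = text[len(g):].strip()
            break
    # Split the remainder after each '?' followed by whitespace.
    parts = [p.strip() for p in re.split(r"(?<=\?)\s+", text) if p.strip()]
    return segments + parts
```

A heuristic like this can serve as a cheap offline fallback when the inference API is unreachable, at the cost of missing the subtler statement+followup splits the model handles.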
|
ddfj34/act_so101_model_20250916_1920
|
ddfj34
| 2025-09-17T01:10:57Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:ddfj34/record-test-20250916",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-17T01:10:43Z |
---
datasets: ddfj34/record-test-20250916
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
ravan18/newModel-FinBERT
|
ravan18
| 2025-09-17T01:07:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T01:07:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hdnfnfn/blockassist-bc-finicky_finicky_warthog_1758071087
|
hdnfnfn
| 2025-09-17T01:04:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky finicky warthog",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T01:04:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky finicky warthog
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mikecsdddd/gbfgb
|
mikecsdddd
| 2025-09-17T01:03:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-17T00:59:58Z |
---
license: apache-2.0
---
|
Grogros/dmWM-Qwen-Qwen2.5-3B-Instruct-ft-French_d2
|
Grogros
| 2025-09-17T01:02:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T19:50:44Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- generated_from_trainer
model-index:
- name: dmWM-Qwen-Qwen2.5-3B-Instruct-ft-French_d2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dmWM-Qwen-Qwen2.5-3B-Instruct-ft-French_d2
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2500
### Training results
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.4
|
deepdml/whisper-large-v3-turbo-ar-quran-mix
|
deepdml
| 2025-09-17T01:00:36Z | 11 | 0 | null |
[
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"ar",
"dataset:tarteel-ai/EA-UD",
"dataset:tarteel-ai/everyayah",
"base_model:deepdml/whisper-large-v3-turbo",
"base_model:finetune:deepdml/whisper-large-v3-turbo",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-09-15T16:24:53Z |
---
language:
- ar
license: apache-2.0
base_model: deepdml/whisper-large-v3-turbo
tags:
- generated_from_trainer
datasets:
- tarteel-ai/EA-UD
- tarteel-ai/everyayah
metrics:
- wer
model-index:
- name: Whisper Turbo ar-quran
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: tarteel-ai/EA-UD
metrics:
- name: Wer
type: wer
value: 13.11250713877784
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Turbo ar-quran
This model is a fine-tuned version of [deepdml/whisper-large-v3-turbo](https://huggingface.co/deepdml/whisper-large-v3-turbo) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0072
- Wer: 13.1125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.04
- training_steps: 15000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0084 | 1.0 | 15000 | 1.0072 | 13.1125 |
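For reference, the WER reported above is the word-level edit (Levenshtein) distance between hypothesis and reference, divided by the reference length; a minimal sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edits needed to turn ref[:i] into hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)
```

Production evaluations typically use a library such as `jiwer` (with text normalization), but the core metric is the ratio computed here.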
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
## Citation
```bibtex
@misc{deepdml/whisper-large-v3-turbo-ar-quran-mix,
title={Fine-tuned Whisper turbo ASR model for speech recognition in Arabic},
author={Jimenez, David},
howpublished={\url{https://huggingface.co/deepdml/whisper-large-v3-turbo-ar-quran-mix}},
year={2025}
}
```
|
devivodowdlel/blockassist-bc-winged_exotic_iguana_1758070713
|
devivodowdlel
| 2025-09-17T01:00:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"winged exotic iguana",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T00:59:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- winged exotic iguana
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lhjiang/anysplat
|
lhjiang
| 2025-09-17T00:59:05Z | 68,677 | 6 | null |
[
"safetensors",
"image-to-3d",
"arxiv:2505.23716",
"license:mit",
"region:us"
] |
image-to-3d
| 2025-06-30T04:22:27Z |
---
license: mit
pipeline_tag: image-to-3d
---
# AnySplat: Feed-forward 3D Gaussian Splatting from Unconstrained Views
[](https://city-super.github.io/anysplat/)
[](https://arxiv.org/pdf/2505.23716)
[](https://github.com/OpenRobotLab/AnySplat)
[](https://huggingface.co/lhjiang/anysplat)
## Quick Start
See the GitHub repository for installation instructions: https://github.com/OpenRobotLab/AnySplat
The model can then be used as follows:
```python
from pathlib import Path
import torch
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from src.misc.image_io import save_interpolated_video
from src.model.model.anysplat import AnySplat
from src.utils.image import process_image
# Load the model from Hugging Face
model = AnySplat.from_pretrained("lhjiang/anysplat")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
model.eval()
for param in model.parameters():
param.requires_grad = False
# Load and preprocess example images (replace with your own image paths)
image_names = ["path/to/imageA.png", "path/to/imageB.png", "path/to/imageC.png"]
images = [process_image(image_name) for image_name in image_names]
images = torch.stack(images, dim=0).unsqueeze(0).to(device) # [1, K, 3, 448, 448]
b, v, _, h, w = images.shape
# Run Inference
gaussians, pred_context_pose = model.inference((images+1)*0.5)
pred_all_extrinsic = pred_context_pose['extrinsic']
pred_all_intrinsic = pred_context_pose['intrinsic']
image_folder = "outputs"  # directory where the interpolated video is saved
save_interpolated_video(pred_all_extrinsic, pred_all_intrinsic, b, h, w, gaussians, image_folder, model.decoder)
```
## Citation
```
@article{jiang2025anysplat,
title={AnySplat: Feed-forward 3D Gaussian Splatting from Unconstrained Views},
author={Jiang, Lihan and Mao, Yucheng and Xu, Linning and Lu, Tao and Ren, Kerui and Jin, Yichen and Xu, Xudong and Yu, Mulin and Pang, Jiangmiao and Zhao, Feng and others},
journal={arXiv preprint arXiv:2505.23716},
year={2025}
}
```
## License
The code and models are licensed under the [MIT License](LICENSE).
|
abartupsadernal/blockassist
|
abartupsadernal
| 2025-09-17T00:57:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tawny thorny quail",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T00:47:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tawny thorny quail
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
FluidInference/silero-vad-coreml
|
FluidInference
| 2025-09-17T00:57:27Z | 1,622 | 7 |
coreml
|
[
"coreml",
"audio",
"voice-activity-detection",
"silero",
"speech",
"ios",
"macos",
"swift",
"en",
"dataset:alexwengg/musan_mini50",
"dataset:alexwengg/musan_mini100",
"base_model:onnx-community/silero-vad",
"base_model:quantized:onnx-community/silero-vad",
"license:mit",
"region:us"
] |
voice-activity-detection
| 2025-07-07T21:07:10Z |
---
license: mit
tags:
- audio
- voice-activity-detection
- coreml
- silero
- speech
- ios
- macos
- swift
library_name: coreml
pipeline_tag: voice-activity-detection
datasets:
- alexwengg/musan_mini50
- alexwengg/musan_mini100
metrics:
- accuracy
- f1
language:
- en
base_model:
- onnx-community/silero-vad
---
# **<span style="color:#5DAF8D">🧃 CoreML Silero VAD </span>**
[](https://discord.gg/WNsvaCtmDe)
[](https://github.com/FluidInference/FluidAudio)
A CoreML implementation of the Silero Voice Activity Detection (VAD) model, optimized for Apple platforms (iOS/macOS). This repository contains pre-converted CoreML models ready for use in Swift applications. See the FluidAudio repo linked at the top for more information.
## Model Description
**Developed by:** Silero Team (original), converted by FluidAudio
**Model type:** Voice Activity Detection
**License:** MIT
**Parent Model:** [silero-vad](https://github.com/snakers4/silero-vad)
This is how the model performs against the silero-vad v6.0.0 baseline (PyTorch JIT version):


Note that we also tested quantized versions; since the model is already tiny, quantization brings no performance improvement.
This is how the different models compare in terms of speed. The 256 variant takes in 8 chunks of 32 ms and processes them as a batch, so it is much faster:

Conversion code is available here: [FluidInference/mobius](https://github.com/FluidInference/mobius)
## Intended Use
### Primary Use Cases
- Real-time voice activity detection in iOS/macOS
applications
- Speech preprocessing for ASR systems
- Audio segmentation and filtering
## How to Use
## Citation

```bibtex
@misc{silero-vad-coreml,
  title={CoreML Silero VAD},
  author={FluidAudio Team},
  year={2024},
  url={https://huggingface.co/alexwengg/coreml-silero-vad}
}

@misc{silero-vad,
  title={Silero VAD},
  author={Silero Team},
  year={2021},
  url={https://github.com/snakers4/silero-vad}
}
```
- GitHub: https://github.com/FluidAudio/FluidAudioSwift
|
ShethArihant/no-security-reminder_deepseek-coder-1.3b-instruct_sft-secure-code-gen_10-epochs
|
ShethArihant
| 2025-09-17T00:50:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:deepseek-ai/deepseek-coder-1.3b-instruct",
"base_model:finetune:deepseek-ai/deepseek-coder-1.3b-instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-16T23:28:19Z |
---
base_model: deepseek-ai/deepseek-coder-1.3b-instruct
library_name: transformers
model_name: no-security-reminder_deepseek-coder-1.3b-instruct_sft-secure-code-gen_10-epochs
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for no-security-reminder_deepseek-coder-1.3b-instruct_sft-secure-code-gen_10-epochs
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ShethArihant/no-security-reminder_deepseek-coder-1.3b-instruct_sft-secure-code-gen_10-epochs", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/arihants-carnegie-mellon-university/huggingface/runs/ugx2rfr6)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hdnfnfn/blockassist-bc-noisy_elusive_grouse_1758070167
|
hdnfnfn
| 2025-09-17T00:49:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"noisy elusive grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T00:49:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy elusive grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ramgpt/Tongyi-DeepResearch-30B-A3B-GGUF
|
ramgpt
| 2025-09-17T00:48:56Z | 0 | 0 | null |
[
"gguf",
"qwen3",
"moe",
"tongyi",
"deepresearch",
"en",
"base_model:Alibaba-NLP/Tongyi-DeepResearch-30B-A3B",
"base_model:quantized:Alibaba-NLP/Tongyi-DeepResearch-30B-A3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-17T00:47:49Z |
---
license: apache-2.0
base_model: Alibaba-NLP/Tongyi-DeepResearch-30B-A3B
model_type: gguf
language:
- en
tags:
- gguf
- qwen3
- moe
- tongyi
- deepresearch
---
# Tongyi-DeepResearch-30B-A3B GGUF
Converted from Alibaba-NLP/Tongyi-DeepResearch-30B-A3B.
|
John6666/ultra-realistic-by-stable-yogi-illus-v20-fp16-sdxl
|
John6666
| 2025-09-17T00:45:12Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"photo",
"actress",
"anime",
"game",
"portraits",
"land",
"contrast",
"dark",
"anatomy",
"hands",
"lighting",
"face",
"body structure",
"skin texture",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-09-17T00:32:55Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- photo
- actress
- anime
- game
- portraits
- land
- contrast
- dark
- anatomy
- hands
- lighting
- face
- body structure
- skin texture
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1584358/ultra-realistic-by-stable-yogi-illus?modelVersionId=2222714).
This model created by [Stable_Yogi](https://civitai.com/user/Stable_Yogi).
|
Rawan7/smart_chat_ha
|
Rawan7
| 2025-09-17T00:43:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"region:us"
] |
text-generation
| 2025-09-17T00:35:39Z |
---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:microsoft/Phi-3-mini-4k-instruct
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
akhil-dua/baseline-nemo8b-archiects-4bit
|
akhil-dua
| 2025-09-17T00:43:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-17T00:41:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
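The card itself leaves these fields blank, but the repository tags (`mistral`, `text-generation`, `4-bit`, `bitsandbytes`) indicate a 4-bit quantized checkpoint. As a toy, dependency-free sketch of the idea behind 4-bit weight quantization — illustrative only; bitsandbytes' actual NF4/FP4 schemes are considerably more involved, and none of these numbers come from this model:

```python
# Toy symmetric 4-bit quantization: map each weight to an integer in [-7, 7]
# plus a shared scale, then reconstruct an approximation on dequantization.
def quantize_4bit(weights):
    scale = max(abs(w) for w in weights) / 7.0  # symmetric int4 range
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.1, -0.7, 0.3, 0.06]          # placeholder values
q, scale = quantize_4bit(weights)          # q = [1, -7, 3, 1]
approx = dequantize(q, scale)              # roughly recovers the originals
```

The point of the sketch is the storage trade-off: each weight costs 4 bits plus an amortized share of one scale, at the price of a small reconstruction error.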
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
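The calculator referenced above essentially multiplies energy drawn by the grid's carbon intensity. A minimal back-of-envelope sketch — the power, hours, and intensity values here are placeholders, not figures reported for this model:

```python
# Back-of-envelope CO2 estimate in the spirit of the ML CO2 Impact calculator:
# emissions = (power draw in kW) * (hours used) * (grid carbon intensity).
def estimate_co2_kg(gpu_power_watts: float, hours: float,
                    grid_kg_co2_per_kwh: float) -> float:
    energy_kwh = (gpu_power_watts / 1000.0) * hours
    return energy_kwh * grid_kg_co2_per_kwh

# Placeholder numbers: one 300 W GPU for 10 hours on a 0.4 kg CO2eq/kWh grid.
print(estimate_co2_kg(300, 10, 0.4))  # ~1.2 kg CO2eq
```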
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]